\        /   /   |-\    -----
 \      /    |   |  \   |
  \    /    /    |   |  |--
   \  /     |    |  /   |
    \/      /    |-/    -----

*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*

This is VIDE, the Void IDEntifier pipeline.


License/Copyright information
-----------------------------

Copyright (C) 2010-2013 Guilhem Lavaux, 2011-2013 Paul M. Sutter.
This software is distributed under the GNU General Public License.
Please see LICENSE for further information.


Parts of the pipeline include ZOBOV. See zobov/zobov_readme.txt for
its copyright/license information.

Building
--------


The build system is CMake-based. After compiling, go to the pipeline
directory.


Using the pipeline
------------------


Create a dataset parameter file; see datasets/multidark.py for an
example. It describes the simulation, where to put outputs, how many
redshift slices and subvolumes to use, and so on.
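
For orientation, here is a rough sketch of what such a parameter file
might look like. The variable names below are illustrative assumptions
only -- copy datasets/multidark.py and edit its actual parameters
rather than typing this in.

    # myDataset.py -- hypothetical dataset parameter file.
    # The real parameter names are defined in datasets/multidark.py.

    workDir       = "/path/to/outputs/"      # where void catalogs and logs go
    inputDataDir  = "/path/to/particles/"    # simulation particle files

    redshifts     = ["0.0", "0.5", "1.0"]    # redshift slices to process
    numSubvolumes = 2                        # subdivisions of the box
    subSamples    = [1.0, 0.1]               # particle subsampling fractions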

prepareCatalogs will produce a pipeline script for each subsampling
you choose. If you have particle files at multiple redshifts and
choose multiple slices and/or subdivisions, they will all be packaged
into the same pipeline script.

Run "./generateCatalog.py [name of pipeline script]" for each script
written by prepareGadgetCatalog. This will run generateMock, zobov,
and pruneVoids. At the end of it, you should have a void catalog for
each redshift, slice, and subdivision.
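
If a dataset produces many pipeline scripts, a small wrapper can run
them all in sequence. This is only a convenience sketch, assuming the
scripts sit together in one directory; adjust the glob pattern to
wherever prepareCatalogs wrote them.

    import glob
    import subprocess
    import sys

    # Adjust this pattern to match the pipeline scripts for your dataset.
    scripts = sorted(glob.glob("myDataset_scripts/*.py"))

    for script in scripts:
        print("Running " + script)
        # Same invocation as documented above.
        ret = subprocess.call(["./generateCatalog.py", script])
        if ret != 0:
            sys.exit("generateCatalog.py failed on " + script +
                     " -- check the logfiles")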

Check the logfiles for any error messages.

See the README of the public void catalog for the format of the
outputs.

Please do not change the outputs of pruneVoids etc. without
discussion, since further analysis relies on the current formats.

If you're wondering why these scripts are rather complex, it's because
they also support A-P (Alcock-Paczynski) analysis, which is much more
complicated :)

Good luck!

Important directories:

pipeline: scripts to set up and generate void catalogs
crossCompare: analysis and plotting tools designed to work with disparate catalogs