
After compiling, go to the pipeline directory.

Create a dataset parameter file; see datasets/multidark.py for
an example. It describes the simulation, where to put the outputs,
how many redshift slices and subvolumes to use, and so on.
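The authoritative parameter names live in datasets/multidark.py; purely as an illustration (every name, path, and value below is hypothetical, not the actual interface), a dataset description might set values like:

```python
# Hypothetical sketch of a dataset parameter file -- the real parameter
# names are in datasets/multidark.py; everything here is illustrative.

dataDir = "/path/to/simulation/particles/"  # raw particle files
workDir = "/path/to/pipeline/outputs/"      # where catalogs are written

redshifts     = ["0.0", "0.5", "1.0"]  # one particle snapshot per redshift
numSlices     = 4                      # redshift slices per snapshot
numSubvolumes = 2                      # subvolumes per slice
subSamples    = [0.01, 0.002]          # particle subsampling fractions
```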

prepareCatalogs will produce a pipeline script for each
subsampling you choose. If you have multiple redshift particle files,
and choose multiple slices and/or subdivisions, they will be packaged
in the same pipeline script.

Run "./generateCatalog.py [name of pipeline script]" for each script
written by prepareCatalogs. This will run generateMock, zobov,
and pruneVoids. When it finishes, you should have a void catalog for
each redshift, slice, and subdivision.
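Since one void catalog comes out per (redshift, slice, subdivision) combination within each subsampling, the number of catalogs to expect is just the product of those choices. A quick sanity check, with hypothetical numbers standing in for your own dataset parameters:

```python
from itertools import product

# Hypothetical values mirroring a dataset parameter file; substitute
# the ones from your own dataset description.
redshifts = ["0.0", "0.5", "1.0"]
numSlices = 4
numSubvolumes = 2
subSamples = [0.01, 0.002]

# One void catalog per (subsample, redshift, slice, subvolume) combination.
runs = list(product(subSamples, redshifts,
                    range(numSlices), range(numSubvolumes)))
print(len(runs))  # 2 * 3 * 4 * 2 = 48 catalogs expected
```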

Check the logfiles for any error messages.
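One way to scan the logs in bulk (the logs/*.out location and naming here are an assumption; point the glob at wherever your run actually writes its logfiles):

```python
import glob
import re

# Assumed log location/naming -- adjust to your output directory layout.
errorPattern = re.compile(r"\b(error|fail|abort)", re.IGNORECASE)

for logFile in sorted(glob.glob("logs/*.out")):
    with open(logFile) as f:
        hits = [line.rstrip() for line in f if errorPattern.search(line)]
    if hits:
        print(logFile)
        for line in hits:
            print("  ", line)
```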

See the README of the public void catalog for the format of the
outputs.

Please do not change the outputs of pruneVoids etc. without
discussion, since further analysis relies on the current formats.

If you're wondering why these scripts are rather complex, it's because
they also support A-P (Alcock-Paczynski) analysis, which is much more
complicated :)

Good luck!