After compiling, go to the pipeline directory.

Edit the parameters at the top of prepareGadgetCatalog.py: decide where to put the outputs, how many redshifts to process, and how many slices, subdivisions, and subsamples to use.
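For illustration only, the parameter block might look something like the sketch below; the variable names and values are hypothetical placeholders, so check the top of prepareGadgetCatalog.py itself for the actual names and defaults:

  # Hypothetical parameter block -- the names here are placeholders,
  # not necessarily the script's actual variables.
  catalogDir    = "/path/to/gadget/snapshots/"  # input particle files
  workDir       = "/path/to/output/"            # where catalogs are written
  redshifts     = ["0.0", "0.5", "1.0"]         # snapshot redshifts to process
  numSlices     = 2                             # slices along the line of sight
  numSubvolumes = 1                             # subdivisions of each slice
  subSamples    = [0.1, 0.01]                   # particle subsampling fractions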

Note that eventually prepareGadgetCatalog will be replaced by the more general and flexible prepareCatalogs.

prepareGadgetCatalog will produce one pipeline script for each subsampling you choose. If you have particle files at multiple redshifts, and/or choose multiple slices or subdivisions, those datasets will all be packaged into the same pipeline script.
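As a sketch of the result (the script names here are hypothetical, since the actual naming convention may differ), requesting subsamples of 0.1 and 0.01 would give two scripts, each covering every redshift, slice, and subdivision:

  pipeline_ss0.1.py
  pipeline_ss0.01.py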

Run "./generateCatalog.py [name of pipeline script]" for each script written by prepareGadgetCatalog. This will run generateMock, zobov, and pruneVoids. At the end of it, you should have a void catalog for each redshift, slice, and subdivision.

Check the logfiles for any error messages.
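A quick way to scan them (the logfile location depends on what you configured above):

  grep -i error [path to logfiles]/*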

See the README of the public void catalog for the format of the outputs.

I'm also working on incorporating plotting into the pipeline script, so that you can immediately get some basic info about the voids.

Please do not change the outputs of pruneVoids etc. without discussion, since further analysis relies on the current formats.

If you're wondering why these scripts are rather complex, it's because they also support A-P (Alcock-Paczynski) analysis, which is much more complicated :)

We can talk about ways to incorporate your analysis into the pipeline and to have your tools under this umbrella.

Good luck!