Mirror of https://github.com/Richard-Sti/csiborgtools.git, synced 2024-12-23 03:38:03 +00:00
Matching of haloes to clusters¶
In [144]:
import numpy as np
import matplotlib.pyplot as plt
from astropy.cosmology import FlatLambdaCDM
import pandas as pd
%load_ext autoreload
%autoreload 2
%matplotlib inline
Load in the data¶
Harry: the exact routine may not work; I had to edit the raw .txt file slightly.
In [124]:
cosmo = FlatLambdaCDM(H0=100, Om0=0.307)
data0 = pd.read_csv("/mnt/users/rstiskalek/csiborgtools/data/top10_pwave_coords.txt", sep=r"\s+")
data = {}
data["id"] = np.array([int(x.split("_")[-1]) for x in data0["halo_ID"].values])
data["l"] = data0["l[deg]"].values
data["b"] = data0["b[deg]"].values
data["dist"] = cosmo.comoving_distance(data0["z"].values).value
print(data["dist"])
data0
Out[124]:
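For positional cross-matching it can help to have the cluster coordinates in Cartesian form as well. A minimal numpy-only sketch, assuming the usual Galactic convention (x toward l = 0, b = 0, z toward the north Galactic pole); the helper name is hypothetical, not part of the notebook:

```python
import numpy as np

def galactic_to_cartesian(l_deg, b_deg, dist):
    # Hypothetical helper: Galactic longitude/latitude (deg) plus a radial
    # distance -> Cartesian coordinates in the same units as `dist`.
    l, b = np.deg2rad(l_deg), np.deg2rad(b_deg)
    x = dist * np.cos(b) * np.cos(l)
    y = dist * np.cos(b) * np.sin(l)
    z = dist * np.sin(b)
    return np.column_stack([x, y, z])

# A point at l = 0, b = 0 lies on the +x axis; l = 90 deg on the +y axis.
xyz = galactic_to_cartesian(np.array([0.0, 90.0]),
                            np.array([0.0, 0.0]),
                            np.array([100.0, 50.0]))
```

With `data["l"]`, `data["b"]`, and `data["dist"]` as inputs this would give observer-centred positions in Mpc/h, directly comparable to the halo positions after shifting them by `boxsize/2`.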
Load in the halo catalogue¶
In [126]:
boxsize = 677.7
halos = np.load("/users/hdesmond/Mmain/Mmain_9844.npy")
names = ["id", "x", "y", "z", "M"]
halos = {k: halos[:, i] for i, k in enumerate(names)}
halos["id"] = halos["id"].astype(int)
# Coordinates are in box units. Convert to Mpc/h
for p in ("x", "y", "z"):
halos[p] = halos[p] * boxsize
halos["dist"] = np.sqrt((halos["x"] - boxsize/2)**2 + (halos["y"] - boxsize/2)**2 + (halos["z"] - boxsize/2)**2)
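The box-units convention above (positions scaled by `boxsize`, observer at the box centre) can be sanity-checked on synthetic positions. A small sketch; `boxsize` is the value used in this notebook, the positions are made up:

```python
import numpy as np

boxsize = 677.7  # Mpc/h, as in this notebook
origin = np.full(3, boxsize / 2)  # observer assumed at the box centre

# Hypothetical positions in box units, purely for illustration
pos_box = np.array([[0.50, 0.50, 0.50],
                    [0.75, 0.50, 0.50]])
pos = pos_box * boxsize                      # box units -> Mpc/h
dist = np.linalg.norm(pos - origin, axis=1)  # distance from the observer
# The first point sits at the centre (dist 0), the second boxsize/4 away.
```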
In [142]:
# Find the row of the halo catalogue whose ID matches the nth object in the .txt file
n = 0
k = np.where(data["id"][n] == halos["id"])[0][0]
print(k)
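Looping `np.where` over every object repeats a full linear scan of the catalogue. All IDs can instead be matched in one pass with `np.searchsorted` on a sorted view. A sketch with stand-in ID arrays (in the notebook these would be `data["id"]` and `halos["id"]`):

```python
import numpy as np

cluster_ids = np.array([3, 1, 7])     # stand-in for data["id"]
halo_ids = np.array([7, 5, 3, 1, 9])  # stand-in for halos["id"]

order = np.argsort(halo_ids)                              # sorted view of the catalogue IDs
pos = np.searchsorted(halo_ids, cluster_ids, sorter=order)
match = order[pos]                                        # catalogue row of each cluster
assert np.all(halo_ids[match] == cluster_ids)             # every ID must actually be present
print(match)  # -> [2 3 0]
```

The assertion guards against IDs missing from the catalogue, for which `searchsorted` would silently return the insertion point of the nearest larger ID.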
In [143]:
data["dist"][n], halos["dist"][k]
Out[143]: