csiborgtools/notebooks/match_observation/harry_clusters.ipynb

Matching of haloes to clusters

In [144]:
import numpy as np
import matplotlib.pyplot as plt
from astropy.cosmology import FlatLambdaCDM

import pandas as pd

%load_ext autoreload
%autoreload 2
%matplotlib inline
The autoreload extension is already loaded. To reload it, use:
  %reload_ext autoreload

Load in the data

Harry: the exact routine may not work as-is; I had to edit the raw .txt file a little.

In [124]:
cosmo = FlatLambdaCDM(H0=100, Om0=0.307)

data0 = pd.read_csv("/mnt/users/rstiskalek/csiborgtools/data/top10_pwave_coords.txt", sep=r"\s+")
data = {}

# Numeric halo ID, Galactic coordinates and comoving distance in Mpc/h (H0 = 100).
data["id"] = np.array([int(x.split("_")[-1]) for x in data0["halo_ID"].values])
data["l"] = data0["l[deg]"].values
data["b"] = data0["b[deg]"].values
data["dist"] = cosmo.comoving_distance(data0["z"].values).value

print(data["dist"])
data0
[34.14606699  9.28692846 11.11279198 16.85641068  3.71636485 33.43032196
 24.65595192 29.91007806 40.31604335  9.346801  ]
Out[124]:
halo_ID KIND l[deg] b[deg] d[kpc] z Rdelta[kpc] rhos[Msol/kpc^3] rs[kpc] prof[keywrd] #1 #2 #3 FOVDiam[deg]
0 halo_647110 CLUSTER -47.66 29.96 -1 0.01142 4523.71 643822.01573 663.92743 kZHAO 1.0 3.0 1.0 37.74875
1 halo_10128802 CLUSTER -90.76 25.97 -1 0.00310 1975.98 611499.11522 294.56973 kZHAO 1.0 3.0 1.0 60.00762
2 halo_1338057 CLUSTER -35.42 -6.15 -1 0.00371 2738.50 593219.40743 413.73423 kZHAO 1.0 3.0 1.0 69.19885
3 halo_20419495 CLUSTER 87.42 88.09 -1 0.00563 3082.73 596355.51967 465.36921 kZHAO 1.0 3.0 1.0 51.83070
4 halo_20355327 CLUSTER -94.48 75.29 -1 0.00124 1644.94 635981.23866 240.83028 kZHAO 1.0 3.0 1.0 119.66328
5 halo_2503990 CLUSTER -35.24 -12.06 -1 0.01118 3164.45 600754.77263 478.21044 kZHAO 1.0 3.0 1.0 27.04669
6 halo_21251859 CLUSTER 31.15 44.45 -1 0.00824 2731.34 595715.63380 413.32640 kZHAO 1.0 3.0 1.0 31.59913
7 halo_745775 CLUSTER -47.49 36.08 -1 0.01000 2878.14 597011.32933 435.71418 kZHAO 1.0 3.0 1.0 27.49495
8 halo_3003328 CLUSTER 39.76 -46.41 -1 0.01349 3172.97 602266.09230 479.82208 kZHAO 1.0 3.0 1.0 22.50342
9 halo_4200359 CLUSTER -28.67 -22.77 -1 0.00312 1826.88 621061.51007 270.55674 kZHAO 1.0 3.0 1.0 55.35257
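
The table gives each cluster's Galactic coordinates and redshift, while the halo catalogue loaded below stores Cartesian positions in the simulation box. As a minimal sketch (assuming `l[deg]` and `b[deg]` are standard Galactic coordinates and that the comoving distances computed above can serve as the radial coordinate), the clusters can also be put into observer-centric Cartesian coordinates for a 3D comparison against the halo positions relative to the box centre:

In [ ]:
from astropy.coordinates import SkyCoord
import astropy.units as u

# Observer-centric Cartesian positions of the clusters in the Galactic frame.
# Distances are in Mpc/h (H0 = 100); u.Mpc is attached only as a carrier unit.
coords = SkyCoord(l=data["l"] * u.deg, b=data["b"] * u.deg,
                  distance=data["dist"] * u.Mpc, frame="galactic")
cluster_xyz = np.vstack([coords.cartesian.x.value,
                         coords.cartesian.y.value,
                         coords.cartesian.z.value]).T
print(cluster_xyz.shape)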

Load in the halo catalogue

In [126]:
boxsize = 677.7

halos = np.load("/users/hdesmond/Mmain/Mmain_9844.npy")
names = ["id", "x", "y", "z", "M"]
halos = {k: halos[:, i] for i, k in enumerate(names)}
halos["id"] = halos["id"].astype(int)
# Coordinates are in box units. Convert to Mpc/h
for p in ("x", "y", "z"):
    halos[p] = halos[p] * boxsize

halos["dist"] = np.sqrt((halos["x"] - boxsize/2)**2 + (halos["y"] - boxsize/2)**2 + (halos["z"] - boxsize/2)**2)
In [142]:
# Find which item in the catalogue matches the nth halo in the .txt file
n = 0
k = np.where(data["id"][n] == halos["id"])[0][0]
print(k)
5791
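Looking up a single ID with np.where is fine; as a sketch, a dictionary lookup avoids rescanning the whole catalogue for every cluster and makes it easy to verify that every ID from the .txt file is actually present in the catalogue (an assumption here):

In [ ]:
# Check that every halo ID from the .txt file appears in the catalogue.
print(np.isin(data["id"], halos["id"]))

# Map halo ID -> catalogue row once, then index all clusters in one go.
id_to_index = {hid: i for i, hid in enumerate(halos["id"])}
idx = np.array([id_to_index[hid] for hid in data["id"]])
print(idx)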
In [143]:
data["dist"][n], halos["dist"][k]
Out[143]:
(34.14606698507264, 149.24046381654668)
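
The two distances for n = 0 differ substantially. To check whether this is a one-off or systematic, a quick sketch (again assuming every ID is matched) compares the two distances for all ten clusters:

In [ ]:
# Compare the distance inferred from the .txt redshift with the catalogue distance.
for n, hid in enumerate(data["id"]):
    k = np.where(hid == halos["id"])[0][0]
    print(f"halo {hid:>8d}: z-based dist = {data['dist'][n]:7.2f} Mpc/h, "
          f"catalogue dist = {halos['dist'][k]:7.2f} Mpc/h")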