Start wprp rpbins at 0.0 #22
Conversation
Thank you so much for putting this together. I am sorry I did not get a chance to do it myself.
I made some small comments on code documentation and on removing some dead/duplicated code.
@@ -13,8 +13,9 @@
 @pytest.mark.mpi_skip
 def test_wprp_serial_cpu_smoke():
     data = gen_mstar_data(seed=42)
+    data["rp_bins"] = np.concatenate([np.array([0]), data["rp_bins"]], dtype=np.float64)
We should put this change directly in the gen_mstar_data function.
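A sketch of folding the prepend into the fixture. The real gen_mstar_data builds more fields than shown, and the placeholder bin edges here are assumptions, not the test's actual values:

```python
import numpy as np

def gen_mstar_data(seed=42):
    # Hypothetical sketch: the real fixture builds positions, weights,
    # etc.; only the rp_bins handling is shown here.
    rng = np.random.default_rng(seed)
    rp_bins = np.logspace(-1, 1, 11)  # placeholder bin edges
    # Prepend the 0.0 edge once, inside the fixture, so every test gets
    # consistent bins without repeating the concatenate call.
    rp_bins = np.concatenate([np.array([0.0]), rp_bins], dtype=np.float64)
    return {"rp_bins": rp_bins}
```

With the prepend centralized, the four tests touched in this PR would not need their own concatenate lines.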
@@ -37,6 +38,7 @@ def test_wprp_serial_cpu_smoke():
 @pytest.mark.mpi_skip
 def test_wprp_serial_cpu_derivs():
     data = gen_mstar_data(seed=42)
+    data["rp_bins"] = np.concatenate([np.array([0]), data["rp_bins"]], dtype=np.float64)
Same as above.
 n_grads = w1_jac.shape[0]
-n_rp = rpbins_squared.shape[0] - 1
+n_rp = rpbins_squared.shape[0] - 2
It would be good to add a comment in the code noting where the -2 comes from: rp_bins holds the bin edges, so it is +1 relative to the number of bins, and the prepended zero adds another +1, which is why we have to subtract 2.
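The edge counting that comment should capture can be sketched as follows (the bin count and edge values here are illustrative, not the test's actual data):

```python
import numpy as np

n_bins = 10
edges = np.linspace(0.1, 20.0, n_bins + 1)  # bin edges: n_bins + 1 values
edges = np.concatenate([[0.0], edges])      # prepend 0.0: now n_bins + 2 values
# rp_bins holds edges (+1 over the bin count) plus the prepended 0.0
# (another +1), hence the subtraction of 2.
n_rp = edges.shape[0] - 2
assert n_rp == n_bins
```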
@@ -26,8 +26,9 @@ def test_wprp_serial_cuda_smoke():
     xp = get_array_backend()

     data = gen_mstar_data(seed=42)
+    data["rp_bins"] = np.concatenate([np.array([0]), data["rp_bins"]], dtype=np.float64)
Again this should be in gen_mstar_data.
@@ -56,6 +57,7 @@ def test_wprp_serial_cuda():
     xp = get_array_backend()

     data = gen_mstar_data(seed=42)
+    data["rp_bins"] = np.concatenate([np.array([0]), data["rp_bins"]], dtype=np.float64)
Same as above.
-nrp = data["rp_bins"].shape[0] - 1
+nrp = data["rp_bins"].shape[0] - 2
A comment here on the -2 as above would be helpful.
 n_grads = w1_jac.shape[0]
-n_rp = _rpbins_squared.shape[0] - 1
+n_rp = rpbins_squared.shape[0] - 1
We need a comment here on why this line is not -2.
diffsmhm/diff_stats/cuda/wprp.py
Outdated
@@ -201,14 +200,14 @@ def wprp_serial_cuda(
     sums = np.array(sums.get())

     # convert to differential
-    n_rp = rpbins_squared.shape[0] - 1
+    # n_rp = rpbins_squared.shape[0] - 1
Here and below there is commented out code that needs to be removed.
diffsmhm/diff_stats/cuda/wprp.py
Outdated
assert not xp.allclose(rpbins_squared[0], 0)
for rpb in rpbins_squared:
    assert xp.allclose(rpb[0], 0)
"""
We can remove this commented code block if we do not need it.
mostly a note to self
We're doing this so that the sigma and wprp kernels have consistent input.
So I think the serial kernels can take in the 0.0 and just ignore it, while the mpi kernels take it in and return rpbins without the 0.0 for the normalization math in mpi/wprp.py.
However, this means that our wprp output shape is len(rpbins) - 2, which seems questionable.
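A minimal sketch of that split, with hypothetical kernel names and signatures (the real kernels in diffsmhm take positions, weights, and more):

```python
import numpy as np

def wprp_serial(rp_bins):
    # Serial kernel: accepts rp_bins with the leading 0.0 edge and simply
    # ignores it, so sigma and wprp kernels share a consistent input shape.
    return np.zeros(rp_bins.shape[0] - 2)  # one wprp value per real bin

def wprp_mpi(rp_bins):
    # MPI kernel: also accepts the 0.0 edge, but hands rp_bins back
    # without it for the normalization math in mpi/wprp.py.
    wprp = np.zeros(rp_bins.shape[0] - 2)
    return wprp, rp_bins[1:]

rp_bins = np.concatenate([[0.0], np.logspace(-1, 1, 11)])  # 12 edges
wprp, edges = wprp_mpi(rp_bins)
# The questionable part: the output length is len(rp_bins) - 2.
assert wprp.shape[0] == rp_bins.shape[0] - 2
```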