In putting together my changes to the diffraction code I've found what looks like a bug in the existing code in main, in imSim/imsim/diffraction.py (lines 221 to 222 at d8d7f14):

```python
d = geometry.circles[min_circle_idx[~line_mask]][..., :2] - points[~line_mask]
n[~line_mask] = d / np.linalg.norm(d)
```
Here, photons which are closer to a circle than to a line (masked by `~line_mask`) have the vectors pointing from them to the circle centers normalized. I think this is meant to be a simple normalization of all the vectors between the photons and the circle centers: the vectors are first calculated as `d`, then normalized by dividing by `np.linalg.norm(d)`.
The issue is that when no `ord` or `axis` argument is provided for a multi-rank array (as here), `np.linalg.norm` computes the 2-norm of `d.ravel()`. So what `np.linalg.norm(d)` returns is the square root of the sum of all photons' squared x and y coordinates: a single (large, typically 10^3+) scalar by which every element of `d` is divided. The result is that the vectors are not normalized.
Standalone example:

```python
import numpy as np

import imsim.diffraction

# The geometry set in diffraction.py contains two circles
geometry = imsim.diffraction.RUBIN_SPIDER_GEOMETRY

# xy coords of photons; the normalized vectors should all be identical
points = np.array([[1, 1], [2, 2], [3, 3]])

# What's happening in main just now:
# the vector from points to circle[0]'s origin (0, 0) is just -1 * points
d = geometry.circles[0, :2] - points
# Calculates the norm of d.ravel()
norm = np.linalg.norm(d)
print(norm)  # 5.291502622129181
# Normalize incorrectly
n = d / norm
print(n)
# Vectors of different lengths:
# [[-0.18898224 -0.18898224]
#  [-0.37796447 -0.37796447]
#  [-0.56694671 -0.56694671]]

# What I think should be happening:
# the vector from points to the circle origin (0, 0), same as before
d = geometry.circles[0, :2] - points
# Calculate the norm across axis=1, i.e. for each point separately
norm = np.linalg.norm(d, axis=1)
print(norm)  # [1.41421356 2.82842712 4.24264069]
# Stack norm across columns to allow d / norm
norm = np.column_stack([norm, norm])
# Normalize correctly
n = d / norm
print(n)
# Identical normalized vectors:
# [[-0.70710678 -0.70710678]
#  [-0.70710678 -0.70710678]
#  [-0.70710678 -0.70710678]]
```
Before making this change, I could never see the disks of photons caused by the circles in the spider geometry -- only the spikes. This is because `n` was so small after dividing `d` by the huge norm. Making this change gives much larger values in `n`, giving much stronger disk diffraction. I can see this in test images, where the disks become quite distinctive; it also causes failures in `test_rubin_diffraction_shows_field_rotation` and `test_rubin_diffraction_produces_spikes`, which use the `extract_spike_angles()` function (mask the central disk and return the angles of the remaining photons, which should be the spikes). Any idea if the visible disks are expected or not?
As a side note: the reason I spotted this is that my new code which improves the memory usage in diffraction does some very simple loops across the points, calculating the normalized vector for each as it goes. If I implement the fix suggested above in main, then it produces images which are pixel matches for my new code.
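For concreteness, the kind of per-point loop I mean is sketched below (a hypothetical illustration, not the actual new code), and it agrees with the vectorized per-row normalization:

```python
import numpy as np

points = np.array([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]])
center = np.zeros(2)  # stand-in for a circle center

# Loop over photons, normalizing each vector individually;
# for a 1-D input, np.linalg.norm is just that vector's 2-norm.
n_loop = np.empty_like(points)
for i, p in enumerate(points):
    d = center - p
    n_loop[i] = d / np.linalg.norm(d)

# Matches the vectorized per-row (axis=1) normalization
d_all = center - points
n_vec = d_all / np.linalg.norm(d_all, axis=1, keepdims=True)
assert np.allclose(n_loop, n_vec)
```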