feat: docstrings and overall design for expectation maximization iterative refinement #21
Open: jedyeo wants to merge 175 commits into master from itr_ref_docstrings.
Commits (175)
a2629da  Init (jedyeo)
ccc4359  precommit yaml (jedyeo)
186fddc  remove ref (jedyeo)
62b9f42  Format code with black and isort (deepsource-autofix[bot])
ee5adab  Docstrings, precommit broken? (jedyeo)
b027987  Format code with black and isort (deepsource-autofix[bot])
26dcf79  Update docstrings (jedyeo)
2110a8a  Format code with black and isort (deepsource-autofix[bot])
f493da6  Dummy tests
00b8afb  docstrings and tests (jedyeo)
387c8ed  Format code with black and isort (deepsource-autofix[bot])
be31e4c  Minor ds change. Fixed tests w/ fixture (jedyeo)
bb3b232  Minor ds change. Fixed tests w/ fixture (jedyeo)
7d42322  Format code with black and isort (deepsource-autofix[bot])
e3becc4  Minor docstring changes (jedyeo)
e12673d  minor ds changes (jedyeo)
dd4d35b  ds changes (jedyeo)
2429d1b  Change class docstring (jedyeo)
e8503c2  Remove todo's breaking deepsource (jedyeo)
87a9a87  Format code with black and isort (deepsource-autofix[bot])
65273f0  DeepSource changes (jedyeo)
5309ac8  deepsoruce changes (jedyeo)
d5ea497  Format code with black and isort (deepsource-autofix[bot])
1a79772  literals removed (jedyeo)
c473d43  literals removed (jedyeo)
38a9a5e  add pre commit (jedyeo)
7a4de6f  Init tests (jedyeo)
df6c749  temporary static methods (jedyeo)
6849b0c  Format code with black and isort (deepsource-autofix[bot])
4e927ab  temporary static methods (jedyeo)
93ca7fc  ds changes (jedyeo)
ea2c766  ds changes (jedyeo)
714b33d  changed deepsource (jedyeo)
5b4db6e  lint (jedyeo)
ccd0d04  add black back (jedyeo)
c8f3c60  Format code with black (deepsource-autofix[bot])
c2726b2  update toml (jedyeo)
7f96273  ds (jedyeo)
3d4ad77  added .ini for pytest (jedyeo)
b293181  init (jedyeo)
02eec0a  Codecov files
49138c7  Linting files
6135c72  Dev requirements
f54ece8  Environment config
b9b540e  Added setup.py
afd7290  env yaml (jedyeo)
5a4a56e  Reformatted docstrings
e504e32  Format code with black (deepsource-autofix[bot])
a282ffc  Testing workspace changes
43a440e  Merge branch 'itr_ref_docstrings' of github.com:compSPI/reconstructSP…
1a0af09  Fixed test docstrings
3d1be9b  Format code with black (deepsource-autofix[bot])
5268b73  Edited module name
3e638af  Merge branch 'itr_ref_docstrings' of github.com:compSPI/reconstructSP…
984af8a  Linting fixes
eecb97e  Format code with black (deepsource-autofix[bot])
f7f98b4  Linting fixes
1a79652  Linting fixes
d2faa25  Fixed imports
a3eedd6  Readded simSPI import
c898048  Fixed function reference
71356a5  Fixed test_build_ctf_array
0522a91  tets (jedyeo)
ee0ef8f  Format code with black (deepsource-autofix[bot])
7141f37  Reworked tests to use consistent array sizing
5d0d1a1  Tab snuck in
7289762  Format code with black (deepsource-autofix[bot])
f1bd414  Comment change to force checks
756adbe  Fixes to tests
40a6fb0  Refactored tests a bit
5c74fb3  Fix to test_split_array
c210d36  Fix to test_generate_xy_plane
8753a1f  Docstring fix
1ce2209  Deepsource tricked me
0673ebb  Dosctring to force checks
ae8c96f  Fixture fix:
16a0b90  Format code with black (deepsource-autofix[bot])
3c3595c  Format code with black (deepsource-autofix[bot])
a5bc9ee  Format code with black (deepsource-autofix[bot])
b76ec98  Format code with black (deepsource-autofix[bot])
b58ba2f  Forcing checks again...
1fb62fa  Merge branch 'itr_ref_docstrings' of github.com:compSPI/reconstructSP…
98a879c  Format code with black (deepsource-autofix[bot])
a1dc0cd  Better ctf_info for tests
5818e9e  Merge branch 'itr_ref_docstrings' of github.com:compSPI/reconstructSP…
4bcf15a  Format code with black (deepsource-autofix[bot])
ecee780  Strings not variables...
f9c4196  Merging
0ac923c  Format code with black (deepsource-autofix[bot])
cb7b069  Fix
0fb3b1b  Fix
966ed3c  test fixes
ccbd07b  Format code with black (deepsource-autofix[bot])
4858d54  Test fixes
b7b8688  Merge branch 'itr_ref_docstrings' of github.com:compSPI/reconstructSP…
bc88498  Force checks
ab014c5  Format code with black (deepsource-autofix[bot])
656b5d2  test fixes
4a86e20  Merge branch 'itr_ref_docstrings' of github.com:compSPI/reconstructSP…
b9e72ad  Forcing checks
5577ded  Reshape fix
bb6e258  Shape fix
81e5faa  Structure changes
a2f4b52  Test framework adjustment
17f94ca  Import change
9f7ee21  Small typo fixes (thisFreya)
0d418b9  Added a slightly more comprehensive split test (thisFreya)
e6410cd  Format code with black (deepsource-autofix[bot])
511d105  Added back references
b575e13  Added the big method (thisFreya)
41a5807  Format code with black (deepsource-autofix[bot])
22efe76  Pre-commit and black changes (thisFreya)
5a22371  Merging (thisFreya)
bda6275  Deepsource (thisFreya)
976be19  Expanded split_array to allow lists and tuples (thisFreya)
00e7281  Format code with black (deepsource-autofix[bot])
ff36588  Deepsource fix (thisFreya)
8de2090  Merge branch 'itr_ref_docstrings' of https://github.com/compSPI/recon… (thisFreya)
b8e37ef  Fixed split_array expansion (thisFreya)
abfd9aa  Fix to fsc test (thisFreya)
e1cd120  Data type fix (thisFreya)
83d9cf7  Concatenate fix (thisFreya)
f8dc07a  Change to insert slice shape
352a989  Format code with black (deepsource-autofix[bot])
d381e3c  fix to numpy array initialization
56f02a1  Merging
294d4b5  Format code with black (deepsource-autofix[bot])
3819306  num rotations change
2b25434  Merge branch 'itr_ref_docstrings' of github.com:compSPI/reconstructSP…
f28ccfa  Format code with black (deepsource-autofix[bot])
c053980  fsc fix
89cdaa8  Merge branch 'itr_ref_docstrings' of github.com:compSPI/reconstructSP…
7083a54  Forgot what a dot product did for a second there
2a1eda7  Updated iterative refinement test to reflect fsc fix
cba9320  Removed superfluous imports
d40e6ae  Updated docstrings, fixed splitting arrs
6c6e1f0  Split n into n_pix, n_particles
1e11a9d  Format code with black (deepsource-autofix[bot])
d621fb1  Added back numba
b48b5ef  Merge branch 'itr_ref_docstrings' of github.com:compSPI/reconstructSP…
496f88a  Force checks
8aaff78  n_pix in fft docstrings
33e3229  Normalizing half maps
01ceaf5  Format code with black (deepsource-autofix[bot])
20f2232  Removed comments, refactored big method, fixed bayesian weights shapes
e76d8a2  Format code with black (deepsource-autofix[bot])
6e7ed32  Infrastructure files
6a9ee69  Infrastructure files
fd9cefa  Missed a name
bb8b274  Removed last few comments, will re-add if needed
66c0ef3  Format code with black (deepsource-autofix[bot])
b9e8c10  Merging changes
579f716  Fixed variable name
6fe971c  Merge branch 'itr_ref_docstrings' of github.com:compSPI/reconstructSP…
55ee718  Changed codecov parameters
0d18f66  Removed __init__.py files
7f81456  Revert "Removed __init__.py files"
887bff7  Merging
e583a54  Revert "Changed codecov parameters"
aa607f3  Merge branch 'reconstructSPI_infrastructure' of github.com:compSPI/re…
15840cf  Refactoring library format
2f91bc6  Directory fixes
bccb41c  Revert "Refactoring library format"
2c42e8d  Revert "Directory fixes"
9126f86  Removed dependencies
3330c11  Testing something
aaf1078  Merge branch 'reconstructSPI_infrastructure' of github.com:compSPI/re…
ef10249  Testing things
d15bdb6  Merge branch 'reconstructSPI_infrastructure' of github.com:compSPI/re…
8dda440  Force checks
749bcd2  Testing things
2ea9459  Added library requirements
5152547  Added dev branch to branches on which tests will run.
5a4ebb5  Merging changes. Have temporarily added infrastructure branch to work…
9d05c1a  removed itr_ref_docstrings branch checks
New file (346 lines added):

```python
"""Iterative refinement in a Bayesian expectation maximization setting."""

import numpy as np
from compSPI.transforms import do_fft, do_ifft

# Currently only 2D FFTs exist in compSPI.transforms. torch.fft can be used
# for the 3D FFT, converting back to a numpy array afterwards.


def do_iterative_refinement(map_3d_init, particles, ctf_info):
    """Perform iterative refinement in a Bayesian expectation maximization
    setting, i.e. maximum a posteriori estimation.

    Parameters
    ----------
    map_3d_init : np.ndarray
        Initial estimate of the input map.
        Shape (n_pix, n_pix, n_pix).
    particles : np.ndarray
        Particles to be reconstructed.
        Shape (n_particles, n_pix, n_pix).
    ctf_info : list of dict
        CTF parameters (defocus, etc.), one dict per particle.

    Returns
    -------
    map_3d_r_final : np.ndarray
        Final updated map. Shape (n_pix, n_pix, n_pix).
    half_map_3d_r_1 : np.ndarray
        Half map 1. Shape (n_pix, n_pix, n_pix).
    half_map_3d_r_2 : np.ndarray
        Half map 2. Shape (n_pix, n_pix, n_pix).
    fsc_1d : np.ndarray
        Final 1D FSC. Shape (n_pix // 2,).
    """
    n_pix = map_3d_init.shape[0]
    # Suggest n_pix = 32 or 64 to start with; real data is more like 128 or
    # 256. There can be issues with the CTF at small pixel counts and zero
    # padding is needed to avoid artefacts (the CTF not going to zero at the
    # edges, and the sinusoidal CTF rippling too fast). Zero padding can be
    # done for the Fourier convolution (fft on the zero-padded, larger array).

    def do_split(arr):
        """Split an array of particles into two half sets along axis 0."""
        idx_half = arr.shape[0] // 2
        arr_1, arr_2 = arr[:idx_half], arr[idx_half:]
        assert arr_1.shape[0] == arr_2.shape[0]
        return arr_1, arr_2

    # Split the particles into two half sets for statistical validation.
    particles_1, particles_2 = do_split(particles)

    def do_build_ctf(ctf_params):
        """Build a 2D array of the CTF from CTF parameters.

        Parameters
        ----------
        ctf_params : list of dict
            Parameters of the CTFs (defocus, etc.), one dict per particle.

        Returns
        -------
        ctfs : np.ndarray
            Shape (n_ctfs, n_pix, n_pix).
        """
        n_ctfs = len(ctf_params)
        # TODO: see simSPI.transfer
        # https://github.com/compSPI/simSPI/blob/master/simSPI/transfer.py#L57
        ctfs = np.ones((n_ctfs, n_pix, n_pix))  # placeholder
        return ctfs

    ctfs = do_build_ctf(ctf_info)
    ctfs_1, ctfs_2 = do_split(ctfs)

    # Work in Fourier space, so the particles can stay in Fourier space the
    # whole time. They are experimental measurements and are fixed in the
    # algorithm.
    particles_f_1 = do_fft(particles_1)
    particles_f_2 = do_fft(particles_2)

    # Align particles to the 3D volume. Decide on the granularity of
    # rotations, i.e. how finely the rotational SO(3) space is sampled in a
    # grid search. A smarter method is branch and bound; perhaps a grid of
    # slices can be made up front, with norms only computed on a finer grid
    # later, so slices are re-used.

    # def do_adaptive_grid_search(particle, map_3d):
    #     # a la branch and bound
    #     # Not sure exactly how to decide how finely gridded to make it.
    #     # Perhaps heuristics based on how well the signal agrees between
    #     # half_map_1 and half_map_2 (Fourier frequency).

    def grid_SO3_uniform(n_rotations):
        """Uniformly grid (not sample) SO(3).

        Can use an intermediate encoding of SO(3) such as quaternions,
        axis-angle, or Euler angles; the final output is 3x3 rotations.
        """
        # TODO: sample over the sphere at a given granularity.
        # Easy: draw uniform samples of rotations on the sphere; lots of code
        #   for this all over the internet, quick solution in geomstats.
        # Harder: draw samples around some rotation using a ProjectedNormal
        #   distribution (ask Geoff).
        # Unknown difficulty: make a regular grid of SO(3) at a given
        #   granularity. Khanh says this is non-trivial.
        rots = np.ones((n_rotations, 3, 3))  # placeholder
        return rots

    n_rotations = 1000
    rots = grid_SO3_uniform(n_rotations)

    def do_xy0_plane(n_pix):
        """Generate the xy0 plane.

        xy values range over the xy plane; all z values are 0. See how
        meshgrid and the coordinate-generating functions are used in
        https://github.com/geoffwoollard/compSPI/blob/stash_simulate/src/simulate.py#L96
        """
        # TODO: meshgrid
        xy0_plane = np.ones((n_pix**2, 3))  # placeholder
        return xy0_plane

    xy0_plane = do_xy0_plane(n_pix)

    def do_slices(map_3d_f, rots):
        """Generate slice coordinates by rotating the xy0 plane, then
        interpolate values from map_3d_f onto the 3D coordinates.

        See how scipy's map_coordinates is used to interpolate in
        https://github.com/geoffwoollard/compSPI/blob/stash_simulate/src/simulate.py#L111

        Returns
        -------
        slices : np.ndarray
            Slices of map_3d_f. By the Fourier slice theorem, a slice
            corresponds to the Fourier transform of a projection of the
            rotated map_3d_f.
        xyz_rotated : np.ndarray
            Rotated slice coordinates.
        """
        n_rotations = rots.shape[0]
        # TODO: map_coordinates interpolation
        xyz_rotated = np.ones_like(xy0_plane)  # placeholder
        slices = np.random.normal(size=n_rotations * n_pix**2).reshape(
            n_rotations, n_pix, n_pix
        )  # placeholder
        return slices, xyz_rotated

    def do_conv_ctf(projection_f, ctf):
        """Apply the CTF to a projection."""
        # TODO: vectorize and have shapes match
        projection_f_conv_ctf = ctf * projection_f
        return projection_f_conv_ctf

    def do_bayesian_weights(particle, slices):
        """Compute Bayesian weights of a particle against slices under a
        Gaussian white noise model.

        Parameters
        ----------
        particle : np.ndarray
            Shape (n_pix, n_pix). dtype complex64 or complex128.
        slices : np.ndarray
            Shape (n_slices, n_pix, n_pix). dtype complex64 or complex128.

        Returns
        -------
        bayes_factors : np.ndarray
            Shape (n_slices,). dtype float32 or float64.
        """
        n_slices = slices.shape[0]
        particle_l2 = np.linalg.norm(particle, ord="fro") ** 2
        # TODO: check the axes are right; there should be n_slices l2 norms,
        # one for each slice. slices_l2 can be precomputed and kept for all
        # particles if the slices are the same for different particles.
        slices_l2 = np.linalg.norm(slices, ord="fro", axis=(1, 2)) ** 2
        # Correlation per slice: |particle|^2 - particle.dot(a_slice) + |a_slice|^2
        # See Sigworth et al. and Nelson for how to get Bayes factors.
        bayes_factors = np.random.normal(size=n_slices)  # placeholder
        return bayes_factors

    def do_wiener_filter(projection, ctf, small_number=0.01):
        """Deconvolve the CTF from a projection with a Wiener filter.

        The default small_number is a placeholder regularizer to avoid
        division by zero; tune as needed.
        """
        wfilter = ctf / (ctf * ctf + small_number)
        projection_wfilter_f = projection * wfilter
        return projection_wfilter_f

    def do_insert_slice(slice_real, xyz, n_pix):
        """Update a 3D map with values from a slice; requires interpolation
        off the grid.

        See "Insert Fourier slices" in
        https://github.com/geoffwoollard/learn_cryoem_math/blob/master/nb/fourier_slice_2D_3D_with_trilinear.ipynb

        TODO: vectorize so this can take in many slices, i.e. do the
        computation in a vectorized way and return inserted_slices_3d,
        counts_3d of shape (n_slices, n_pix, n_pix, n_pix).

        Parameters
        ----------
        slice_real : np.ndarray
            Shape (n_pix, n_pix). dtype float32 or float64; real, since the
            imaginary and real parts are done separately.
        xyz : np.ndarray
            Shape (n_pix**2, 3).
        n_pix : int
            Pixels per edge of the 3D volume.

        Returns
        -------
        inserted_slice_3d : np.ndarray
        count_3d : np.ndarray
        """
        inserted_slice_3d = np.zeros((n_pix, n_pix, n_pix))
        count_3d = np.zeros((n_pix, n_pix, n_pix))
        # TODO: write insertion code. Use linear interpolation (low order of
        # interpolation kernel) so it is not expensive. Nearest neighbours is
        # cheaper, but we can afford to do better than that.
        return inserted_slice_3d, count_3d

    def do_fsc(map_3d_f_1, map_3d_f_2):
        """Estimate noise from the half maps.

        For now the noise estimate is the FSC between the half maps.
        """
        # TODO: write a fast vectorized fsc from the code snippets in
        # https://github.com/geoffwoollard/learn_cryoem_math/blob/master/nb/fsc.ipynb
        # https://github.com/geoffwoollard/learn_cryoem_math/blob/master/nb/mFSC.ipynb
        # https://github.com/geoffwoollard/learn_cryoem_math/blob/master/nb/guinier_fsc_sharpen.ipynb
        n_pix = map_3d_f_1.shape[0]
        fsc_1d = np.ones(n_pix // 2)  # placeholder
        return fsc_1d

    def do_expand_1d_3d(arr_1d):
        """Expand a 1D radial profile into spherical shells of a 3D array."""
        n_pix = arr_1d.shape[0] * 2
        arr_3d = np.ones((n_pix, n_pix, n_pix))  # placeholder
        # TODO: expand arr_1d (e.g. fsc_1d) to 3D (spherical shells)
        return arr_3d

    max_n_iters = 7  # in practice programs use 3-10 iterations

    half_map_3d_r_1, half_map_3d_r_2 = map_3d_init, map_3d_init.copy()
    # The half maps should diverge because different particles average in.

    for iteration in range(max_n_iters):
        half_map_3d_f_1 = do_fft(half_map_3d_r_1, d=3)
        half_map_3d_f_2 = do_fft(half_map_3d_r_2, d=3)

        # Here rots are the same for the half maps, but they could be
        # different in general.
        slices_1, xyz_rotated = do_slices(half_map_3d_f_1, rots)
        slices_2, xyz_rotated = do_slices(half_map_3d_f_2, rots)

        # initialize
        map_3d_f_updated_1 = np.zeros_like(half_map_3d_f_1)  # complex
        map_3d_f_updated_2 = np.zeros_like(half_map_3d_f_2)  # complex
        counts_3d_updated_1 = np.zeros_like(half_map_3d_r_1)  # float/real
        counts_3d_updated_2 = np.zeros_like(half_map_3d_r_2)  # float/real

        for particle_idx in range(particles_f_1.shape[0]):
            ctf_1 = ctfs_1[particle_idx]
            ctf_2 = ctfs_2[particle_idx]
            particle_f_1 = particles_f_1[particle_idx]
            particle_f_2 = particles_f_2[particle_idx]

            particle_f_deconv_1 = do_wiener_filter(particle_f_1, ctf_1)
            particle_f_deconv_2 = do_wiener_filter(particle_f_2, ctf_2)

            # All slices get convolved with the CTF for the particle.
            slices_conv_ctfs_1 = do_conv_ctf(slices_1, ctf_1)
            slices_conv_ctfs_2 = do_conv_ctf(slices_2, ctf_2)

            bayes_factors_1 = do_bayesian_weights(particle_f_1, slices_conv_ctfs_1)
            bayes_factors_2 = do_bayesian_weights(particle_f_2, slices_conv_ctfs_2)

            # If do_insert_slice can be vectorized, these loops over slices
            # can be avoided.
            for one_slice_idx in range(bayes_factors_1.shape[0]):
                xyz = xyz_rotated[one_slice_idx]
                inserted_slice_3d_r, count_3d_r = do_insert_slice(
                    particle_f_deconv_1.real, xyz, n_pix
                )
                inserted_slice_3d_i, count_3d_i = do_insert_slice(
                    particle_f_deconv_1.imag, xyz, n_pix
                )
                map_3d_f_updated_1 += inserted_slice_3d_r + 1j * inserted_slice_3d_i
                counts_3d_updated_1 += count_3d_r + count_3d_i

            for one_slice_idx in range(bayes_factors_2.shape[0]):
                xyz = xyz_rotated[one_slice_idx]
                inserted_slice_3d_r, count_3d_r = do_insert_slice(
                    particle_f_deconv_2.real, xyz, n_pix
                )
                inserted_slice_3d_i, count_3d_i = do_insert_slice(
                    particle_f_deconv_2.imag, xyz, n_pix
                )
                map_3d_f_updated_2 += inserted_slice_3d_r + 1j * inserted_slice_3d_i
                counts_3d_updated_2 += count_3d_r + count_3d_i

        # Apply the noise model. half_map_1 and half_map_2 come from doing
        # the above independently; filter by the noise estimate (e.g.
        # multiply both half maps by the FSC).
        fsc_1d = do_fsc(map_3d_f_updated_1, map_3d_f_updated_2)
        fsc_3d = do_expand_1d_3d(fsc_1d)

        # Multiplicative filter on the maps with the FSC. The FSC is 1D, one
        # number per spherical shell; it can be expanded back into a
        # multiplicative filter of the same shape as the maps.
        map_3d_f_filtered_1 = map_3d_f_updated_1 * fsc_3d
        map_3d_f_filtered_2 = map_3d_f_updated_2 * fsc_3d

        # Update for the next iteration (back to real space, since the loop
        # starts from the real-space half maps).
        half_map_3d_f_1 = map_3d_f_filtered_1
        half_map_3d_f_2 = map_3d_f_filtered_2
        half_map_3d_r_1 = do_ifft(map_3d_f_filtered_1)
        half_map_3d_r_2 = do_ifft(map_3d_f_filtered_2)

    # final map
    fsc_1d = do_fsc(half_map_3d_f_1, half_map_3d_f_2)
    fsc_3d = do_expand_1d_3d(fsc_1d)
    map_3d_f_final = ((half_map_3d_f_1 + half_map_3d_f_2) / 2) * fsc_3d
    map_3d_r_final = do_ifft(map_3d_f_final)
    half_map_3d_r_1 = do_ifft(half_map_3d_f_1)
    half_map_3d_r_2 = do_ifft(half_map_3d_f_2)

    return map_3d_r_final, half_map_3d_r_1, half_map_3d_r_2, fsc_1d
```
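The `grid_SO3_uniform` TODO mentions the "easy" option of drawing uniform random rotations. A minimal sketch of that option, using the standard fact that normalized 4D Gaussian samples give unit quaternions uniform over SO(3); the function name `sample_SO3_uniform` and the use of plain numpy (rather than geomstats, which the TODO suggests) are this sketch's assumptions:

```python
import numpy as np


def sample_SO3_uniform(n_rotations, seed=0):
    """Draw uniform random rotations as 3x3 matrices via unit quaternions.

    Hypothetical helper, not part of the PR: a normalized 4D Gaussian sample
    is uniform on the 3-sphere, so the corresponding unit quaternions are
    uniformly distributed over SO(3).
    """
    rng = np.random.default_rng(seed)
    q = rng.normal(size=(n_rotations, 4))
    q /= np.linalg.norm(q, axis=1, keepdims=True)
    w, x, y, z = q.T
    # Standard quaternion -> rotation matrix conversion, vectorized.
    rots = np.stack(
        [
            np.stack([1 - 2 * (y**2 + z**2), 2 * (x * y - z * w), 2 * (x * z + y * w)], axis=-1),
            np.stack([2 * (x * y + z * w), 1 - 2 * (x**2 + z**2), 2 * (y * z - x * w)], axis=-1),
            np.stack([2 * (x * z - y * w), 2 * (y * z + x * w), 1 - 2 * (x**2 + y**2)], axis=-1),
        ],
        axis=1,
    )
    return rots


rots = sample_SO3_uniform(1000)
# Each matrix should be orthogonal with determinant +1.
assert np.allclose(rots @ rots.transpose(0, 2, 1), np.eye(3), atol=1e-8)
assert np.allclose(np.linalg.det(rots), 1.0)
```

This gives random samples rather than the regular grid the TODO ultimately asks for, but it has the right output shape `(n_rotations, 3, 3)` to slot into the placeholder.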
New file (9 lines added):

```python
import numpy as np

# Import path is assumed here; the PR does not show the module name.
from iterative_refinement import do_iterative_refinement


def test_do_iterative_refinement():
    n_pix = 64
    n_particles = 2  # do_split needs at least two particles along axis 0
    map_3d_init = np.random.normal(size=n_pix**3).reshape(n_pix, n_pix, n_pix)
    particles = np.random.normal(size=n_particles * n_pix**2).reshape(
        n_particles, n_pix, n_pix
    )
    ctf_info = [{} for _ in range(n_particles)]  # placeholder CTF params
    map_3d_r_final, half_map_3d_r_1, half_map_3d_r_2, fsc_1d = do_iterative_refinement(
        map_3d_init, particles, ctf_info
    )
    assert map_3d_r_final.shape == (n_pix, n_pix, n_pix)
    assert fsc_1d.dtype == np.float32
    assert half_map_3d_r_1.dtype == np.float32
    assert half_map_3d_r_2.dtype == np.float32
```
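The `do_fsc` placeholder in the module returns all ones; the notebooks it links compute the Fourier shell correlation by radial binning. A minimal, unvectorized sketch of that computation, assuming cubic maps already in Fourier space with the zero frequency centered (function name and integer-radius shells are this sketch's choices, not the PR's):

```python
import numpy as np


def fsc_radial(map_f_1, map_f_2):
    """Fourier shell correlation between two centered 3D Fourier maps,
    one value per integer-radius shell. Hypothetical sketch, not the PR's
    implementation (which is left as a TODO)."""
    n_pix = map_f_1.shape[0]
    coords = np.arange(n_pix) - n_pix // 2
    x, y, z = np.meshgrid(coords, coords, coords, indexing="ij")
    # Integer radial index of each voxel, measured from the volume center.
    r = np.sqrt(x**2 + y**2 + z**2).astype(int).ravel()
    f1, f2 = map_f_1.ravel(), map_f_2.ravel()
    n_shells = n_pix // 2
    fsc = np.zeros(n_shells)
    for shell in range(n_shells):
        mask = r == shell
        num = np.sum(f1[mask] * np.conj(f2[mask]))
        den = np.sqrt(np.sum(np.abs(f1[mask]) ** 2) * np.sum(np.abs(f2[mask]) ** 2))
        fsc[shell] = np.real(num) / den
    return fsc


# Identical maps correlate perfectly in every shell.
m = np.fft.fftshift(np.fft.fftn(np.random.default_rng(0).normal(size=(16, 16, 16))))
assert np.allclose(fsc_radial(m, m), 1.0)
```

The per-shell loop keeps the logic readable; the TODO's "fast vectorized fsc" would replace it with `np.bincount`-style accumulation over the radial index.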
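The `do_expand_1d_3d` placeholder expands a 1D per-shell curve (such as the FSC) into spherical shells of a 3D volume. A minimal sketch under the same centered, integer-radius-shell convention as above; clipping corner radii to the outermost shell is this sketch's choice:

```python
import numpy as np


def expand_1d_to_3d(arr_1d):
    """Broadcast a per-shell 1D profile (e.g. an FSC curve) into a centered
    3D volume: every voxel takes the value of its integer-radius shell.
    Hypothetical sketch of the do_expand_1d_3d TODO."""
    n_shells = arr_1d.shape[0]
    n_pix = n_shells * 2
    coords = np.arange(n_pix) - n_pix // 2
    x, y, z = np.meshgrid(coords, coords, coords, indexing="ij")
    r = np.sqrt(x**2 + y**2 + z**2).astype(int)
    # Voxels beyond the last shell (volume corners) take the outermost value.
    r = np.clip(r, 0, n_shells - 1)
    return arr_1d[r]


profile = np.linspace(1.0, 0.0, 8)  # e.g. an FSC falling off with frequency
vol = expand_1d_to_3d(profile)  # shape (16, 16, 16)
assert vol.shape == (16, 16, 16)
assert vol[8, 8, 8] == profile[0]  # the center voxel is shell 0
```

The resulting volume can be used directly as the multiplicative filter applied to the half maps.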