Ab-initio Pipeline Demonstration
This tutorial demonstrates some key components of an ab-initio reconstruction pipeline using synthetic data generated with ASPIRE’s ``Simulation`` class of objects.
Download an Example Volume
We begin by downloading a high resolution volume map of the 80S Ribosome, sourced from EMDB: https://www.ebi.ac.uk/emdb/EMD-2660. This is one of several volume maps that can be downloaded with ASPIRE’s data downloading utility by using the following import.
from aspire.downloader import emdb_2660
# Load the 80S Ribosome as a ``Volume`` object.
original_vol = emdb_2660()
# Downsample the volume
res = 41
vol = original_vol.downsample(res)
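The downsampled box size and pixel size can be checked directly. This is a small, hedged sketch; the ``resolution`` and ``pixel_size`` attribute names are assumed from ASPIRE's ``Volume`` API and their use elsewhere in this tutorial.
# Peek at the downsampled volume's box size and pixel size (angstroms).
print(f"Volume box size: {vol.resolution}, pixel size: {vol.pixel_size} A")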
Note
A ``Volume`` can be saved using the ``Volume.save()`` method as follows:
fn = f"downsampled_80s_ribosome_size{res}.mrc"
vol.save(fn, overwrite=True)
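A saved map can later be read back in; a minimal sketch, assuming ``Volume.load`` accepts an MRC file path:
from aspire.volume import Volume
# Reload the downsampled map we just wrote to disk.
vol_reloaded = Volume.load(fn)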
Create a Simulation Source
ASPIRE’s ``Simulation`` class can be used to generate a synthetic dataset of projection images. A ``Simulation`` object produces random projections of a supplied ``Volume`` and applies noise and CTF filters. The resulting stack of 2D images is stored in an ``Image`` object.
CTF Filters
Let’s start by creating CTF filters. The ``operators`` package contains a collection of filter classes that can be supplied to a ``Simulation``. We use ``RadialCTFFilter`` to generate a set of CTF filters with various defocus values.
# Create CTF filters
import numpy as np
from aspire.operators import RadialCTFFilter
# Radial CTF Filter
defocus_min = 15000 # unit is angstroms
defocus_max = 25000
defocus_ct = 7
ctf_filters = [
    RadialCTFFilter(pixel_size=vol.pixel_size, defocus=d)
    for d in np.linspace(defocus_min, defocus_max, defocus_ct)
]
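As a quick sanity check, a filter can be evaluated on the image grid and viewed. This sketch assumes ``Filter.evaluate_grid(res)`` returns the CTF sampled on a ``res x res`` frequency grid; adjust for your ASPIRE version if the API differs.
from aspire.image import Image
# Evaluate the first CTF filter on the grid and display it as an image.
ctf_grid = ctf_filters[0].evaluate_grid(res)
Image(ctf_grid[np.newaxis]).show()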
Initialize Simulation Object
We feed our ``Volume`` and filters into ``Simulation`` to generate the dataset of images. When controlled white Gaussian noise is desired, ``WhiteNoiseAdder.from_snr()`` can be used to generate a simulation dataset targeting a specific SNR.
Alternatively, users can bring their own images using an ``ArrayImageSource``, or define their own custom noise functions via ``Simulation(..., noise_adder=CustomNoiseAdder(...))``. Examples can be found in tutorials/class_averaging.py and experiments/simulated_abinitio_pipeline.py.
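For illustration, here is a hedged sketch of the custom-noise route mentioned above, assuming ``CustomNoiseAdder`` accepts a ``noise_filter`` argument and that ``ScalarFilter`` describes a flat (white) noise power; see the linked examples for the exact usage.
from aspire.noise import CustomNoiseAdder
from aspire.operators import ScalarFilter
from aspire.source import Simulation
# A small Simulation corrupted by custom white noise of fixed variance.
custom_noise = CustomNoiseAdder(noise_filter=ScalarFilter(dim=2, value=0.05))
sim_custom = Simulation(n=10, vols=vol, noise_adder=custom_noise)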
from aspire.noise import WhiteNoiseAdder
from aspire.source import Simulation
# set parameters
n_imgs = 2500
# SNR target for white gaussian noise.
snr = 0.5
Note
The SNR value was chosen based on the other parameters for this quick tutorial, and can be changed to adjust the power of the additive noise.
# For this ``Simulation`` we set all 2D offset vectors to zero,
# but by default offset vectors will be randomly distributed.
src = Simulation(
    n=n_imgs,  # number of projections
    vols=vol,  # volume source
    offsets=0,  # zero 2D offsets (default is random shifts)
    unique_filters=ctf_filters,
    noise_adder=WhiteNoiseAdder.from_snr(snr=snr),  # desired SNR
)
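As a sanity check on the requested SNR, the white noise variance of the generated images can be estimated empirically. A hedged sketch, assuming ``WhiteNoiseEstimator`` from ``aspire.noise`` exposes an ``estimate()`` method returning the noise variance:
from aspire.noise import WhiteNoiseEstimator
# Estimate the per-pixel noise variance of the simulated images.
noise_estimator = WhiteNoiseEstimator(src)
print(f"Estimated noise variance: {noise_estimator.estimate()}")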
Several Views of the Projection Images
We can access several views of the projection images.
# with no corruption applied
src.projections[0:10].show()
# with CTF applied, but no noise corruption
src.clean_images[0:10].show()
# with noise and CTF corruption
src.images[0:10].show()
CTF Correction
We apply ``phase_flip()`` to correct for CTF effects.
src = src.phase_flip()
Cache
We apply ``cache()`` to store the results of the ``ImageSource`` pipeline up to this point. This is optional, but can provide a performance benefit on machines with adequate memory.
src = src.cache()
src.images[0:10].show()
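Optionally, the corrupted and phase-flipped stack can be written to disk for use with other tools. A hedged sketch, assuming ``ImageSource.save`` writes a STAR file with accompanying MRCS stacks; the exact signature may vary between ASPIRE versions, and the filename is only an example.
# Write the current state of the source out as a STAR file plus image stacks.
src.save("simulated_particles.star", overwrite=True)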
Class Averaging
We use the ``RIRClass2D`` object to classify the images via the rotationally invariant representation (RIR) algorithm. Class selection is customizable. The classification module also includes a set of protocols for selecting the set of images to be used after classification. Here we’re using the simplest, ``DebugClassAvgSource``, which internally uses the ``TopClassSelector`` to select the first ``n_classes`` images from the source. In practice, the selection is done by sorting class averages based on some configurable notion of quality (contrast, neighbor distance, etc.).
from aspire.classification import RIRClass2D
# set parameters
n_classes = 200
n_nbor = 6
# We will customize our class averaging source. Note that the
# ``fspca_components`` and ``bispectrum_components`` were selected for
# this small tutorial.
rir = RIRClass2D(
    src,
    fspca_components=40,
    bispectrum_components=30,
    n_nbor=n_nbor,
)
from aspire.denoising import DebugClassAvgSource
avgs = DebugClassAvgSource(
    src=src,
    classifier=rir,
)
# We'll continue our pipeline using only the first ``n_classes`` from
# ``avgs``. The ``cache()`` call is used here to precompute results
# for the ``:n_classes`` slice. This avoids recomputing the same
# images twice: once when peeking in the next cell and again when the
# ``CLSyncVoting`` algorithm requests them. Outside of demonstration
# purposes, where we repeatedly peek at results from various stages,
# such caching can be dropped, allowing for lazier evaluation.
avgs = avgs[:n_classes].cache()
View the Class Averages
# Show class averages
avgs.images[0:10].show()
# Show original images corresponding to those classes. This 1:1
# comparison is only expected to work because we used
# ``TopClassSelector`` to select the first ``n_classes`` classes in order.
src.images[0:10].show()
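For a closer look, a single class average can be compared side by side with its noisy source image using matplotlib; this sketch assumes ``Image.asnumpy()`` returns the underlying ``(n, L, L)`` array.
import matplotlib.pyplot as plt
# Plot noisy image 0 next to class average 0.
fig, axes = plt.subplots(1, 2, figsize=(6, 3))
axes[0].imshow(src.images[0].asnumpy()[0], cmap="gray")
axes[0].set_title("Noisy image 0")
axes[1].imshow(avgs.images[0].asnumpy()[0], cmap="gray")
axes[1].set_title("Class average 0")
plt.show()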
Orientation Estimation
We create an ``OrientedSource``, which consumes an ``ImageSource`` object and an orientation estimator, and returns a new source which lazily estimates orientations. In this case we supply ``avgs`` for our source and a ``CLSyncVoting`` class instance for our orientation estimator. The ``CLSyncVoting`` algorithm employs a common-lines method with synchronization and voting.
from aspire.abinitio import CLSyncVoting
from aspire.source import OrientedSource
# Stash true rotations for later comparison
true_rotations = src.rotations[:n_classes]
# For this low resolution example we will customize the ``CLSyncVoting``
# instance to use fewer theta points, ``n_theta``, than the default value of 360.
orient_est = CLSyncVoting(avgs, n_theta=72)
# Instantiate an ``OrientedSource``.
oriented_src = OrientedSource(avgs, orient_est)
Mean Error of Estimated Rotations
ASPIRE has the built-in utility function ``mean_aligned_angular_distance``, which globally aligns the estimated rotations to the true rotations and computes the mean angular distance (in degrees).
from aspire.utils import mean_aligned_angular_distance
# Compare with known true rotations
mean_ang_dist = mean_aligned_angular_distance(oriented_src.rotations, true_rotations)
print(f"Mean aligned angular distance: {mean_ang_dist} degrees")
Mean aligned angular distance: 13.057247876877636 degrees
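For intuition, the angular distance between two rotation matrices R1 and R2 is the rotation angle of R1 @ R2.T, i.e. arccos((trace(R1 @ R2.T) - 1) / 2). The plain-NumPy sketch below illustrates that formula for a single pair; note it applies no global alignment, so its value is not directly comparable to the aligned mean reported above.
# Angular distance (degrees) between two 3x3 rotation matrices.
def angular_distance_deg(r1, r2):
    cos_theta = (np.trace(r1 @ r2.T) - 1) / 2
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

print(angular_distance_deg(oriented_src.rotations[0], true_rotations[0]))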
Volume Reconstruction
Now that we have our class averages and rotation estimates, we can estimate the mean volume by supplying the oriented source (and optionally a basis for back projection) to a ``MeanEstimator``.
from aspire.reconstruction import MeanEstimator
# Setup an estimator to perform the back projection.
estimator = MeanEstimator(oriented_src)
# Perform the estimation and save the volume.
estimated_volume = estimator.estimate()
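The reconstructed map can be saved just like the reference volume earlier, e.g. for inspection in external visualization tools; the filename below is only an example.
# Save the estimated volume to an MRC file.
estimated_volume.save("estimated_80s_ribosome.mrc", overwrite=True)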
Comparison of Estimated Volume with Source Volume
To get a visual confirmation that our results are sane, we rotate the estimated volume by the estimated rotations and project along the z-axis. These estimated projections should align with the original projection images.
# Get the first 10 projections from the estimated volume using the
# estimated orientations. Recall that ``project`` returns an
# ``Image`` instance, which we can peek at with ``show``.
projections_est = estimated_volume.project(oriented_src.rotations[0:10])
# We view the first 10 projections of the estimated volume.
projections_est.show()
# For comparison, we view the first 10 source projections.
src.projections[0:10].show()
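As a rough numerical sanity check (a hedged sketch, not a formal resolution metric), each estimated projection can be correlated with the corresponding clean projection; values well above zero indicate the orientations and reconstruction are broadly consistent. ``Image.asnumpy()`` is assumed to return the underlying array stack.
# Correlate estimated projections with the corresponding clean projections.
est = projections_est.asnumpy()
ref = src.projections[0:10].asnumpy()
for i in range(10):
    corr = np.corrcoef(est[i].ravel(), ref[i].ravel())[0, 1]
    print(f"Projection {i}: correlation {corr:.3f}")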
Total running time of the script: (1 minute 8.102 seconds)