
Commit eff9b98

add initial materials as used in bapython
1 parent d0c824a commit eff9b98

8 files changed

Lines changed: 4661 additions & 0 deletions

notebooks/assets/arrays.png (33.4 KB)

notebooks/assets/bitdepth.png (360 KB)

notebooks/assets/digitalimage.png (264 KB)

notebooks/assets/filters.png (1.57 MB)

notebooks/images-in-python.ipynb

Lines changed: 970 additions & 0 deletions
Large diffs are not rendered by default.
Lines changed: 353 additions & 0 deletions
@@ -0,0 +1,353 @@
1+
{
2+
"cells": [
3+
{
4+
"cell_type": "markdown",
5+
"metadata": {
6+
"slideshow": {
7+
"slide_type": "slide"
8+
}
9+
},
10+
"source": [
11+
"# Image Processing in Python\n",
12+
"\n",
13+
"**Part of the IAFIG-RMS *Python for Bioimage Analysis* Course.**\n",
14+
"\n",
15+
"*Dr Chas Nelson*\n",
16+
"\n",
17+
"2019-12-09 1300--1430"
18+
]
19+
},
20+
{
21+
"cell_type": "markdown",
22+
"metadata": {
23+
"slideshow": {
24+
"slide_type": "slide"
25+
}
26+
},
27+
"source": [
28+
"## Aim\n",
29+
"\n",
30+
"To revise key tools of image processing and carry out these operations in Python."
31+
]
32+
},
33+
{
34+
"cell_type": "markdown",
35+
"metadata": {
36+
"slideshow": {
37+
"slide_type": "subslide"
38+
}
39+
},
40+
"source": [
41+
"## ILOs\n",
42+
"\n",
43+
"* Appreciate the capabilities of `scikit-image` for image processing in a Python environment\n",
44+
"* Apply known image processing techniques (e.g. smoothing) in a Python environment\n",
45+
"* Recognise additional image processing techniques (e.g. deconvolution) that are possible in a Python environment\n",
46+
"* Relate global grayscale thresholding and the logical array to segmentation and binary images"
47+
]
48+
},
49+
{
50+
"cell_type": "markdown",
51+
"metadata": {},
52+
"source": [
53+
"## Imports"
54+
]
55+
},
56+
{
57+
"cell_type": "code",
58+
"execution_count": null,
59+
"metadata": {},
60+
"outputs": [],
61+
"source": [
62+
"%matplotlib widget\n",
63+
"import numpy as np\n",
64+
"import matplotlib.pyplot as plt\n",
65+
"import seaborn as sns; sns.set()\n",
66+
"import ipywidgets as widgets\n",
67+
"import IPython.display as ipyd\n",
68+
"from skimage import io"
69+
]
70+
},
71+
{
72+
"cell_type": "markdown",
73+
"metadata": {},
74+
"source": [
75+
"## Data\n",
76+
"\n",
77+
"The image we will use for the rest of this tutorial is from the Broad Bioimage Benchmark Collection data set BBBC0034v1 (https://data.broadinstitute.org/bbbc/; Thirstrup et al. 2018).\n",
78+
"\n",
79+
"See https://data.broadinstitute.org/bbbc/BBBC034/ for the full description; however, the key points are:\n",
80+
"\n",
81+
"* $1024 \\times 1024 \\times 52$ pixels\n",
82+
"* $65 \\times 65 \\times 290$ nm/pixel\n",
83+
"* 4 channels (each stored as separate files):\n",
84+
" * Cell membrane label (C=0)\n",
85+
" * Actin label (C=1)\n",
86+
" * DNA label (C=2)\n",
87+
" * Brightfield image (C=3)\n",
88+
" \n",
89+
"The cell below can be run to create a local link to the data that we downloaded in the previous session. You only need to run this cell once; afterwards you may comment it out."
90+
]
91+
},
92+
{
93+
"cell_type": "code",
94+
"execution_count": null,
95+
"metadata": {},
96+
"outputs": [],
97+
"source": [
98+
"import os\n",
99+
"\n",
100+
"# Only create the link if it does not already exist (os.symlink raises FileExistsError otherwise)\n",
"if not os.path.exists('./assets/bbbc034v1'):\n",
"    os.symlink('../01_images-in-python/assets/bbbc034v1','./assets/bbbc034v1')"
101+
]
102+
},
103+
{
104+
"cell_type": "markdown",
105+
"metadata": {},
106+
"source": [
107+
"## Contrast and Histogram Equalisation\n",
108+
"\n",
109+
"* As previously mentioned, image data may not spread across the whole bit-depth (`dtype`) of an image (array).\n",
110+
"* The submodule `skimage.exposure` provides a range of functions for spreading an image's intensity over the full range.\n",
111+
"* The simplest approach to this is to rescale the intensity levels."
112+
]
113+
},
114+
{
115+
"cell_type": "code",
116+
"execution_count": null,
117+
"metadata": {},
118+
"outputs": [],
119+
"source": [
120+
"# Read a multidimensional TIF file, in this case a single channel with multiple z-slices.\n",
121+
"myStack = io.imread('./assets/bbbc034v1/AICS_12_134_C=1.tif')\n",
122+
"\n",
123+
"# Metadata for later use\n",
124+
"x_pixel_size = 65 # nm\n",
125+
"y_pixel_size = 65 # nm\n",
126+
"z_pixel_size = 290 # nm\n",
127+
"\n",
128+
"# Take single slice\n",
129+
"mySlice = myStack[26,:,:]"
130+
]
131+
},
132+
{
133+
"cell_type": "markdown",
134+
"metadata": {},
135+
"source": [
136+
"<div style=\"background-color:#abd9e9; border-radius: 5px; padding: 10pt\"><strong>Task:</strong> Create a new cell below and use the <a href='https://scikit-image.org/docs/stable/api/skimage.exposure.html#skimage.exposure.rescale_intensity'><code>skimage.exposure.rescale_intensity()</code></a> function to rescale <code>mySlice</code> from 16-bit (assume it uses the full range) to 8-bit values. Check that the NumPy array <code>dtype</code> is correct. Plot the two images side by side and their histograms beneath.</div>"
137+
]
138+
},
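One possible sketch, using a synthetic 16-bit image in place of `mySlice` (an assumption so the snippet is self-contained): `rescale_intensity` accepts `dtype` names for `in_range` and `out_range`.

```python
import numpy as np
from skimage import exposure

# Synthetic 16-bit stand-in for mySlice (the real data comes from the TIF above)
rng = np.random.default_rng(0)
mySlice = rng.integers(0, 2**16, size=(64, 64), dtype=np.uint16)

# Map the full 16-bit input range onto 8-bit output values
mySlice8 = exposure.rescale_intensity(
    mySlice, in_range='uint16', out_range='uint8'
).astype(np.uint8)

print(mySlice8.dtype, mySlice8.min(), mySlice8.max())
```

The explicit `astype(np.uint8)` guards the output `dtype` regardless of scikit-image version.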
139+
{
140+
"cell_type": "markdown",
141+
"metadata": {},
142+
"source": [
143+
"<div style=\"background-color:#abd9e9; border-radius: 5px; padding: 10pt\"><strong>Task:</strong> Now create a new cell below and map the data to the full 16-bit range. Check that the NumPy array <code>dtype</code> is correct. Plot the two images side by side (use a full 16-bit colour mapping) and their histograms beneath.</div>"
144+
]
145+
},
146+
{
147+
"cell_type": "markdown",
148+
"metadata": {},
149+
"source": [
150+
"<div style=\"background-color:#abd9e9; border-radius: 5px; padding: 10pt\"><strong>Task:</strong> Now create a new cell below and, using the code above and the following tutorial, create a figure showing the original image, contrast-stretched image, histogram-equalised image and adaptive histogram-equalised image, all with their histograms. You can find the tutorial at: <a href=\"https://scikit-image.org/docs/stable/auto_examples/color_exposure/plot_equalize.html#sphx-glr-auto-examples-color-exposure-plot-equalize-py\">https://scikit-image.org/docs/stable/auto_examples/color_exposure/plot_equalize.html#sphx-glr-auto-examples-color-exposure-plot-equalize-py</a>.</div>"
151+
]
152+
},
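The equalisation comparison can be sketched in miniature. The snippet below uses a synthetic low-contrast image in place of `mySlice` (an assumption, so it runs without the course data); the three `skimage.exposure` calls are the ones the linked tutorial uses.

```python
import numpy as np
from skimage import exposure

# Low-contrast synthetic float image standing in for mySlice
rng = np.random.default_rng(1)
img = rng.normal(0.4, 0.05, size=(64, 64)).clip(0, 1)

stretched = exposure.rescale_intensity(img)                   # contrast stretching
equalised = exposure.equalize_hist(img)                       # global histogram equalisation
adaptive = exposure.equalize_adapthist(img, clip_limit=0.03)  # adaptive equalisation (CLAHE)

print(np.ptp(img), np.ptp(stretched))  # stretching widens the intensity range
```

Each result can then be passed to the plotting code from the tutorial to build the comparison figure.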
153+
{
154+
"cell_type": "markdown",
155+
"metadata": {},
156+
"source": [
157+
"## Image Filtering\n",
158+
"\n",
159+
"* Many image processing tasks include filtering, either in the spatial or frequency domain.\n",
160+
"* Again, `scikit-image` has many of these filters built into the submodule `skimage.filters`."
161+
]
162+
},
163+
{
164+
"cell_type": "markdown",
165+
"metadata": {},
166+
"source": [
167+
"<div style=\"background-color:#abd9e9; border-radius: 5px; padding: 10pt\"><strong>Task:</strong> Using the <a href=\"https://scikit-image.org/docs/stable/api/skimage.filters.html\"><code>skimage.filters</code></a> submodule, create a figure with a crop of the original slice (256-by-256 pixels, centred) and the results of applying a Gaussian blur; median filter; unsharp mask; Sobel edge filter; and Meijering neuriteness ridge operator to the cropped region.</div>"
168+
]
169+
},
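The five filters named in the task can be applied like this. The sketch below uses scikit-image's bundled `camera` test image as a stand-in for the microscopy slice (an assumption so it runs without the course data); only the filtering calls matter here, the plotting follows the usual `plt.subplots` pattern.

```python
import numpy as np
from skimage import data, filters

# Bundled sample image as a stand-in for mySlice
img = data.camera().astype(float) / 255.0

# 256-by-256 crop centred on the image
cy, cx = img.shape[0] // 2, img.shape[1] // 2
crop = img[cy - 128:cy + 128, cx - 128:cx + 128]

smoothed = filters.gaussian(crop, sigma=2)                   # Gaussian blur
med      = filters.median(crop)                              # median filter
sharp    = filters.unsharp_mask(crop, radius=2, amount=1.0)  # unsharp mask
edges    = filters.sobel(crop)                               # Sobel edge filter
ridges   = filters.meijering(crop)                           # Meijering neuriteness ridge operator
```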
170+
{
171+
"cell_type": "markdown",
172+
"metadata": {},
173+
"source": [
174+
"## Deconvolution\n",
175+
"\n",
176+
"* One common operation in microscopy that takes place in the frequency domain is deconvolution.\n",
177+
"* `skimage.restoration` has a variety of denoising and deconvolution algorithms, including a Richardson-Lucy implementation."
178+
]
179+
},
180+
{
181+
"cell_type": "code",
182+
"execution_count": null,
183+
"metadata": {},
184+
"outputs": [],
185+
"source": [
186+
"import psf\n",
187+
"from skimage import transform\n",
188+
"from skimage import exposure\n",
189+
"\n",
190+
"sz = 11\n",
191+
"args = {\n",
192+
" 'shape': (sz, sz), # size of calculated psf array in pixels\n",
193+
" 'dims': (x_pixel_size/1000*sz, y_pixel_size/1000*sz), # size of array in microns\n",
194+
" 'em_wavelen': 520.0, # emission wavelength in nanometers\n",
195+
" 'num_aperture': 1.25, # numerical aperture\n",
196+
" 'refr_index': 1.333, # refractive index\n",
197+
" 'magnification': 100, # magnification\n",
198+
"}\n",
199+
"\n",
200+
"gauss = psf.PSF(psf.GAUSSIAN | psf.EMISSION, **args)\n",
201+
"\n",
202+
"psf_ideal = gauss.volume()\n",
203+
"\n",
204+
"# # Display PSF before resizing for anisotropy\n",
205+
"# f, axes = plt.subplots(2,2)\n",
206+
"# (XZ, XY, null, ZY) = axes.flatten()\n",
207+
"# f.suptitle(\"Gaussian PSF\")\n",
208+
"\n",
209+
"# ZY.imshow(psf_ideal[:,sz,:], cmap=\"gray\", interpolation='none')\n",
210+
"# ZY.grid(False)\n",
211+
"# ZY.set_title(\"Central X-slice\")\n",
212+
"\n",
213+
"# XZ.imshow(psf_ideal[:,:,sz].T, cmap=\"gray\", interpolation='none')\n",
214+
"# XZ.grid(False)\n",
215+
"# XZ.set_title(\"Central Y-slice\")\n",
216+
"\n",
217+
"# XY.imshow(psf_ideal[sz,:,:], cmap=\"gray\", interpolation='none')\n",
218+
"# XY.grid(False)\n",
219+
"# XY.set_title(\"Central Z-slice\")\n",
220+
"\n",
221+
"# null.set_axis_off() # clear unused subplot\n",
222+
"\n",
223+
"# plt.tight_layout()\n",
224+
"# plt.show()\n",
225+
"\n",
226+
"# Resize for anisotropy of our image (this is a bit rough and can be done better - but it works for this example)\n",
227+
"psf_rescaled = transform.resize(psf_ideal,\n",
228+
" (np.ceil(psf_ideal.shape[0]*(x_pixel_size/z_pixel_size)),\n",
229+
" psf_ideal.shape[1],\n",
230+
" psf_ideal.shape[2]))\n",
231+
"psf_rescaled = psf_rescaled/psf_rescaled.sum()\n",
232+
"\n",
233+
"# # Display PSF after resizing for anisotropy\n",
234+
"# f, axes = plt.subplots(2,2)\n",
235+
"# (XZ, XY, null, ZY) = axes.flatten()\n",
236+
"# f.suptitle(\"Gaussian PSF\")\n",
237+
"\n",
238+
"# ZY.imshow(psf_rescaled[:,psf_rescaled.shape[1]//2+1,:], cmap=\"gray\", interpolation='none')\n",
239+
"# ZY.grid(False)\n",
240+
"# ZY.set_title(\"Central X-slice\")\n",
241+
"\n",
242+
"# XZ.imshow(psf_rescaled[:,:,psf_rescaled.shape[2]//2+1].T, cmap=\"gray\", interpolation='none')\n",
243+
"# XZ.grid(False)\n",
244+
"# XZ.set_title(\"Central Y-slice\")\n",
245+
"\n",
246+
"# XY.imshow(psf_rescaled[psf_rescaled.shape[0]//2+1,:,:], cmap=\"gray\", interpolation='none')\n",
247+
"# XY.grid(False)\n",
248+
"# XY.set_title(\"Central Z-slice\")\n",
249+
"\n",
250+
"# null.set_axis_off() # clear unused subplot\n",
251+
"\n",
252+
"# plt.tight_layout()\n",
253+
"# plt.show()"
254+
]
255+
},
256+
{
257+
"cell_type": "markdown",
258+
"metadata": {},
259+
"source": [
260+
"<div style=\"background-color:#abd9e9; border-radius: 5px; padding: 10pt\"><strong>Task:</strong> Using the <a href=\"https://scikit-image.org/docs/stable/api/skimage.restoration.html#skimage.restoration.richardson_lucy\"><code>skimage.restoration.richardson_lucy</code></a> function, and in a new cell, deconvolve our 3D image for channel 1 (GFP) with the PSF defined above. Display a region of the central slice before and after deconvolution.</div>"
261+
]
262+
},
263+
{
264+
"cell_type": "markdown",
265+
"metadata": {},
266+
"source": [
267+
"## Segmentation\n",
268+
"\n",
269+
"* Here we must introduce the Python concept of Boolean or logical values, i.e. `True` and `False`.\n",
270+
"* `True` and `False` can be represented in arrays of `dtype` `bool` or as arrays of 1s and 0s.\n",
271+
" * In both cases these are essentially black-and-white images and can be displayed and processed as such.\n",
272+
"* There are two groups of segmentation algorithms available in `scikit-image`:\n",
273+
" 1. Thresholding (found in `skimage.filters`), including Otsu and hysteresis thresholding\n",
274+
" 2. More complex segmentation algorithms, e.g. active contours and the watershed algorithm (found in `skimage.segmentation`)\n",
275+
"\n",
276+
"### Thresholding\n",
277+
"\n",
278+
"* Usually we would combine thresholding with pre-processing, e.g. noise reduction or deconvolution, and post-processing, e.g. morphological operations to fill holes and smooth the resulting segmentation."
279+
]
280+
},
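The logical-array idea in miniature (a tiny illustrative array, not the course data): comparing an image against a threshold yields a Boolean array, which is itself a binary image.

```python
import numpy as np

img = np.array([[10, 200],
                [30, 250]], dtype=np.uint8)

mask = img > 128              # comparison yields a Boolean (logical) array
print(mask.dtype)             # bool
print(mask.astype(np.uint8))  # the same mask as 1s and 0s
print(mask.mean())            # fraction of foreground pixels: 0.5
```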
281+
{
282+
"cell_type": "markdown",
283+
"metadata": {},
284+
"source": [
285+
"<div style=\"background-color:#abd9e9; border-radius: 5px; padding: 10pt\"><strong>Task:</strong> Using the very helpful <a href=\"https://scikit-image.org/docs/stable/api/skimage.filters.html#skimage.filters.try_all_threshold\"><code>skimage.filters.try_all_threshold</code></a> function, see what a single slice of our nuclei-labelled channel looks like after different thresholding approaches.</div>"
286+
]
287+
},
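`try_all_threshold` produces the comparison figure directly; once a method is chosen, it is applied programmatically as below. This sketch uses the bundled `coins` sample image as a stand-in for the nuclei slice (an assumption so it is self-contained) and Otsu's method as the example:

```python
import numpy as np
from skimage import data, filters

img = data.coins()  # bundled sample image standing in for the nuclei channel

# Compute a global Otsu threshold, then apply it to get a binary mask
t = filters.threshold_otsu(img)
binary = img > t
print(t, binary.mean())
```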
288+
{
289+
"cell_type": "markdown",
290+
"metadata": {},
291+
"source": [
292+
"<div style=\"background-color:#abd9e9; border-radius: 5px; padding: 10pt\"><strong>Task:</strong> Pick the best segmentation by thresholding from your results and apply morphological (binary) closing, using <a href=\"https://scikit-image.org/docs/stable/api/skimage.morphology.html#skimage.morphology.binary_closing\"><code>skimage.morphology.binary_closing</code></a>, to fill the small holes for a cleaner segmentation.</div>"
293+
]
294+
},
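Morphological closing (dilation followed by erosion) fills holes smaller than the structuring element. A minimal sketch on a toy mask, assuming a disk of radius 2 as the footprint:

```python
import numpy as np
from skimage import morphology

# A square object with a one-pixel hole in its middle
mask = np.zeros((11, 11), dtype=bool)
mask[2:9, 2:9] = True
mask[5, 5] = False

closed = morphology.binary_closing(mask, morphology.disk(2))
print(mask[5, 5], closed[5, 5])  # the hole is filled by closing
```

On a real segmentation the footprint radius would be chosen relative to the hole sizes seen in the thresholded nuclei.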
295+
{
296+
"cell_type": "markdown",
297+
"metadata": {},
298+
"source": [
299+
"## Extracting Regions of Interest and Features\n",
300+
"\n",
301+
"* Once segmented, we often want to measure a variety of features of our objects."
302+
]
303+
},
304+
{
305+
"cell_type": "markdown",
306+
"metadata": {},
307+
"source": [
308+
"<div style=\"background-color:#abd9e9; border-radius: 5px; padding: 10pt\"><strong>Task: </strong>In a new cell, use <a href=\"https://scikit-image.org/docs/stable/api/skimage.measure.html\"><code>skimage.measure</code></a> to get the centroid, major and minor axis length, orientation, perimeter and intensity range for cells segmented in the previous task. Can you be sure all the detected objects are cells? Can you easily filter your results to only include those you trust?</div>"
309+
]
310+
},
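The usual pattern is `measure.label` to number the connected components, then `measure.regionprops` to query per-object features. A minimal sketch on a synthetic two-object mask (pass `intensity_image=` to `regionprops` to also get intensity statistics):

```python
import numpy as np
from skimage import measure

# Two labelled objects in a synthetic binary segmentation
mask = np.zeros((20, 20), dtype=bool)
mask[2:8, 2:8] = True      # 6x6 object
mask[12:18, 10:18] = True  # 6x8 object

labels = measure.label(mask)
props = measure.regionprops(labels)
for p in props:
    print(p.label, p.area, p.centroid, p.perimeter)
```

Filtering untrusted detections then reduces to a list comprehension over `props`, e.g. keeping only objects above a minimum area.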
311+
{
312+
"cell_type": "markdown",
313+
"metadata": {},
314+
"source": [
315+
"## Summary\n",
316+
"\n",
317+
"* Appreciate the capabilities of `scikit-image` for image processing in a Python environment\n",
318+
"* Apply known image processing techniques (e.g. smoothing) in a Python environment\n",
319+
"* Recognise additional image processing techniques (e.g. deconvolution) that are possible in a Python environment\n",
320+
"* Relate global grayscale thresholding and the logical array to segmentation and binary images\n",
321+
"* Extract features of objects from segmented images"
322+
]
323+
}
324+
],
325+
"metadata": {
326+
"kernelspec": {
327+
"display_name": "Python 3",
328+
"language": "python",
329+
"name": "python3"
330+
},
331+
"language_info": {
332+
"codemirror_mode": {
333+
"name": "ipython",
334+
"version": 3
335+
},
336+
"file_extension": ".py",
337+
"mimetype": "text/x-python",
338+
"name": "python",
339+
"nbconvert_exporter": "python",
340+
"pygments_lexer": "ipython3",
341+
"version": "3.7.4"
342+
},
343+
"widgets": {
344+
"application/vnd.jupyter.widget-state+json": {
345+
"state": {},
346+
"version_major": 2,
347+
"version_minor": 0
348+
}
349+
}
350+
},
351+
"nbformat": 4,
352+
"nbformat_minor": 4
353+
}
