SEM
The SEM dataset consists of training and validation images acquired with a scanning electron microscope (SEM). They were originally used in Buchholz et al. (2019) to showcase CARE denoising. Here, we demonstrate the performance of Noise2Noise on this dataset.
# Imports necessary to execute the code
from pathlib import Path
import matplotlib.pyplot as plt
import tifffile
import numpy as np
from PIL import Image
from careamics import CAREamist
from careamics.config import create_n2n_configuration
from careamics.utils.metrics import scale_invariant_psnr
from careamics_portfolio import PortfolioManager
Import the dataset¶
The dataset can be directly downloaded using the careamics-portfolio package, which uses pooch to download the data.
The N2N SEM dataset consists of EM images with 7 different levels of noise:
- Image 0 is recorded with 0.2 us scan time
- Image 1 is recorded with 0.5 us scan time
- Image 2 is recorded with 1 us scan time
- Image 3 is recorded with 1 us scan time
- Image 4 is recorded with 2.1 us scan time
- Image 5 is recorded with 5.0 us scan time
- Image 6 is recorded with 5.0 us scan time and is the avg. of 4 images
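Noise2Noise works because two acquisitions of the same field of view share the signal but carry independent, zero-mean noise: a network trained with a pixelwise loss against noisy targets converges towards their expectation, which is the clean image. A minimal sketch of this principle on synthetic data (the image and noise model here are stand-ins, not the SEM data):

```python
import numpy as np

rng = np.random.default_rng(0)
clean = rng.uniform(0.2, 0.8, size=(64, 64))  # stand-in "clean" image

# Two independent noisy observations of the same clean image (zero-mean noise),
# playing the roles of the input and the target in Noise2Noise training.
noisy_a = clean + rng.normal(0.0, 0.1, clean.shape)
noisy_b = clean + rng.normal(0.0, 0.1, clean.shape)

# A regressor trained with MSE against noisy targets converges to their mean;
# averaging many independent noisy targets shows that mean is the clean image.
targets = clean + rng.normal(0.0, 0.1, (1000, *clean.shape))
print(np.abs(targets.mean(axis=0) - clean).max())  # small residual
```

This is why the two 1 us acquisitions (images 2 and 3) can serve as input and target for each other without any clean ground truth.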
# instantiate the data portfolio manager
portfolio = PortfolioManager()
# and download the data
root_path = Path("./data")
download = portfolio.denoising.N2N_SEM.download(root_path)
files = [f for f in download if f.endswith("tif")]
files.sort()
Visualize data¶
# load training and target image and show them side by side
train_stack = tifffile.imread(files[1])
# use the 1 us scan time to perform Noise2Noise
train_image = train_stack[2]
train_target = train_stack[3]
# plot the two images and a crop
fig, ax = plt.subplots(2, 2, figsize=(10, 10))
ax[0, 0].imshow(train_image, cmap="gray")
ax[0, 0].set_title("Training image (1 us)")
ax[0, 1].imshow(train_target, cmap="gray")
ax[0, 1].set_title("Target image (1 us)")
x_start, x_end = 600, 850
y_start, y_end = 200, 450
ax[1, 0].imshow(train_image[y_start:y_end, x_start:x_end], cmap="gray")
ax[1, 0].set_title("Training crop (1 us)")
ax[1, 1].imshow(train_target[y_start:y_end, x_start:x_end], cmap="gray")
ax[1, 1].set_title("Target crop (1 us)")
Train with CAREamics¶
The easiest way to use CAREamics is to create a configuration and a CAREamist.
Create configuration¶
The configuration can be built from scratch, giving the user full control over the various parameters available in CAREamics. However, a straightforward way to create a configuration for a particular algorithm is to use one of the convenience functions.
config = create_n2n_configuration(
    experiment_name="sem_n2n",
    data_type="array",
    axes="YX",
    patch_size=(64, 64),
    batch_size=64,
    num_epochs=50,
)
print(config)
{'algorithm_config': {'algorithm': 'n2n', 'loss': 'mae', 'lr_scheduler': {'name': 'ReduceLROnPlateau', 'parameters': {}}, 'model': {'architecture': 'UNet', 'conv_dims': 2, 'depth': 2, 'final_activation': 'None', 'in_channels': 1, 'independent_channels': False, 'n2v2': False, 'num_channels_init': 32, 'num_classes': 1}, 'optimizer': {'name': 'Adam', 'parameters': {'lr': 0.0001}}}, 'data_config': {'axes': 'YX', 'batch_size': 64, 'data_type': 'array', 'patch_size': [64, 64], 'transforms': [{'flip_x': True, 'flip_y': True, 'name': 'XYFlip', 'p': 0.5}, {'name': 'XYRandomRotate90', 'p': 0.5}]}, 'experiment_name': 'sem_n2n', 'training_config': {'checkpoint_callback': {'auto_insert_metric_name': False, 'mode': 'min', 'monitor': 'val_loss', 'save_last': True, 'save_top_k': 3, 'save_weights_only': False, 'verbose': False}, 'num_epochs': 50}, 'version': '0.1.0'}
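The `patch_size=(64, 64)` entry means that training does not see the full image at once: random patches of that size are sampled from the input, with the target cropped at the same position so the pair stays aligned. A minimal sketch of a single random patch draw (CAREamics' actual patch sampling strategy may differ):

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((1024, 1024))   # stand-in for the SEM input image
target = rng.random((1024, 1024))  # stand-in for the SEM target image
patch_size = (64, 64)

# Draw one random top-left corner, then crop input and target at the SAME
# location so the noisy pair remains spatially aligned.
y = rng.integers(0, image.shape[0] - patch_size[0] + 1)
x = rng.integers(0, image.shape[1] - patch_size[1] + 1)
input_patch = image[y:y + patch_size[0], x:x + patch_size[1]]
target_patch = target[y:y + patch_size[0], x:x + patch_size[1]]
print(input_patch.shape)  # (64, 64)
```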
Train¶
A CAREamist can be created from a configuration alone, and then trained on data already loaded in memory.
# instantiate a CAREamist
careamist = CAREamist(source=config)
# train
careamist.train(
    train_source=train_image,
    train_target=train_target,
    val_minimum_split=5,
)
Predict with CAREamics¶
Prediction is done with the same CAREamist used for training. Because the image is large, we predict using tiling.
prediction = careamist.predict(source=train_image, tile_size=(256, 256))
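With `tile_size=(256, 256)`, the image is split into tiles, each tile is predicted separately, and the results are stitched back together (in practice tiled prediction typically also uses some overlap to avoid seams at tile borders). A simplified, non-overlapping sketch of the idea, with an identity function standing in for the trained network:

```python
import numpy as np

image = np.arange(1024 * 1024, dtype=np.float32).reshape(1024, 1024)
tile = 256

def model(patch):
    # Stand-in for the trained network: identity, so stitching is easy to check.
    return patch

# Split into 256x256 tiles, "predict" each, and stitch into the output array.
out = np.empty_like(image)
for y in range(0, image.shape[0], tile):
    for x in range(0, image.shape[1], tile):
        out[y:y + tile, x:x + tile] = model(image[y:y + tile, x:x + tile])

print(np.array_equal(out, image))  # stitched result matches the input
```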
Visualize the prediction¶
# get pseudo ground-truth from the 5 us averaged scan time
pseudo_gt = train_stack[-1]
psnr_noisy = scale_invariant_psnr(pseudo_gt, train_image)
psnr_pred = scale_invariant_psnr(pseudo_gt, prediction[0].squeeze())
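Since Noise2Noise outputs are only defined up to the intensity scale of the training targets, plain PSNR would penalize a prediction that differs from the ground truth by a constant rescaling. A scale-invariant PSNR removes this ambiguity by fitting the prediction to the reference before scoring. The helper below (`si_psnr`, a hypothetical name) is a sketch of the general idea using a least-squares affine fit; CAREamics' exact `scale_invariant_psnr` implementation may differ:

```python
import numpy as np

def si_psnr(gt, pred):
    """Sketch of a scale-invariant PSNR: fit pred to gt with a
    least-squares affine map (scale + offset), then compute PSNR
    using the ground truth's intensity range."""
    gt = np.asarray(gt, dtype=np.float64).ravel()
    pred = np.asarray(pred, dtype=np.float64).ravel()
    a, b = np.polyfit(pred, gt, 1)            # least-squares a*pred + b ≈ gt
    mse = np.mean((gt - (a * pred + b)) ** 2)
    data_range = gt.max() - gt.min()
    return 10 * np.log10(data_range**2 / mse)

rng = np.random.default_rng(0)
gt = rng.random(1000)
pred = gt + rng.normal(0.0, 0.05, 1000)
# Affine changes to the prediction do not change the score:
print(si_psnr(gt, pred), si_psnr(gt, 3 * pred + 1))
```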
# Show the full image and crops
x_start, x_end = 600, 850
y_start, y_end = 200, 450
fig, ax = plt.subplots(2, 3, figsize=(15, 10))
ax[0, 0].imshow(train_image, cmap="gray")
ax[0, 0].title.set_text(f"Training image (1 us)\nPSNR: {psnr_noisy:.2f}")
ax[0, 1].imshow(prediction[0].squeeze(), cmap="gray")
ax[0, 1].title.set_text(f"Prediction (1 us)\nPSNR: {psnr_pred:.2f}")
ax[0, 2].imshow(pseudo_gt, cmap="gray")
ax[0, 2].title.set_text("Pseudo GT (5 us averaged)")
ax[1, 0].imshow(train_image[y_start:y_end, x_start:x_end], cmap="gray")
ax[1, 1].imshow(prediction[0].squeeze()[y_start:y_end, x_start:x_end], cmap="gray")
ax[1, 2].imshow(pseudo_gt[y_start:y_end, x_start:x_end], cmap="gray")
Create cover¶
# create a cover image
x_start, width = 500, 512
y_start, height = 1400, 512
# create image
cover = np.zeros((height, width))
# normalize train and prediction
norm_train = (train_image - train_image.min()) / (train_image.max() - train_image.min())
pred = prediction[0].squeeze()
norm_pred = (pred - pred.min()) / (pred.max() - pred.min())
# fill in halves
cover[:, :width // 2] = norm_train[y_start:y_start + height, x_start:x_start + width // 2]
cover[:, width // 2:] = norm_pred[y_start:y_start + height, x_start + width // 2:x_start + width]
# plot the single image
plt.imshow(cover, cmap="gray")
# save the image (convert to 8-bit first, since PIL cannot build
# a grayscale image directly from a float64 array)
im = Image.fromarray((cover * 255).astype(np.uint8))
im.save("SEM_Noise2Noise.jpeg")