Convenience functions#
As building a full CAREamics configuration requires a complete understanding of the various parameters and some experience with Pydantic, we provide convenience functions that create configurations from only a few parameters related to the algorithm users want to train.
All convenience functions can be found in the careamics.config module. CAREamics currently supports Noise2Void and its variants, CARE, and Noise2Noise.
from careamics.config import (
    create_care_configuration,  # CARE
    create_n2n_configuration,  # Noise2Noise
    create_n2v_configuration,  # Noise2Void
)
Each function does all the heavy lifting to keep the configuration coherent. They share a number of mandatory parameters:
- experiment_name: The name of the experiment, used to differentiate trained models.
- data_type: One of the types supported by CAREamics (array, tiff or custom).
- axes: Axes of the data (e.g. SYX); they can only contain the following letters: STCZYX.
- patch_size: Size of the patches along the spatial dimensions (e.g. [64, 64]).
- batch_size: Batch size to use during training (e.g. 8). This parameter affects the memory footprint on the GPU.
- num_epochs: Number of epochs.
Additional optional parameters can be passed to tweak the configuration.
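For instance, using create_n2v_configuration, a configuration can be created from the mandatory parameters alone (the values here are purely illustrative):

config = create_n2v_configuration(
    experiment_name="n2v_2D",  # name used to identify the trained model
    data_type="tiff",
    axes="YX",
    patch_size=[64, 64],
    batch_size=8,
    num_epochs=20,
)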
Training with channels#
When training with multiple channels, the axes parameter should contain C (e.g. YXC). An error will then be thrown if the optional parameter n_channels (or n_channels_in for CARE and Noise2Noise) is not specified, and likewise if n_channels is specified but C is not in axes.
The correct way is to specify both at the same time.
config = create_n2v_configuration(
    experiment_name="n2v_2D_channels",
    data_type="tiff",
    axes="YXC", # (1)!
    patch_size=[64, 64],
    batch_size=8,
    num_epochs=20,
    n_channels=3, # (2)!
)
- The axes contain the letter C.
- The number of channels is specified.
config = create_care_configuration(
    experiment_name="care_2D_channels",
    data_type="tiff",
    axes="YXC", # (1)!
    patch_size=[64, 64],
    batch_size=8,
    num_epochs=20,
    n_channels_in=3, # (2)!
    n_channels_out=2, # (3)!
)
- The axes contain the letter C.
- The number of channels is specified.
- Depending on the CARE task, you may also need to set n_channels_out (optional).
config = create_n2n_configuration(
    experiment_name="n2n_2D_channels",
    data_type="tiff",
    axes="YXC", # (1)!
    patch_size=[64, 64],
    batch_size=8,
    num_epochs=20,
    n_channels_in=3, # (2)!
    n_channels_out=2, # (3)!
)
- The axes contain the letter C.
- The number of channels is specified.
- Depending on the task, you may also need to set n_channels_out (optional).
Independent channels
By default, the channels are trained independently, meaning that they have no influence on each other. As they might have completely different noise models, this can lead to better results.
However, in some cases you might want to train the channels together to benefit from shared structural information.
To control whether the channels are trained independently, you can use the independent_channels parameter:
config = create_n2v_configuration(
    experiment_name="n2v_2D_mix_channels",
    data_type="tiff",
    axes="YXC", # (1)!
    patch_size=[64, 64],
    batch_size=8,
    num_epochs=20,
    n_channels=3,
    independent_channels=False, # (2)!
)
- As previously, we specify the channels in axes and n_channels.
- This ensures that the channels are trained together!
config = create_care_configuration(
    experiment_name="care_2D_mix_channels",
    data_type="tiff",
    axes="YXC", # (1)!
    patch_size=[64, 64],
    batch_size=8,
    num_epochs=20,
    n_channels_in=3,
    n_channels_out=2,
    independent_channels=False, # (2)!
)
- As previously, we specify the channels in axes and n_channels_in.
- This ensures that the channels are trained together!
config = create_n2n_configuration(
    experiment_name="n2n_2D_mix_channels",
    data_type="tiff",
    axes="YXC", # (1)!
    patch_size=[64, 64],
    batch_size=8,
    num_epochs=20,
    n_channels_in=3,
    independent_channels=False, # (2)!
)
- As previously, we specify the channels in axes and n_channels_in.
- This ensures that the channels are trained together!
Augmentations#
By default, the CAREamics configuration uses augmentations that are specific to the algorithm (e.g. Noise2Void) and compatible with microscopy images (e.g. flips and 90-degree rotations).
Disable augmentations#
However, in certain cases users might want to disable augmentations, for instance when structures are always oriented in the same direction. To do so, there is a single augmentations parameter:
config = create_n2v_configuration(
    experiment_name="n2v_2D_no_aug",
    data_type="tiff",
    axes="YX",
    patch_size=[64, 64],
    batch_size=8,
    num_epochs=20,
    augmentations=[], # (1)!
)
- Augmentations are disabled (but normalization and N2V pixel manipulation will still be added by CAREamics!).
Non-default augmentations#
The default augmentations apply a random flip along X or Y and a random 90-degree rotation (note that, for each patch and each augmentation, there is always a 0.5 probability that the augmentation is not applied). For samples containing objects that are never flipped or rotated (e.g. objects that always have the same orientation, or patterns along a certain direction), it can be beneficial to apply non-default augmentations.
For instance, in a case where the objects can only be flipped horizontally, we would only apply flipping along the X axis and not apply any rotation.
from careamics.config.transformations import XYFlipModel

config = create_n2v_configuration(
    experiment_name="n2v_2D_aug",
    data_type="tiff",
    axes="YX",
    patch_size=[64, 64],
    batch_size=8,
    num_epochs=20,
    augmentations=[XYFlipModel(flip_y=False)], # (1)!
)
- Only flipping along the X axis is applied.
from careamics.config.transformations import XYFlipModel

config = create_care_configuration(
    experiment_name="care_2D_aug",
    data_type="tiff",
    axes="YX",
    patch_size=[64, 64],
    batch_size=8,
    num_epochs=20,
    augmentations=[XYFlipModel(flip_y=False)], # (1)!
)
- Only flipping along the X axis is applied.
from careamics.config.transformations import XYFlipModel

config = create_n2n_configuration(
    experiment_name="n2n_2D_aug",
    data_type="tiff",
    axes="YX",
    patch_size=[64, 64],
    batch_size=8,
    num_epochs=20,
    augmentations=[XYFlipModel(flip_y=False)], # (1)!
)
- Only flipping along the X axis is applied.
Available augmentations
The available augmentations are the following:
- XYFlipModel, which can flip along X, Y or both.
- XYRandomRotate90Model, which applies random 90-degree rotations in the XY plane.
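For illustration, here is a sketch that passes both transforms explicitly; it assumes XYRandomRotate90Model can be imported from careamics.config.transformations alongside XYFlipModel and constructed with its default settings:

from careamics.config.transformations import XYFlipModel, XYRandomRotate90Model

config = create_n2v_configuration(
    experiment_name="n2v_2D_custom_aug",
    data_type="tiff",
    axes="YX",
    patch_size=[64, 64],
    batch_size=8,
    num_epochs=20,
    # flips and 90-degree rotations passed explicitly with their defaults
    augmentations=[XYFlipModel(), XYRandomRotate90Model()],
)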
Choosing a logger#
By default, CAREamics simply logs the training progress in the console. However, it is possible to use either WandB or TensorBoard.
Loggers installation
Using WandB or TensorBoard requires the installation of extra dependencies. Check out the installation section for more details.
config = create_n2v_configuration(
    experiment_name="n2v_2D_wandb",
    data_type="tiff",
    axes="YX",
    patch_size=[64, 64],
    batch_size=8,
    num_epochs=20,
    logger="wandb", # (1)!
)
- Either wandb or tensorboard.
(Advanced) Passing data loader parameters#
The convenience functions allow passing data loader parameters directly through the train_dataloader_params or val_dataloader_params parameters. These are the same parameters as those accepted by the torch.utils.data.DataLoader class (see the PyTorch documentation).
config = create_n2v_configuration(
    experiment_name="n2v_3D",
    data_type="tiff",
    axes="ZYX",
    patch_size=[16, 64, 64],
    batch_size=8,
    num_epochs=20,
    train_dataloader_params={
        "num_workers": 4, # (1)!
    },
    val_dataloader_params={
        "num_workers": 2, # (2)!
    },
)
- In practice this is the one parameter you might want to change.
- You can also set the parameters for the validation dataloader.
config = create_care_configuration(
    experiment_name="care_3D",
    data_type="tiff",
    axes="ZYX",
    patch_size=[16, 64, 64],
    batch_size=8,
    num_epochs=20,
    train_dataloader_params={
        "num_workers": 4, # (1)!
    },
    val_dataloader_params={
        "num_workers": 2, # (2)!
    },
)
- In practice this is the one parameter you might want to change.
- You can also set the parameters for the validation dataloader.
config = create_n2n_configuration(
    experiment_name="n2n_3D",
    data_type="tiff",
    axes="ZYX",
    patch_size=[16, 64, 64],
    batch_size=8,
    num_epochs=20,
    train_dataloader_params={
        "num_workers": 4, # (1)!
    },
    val_dataloader_params={
        "num_workers": 2, # (2)!
    },
)
- In practice this is the one parameter you might want to change.
- You can also set the parameters for the validation dataloader.
(Advanced) Passing model specific parameters#
By default, the convenience functions use the default UNet model parameters. But if you are feeling brave, you can pass model-specific parameters in the model_params dictionary.
config = create_n2v_configuration(
    experiment_name="n2v_3D",
    data_type="tiff",
    axes="ZYX",
    patch_size=[16, 64, 64],
    batch_size=8,
    num_epochs=20,
    model_params={
        "depth": 3, # (1)!
        "num_channels_init": 64, # (2)!
        # (3)!
    },
)
- The depth of the UNet.
- The number of channels in the first layer.
- Add any other parameter specific to the model!
config = create_care_configuration(
    experiment_name="care_3D",
    data_type="tiff",
    axes="ZYX",
    patch_size=[16, 64, 64],
    batch_size=8,
    num_epochs=20,
    model_params={
        "depth": 3, # (1)!
        "num_channels_init": 64, # (2)!
        # (3)!
    },
)
- The depth of the UNet.
- The number of channels in the first layer.
- Add any other parameter specific to the model!
config = create_n2n_configuration(
    experiment_name="n2n_3D",
    data_type="tiff",
    axes="ZYX",
    patch_size=[16, 64, 64],
    batch_size=8,
    num_epochs=20,
    model_params={
        "depth": 3, # (1)!
        "num_channels_init": 64, # (2)!
        # (3)!
    },
)
- The depth of the UNet.
- The number of channels in the first layer.
- Add any other parameter specific to the model!
Model parameters overwriting
Some values of the model parameters are not compatible with certain algorithms and are therefore overwritten by the convenience functions. For instance, if you pass in_channels or independent_channels in the model_params dictionary, they will be ignored and replaced by the explicit parameters passed to the convenience function.
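As an illustration of this behaviour, the following sketch passes a conflicting in_channels value in model_params; based on the description above, the explicit n_channels_in argument is the one that ends up in the configuration:

config = create_care_configuration(
    experiment_name="care_2D_overwrite",
    data_type="tiff",
    axes="YXC",
    patch_size=[64, 64],
    batch_size=8,
    num_epochs=20,
    n_channels_in=3,  # explicit parameter, takes precedence
    model_params={
        "in_channels": 1,  # ignored, overwritten by n_channels_in
        "depth": 3,
    },
)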
Model parameters
The model parameters are the following:
- conv_dims
- num_classes
- in_channels
- depth
- num_channels_init
- final_activation
- n2v2
- independent_channels
A description of each parameter can be found in the code reference.
Noise2Void specific parameters#
Noise2Void has a few additional parameters that can be set, including those used to enable its variants, N2V2 and structN2V.
Understanding Noise2Void and its variants
Before deciding which variant to use and how to modify the parameters, we recommend reading up a little bit on how each algorithm works!
Noise2Void parameters#
There are two Noise2Void parameters that influence how the patches are manipulated during training:
- roi_size: This parameter specifies the size of the area used to replace the masked pixel value.
- masked_pixel_percentage: This parameter specifies how many pixels per patch will be manipulated.
While the default values are usually fine, they can be tweaked to improve the training in certain cases.
config = create_n2v_configuration(
    experiment_name="n2v_2D",
    data_type="tiff",
    axes="YX",
    patch_size=[64, 64],
    batch_size=8,
    num_epochs=20,
    roi_size=7,
    masked_pixel_percentage=0.5,
)
N2V2#
To use N2V2, the use_n2v2 parameter should simply be set to True.
config = create_n2v_configuration(
    experiment_name="n2v2_3D",
    data_type="tiff",
    axes="ZYX",
    patch_size=[16, 64, 64],
    batch_size=8,
    num_epochs=20,
    use_n2v2=True, # (1)!
)
- Setting this parameter modifies the architecture of the UNet model and the way the masked pixels are replaced.
structN2V#
StructN2V has two parameters that can be set:
- struct_n2v_axis: The axis along which the structN2V mask will be applied. By default it is set to none (structN2V is disabled); you can set it to either horizontal or vertical.
- struct_n2v_span: The size of the structN2V mask.
config = create_n2v_configuration(
    experiment_name="structn2v_3D",
    data_type="tiff",
    axes="ZYX",
    patch_size=[16, 64, 64],
    batch_size=8,
    num_epochs=20,
    struct_n2v_axis="horizontal",
    struct_n2v_span=5,
)
CARE and Noise2Noise parameters#
Using another loss function#
As opposed to Noise2Void, CARE and Noise2Noise can be trained with different loss functions. This can be set using the loss parameter (surprise, surprise!).
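For example, the following sketch selects a different loss; it assumes that "mse" is one of the accepted values for these algorithms, alongside "mae":

config = create_care_configuration(
    experiment_name="care_2D_mse",
    data_type="tiff",
    axes="YX",
    patch_size=[64, 64],
    batch_size=8,
    num_epochs=20,
    loss="mse",  # assumed supported loss name for CARE and Noise2Noise
)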