training_model
Training configuration.
TrainingConfig
Bases: BaseModel
Parameters related to training.
Mandatory parameters are:

- num_epochs: number of epochs, greater than 0.
- batch_size: batch size, greater than 0.
- augmentation: whether to use data augmentation or not (True or False).
Attributes:

| Name | Type | Description |
| --- | --- | --- |
| num_epochs | int | Number of epochs, greater than 0. |
| batch_size | int | Batch size, greater than 0. |
| augmentation | bool | Whether to use data augmentation or not (True or False). |
Source code in src/careamics/config/training_model.py
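A minimal usage sketch: constructing the configuration with its three mandatory fields. The import path follows the source file listed above; the values are illustrative, and each field is validated by Pydantic on construction.

```python
from careamics.config.training_model import TrainingConfig

# Minimal configuration with the three mandatory fields.
config = TrainingConfig(
    num_epochs=100,     # must be greater than 0
    batch_size=16,      # must be greater than 0
    augmentation=True,  # enable data augmentation
)
```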
checkpoint_callback = CheckpointModel()
Checkpoint callback configuration, following PyTorch Lightning Checkpoint callback.
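A sketch of overriding the default checkpoint configuration. The field names used here (monitor, save_top_k, mode) mirror PyTorch Lightning's ModelCheckpoint arguments and are assumptions, as is importing CheckpointModel from this module.

```python
from careamics.config.training_model import TrainingConfig, CheckpointModel  # import path assumed

config = TrainingConfig(
    num_epochs=100,
    batch_size=16,
    augmentation=True,
    # Field names below mirror Lightning's ModelCheckpoint and are assumptions.
    checkpoint_callback=CheckpointModel(
        monitor="val_loss",  # metric tracked by the callback (assumption)
        save_top_k=3,        # keep the three best checkpoints (assumption)
        mode="min",          # lower monitored value is better (assumption)
    ),
)
```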
early_stopping_callback = Field(default=None, validate_default=True)
Early stopping callback configuration, following the PyTorch Lightning EarlyStopping callback.
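Because early_stopping_callback defaults to None, it must be set explicitly to enable early stopping. The EarlyStoppingModel name, its import path, and its fields (monitor, patience) are assumptions mirroring PyTorch Lightning's EarlyStopping callback.

```python
from careamics.config.training_model import TrainingConfig
from careamics.config.callback_model import EarlyStoppingModel  # name and path assumed

config = TrainingConfig(
    num_epochs=100,
    batch_size=16,
    augmentation=True,
    early_stopping_callback=EarlyStoppingModel(
        monitor="val_loss",  # metric to watch (assumption)
        patience=10,         # epochs without improvement before stopping (assumption)
    ),
)
```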
lightning_trainer_config = None
Configuration for the PyTorch Lightning Trainer, following the PyTorch Lightning Trainer class.
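A sketch assuming this field accepts a plain dict of PyTorch Lightning Trainer keyword arguments; the description above says it follows the Trainer class, but the dict form is an assumption. The keys shown are standard Trainer arguments.

```python
from careamics.config.training_model import TrainingConfig

config = TrainingConfig(
    num_epochs=100,
    batch_size=16,
    augmentation=True,
    # Assumed to be passed through to lightning.pytorch.Trainer.
    lightning_trainer_config={
        "accelerator": "gpu",
        "devices": 1,
        "gradient_clip_val": 1.0,
    },
)
```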
logger = None
Logger to use during training. If None, no logger will be used. Available loggers are defined in SupportedLogger.
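A sketch of selecting a logger. The literal value "wandb" is assumed to be a member of SupportedLogger; check that enum for the valid values.

```python
from careamics.config.training_model import TrainingConfig

config = TrainingConfig(
    num_epochs=100,
    batch_size=16,
    augmentation=True,
    logger="wandb",  # must be a value defined in SupportedLogger (assumption)
)
```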
__str__()
Pretty string representing the configuration.
Returns:

| Type | Description |
| --- | --- |
| str | Pretty string. |
has_logger()
Check if the logger is defined.
Returns:

| Type | Description |
| --- | --- |
| bool | Whether the logger is defined or not. |
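A short usage sketch for the two methods above, on a configuration built with only the mandatory fields.

```python
from careamics.config.training_model import TrainingConfig

config = TrainingConfig(num_epochs=100, batch_size=16, augmentation=True)

print(config)  # __str__ returns a pretty, human-readable summary

if config.has_logger():  # True only when `logger` is set
    print(f"Logging with: {config.logger}")
else:
    print("No logger configured.")
```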