Callback Config

Callback Pydantic models.

CheckpointConfig

Bases: BaseModel

Checkpoint saving callback Pydantic model.

The parameters correspond to those of pytorch_lightning.callbacks.ModelCheckpoint.

See: https://lightning.ai/docs/pytorch/stable/api/lightning.pytorch.callbacks.ModelCheckpoint.html#modelcheckpoint

auto_insert_metric_name = Field(default=False) class-attribute instance-attribute

When True, the checkpoint filenames will contain the metric name. Note that val_loss is already embedded in the default filename pattern, so enabling this field will produce redundant metric names in the filename.

every_n_epochs = Field(default=None, ge=1, le=100) class-attribute instance-attribute

Number of epochs between checkpoints.

every_n_train_steps = Field(default=None, ge=1, le=1000) class-attribute instance-attribute

Number of training steps between checkpoints.

mode = Field(default='min') class-attribute instance-attribute

One of {min, max}. If save_top_k != 0, the decision to overwrite the current save file is made based on either the maximization or the minimization of the monitored quantity. For 'val_acc', this should be 'max', for 'val_loss' this should be 'min', etc.

monitor = Field(default='val_loss') class-attribute instance-attribute

Quantity to monitor; currently only val_loss is supported.

save_last = Field(default=True) class-attribute instance-attribute

When True, saves a {experiment_name}_last.ckpt copy whenever a checkpoint file gets saved.

save_top_k = Field(default=3, ge=(-1), le=100) class-attribute instance-attribute

If save_top_k == k, the best k models according to the quantity monitored will be saved. If save_top_k == 0, no models are saved. If save_top_k == -1, all models are saved.

save_weights_only = Field(default=False) class-attribute instance-attribute

When True, only the model's weights will be saved (model.save_weights).

train_time_interval = Field(default=None) class-attribute instance-attribute

A checkpoint is saved at the specified time interval.

verbose = Field(default=False) class-attribute instance-attribute

Verbosity mode.
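Taken together, the fields above map directly onto the constructor arguments of pytorch_lightning.callbacks.ModelCheckpoint. A minimal sketch of how a subset of this config might be declared and validated with Pydantic (the field names and bounds come from this page; the class body is an illustrative re-declaration, not the library source):

```python
from typing import Optional

from pydantic import BaseModel, Field


class CheckpointConfig(BaseModel):
    # Illustrative re-declaration of (a subset of) the fields documented above.
    auto_insert_metric_name: bool = Field(default=False)
    every_n_epochs: Optional[int] = Field(default=None, ge=1, le=100)
    every_n_train_steps: Optional[int] = Field(default=None, ge=1, le=1000)
    mode: str = Field(default="min")
    monitor: str = Field(default="val_loss")
    save_last: bool = Field(default=True)
    save_top_k: int = Field(default=3, ge=-1, le=100)
    save_weights_only: bool = Field(default=False)
    verbose: bool = Field(default=False)


# Defaults validate out of the box.
cfg = CheckpointConfig()
print(cfg.save_top_k)  # 3

# Out-of-range values are rejected at construction time by the
# ge/le constraints (here, every_n_epochs must be >= 1).
try:
    CheckpointConfig(every_n_epochs=0)
except Exception as err:
    print(type(err).__name__)
```

The ge/le bounds mean a bad value fails fast when the config is built, rather than surfacing later inside the training loop.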

EarlyStoppingConfig

Bases: BaseModel

Early stopping callback Pydantic model.

The parameters correspond to those of pytorch_lightning.callbacks.EarlyStopping.

See: https://lightning.ai/docs/pytorch/stable/api/lightning.pytorch.callbacks.EarlyStopping.html#lightning.pytorch.callbacks.EarlyStopping

check_finite = Field(default=True) class-attribute instance-attribute

When True, stops training when the monitored quantity becomes NaN or inf.

check_on_train_epoch_end = Field(default=False) class-attribute instance-attribute

Whether to run the early stopping check at the end of the training epoch. If False, the check runs at the end of validation instead.

divergence_threshold = Field(default=None) class-attribute instance-attribute

Stop training as soon as the monitored quantity becomes worse than this threshold.

log_rank_zero_only = Field(default=False) class-attribute instance-attribute

When True, logs the status of the early stopping callback only on the rank 0 process.

min_delta = Field(default=0.0, ge=0.0, le=1.0) class-attribute instance-attribute

Minimum change in the monitored quantity to qualify as an improvement; an absolute change of less than or equal to min_delta counts as no improvement.

mode = Field(default='min') class-attribute instance-attribute

One of {min, max, auto}.

monitor = Field(default='val_loss') class-attribute instance-attribute

Quantity to monitor.

patience = Field(default=3, ge=1, le=10) class-attribute instance-attribute

Number of checks with no improvement after which training will be stopped.

stopping_threshold = Field(default=None) class-attribute instance-attribute

Stop training immediately once the monitored quantity reaches this threshold.

verbose = Field(default=False) class-attribute instance-attribute

Verbosity mode.
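As with the checkpoint config, a validated EarlyStoppingConfig can be unpacked straight into the Lightning callback. A minimal sketch (again an illustrative re-declaration of a subset of the fields above; the Lightning import is shown as a comment so the snippet stays self-contained):

```python
from pydantic import BaseModel, Field


class EarlyStoppingConfig(BaseModel):
    # Illustrative re-declaration of (a subset of) the fields documented above.
    check_finite: bool = Field(default=True)
    min_delta: float = Field(default=0.0, ge=0.0, le=1.0)
    mode: str = Field(default="min")
    monitor: str = Field(default="val_loss")
    patience: int = Field(default=3, ge=1, le=10)
    verbose: bool = Field(default=False)


cfg = EarlyStoppingConfig(patience=5)

# dict(model) yields the validated field name/value pairs,
# ready to unpack into the callback's constructor:
kwargs = dict(cfg)
# from pytorch_lightning.callbacks import EarlyStopping
# early_stopping = EarlyStopping(**kwargs)
print(kwargs["patience"])  # 5
```

Keeping the bounds (e.g. patience between 1 and 10) on the config rather than on the callback means misconfigurations are caught when the experiment config is parsed.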