layers
Layer module.
This submodule contains layers used in the CAREamics models.
Conv_Block
Bases: Module
Convolution block used in UNets.
The convolution block consists of two convolution layers, each with optional batch norm and dropout, followed by an activation function.
The parameters map directly to the PyTorch convolution parameters; see torch.nn.Conv2d and torch.nn.Conv3d for more information.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
conv_dim | int | Number of dimensions of the convolutions, 2 or 3. | required |
in_channels | int | Number of input channels. | required |
out_channels | int | Number of output channels. | required |
intermediate_channel_multiplier | int | Multiplier for the number of output channels, by default 1. | 1 |
stride | int | Stride of the convolutions, by default 1. | 1 |
padding | int | Padding of the convolutions, by default 1. | 1 |
bias | bool | Bias of the convolutions, by default True. | True |
groups | int | Controls the connections between inputs and outputs, by default 1. | 1 |
activation | str | Activation function, by default "ReLU". | 'ReLU' |
dropout_perc | float | Dropout percentage, by default 0. | 0 |
use_batch_norm | bool | Use batch norm, by default False. | False |
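As a hedged aside (this is standard convolution arithmetic, not CAREamics code): the `stride` and `padding` parameters above determine the spatial size of each convolution's output. Assuming a 3x3 kernel, the relation can be sketched as:

```python
def conv_output_size(in_size: int, kernel_size: int = 3,
                     stride: int = 1, padding: int = 1) -> int:
    """Spatial output size of a convolution (standard formula, floor division)."""
    return (in_size + 2 * padding - kernel_size) // stride + 1

# With the block's defaults (stride=1, padding=1) a 3x3 convolution preserves size:
print(conv_output_size(64))            # 64
# A stride of 2 halves the spatial dimension:
print(conv_output_size(64, stride=2))  # 32
```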
Source code in src/careamics/models/layers.py
__init__(conv_dim, in_channels, out_channels, intermediate_channel_multiplier=1, stride=1, padding=1, bias=True, groups=1, activation='ReLU', dropout_perc=0, use_batch_norm=False)
Constructor.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
conv_dim | int | Number of dimensions of the convolutions, 2 or 3. | required |
in_channels | int | Number of input channels. | required |
out_channels | int | Number of output channels. | required |
intermediate_channel_multiplier | int | Multiplier for the number of output channels, by default 1. | 1 |
stride | int | Stride of the convolutions, by default 1. | 1 |
padding | int | Padding of the convolutions, by default 1. | 1 |
bias | bool | Bias of the convolutions, by default True. | True |
groups | int | Controls the connections between inputs and outputs, by default 1. | 1 |
activation | str | Activation function, by default "ReLU". | 'ReLU' |
dropout_perc | float | Dropout percentage, by default 0. | 0 |
use_batch_norm | bool | Use batch norm, by default False. | False |
Source code in src/careamics/models/layers.py
forward(x)
Forward pass.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
x | Tensor | Input tensor. | required |
Returns:
Type | Description |
---|---|
Tensor | Output tensor. |
Source code in src/careamics/models/layers.py
MaxBlurPool
Bases: Module
Pool, blur, and downsample a given feature map.
Inspired by the Kornia MaxBlurPool implementation. Equivalent to `nn.Sequential(nn.MaxPool2d(...), BlurPool2D(...))`.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
dim | int | Toggles between 2D and 3D. | required |
kernel_size | Union[tuple[int, int], int] | Kernel size for max pooling. | required |
stride | int | Stride for pooling, by default 2. | 2 |
max_pool_size | int | Max kernel size for max pooling, by default 2. | 2 |
ceil_mode | bool | Ceil mode, by default False. Set to True to match output size of conv2d. | False |
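The max-then-blur order can be illustrated with a stdlib-only 1D sketch (a hypothetical helper, not this module's API; the real layer operates on 2D/3D tensors and uses a Pascal blur kernel):

```python
def max_blur_pool_1d(x, max_pool_size=2, stride=2):
    """Sketch of max-blur-pool on a 1D sequence.

    1. Sliding max with window `max_pool_size` and stride 1 (the max pool).
    2. Blur with a normalized [1, 2, 1] Pascal kernel, then subsample by `stride`.
    """
    # Step 1: sliding max, stride 1.
    pooled = [max(x[i:i + max_pool_size]) for i in range(len(x) - max_pool_size + 1)]
    # Step 2: blur with [1, 2, 1] / 4 (replicate-pad the borders), then subsample.
    padded = [pooled[0]] + pooled + [pooled[-1]]
    blurred = [(padded[i - 1] + 2 * padded[i] + padded[i + 1]) / 4
               for i in range(1, len(padded) - 1)]
    return blurred[::stride]

# Isolated peaks survive the max pool, then get smoothed before downsampling:
print(max_blur_pool_1d([0, 0, 4, 0, 0, 0, 8, 0]))  # [1.0, 3.0, 2.0, 8.0]
```

Blurring before subsampling is what makes the downsampling less aliasing-prone than a plain strided max pool.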
Source code in src/careamics/models/layers.py
__init__(dim, kernel_size, stride=2, max_pool_size=2, ceil_mode=False)
Constructor.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
dim | int | Dimension of the convolution. | required |
kernel_size | Union[tuple[int, int], int] | Kernel size for max pooling. | required |
stride | int | Stride, by default 2. | 2 |
max_pool_size | int | Maximum pool size, by default 2. | 2 |
ceil_mode | bool | Ceil mode, by default False. Set to True to match output size of conv2d. | False |
Source code in src/careamics/models/layers.py
forward(x)
Forward pass.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
x | Tensor | Input tensor. | required |
Returns:
Type | Description |
---|---|
Tensor | Output tensor. |
Source code in src/careamics/models/layers.py
get_pascal_kernel_1d(kernel_size, norm=False, *, device=None, dtype=None)
Generate the Yang Hui triangle (Pascal's triangle) for a given kernel size.
Inspired by Kornia implementation. TODO link
Parameters:
Name | Type | Description | Default |
---|---|---|---|
kernel_size | int | Kernel size. | required |
norm | bool | Normalize the kernel, by default False. | False |
device | Optional[device] | Device of the tensor, by default None. | None |
dtype | Optional[dtype] | Data type of the tensor, by default None. | None |
Returns:
Type | Description |
---|---|
Tensor | Pascal kernel. |
Examples:
>>> get_pascal_kernel_1d(1)
tensor([1.])
>>> get_pascal_kernel_1d(2)
tensor([1., 1.])
>>> get_pascal_kernel_1d(3)
tensor([1., 2., 1.])
>>> get_pascal_kernel_1d(4)
tensor([1., 3., 3., 1.])
>>> get_pascal_kernel_1d(5)
tensor([1., 4., 6., 4., 1.])
>>> get_pascal_kernel_1d(6)
tensor([ 1., 5., 10., 10., 5., 1.])