eval_utils
This script provides methods to evaluate the performance of the LVAE model. It includes functions to:

- make predictions,
- quantify the performance of the model,
- create plots to visualize the results.
Calibration
Source code in src/careamics/lvae_training/eval_utils.py
compute_bin_boundaries(predict_logvar)
Compute the bin boundaries for num_bins bins and the given logvar values.
Source code in src/careamics/lvae_training/eval_utils.py
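The docstring above does not pin down how the boundaries are chosen. A minimal sketch, assuming (hypothetically) that the bin edges are equally spaced quantiles of the predicted standard deviation, could look like this; `compute_bin_boundaries_sketch` is an illustrative name, not the library function:

```python
import numpy as np

def compute_bin_boundaries_sketch(predict_logvar: np.ndarray, num_bins: int = 30) -> np.ndarray:
    """Hypothetical re-implementation: bin edges as quantiles of the predicted std."""
    std = np.exp(predict_logvar / 2.0)  # convert logvar to std
    qs = np.linspace(0.0, 1.0, num_bins + 1)
    return np.quantile(std.ravel(), qs)
```

Quantile-based edges guarantee roughly equal pixel counts per bin, which keeps the per-bin statistics stable; the actual implementation may use uniform spacing instead.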
compute_stats(pred, pred_logvar, target)
It computes the bin-wise RMSE and RMV for each channel of the predicted image.

Recall that:

- RMSE = np.sqrt((pred - target)**2 / num_pixels)
- RMV = np.sqrt(np.mean(pred_std**2))

ALGORITHM

- For each channel:
    - Given the bin boundaries, assign pixels of the std_ch array to a specific bin index.
    - For each bin index:
        - Compute the RMSE, RMV, and number of pixels for that bin.

NOTE: each channel of the predicted image/logvar has its own stats.

Args:

- pred: np.ndarray, shape (n, h, w, c)
- pred_logvar: np.ndarray, shape (n, h, w, c)
- target: np.ndarray, shape (n, h, w, c)
Source code in src/careamics/lvae_training/eval_utils.py
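The per-channel inner loop described above can be sketched as follows. This is an illustration under stated assumptions, not the library code: `binwise_rmse_rmv` is a hypothetical helper that takes the std directly (the real function derives it from `pred_logvar`) and NaN marks empty bins:

```python
import numpy as np

def binwise_rmse_rmv(pred, std, target, bin_edges):
    """For one channel: assign each pixel's std to a bin, then compute
    per-bin RMSE, RMV (root-mean-variance), and pixel count."""
    # np.digitize against the interior edges yields indices 0..num_bins-1
    bin_idx = np.digitize(std.ravel(), bin_edges[1:-1])
    err2 = ((pred - target) ** 2).ravel()
    var = (std ** 2).ravel()
    rmse, rmv, count = [], [], []
    for b in range(len(bin_edges) - 1):
        mask = bin_idx == b
        n = int(mask.sum())
        count.append(n)
        rmse.append(np.sqrt(err2[mask].mean()) if n else np.nan)
        rmv.append(np.sqrt(var[mask].mean()) if n else np.nan)
    return np.array(rmse), np.array(rmv), np.array(count)
```

A well-calibrated model has RMSE ≈ RMV in every bin, which is exactly what the calibration plot checks.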
PatchLocation
Encapsulates t_idx and spatial location.
Source code in src/careamics/lvae_training/eval_utils.py
TilingMode
add_psnr_str(ax_, psnr)
Add the PSNR string to the axes.
Source code in src/careamics/lvae_training/eval_utils.py
clean_ax(ax)
Helper function to remove ticks from axes in plots.
Source code in src/careamics/lvae_training/eval_utils.py
get_calibrated_factor_for_stdev(pred, pred_logvar, target, batch_size=32, epochs=500, lr=0.01)
Here, we calibrate the uncertainty by multiplying the predicted std (from the MMSE estimate or the predicted logvar) with a scalar. We return the calibrated scalar, which must then be multiplied with the std.

NOTE: Why is the input logvar and not std? Because the model typically predicts logvar, not std.
Source code in src/careamics/lvae_training/eval_utils.py
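The library fits this scalar iteratively (hence the `batch_size`, `epochs`, and `lr` parameters), but under a Gaussian likelihood the NLL-minimizing multiplier also has a closed form, which makes the target quantity easy to see. A sketch, assuming that likelihood (the function name is hypothetical):

```python
import numpy as np

def calibrated_scalar(pred, pred_logvar, target):
    """Closed-form sketch of the calibration scalar: the factor s that
    minimizes the Gaussian NLL when the predicted std is scaled by s."""
    std = np.exp(pred_logvar / 2.0)  # model predicts logvar, not std
    return float(np.sqrt(np.mean((pred - target) ** 2 / std ** 2)))
```

If the model's predicted std already matches the residuals, the returned factor is close to 1; values above 1 indicate the model is overconfident.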
get_dset_predictions(model, dset, batch_size, loss_type, mmse_count=1, num_workers=4)
Get patch-wise predictions from a model for the entire dataset.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| model | VAEModule | Lightning model used for prediction. | required |
| dset | Dataset | Dataset to predict on. | required |
| batch_size | int | Batch size to use for prediction. | required |
| loss_type | Literal['musplit', 'denoisplit', 'denoisplit_musplit'] | Type of reconstruction loss used by the model. | required |
| mmse_count | int | Number of samples to generate for each input and then average over for MMSE estimation. | 1 |
| num_workers | int | Number of workers to use for the DataLoader. | 4 |

Returns:

| Type | Description |
|---|---|
| tuple[ndarray, ndarray, ndarray, ndarray, List[float]] | Tuple containing: predictions (predicted images for the dataset), predictions_std (standard deviation of the predicted images), logvar_arr (log variance of the predicted images), losses (reconstruction losses for the predictions), psnr (PSNR values for the predictions). |
Source code in src/careamics/lvae_training/eval_utils.py
get_eval_output_dir(saveplotsdir, patch_size, mmse_count=50)
Given the path to a root directory for saving plots, the patch size, and the MMSE count, it returns the specific directory in which to save the plots.
Source code in src/careamics/lvae_training/eval_utils.py
get_fractional_change(target, prediction, max_val=None)
Get relative difference between target and prediction.
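A minimal sketch of what "relative difference" plausibly means here, given the signature; the sign convention and the fallback for `max_val` (the target's maximum) are assumptions, not confirmed by the source:

```python
import numpy as np

def fractional_change(target, prediction, max_val=None):
    """Sketch: elementwise difference normalized by a reference magnitude."""
    if max_val is None:
        max_val = target.max()  # assumed default when no scale is given
    return (target - prediction) / max_val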
get_location_from_idx(dset, dset_input_idx, pred_h, pred_w)
#
For a given idx of the dataset, it returns where exactly in the dataset, does this prediction lies. Note that this prediction also has padded pixels and so a subset of it will be used in the final prediction. Which time frame, which spatial location (h_start, h_end, w_start,w_end) Args: dset: dset_input_idx: pred_h: pred_w:
Source code in src/careamics/lvae_training/eval_utils.py
get_predictions(idx, val_dset, model, mmse_count=50, patch_size=256)
#
Given an index and a validation/test set, it returns the input, target and the reconstructed images for that index.
Source code in src/careamics/lvae_training/eval_utils.py
get_psnr_str(tar_hsnr, pred, col_idx)
#
Compute PSNR between the ground truth (tar_hsnr
) and the predicted image (pred
).
get_zero_centered_midval(error)
#
When done this way, the midval ensures that the colorbar is centered at 0. (Don't know how, but it works ;))
Source code in src/careamics/lvae_training/eval_utils.py
nll(x, mean, logvar)
#
Log of the probability density of the values x under the Normal distribution with parameters mean and logvar.
:param x: tensor of points, with shape (batch, channels, dim1, dim2) :param mean: tensor with mean of distribution, shape (batch, channels, dim1, dim2) :param logvar: tensor with log-variance of distribution, shape has to be either scalar or broadcastable
Source code in src/careamics/lvae_training/eval_utils.py
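The quantity described is the standard Gaussian log-density; as a sketch (in NumPy rather than the tensors the real function takes, and returning the negative log-density that the function's name suggests):

```python
import numpy as np

def gaussian_nll(x, mean, logvar):
    """Per-element negative log-likelihood of x under N(mean, exp(logvar)):
    0.5 * (log(2*pi) + logvar + (x - mean)^2 / exp(logvar))."""
    return 0.5 * (np.log(2 * np.pi) + logvar + (x - mean) ** 2 / np.exp(logvar))
```

Broadcasting makes a scalar `logvar` work against batched `x` and `mean`, matching the shape requirement in the docstring.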
plot_crops(inp, tar, tar_hsnr, recon_img_list, calibration_stats, num_samples=2, baseline_preds=None)
Source code in src/careamics/lvae_training/eval_utils.py
plot_error(target, prediction, cmap=matplotlib.cm.coolwarm, ax=None, max_val=None)
Plot the relative difference between target and prediction. NOTE: the plot is overlaid on the prediction image (in grayscale). NOTE: the colorbar is centered at 0.
Source code in src/careamics/lvae_training/eval_utils.py
shiftedColorMap(cmap, start=0, midpoint=0.5, stop=1.0, name='shiftedcmap')
Adapted from https://stackoverflow.com/questions/7404116/defining-the-midpoint-of-a-colormap-in-matplotlib

Function to offset the "center" of a colormap. Useful for data with a negative min and a positive max, where you want the middle of the colormap's dynamic range to be at zero.

Input:

- cmap: the matplotlib colormap to be altered.
- start: offset from the lowest point in the colormap's range. Defaults to 0.0 (no lower offset). Should be between 0.0 and midpoint.
- midpoint: the new center of the colormap. Defaults to 0.5 (no shift). Should be between 0.0 and 1.0. In general, this should be 1 - vmax / (vmax + abs(vmin)). For example, if your data range from -15.0 to +5.0 and you want the center of the colormap at 0.0, midpoint should be set to 1 - 5/(5 + 15), or 0.75.
- stop: offset from the highest point in the colormap's range. Defaults to 1.0 (no upper offset). Should be between midpoint and 1.0.
Source code in src/careamics/lvae_training/eval_utils.py
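The midpoint formula from the docstring is easy to sanity-check on its own worked example:

```python
def shifted_midpoint(vmin: float, vmax: float) -> float:
    """Midpoint that places the colormap's center at data value 0,
    per the formula 1 - vmax / (vmax + abs(vmin))."""
    return 1 - vmax / (vmax + abs(vmin))
```

For data in [-15, 5] this gives 0.75, matching the example; for a symmetric range like [-5, 5] it gives 0.5, i.e. no shift.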
show_for_one(idx, val_dset, highsnr_val_dset, model, calibration_stats, mmse_count=5, patch_size=256, num_samples=2, baseline_preds=None)
Given an index, it plots the input, target, reconstructed images, and the difference image. Note that the difference image is computed with respect to a ground truth image obtained from the high-SNR dataset.
Source code in src/careamics/lvae_training/eval_utils.py
stitch_predictions(predictions, dset, smoothening_pixelcount=0)
Args:

- smoothening_pixelcount: number of pixels which can be interpolated.
Source code in src/careamics/lvae_training/eval_utils.py
stitch_predictions_new(predictions, dset)
Args:

- smoothening_pixelcount: number of pixels which can be interpolated.
Source code in src/careamics/lvae_training/eval_utils.py