This folder contains examples for training and validating generative models in MONAI. The examples are designed as demonstrations to showcase the training process for these types of networks. To achieve optimal performance, it is recommended that users adjust the network and hyperparameters based on their device and training dataset.
Some tutorials may require extra components on top of what is installed with base MONAI:
pip install monai[lpips]

The MONAI GenerativeModels package is no longer required; however, it can still be installed with pip install monai-generative==0.2.3.
Example notebook demonstrating diffusion models with the MedNIST toy dataset.
Example shows the use cases of training and validating a 2D Latent Diffusion Model on axial slices from BraTS 2016 & 2017 data.
Example shows the use cases of training and validating a 3D Latent Diffusion Model on BraTS 2016 & 2017 data.
Example shows the use cases of training and validating a 3D Latent Diffusion Model on BraTS 2016 & 2017 data, expanding on the above notebook.
Example shows the use cases of training and validating Nvidia MAISI (Medical AI for Synthetic Imaging) model, a 3D Latent Diffusion Model that can generate large CT images with paired segmentation masks, variable volume size and voxel size, as well as controllable organ/tumor size.
Example shows the use cases of applying SPADE, a VAE-GAN-based neural network for semantic image synthesis, to a subset of BraTS that was registered to MNI space and resampled to 2mm isotropic space, with segmentations obtained using Geodesic Information Flows (GIF).
Example shows the use cases of applying SPADE normalization to a latent diffusion model, following the methodology by Wang et al., for semantic image synthesis on a subset of BraTS registered to MNI space and resampled to 2mm isotropic space, with segmentations obtained using Geodesic Information Flows (GIF).
Example shows how to use MONAI for 2D segmentation of images using DDPMs. The same structure can also be used for conditional image generation or image-to-image translation.
Example shows how to use MONAI to evaluate the performance of a generative model by computing metrics such as Frechet Inception Distance (FID) and Maximum Mean Discrepancy (MMD) for assessing realism, as well as MS-SSIM and SSIM for evaluating image diversity.
Example shows how to train a Vector-Quantized Variational Autoencoder + Transformers on the MedNIST dataset.
Examples show how to train a Vector-Quantized Variational Autoencoder in 2D and 3D, and how to use the PatchDiscriminator class to train a VQ-GAN and improve the quality of the generated images.
Example shows how to easily train a DDPM on medical data (MedNIST).
Example shows how to easily train a DDPM on medical data (Decathlon Task 01).
Example compares the performance of different noise schedulers. This shows how to sample a diffusion model using the DDPM, DDIM, and PNDM schedulers and how different numbers of timesteps affect the quality of the samples.
Example shows how to train a DDPM using the v-prediction parameterization, which improves the stability and convergence of the model. MONAI supports different parameterizations for the diffusion model (epsilon, sample, and v-prediction).
Example shows how to train a DDPM on medical data using PyTorch Ignite. This shows how to use DiffusionPrepareBatch to prepare the model inputs and MONAI's SupervisedTrainer and SupervisedEvaluator to train DDPMs.
Example shows how to use a DDPM to inpaint 2D images from the MedNIST dataset using the RePaint method.
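The key RePaint operation can be sketched in isolation: at every reverse step, the known region is re-imposed from a noised copy of the original image, while the model's denoised estimate fills the masked region (function and variable names here are hypothetical):

```python
import torch

def repaint_merge(x_denoised: torch.Tensor,
                  x_known_noised: torch.Tensor,
                  mask: torch.Tensor) -> torch.Tensor:
    """Keep known pixels (mask == 1) from the noised original; inpaint the rest."""
    return mask * x_known_noised + (1 - mask) * x_denoised

# Toy usage: inpaint the right half of an image.
x_denoised = torch.zeros(1, 1, 4, 4)   # model's estimate for this step
x_known = torch.ones(1, 1, 4, 4)       # noised copy of the original image
mask = torch.zeros(1, 1, 4, 4)
mask[..., :, :2] = 1.0                 # left half is known
merged = repaint_merge(x_denoised, x_known, mask)
```

RePaint additionally resamples (jumps back and forth in time) to harmonize the two regions; this merge is the step it repeats.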
Example shows how to use ControlNet to condition a diffusion model trained on 2D brain MRI images on binary brain masks.
Example shows the use cases of applying a spatial VAE to a 2D synthesis example. To obtain realistic results, the model is trained on the original VAE losses, as well as perceptual and adversarial ones.
Example shows the use cases of applying a spatial VAE to a 3D synthesis example. To obtain realistic results, the model is trained on the original VAE losses, as well as perceptual and adversarial ones.
Performing anomaly detection with diffusion models: implicit guidance, transformers, and classifier-free guidance.
Examples show how to perform anomaly detection in 2D using implicit guidance, classifier-free guidance, and transformers.
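Classifier-free guidance, used in these anomaly-detection notebooks, combines conditional and unconditional predictions at each sampling step; the tiny model below is a hypothetical stand-in for a conditioned diffusion network:

```python
import torch

def cfg_prediction(model, x_t, t, condition, guidance_scale=3.0):
    # Classifier-free guidance: extrapolate from the unconditional prediction
    # toward the conditional one; a scale > 1 strengthens the conditioning.
    eps_uncond = model(x_t, t, context=None)
    eps_cond = model(x_t, t, context=condition)
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)

# Toy stand-in model: shifts its output when a context is supplied.
def toy_model(x, t, context=None):
    return x + (0.0 if context is None else context)

x_t = torch.zeros(1, 1, 8, 8)
eps = cfg_prediction(toy_model, x_t, t=torch.tensor([10]),
                     condition=torch.ones(1, 1, 8, 8))
```

For anomaly detection, guiding toward a "healthy" condition and comparing the reconstruction with the input highlights anomalous regions.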
2D super-resolution using diffusion models: implemented in PyTorch and in PyTorch Lightning.
Examples show how to perform super-resolution in 2D, using PyTorch and PyTorch Lightning.
Example shows how to train a DDPM and an encoder simultaneously, resulting in the latents of the encoder guiding the inference process of the DDPM.