A 2D variational autoencoder (VAE) for reconstructing PET images, implemented in PyTorch.
Trained on data from the ACRIN-NSCLC-FDG-PET collection hosted by The Cancer Imaging Archive.
Converges in ~3-5 hours of training on an NVIDIA GTX 1080 (8 GB).
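The actual network lives in `model.py`; purely as an illustration of the idea, a minimal 2D convolutional VAE in PyTorch might look like the sketch below. Every layer size and name here is an assumption, not this repo's code:

```python
import torch
import torch.nn as nn

class TinyVAE2D(nn.Module):
    """Illustrative 2D conv VAE; NOT the architecture in model.py."""

    def __init__(self, img_dim=64, latent_dim=32):
        super().__init__()
        feat = img_dim // 4  # two stride-2 convs halve H and W twice
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
        )
        self.fc_mu = nn.Linear(32 * feat * feat, latent_dim)
        self.fc_logvar = nn.Linear(32 * feat * feat, latent_dim)
        self.fc_dec = nn.Linear(latent_dim, 32 * feat * feat)
        self.decoder = nn.Sequential(
            nn.Unflatten(1, (32, feat, feat)),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.decoder(self.fc_dec(z)), mu, logvar
```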
Clone the repository:

```bash
git clone https://github.com/dthuff/pet-vae.git
```

Install dependencies with Poetry:

```bash
cd pet-vae
poetry install --no-root
```
Training:

```bash
poetry run python main_training.py --config /path/to/my/training_config.yml
```
Inference (requires that you point to a saved model .pth in the config):

```bash
poetry run python main_inference.py --config /path/to/my/test_config.yml
```
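Loading a saved .pth checkpoint generally follows the standard PyTorch pattern sketched below. The model class and paths are placeholders (the class is the illustrative `TinyVAE2D` from above); substitute the model from `model.py` and the checkpoint path from your config:

```python
import torch

# Placeholder names: swap in the model class from model.py and the
# .pth path from your test_config.yml. This assumes the checkpoint
# stores a plain state_dict.
model = TinyVAE2D(img_dim=64, latent_dim=32)
state = torch.load("saved_models/model.pth", map_location="cpu")
model.load_state_dict(state)
model.eval()

with torch.no_grad():
    # Dummy 1x1x64x64 input slice, just to show the call shape.
    recon, mu, logvar = model(torch.randn(1, 1, 64, 64))
```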
An example config is provided at `configs/train_config.yml`.
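Only `model.img_dim` and `model.latent_dim` are referenced in this README; assuming a plain YAML file read with PyYAML, accessing them could look like this (the dict-style access is an assumption, and the repo may wrap the config in an object):

```python
import yaml

# Only model.img_dim and model.latent_dim appear in this README;
# any other structure of the file is an assumption.
with open("configs/train_config.yml") as f:
    config = yaml.safe_load(f)

img_dim = config["model"]["img_dim"]        # side length of square input slices
latent_dim = config["model"]["latent_dim"]  # dimensionality of the latent vector z
```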
If you run out of GPU memory, try:

- Reducing `config.model.img_dim`
- Reducing `config.model.latent_dim`
- Reducing the number of encoder/decoder blocks, or the number of filters per block, in `model.py`

If reconstruction quality is poor, try:

- Adjusting `beta` in `train.py`. `beta` controls the weight of the KL loss relative to the reconstruction loss. Check your `loss.png` to see which loss term is dominant and adjust `beta` accordingly (see the sketch after this list).
- Checking data consistency/correctness
- Acquiring additional data
- Increasing `config.model.latent_dim`
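To make the role of `beta` concrete: a typical VAE objective sums a reconstruction term and a `beta`-weighted KL term. A minimal sketch, assuming MSE reconstruction and a standard Gaussian prior (`train.py` may use different choices):

```python
import torch
import torch.nn.functional as F

def vae_loss(recon, x, mu, logvar, beta=1.0):
    """Reconstruction + beta-weighted KL divergence (common formulation)."""
    # Reconstruction term: how well the decoder reproduces the input.
    recon_loss = F.mse_loss(recon, x, reduction="sum")
    # KL divergence between q(z|x) = N(mu, sigma^2) and the prior N(0, I).
    kl_loss = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    # Larger beta pulls the posterior toward the prior (smoother latent
    # space, blurrier reconstructions); smaller beta favors fidelity.
    return recon_loss + beta * kl_loss
```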