World Model implementation for fastMRI datasets: towards autonomous imaging (selecting acceleration factors)
- Train the VAE with random actions (randomly selected acceleration factor and centre fraction) and save the model in JSON format so it can be reloaded later (see the VAE sketch after this list)
- Check the VAE reconstructions: they will be blurry; autoregressive models / VQ-VAE could be used in future
- Generate series datasets with random actions (acceleration and centre fraction): encode the reconstructed images with the VAE to obtain the latent representation "z" required to train the RNN (see the series-dataset sketch below)
- Train the MDN-RNN on the series datasets (actions and VAE latent representations) via rnn.train() to obtain a probability distribution over the next latent given the current latent and action (see the MDN-RNN sketch below)
- Save the MDN-RNN model in JSON format
- Commit the trained models to a Git repository
- Clone the repository on a GCP VM instance with 64 cores to train the controller
- Install MPI (https://www.youtube.com/watch?v=FOqhiX4X5xw), Anaconda, and mpi4py (via pip)
- Train the linear controller using CMA-ES: no TensorFlow required; this follows David Ha's implementation (see the blog post below for references). Example command (a controller/CMA-ES sketch follows below):
```
python train_controller.py fastmri -e 1 -n 4 -t 1 --max_length 100
```
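The sketches below illustrate the individual stages; they are minimal, hedged sketches rather than the exact implementation, and all hyperparameters, file names, and helper functions (e.g. `load_recon_batch`) are assumptions. First, a tf.keras VAE trained on reconstructions produced under random actions, with its weights dumped to JSON:

```python
# A minimal VAE sketch, assuming 64x64 single-channel reconstructions, a 32-d latent
# space, and tf.keras; `load_recon_batch` is a placeholder for the fastMRI pipeline.
import json
import numpy as np
import tensorflow as tf

LATENT_DIM = 32
IMG_SHAPE = (64, 64, 1)

def load_recon_batch(acceleration, centre_fraction, batch_size=32):
    # Placeholder: in the real pipeline this would return reconstructions of k-space
    # undersampled at the given acceleration factor / centre fraction.
    return np.random.rand(batch_size, *IMG_SHAPE).astype("float32")

def random_action():
    # A random "action": pick an acceleration factor and centre fraction
    # (the 4x/8x values follow the standard fastMRI settings; an assumption here).
    return np.random.choice([4, 8]), np.random.choice([0.08, 0.04])

def build_encoder():
    x = tf.keras.Input(shape=IMG_SHAPE)
    h = tf.keras.layers.Conv2D(32, 4, strides=2, padding="same", activation="relu")(x)
    h = tf.keras.layers.Conv2D(64, 4, strides=2, padding="same", activation="relu")(h)
    h = tf.keras.layers.Flatten()(h)
    mu = tf.keras.layers.Dense(LATENT_DIM)(h)
    logvar = tf.keras.layers.Dense(LATENT_DIM)(h)
    return tf.keras.Model(x, [mu, logvar])

def build_decoder():
    z = tf.keras.Input(shape=(LATENT_DIM,))
    h = tf.keras.layers.Dense(16 * 16 * 64, activation="relu")(z)
    h = tf.keras.layers.Reshape((16, 16, 64))(h)
    h = tf.keras.layers.Conv2DTranspose(32, 4, strides=2, padding="same", activation="relu")(h)
    x = tf.keras.layers.Conv2DTranspose(1, 4, strides=2, padding="same", activation="sigmoid")(h)
    return tf.keras.Model(z, x)

encoder, decoder = build_encoder(), build_decoder()
optimizer = tf.keras.optimizers.Adam(1e-4)

@tf.function
def train_step(batch):
    with tf.GradientTape() as tape:
        mu, logvar = encoder(batch)
        z = mu + tf.exp(0.5 * logvar) * tf.random.normal(tf.shape(mu))  # reparameterisation
        recon = decoder(z)
        recon_loss = tf.reduce_mean(tf.reduce_sum(tf.square(batch - recon), axis=[1, 2, 3]))
        kl = -0.5 * tf.reduce_mean(
            tf.reduce_sum(1 + logvar - tf.square(mu) - tf.exp(logvar), axis=1))
        loss = recon_loss + kl
    variables = encoder.trainable_variables + decoder.trainable_variables
    optimizer.apply_gradients(zip(tape.gradient(loss, variables), variables))
    return loss

for step in range(10000):
    acceleration, centre_fraction = random_action()
    train_step(load_recon_batch(acceleration, centre_fraction))

# Dump weights as JSON so the VAE can be reloaded by the later stages.
with open("vae.json", "w") as f:
    json.dump([w.tolist() for w in encoder.get_weights() + decoder.get_weights()], f)
```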
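Next, a sketch of generating the series dataset, reusing `encoder`, `random_action`, and `load_recon_batch` from the VAE sketch above; the episode count, episode length, and `series.npz` file name are assumptions:

```python
# Build the series dataset by rolling out random actions, reconstructing, and
# encoding each frame with the trained VAE encoder (mu is used as the latent z).
import numpy as np

EPISODES, STEPS = 1000, 100

all_actions, all_latents = [], []
for _ in range(EPISODES):
    ep_actions, ep_frames = [], []
    for _ in range(STEPS):
        acceleration, centre_fraction = random_action()
        ep_actions.append([acceleration, centre_fraction])
        ep_frames.append(load_recon_batch(acceleration, centre_fraction, batch_size=1)[0])
    mu, _ = encoder.predict(np.stack(ep_frames), verbose=0)
    all_actions.append(ep_actions)
    all_latents.append(mu)

np.savez_compressed("series.npz",
                    action=np.array(all_actions, dtype="float32"),
                    latent=np.stack(all_latents).astype("float32"))
```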
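A sketch of the MDN-RNN: an LSTM over [z_t, action_t] that outputs a per-dimension mixture of Gaussians over z_{t+1}, trained on the series dataset above. The 256 hidden units and 5 mixture components follow the World Models defaults and are assumptions here:

```python
# MDN-RNN sketch: LSTM over [z_t, action_t] predicting a Gaussian mixture over z_{t+1}.
import json
import numpy as np
import tensorflow as tf

Z_DIM, A_DIM, N_MIX, HIDDEN = 32, 2, 5, 256

inputs = tf.keras.Input(shape=(None, Z_DIM + A_DIM))
h = tf.keras.layers.LSTM(HIDDEN, return_sequences=True)(inputs)
mdn_out = tf.keras.layers.Dense(3 * N_MIX * Z_DIM)(h)  # mixture logits, means, log-sigmas
rnn = tf.keras.Model(inputs, mdn_out)

def mdn_loss(z_next, params):
    # Negative log-likelihood of z_{t+1} under the predicted per-dimension mixture.
    logits, mu, log_sigma = tf.split(
        tf.reshape(params, (-1, Z_DIM, 3 * N_MIX)), 3, axis=-1)
    z = tf.reshape(z_next, (-1, Z_DIM, 1))
    log_gauss = -0.5 * tf.square((z - mu) / tf.exp(log_sigma)) \
                - log_sigma - 0.5 * np.log(2.0 * np.pi)
    log_w = tf.nn.log_softmax(logits, axis=-1)
    return -tf.reduce_mean(tf.reduce_logsumexp(log_w + log_gauss, axis=-1))

data = np.load("series.npz")
z, a = data["latent"], data["action"]
x = np.concatenate([z[:, :-1], a[:, :-1]], axis=-1)  # input at step t: (z_t, a_t)
y = z[:, 1:]                                         # target: z_{t+1}

rnn.compile(optimizer="adam", loss=mdn_loss)
rnn.fit(x, y, batch_size=32, epochs=20)

with open("rnn.json", "w") as f:
    json.dump([w.tolist() for w in rnn.get_weights()], f)
```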
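Finally, a self-contained sketch of the kind of loop a controller-training script like the command above runs: a linear controller over [z, h] as in the World Models setup, optimised with CMA-ES from the `cma` pip package. The rollout reward below is only a placeholder; in the real pipeline the fitness comes from world-model rollouts, distributed across the VM's cores with MPI:

```python
# Linear controller + CMA-ES sketch (placeholder reward, for illustration only).
import numpy as np
import cma

Z_DIM, H_DIM, A_DIM = 32, 256, 2
N_PARAMS = A_DIM * (Z_DIM + H_DIM) + A_DIM

def controller(params, z, h):
    # action = tanh(W [z, h] + b), with W and b unpacked from a flat parameter vector
    W = params[: A_DIM * (Z_DIM + H_DIM)].reshape(A_DIM, Z_DIM + H_DIM)
    b = params[A_DIM * (Z_DIM + H_DIM):]
    return np.tanh(W @ np.concatenate([z, h]) + b)

def rollout(params, max_length=100):
    # Placeholder fitness: the real version rolls the VAE/MDN-RNN world model forward,
    # maps actions to (acceleration, centre fraction) and scores image quality.
    z, h = np.zeros(Z_DIM), np.zeros(H_DIM)
    total_reward = 0.0
    for _ in range(max_length):
        a = controller(params, z, h)
        total_reward += -float(np.sum(np.square(a - 0.5)))  # stand-in reward
    return total_reward

es = cma.CMAEvolutionStrategy(np.zeros(N_PARAMS), 0.1, {"popsize": 64})
for _ in range(100):                       # fixed number of generations for the sketch
    solutions = es.ask()
    es.tell(solutions, [-rollout(np.array(s)) for s in solutions])  # CMA-ES minimises
best_params = es.result.xbest              # used by the inference sketch below
```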
Inference: use the trained controller to generate new actions (see the Jupyter notebook; a brief sketch follows below),
e.g. VAE versus traditional reconstructions at 8x acceleration, with the acceleration factors obtained via the controller.
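A hedged inference sketch, reusing `encoder`, `load_recon_batch`, `controller`, `H_DIM`, and `best_params` from the sketches above; the mapping from the controller's [-1, 1] outputs to scanner settings is an assumption for illustration:

```python
# Controller inference sketch: encode the latest reconstruction, keep an RNN hidden
# state h (updated by the MDN-RNN in the full pipeline), and map the controller
# output to the next (acceleration, centre fraction).
import numpy as np

def to_scan_params(a):
    # Assumed discretisation of the controller's outputs into scanner settings.
    acceleration = 4 if a[0] < 0 else 8
    centre_fraction = 0.08 if a[1] < 0 else 0.04
    return acceleration, centre_fraction

frame = load_recon_batch(4, 0.08, batch_size=1)   # current reconstruction
mu, _ = encoder.predict(frame, verbose=0)
h = np.zeros(H_DIM)                               # in practice: MDN-RNN hidden state
action = controller(best_params, mu[0], h)
print("next action:", to_scan_params(action))
```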
See the blog post for further details: https://medium.com/@jehillparikh/towards-autonomous-mr-imaging-using-world-models-accacce00b5a