Loss weights for optimal training #16
Hi,
I had a question about the optimal weighting of the losses during training.
I notice that if the fc loss, vq_loss, and reconstruction loss are initially weighted equally, the quantizer is not trained well enough to produce meaningful outputs, and the network never learns a good codebook because it is biased too heavily toward prediction. Reducing the weight of the prediction loss, on the other hand, makes the network learn a representation that is good for reconstruction and quantization but not for prediction. I am unable to find a good balance.
Was the weighting of the losses modulated during training? What loss weights were optimal for training?

Comments

As far as I remember, using equal loss weights for all losses did give meaningful outputs with the OpenCell data. A tradeoff was observed between the fc loss and the reconstruction loss: when the weight on the fc loss is high, reconstruction quality is poor, and vice versa. Looking at my old code, I have used …
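For concreteness, here is a minimal PyTorch sketch of one way the three terms could be combined and the fc weight modulated over training. The function names (`combined_loss`, `fc_weight_schedule`), the default weights, and the warm-up length are illustrative assumptions, not the values used in this repository; `vq_loss` is assumed to be returned by the quantizer layer, as is standard for VQ-VAE-style models.

```python
import torch.nn.functional as F

def combined_loss(recon, target, vq_loss, logits, labels,
                  w_recon=1.0, w_vq=1.0, w_fc=1.0):
    """Weighted sum of the three training losses (weights are hypothetical)."""
    recon_loss = F.mse_loss(recon, target)      # image reconstruction term
    fc_loss = F.cross_entropy(logits, labels)   # classification (fc/prediction) term
    return w_recon * recon_loss + w_vq * vq_loss + w_fc * fc_loss

def fc_weight_schedule(epoch, warmup_epochs=10, w_max=1.0):
    """Linearly ramp the fc-loss weight from 0 so early epochs favor
    reconstruction and codebook learning before prediction dominates."""
    return w_max * min(1.0, epoch / warmup_epochs)
```

A schedule like this would address the tradeoff described in the question: the codebook is learned under reconstruction pressure first, and the prediction objective is phased in only once the quantizer produces meaningful codes.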