This is an official PyTorch implementation of CW-RGP. If you use this code, please cite:
```
@inproceedings{weng2022an,
  title={An Investigation into Whitening Loss for Self-supervised Learning},
  author={Xi Weng and Lei Huang and Lei Zhao and Rao Muhammad Anwer and Salman Khan and Fahad Khan},
  booktitle={Advances in Neural Information Processing Systems},
  editor={Alice H. Oh and Alekh Agarwal and Danielle Belgrave and Kyunghyun Cho},
  year={2022},
  url={https://openreview.net/forum?id=BbUxkmrstyk}
}
```
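The paper studies whitening loss, which decorrelates embedding dimensions so that their covariance is the identity. As background only, here is a minimal NumPy sketch of ZCA whitening of a batch of embeddings; the function name `zca_whiten` and the `eps` regularizer are illustrative, and this is a generic sketch, not the repo's CW-RGP implementation:

```python
import numpy as np

def zca_whiten(z, eps=1e-5):
    """Whiten a batch of embeddings z of shape (n, d) so that the
    whitened output has (approximately) identity covariance."""
    z = z - z.mean(axis=0, keepdims=True)            # center each dimension
    cov = (z.T @ z) / (len(z) - 1)                   # d x d sample covariance
    vals, vecs = np.linalg.eigh(cov)                 # symmetric eigendecomposition
    # ZCA whitening matrix: rotate, rescale by 1/sqrt(eigenvalue), rotate back
    w = vecs @ np.diag(1.0 / np.sqrt(vals + eps)) @ vecs.T
    return z @ w
```

After whitening, the sample covariance of the output is (up to `eps`) the identity matrix, which is the constraint whitening-based self-supervised methods enforce to avoid collapse.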
- Install PyTorch (pytorch.org)
- Install wandb for logging (wandb.ai)
This code reproduces the experiments in Section 4.1 of the paper.
The datasets include CIFAR-10, CIFAR-100, STL-10, and Tiny ImageNet; the setup strictly follows the W-MSE paper.
The unsupervised pretraining scripts for the small and medium datasets are in scripts/base.sh.
The results are shown in the following table:
| Method | CIFAR-10 (top-1 / 5-nn) | CIFAR-100 (top-1 / 5-nn) | STL-10 (top-1 / 5-nn) | Tiny-ImageNet (top-1 / 5-nn) |
|---|---|---|---|---|
| CW-RGP 2 | 91.92 / 89.54 | 67.51 / 57.35 | 90.76 / 87.34 | 49.23 / 34.04 |
| CW-RGP 4 | 92.47 / 90.74 | 68.26 / 58.67 | 92.04 / 88.95 | 50.24 / 35.99 |
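The 5-nn columns report a k-nearest-neighbor evaluation of the frozen embeddings. A minimal NumPy sketch of such an evaluation, assuming cosine similarity and majority vote over the 5 nearest training embeddings (the function name and details are illustrative, not the repo's exact evaluation code):

```python
import numpy as np
from collections import Counter

def knn_accuracy(train_x, train_y, test_x, test_y, k=5):
    """Classify each test embedding by majority vote over its k nearest
    (cosine-similarity) neighbors among the training embeddings."""
    # L2-normalize so the dot product equals cosine similarity
    tr = train_x / np.linalg.norm(train_x, axis=1, keepdims=True)
    te = test_x / np.linalg.norm(test_x, axis=1, keepdims=True)
    sims = te @ tr.T                           # (n_test, n_train) similarities
    nn = np.argsort(-sims, axis=1)[:, :k]      # indices of the k nearest neighbors
    preds = np.array([Counter(train_y[idx].tolist()).most_common(1)[0][0]
                      for idx in nn])
    return (preds == test_y).mean()
```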
The unsupervised pretraining and linear classification scripts for ImageNet are in scripts/ImageNet.sh.
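Linear classification (the lincls step) trains a single linear layer on frozen backbone features. A minimal NumPy sketch of such a linear probe, assuming full-batch softmax regression; the names and hyperparameters are illustrative, not those of the repo's script:

```python
import numpy as np

def linear_probe(feats, labels, n_classes, lr=0.5, epochs=200):
    """Fit one linear layer (softmax regression) on frozen features
    by full-batch gradient descent on the cross-entropy loss."""
    n, d = feats.shape
    w = np.zeros((d, n_classes))
    b = np.zeros(n_classes)
    onehot = np.eye(n_classes)[labels]
    for _ in range(epochs):
        logits = feats @ w + b
        logits -= logits.max(axis=1, keepdims=True)   # numerical stability
        p = np.exp(logits)
        p /= p.sum(axis=1, keepdims=True)             # softmax probabilities
        grad = (p - onehot) / n                       # cross-entropy gradient
        w -= lr * feats.T @ grad
        b -= lr * grad.sum(axis=0)
    return w, b

def probe_accuracy(feats, labels, w, b):
    """Top-1 accuracy of the linear probe on the given features."""
    return ((feats @ w + b).argmax(axis=1) == labels).mean()
```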
Our pretrained ResNet-50 models:
| pretrain epochs | batch size | pretrain ckpt | lincls ckpt | top-1 acc. |
|---|---|---|---|---|
| 100 | 512 | train | lincls | 69.7 |
| 200 | 512 | train | lincls | 71.0 |
Object detection transfer is the same as in MoCo; please see moco/detection.
Transfer learning results of CW-RGP (200-epoch pretraining on ImageNet):
| downstream task | ckpt | log |
|---|---|---|
| VOC 07+12 detection | voc_ckpt | voc_log |
| COCO detection | coco_ckpt | coco_log |
| COCO instance seg. | coco_ckpt | coco_log |