- scikit-learn
- pytorch >= 1.4
- sacred >= 0.8
- tqdm
- visdom_logger (https://github.com/luizgh/visdom_logger)
- faiss (https://github.com/facebookresearch/faiss)
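One possible way to install these dependencies (a sketch only; the repo does not pin exact package sources or versions, and faiss is usually easiest to install through conda):

pip install scikit-learn "torch>=1.4" "sacred>=0.8" tqdm
pip install git+https://github.com/luizgh/visdom_logger.git
conda install -c pytorch faiss-gpu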
For In-Shop, you need to manually download the data from https://drive.google.com/drive/folders/0B7EVK8r0v71pVDZFQXRsMDZCX1E (at least img.zip and list_eval_partition.txt), put them in data/InShop, and extract img.zip.
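If you prefer to do the extraction from Python, a minimal equivalent (assuming img.zip was placed in data/InShop as described above):

```python
# Extract the manually downloaded In-Shop images in place.
import zipfile

with zipfile.ZipFile("data/InShop/img.zip") as archive:
    archive.extractall("data/InShop")
```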
You can download and generate the train.txt and test.txt files for every dataset using the prepare_data.py script with:
python prepare_data.py
This will download and prepare all the necessary data for CUB200, Cars-196 and Stanford Online Products.
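The generated split files can then be consumed by the training code. A purely illustrative reader (assuming each line holds an image path and an integer class label separated by whitespace, which may not match the exact format written by prepare_data.py):

```python
# Hypothetical reader for the generated split files.
def read_split(path):
    samples = []
    with open(path) as f:
        for line in f:
            image_path, label = line.split()
            samples.append((image_path, int(label)))
    return samples

train_samples = read_split("data/CUB200/train.txt")  # hypothetical location
```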
This repo uses sacred to manage the experiments.
To run an experiment (e.g. on CUB200):
python experiment.py with dataset.cub
You can add an observer to save the metrics and files related to the experiment by adding -F result_dir:
python experiment.py -F result_dir with dataset.cub
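Under the hood, `with dataset.cub` selects a sacred named config and `-F result_dir` attaches a FileStorageObserver. A minimal, self-contained sketch of this mechanism (hypothetical config names; the repo's actual experiment.py defines its own ingredients and configs):

```python
# toy_experiment.py -- illustrates sacred named configs and CLI overrides.
from sacred import Experiment, Ingredient

dataset = Ingredient("dataset")

@dataset.config
def dataset_defaults():
    name = "none"           # overridden by a named config such as `cub`
    root = "data"

@dataset.named_config
def cub():                  # selected on the CLI with `with dataset.cub`
    name = "cub"
    root = "data/CUB200"

ex = Experiment("toy_metric_learning", ingredients=[dataset])

@ex.config
def defaults():
    epochs = 30
    lr = 0.02

@ex.automain
def main(_config, epochs, lr):
    # `python toy_experiment.py -F result_dir with dataset.cub lr=0.05`
    # selects the cub named config, overrides lr, and saves run files
    # under result_dir/ via the FileStorageObserver added by the -F flag.
    print(epochs, lr, _config["dataset"])
```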
CUB200
python experiment.py with dataset.cub model.resnet50 epochs=30 lr=0.02
CARS-196
python experiment.py with dataset.cars model.resnet50 epochs=100 lr=0.05 model.norm_layer=batch
Stanford Online Products
python experiment.py with dataset.sop model.resnet50 epochs=100 lr=0.003 momentum=0.99 nesterov=True model.norm_layer=batch
In-Shop
python experiment.py with dataset.inshop model.resnet50 epochs=100 lr=0.003 momentum=0.99 nesterov=True model.norm_layer=batch
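When run with `-F result_dir`, sacred's FileStorageObserver stores each run in a numbered sub-directory containing config.json, run.json, metrics.json and cout.txt. A small sketch for inspecting the logged metrics afterwards (the metric names depend on the experiment):

```python
# Print the last logged value of every metric from the most recent run.
import json
from pathlib import Path

latest_run = max(Path("result_dir").glob("[0-9]*"), key=lambda p: int(p.name))
with open(latest_run / "metrics.json") as f:
    metrics = json.load(f)

for name, series in metrics.items():
    print(name, series["values"][-1])
```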
@inproceedings{boudiaf2020unifying,
title={A unifying mutual information view of metric learning: cross-entropy vs. pairwise losses},
author={Boudiaf, Malik and Rony, J{\'e}r{\^o}me and Ziko, Imtiaz Masud and Granger, Eric and Pedersoli, Marco and Piantanida, Pablo and {Ben Ayed}, Ismail},
booktitle={European Conference on Computer Vision},
pages={548--564},
year={2020},
organization={Springer}
}