
Confidence Regularized Self-Training (ICCV19, Oral)

By Yang Zou*, Zhiding Yu*, Xiaofeng Liu, Vijayakumar Bhagavatula, Jinsong Wang (* indicates equal contribution).

[Paper] [Slides] [Poster]

Update

2019-10-10: CBST/CRST PyTorch code for semantic segmentation released

Contents

  1. Introduction
  2. Citation and license
  3. Requirements
  4. Results
  5. Setup
  6. Usage
  7. Note

Introduction

This repository contains the regularized self-training based methods described in the ICCV 2019 paper "Confidence Regularized Self-training". Both Class-Balanced Self-Training (CBST) and Confidence Regularized Self-Training (CRST) are implemented.
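At a high level, CRST keeps the pseudo-label cross-entropy loss of self-training but adds a confidence regularizer; in the MRKLD variant this is a KL-divergence term that pulls predictions on pseudo-labeled pixels toward the uniform distribution, discouraging over-confident (and possibly wrong) pseudo-labels. Below is a minimal PyTorch sketch of such a regularized loss; the function name `crst_mrkld_loss` and the weight `reg_weight` are illustrative and not taken from this repository.

```python
# Minimal sketch (not the repository's exact code) of an MRKLD-style loss:
# cross-entropy on selected pseudo-labels plus a KL term toward the uniform
# distribution over classes, computed only on pseudo-labeled pixels.
import torch
import torch.nn.functional as F

def crst_mrkld_loss(logits, pseudo_labels, reg_weight=0.1, ignore_index=255):
    """logits: (N, K, H, W); pseudo_labels: (N, H, W) LongTensor where
    ignore_index marks pixels that were not selected as pseudo-labels."""
    ce = F.cross_entropy(logits, pseudo_labels, ignore_index=ignore_index)

    log_probs = F.log_softmax(logits, dim=1)          # (N, K, H, W)
    # KL(uniform || p) up to a constant: -(1/K) * sum_c log p_c per pixel.
    kld = -log_probs.mean(dim=1)                      # (N, H, W)
    mask = (pseudo_labels != ignore_index).float()    # pseudo-labeled pixels
    reg = (kld * mask).sum() / mask.sum().clamp(min=1.0)

    return ce + reg_weight * reg
```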

Citation and license

If you use this code, please cite:

@InProceedings{Zou_2019_ICCV,
author = {Zou, Yang and Yu, Zhiding and Liu, Xiaofeng and Kumar, B.V.K. Vijaya and Wang, Jinsong},
title = {Confidence Regularized Self-Training},
booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
month = {October},
year = {2019}
}

@inproceedings{zou2018unsupervised,
  title={Unsupervised Domain Adaptation for Semantic Segmentation via Class-Balanced Self-Training},
  author={Zou, Yang and Yu, Zhiding and Kumar, B.V.K. Vijaya and Wang, Jinsong},
  booktitle={Proceedings of the European Conference on Computer Vision (ECCV)},
  pages={289--305},
  year={2018}
}

The model and code are available for non-commercial (NC) research purposes only. If you modify the code and want to redistribute, please include the CC-BY-NC-SA-4.0 license.

Requirements

The code is implemented with PyTorch 0.4.0, CUDA 9.0, OpenCV 3.2.0 and Python 2.7.12. It is tested on Ubuntu 16.04 with a single 12 GB NVIDIA Titan Xp; maximum GPU memory usage is about 11 GB.
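Before launching training, a quick version check can save debugging time. The snippet below is a hypothetical helper, not part of the repository, and only prints the versions the code was tested with.

```python
# Minimal environment sanity check (hypothetical helper, not shipped with the
# repository): prints library versions and confirms a CUDA GPU is visible.
from __future__ import print_function
import sys
import torch
import cv2

print("Python :", sys.version.split()[0])   # tested with 2.7.12
print("PyTorch:", torch.__version__)        # tested with 0.4.0
print("CUDA   :", torch.version.cuda)       # built against 9.0
print("OpenCV :", cv2.__version__)          # tested with 3.2.0
assert torch.cuda.is_available(), "A CUDA-capable GPU (~12 GB) is needed."
```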

Results

  1. GTA2city:

| Case | mIoU | Road | Sidewalk | Build | Wall | Fence | Pole | Traffic Light | Traffic Sign | Veg. | Terrain | Sky | Person | Rider | Car | Truck | Bus | Train | Motor | Bike |
|------|------|------|----------|-------|------|-------|------|---------------|--------------|------|---------|-----|--------|-------|-----|-------|-----|-------|-------|------|
| Source | 33.35 | 71.71 | 18.53 | 68.02 | 17.37 | 10.15 | 36.63 | 27.63 | 6.27 | 78.66 | 21.80 | 67.69 | 58.28 | 20.72 | 59.26 | 16.43 | 12.45 | 7.93 | 21.21 | 12.96 |
| CBST | 46.47 | 89.91 | 53.84 | 79.73 | 30.29 | 19.21 | 40.23 | 32.28 | 22.26 | 84.11 | 29.96 | 75.52 | 61.93 | 28.54 | 82.57 | 25.89 | 33.76 | 19.29 | 33.62 | 40.00 |
| CRST-LRENT | 46.51 | 89.98 | 53.86 | 79.81 | 30.27 | 19.15 | 40.30 | 32.22 | 22.24 | 84.09 | 29.81 | 75.45 | 62.09 | 28.66 | 82.76 | 26.02 | 33.61 | 19.42 | 33.69 | 40.34 |
| CRST-MRKLD | 47.39 | 91.30 | 55.64 | 80.04 | 30.22 | 18.85 | 39.27 | 35.96 | 27.09 | 84.52 | 31.81 | 74.55 | 62.59 | 27.90 | 82.43 | 23.81 | 31.10 | 25.36 | 32.60 | 45.43 |

Setup

We assume you are working in the CRST-master folder.

  1. Datasets:
  • Download the GTA5 dataset. Since GTA5 contains images with different resolutions, resize all images to 1052x1914 (a resize sketch is given after this list).
  • Download Cityscapes.
  • Put the downloaded data in the "dataset" folder.
  2. Source pretrained model:
  • Download the source model trained on GTA5 and put it into the "src_model/gta5" folder.
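The resize step above can be done with a short OpenCV script; the one below is a sketch under assumed paths and file layout, not a script shipped with the repository.

```python
# Minimal sketch of the GTA5 resize step (folder names and the file pattern
# are assumptions about your local layout, not fixed by the repository).
import glob
import os
import cv2

src_dir = "dataset/gta5/images"              # hypothetical input folder
dst_dir = "dataset/gta5/images_1052x1914"    # hypothetical output folder
if not os.path.isdir(dst_dir):
    os.makedirs(dst_dir)

for path in glob.glob(os.path.join(src_dir, "*.png")):
    img = cv2.imread(path, cv2.IMREAD_COLOR)
    # cv2.resize expects (width, height); for label maps use INTER_NEAREST
    # instead so that class IDs are not interpolated.
    resized = cv2.resize(img, (1914, 1052), interpolation=cv2.INTER_LINEAR)
    cv2.imwrite(os.path.join(dst_dir, os.path.basename(path)), resized)
```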

Usage

  1. To run self-training, you need to set the paths of the source data (data-src-dir) and the target data (data-tgt-dir) yourself. Other arguments can be kept at their default settings.

  2. Play with self-training for GTA2Cityscapes.

  • CBST:
sh cbst.sh
  • CRST-MRKLD:
sh mrkld.sh
  • CRST-LRENT:
sh lrent.sh
  • For CBST, set "--kc-policy cb --kc-value conf". You can keep them as default (a sketch of this class-balanced threshold selection is given after this list).
  • Multi-scale testing is implemented in both the self-training code and the evaluation code. Enable it with "--test-scale".
  • We use a small-class patch mining strategy to mine patches containing small classes. To turn off small-class mining, set "--mine-chance 0.0".
  3. Evaluation
  • Test on Cityscapes with a model compatible with GTA5 (the initial source-trained model as an example). Remember to set the data folder (--data-dir).
sh evaluate.sh
  4. Train in the source domain. Also remember to set the data folder (--data-dir).
  • Train on GTA5
sh train.sh
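For reference, the class-balanced selection behind "--kc-policy cb --kc-value conf" keeps, for each class, roughly the most confident portion p of the target pixels predicted as that class as pseudo-labels, which prevents large classes from crowding out small ones. The sketch below is a NumPy illustration of that idea; the function name `class_balanced_thresholds` and the `portion` argument are illustrative, and the repository's implementation differs in detail.

```python
# Minimal sketch of class-balanced threshold selection: each class gets its
# own confidence threshold so that about `portion` of its predicted pixels
# pass and become pseudo-labels.
import numpy as np

def class_balanced_thresholds(probs, portion=0.2):
    """probs: iterable of softmax maps, each of shape (K, H, W)."""
    num_classes = probs[0].shape[0]
    conf_per_class = [[] for _ in range(num_classes)]
    for p in probs:
        pred = p.argmax(axis=0)          # (H, W) hard predictions
        conf = p.max(axis=0)             # (H, W) confidence of each prediction
        for c in range(num_classes):
            conf_per_class[c].append(conf[pred == c].ravel())

    thresholds = np.ones(num_classes)    # threshold 1.0 selects nothing
    for c in range(num_classes):
        vals = np.concatenate(conf_per_class[c]) if conf_per_class[c] else np.array([])
        if vals.size == 0:
            continue
        vals = np.sort(vals)[::-1]       # descending confidences
        k = max(int(round(vals.size * portion)) - 1, 0)
        thresholds[c] = vals[k]          # keep the top `portion` most confident
    return thresholds
```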

Note

  • This code is based on DeepLab-ResNet-Pytorch.
  • The code is tested with PyTorch 0.4.0 and Python 2.7. We found that running the code with other PyTorch versions gives different results; we suggest running it with exactly PyTorch 0.4.0. Other users of this code have reported different performance even on 0.4.1.

Related Works

Contact: [email protected]
