MADAN

PyTorch code for our NeurIPS 2019 paper "Multi-source Domain Adaptation for Semantic Segmentation".

If you use this code in your research, please consider citing:

@InProceedings{zhao2019madan,
  title     = {Multi-source Domain Adaptation for Semantic Segmentation},
  author    = {Zhao, Sicheng and Li, Bo and Yue, Xiangyu and Gu, Yang and Xu, Pengfei and Hu, Runbo and Chai, Hua and Keutzer, Kurt},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2019}
}

Quick Look

Our multi-source domain adaptation builds on CyCADA and CycleGAN. Since we focus on the semantic segmentation task, we remove the digit classification part of CyCADA.

We add the following modules and achieve substantial improvements (a sketch of the corresponding losses follows the list):

  1. Dynamic Semantic Consistency Module
  2. Adversarial Aggregation Module
    1. Sub-domain Aggregation Discriminator
    2. Cross-domain Cycle Discriminator
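
As a rough illustration only, here is a minimal PyTorch sketch of the two losses. All names (pretrained_seg, dynamic_seg, sad_disc, and so on) are hypothetical and only convey the idea; see the paper and the training scripts for the actual formulation.

```python
import torch
import torch.nn.functional as F

def dsc_loss(pretrained_seg, dynamic_seg, x_src, x_src2tgt):
    # Dynamic semantic consistency (sketch): keep the dynamically updated
    # segmenter's predictions on the translated image close to the
    # pretrained source segmenter's predictions on the original image.
    with torch.no_grad():
        p_src = F.softmax(pretrained_seg(x_src), dim=1)
    log_p_adapt = F.log_softmax(dynamic_seg(x_src2tgt), dim=1)
    return F.kl_div(log_p_adapt, p_src, reduction='batchmean')

def sad_loss(sad_disc, x_src2tgt_list, x_tgt):
    # Sub-domain aggregation discriminator (sketch): a single discriminator
    # is trained to separate the target domain from the pool of translated
    # source sub-domains, which pushes the sub-domains closer together.
    d_tgt = sad_disc(x_tgt)
    loss = F.binary_cross_entropy_with_logits(d_tgt, torch.ones_like(d_tgt))
    for x in x_src2tgt_list:
        d_fake = sad_disc(x)
        loss = loss + F.binary_cross_entropy_with_logits(
            d_fake, torch.zeros_like(d_fake)) / len(x_src2tgt_list)
    return loss
```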

We also implement MDAN for the semantic segmentation task in PyTorch as a baseline for comparison.

Overall Structure

(Figure: overall structure of MADAN.)

Setup

Check out this repo:

git clone https://github.com/pikachusocute/MADAN.git

Install the Python 3 requirements:

pip3 install -r requirements.txt

Dynamic Adversarial Image Generation

We follow CyCADA: in the first step, we train the image adaptation module to transfer source images (GTA, SYNTHIA, or multi-source) to the "source as target" style.

(Figure: dynamic adversarial image generation pipeline.)

In the following, we refer to the image adaptation module from GTA to Cityscapes as GTA->Cityscapes.
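
For intuition, the sketch below shows the kind of CycleGAN-style objective this step optimizes. It is purely illustrative: G_st, G_ts, D_t, and the cycle weight 10.0 are hypothetical names and values, not the repo's actual code.

```python
import torch
import torch.nn.functional as F

def image_adaptation_losses(G_st, G_ts, D_t, x_src):
    # G_st: source->target generator, G_ts: target->source generator,
    # D_t: target-domain discriminator (all names illustrative).
    x_fake_tgt = G_st(x_src)                        # "source as target" image
    d_out = D_t(x_fake_tgt)
    loss_gan = F.binary_cross_entropy_with_logits(  # fool the discriminator
        d_out, torch.ones_like(d_out))
    loss_cycle = F.l1_loss(G_ts(x_fake_tgt), x_src) # reconstruct the source
    return loss_gan + 10.0 * loss_cycle             # 10.0: a common cycle weight
```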

GTA->Cityscapes

cd scripts/CycleGAN
bash cyclegan_gta2cityscapes.sh

In the training process, snapshot files will be stored in cyclegan/checkpoints/[EXP_NAME].

Usually, after running for 20 epochs, there will be a file 20_net_G_A.pth under that folder.

Then we run the test process.

bash scripts/CycleGAN/test_templates.sh [EXP_NAME] 20 cycle_gan_semantic_fcn gta5_cityscapes

In the multi-source case, both 20_net_G_A_1.pth and 20_net_G_A_2.pth exist, so we use a different script to run the test process.


bash scripts/CycleGAN/test_templates_cycle.sh [EXP_NAME] 20 test synthia_cityscapes gta5_cityscapes

The new dataset will be generated at ~/cyclegan/results/[EXP_NAME]/train_20.

After obtaining the new target-stylized source dataset, we train a segmenter on it.

Pixel Level Adaptation

In this part, we train a new segmenter on the new dataset.

ln -s ~/cyclegan/results/[EXP_NAME]/train_20 ~/data/cyclegta5/[EXP_NAME]_TRAIN_60

Then we set dataflag = [EXP_NAME]_TRAIN_60 so the dataset paths can be resolved, and train the segmenter to perform pixel-level adaptation:

bash scripts/FCN/train_fcn8s_cyclesgta5_DSC.sh

Feature Level Adaptation

For feature-level adaptation, we use:

bash scripts/ADDA/adda_cyclegta2cs_score.sh

Make sure you choose the desired src, tgt, and datadir beforehand. In this process, you load a base_model trained on the synthetic dataset and adapt it at the feature level to the real-scene dataset.
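
Conceptually, this step performs ADDA-style adversarial feature alignment. The sketch below is only an illustration of one such training step under assumed names (src_encoder, tgt_encoder, disc, and the optimizers are all hypothetical), not the repo's actual code:

```python
import torch
import torch.nn.functional as F

def adda_step(src_encoder, tgt_encoder, disc, x_src, x_tgt, opt_disc, opt_tgt):
    # 1) Train the discriminator to separate source features from target features.
    with torch.no_grad():
        f_src = src_encoder(x_src)   # source encoder stays frozen
        f_tgt = tgt_encoder(x_tgt)
    d_src, d_tgt = disc(f_src), disc(f_tgt)
    loss_d = (F.binary_cross_entropy_with_logits(d_src, torch.ones_like(d_src))
              + F.binary_cross_entropy_with_logits(d_tgt, torch.zeros_like(d_tgt)))
    opt_disc.zero_grad(); loss_d.backward(); opt_disc.step()

    # 2) Train the target encoder to fool the discriminator.
    d_tgt = disc(tgt_encoder(x_tgt))
    loss_g = F.binary_cross_entropy_with_logits(d_tgt, torch.ones_like(d_tgt))
    opt_tgt.zero_grad(); loss_g.backward(); opt_tgt.step()
    return loss_d.item(), loss_g.item()
```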

Our Model

We release our adaptation models in ./models; you can use scripts/eval_templates.sh to evaluate them.

  1. CycleGTA5_Dynamic_Semantic_Consistency
  2. CycleSYNTHIA_Dynamic_Semantic_Consistency
  3. Multi_Source_SAD_CCD

Transferred Dataset

We will release our transferred dataset soon; it is the dataset on which our CycleGTA5_Dynamic_Semantic_Consistency model is trained to perform pixel-level adaptation.
