TransTIC: Transferring Transformer-based Image Compression from Human Visualization to Machine Perception
Accepted to ICCV 2023
This repository contains the source code of our ICCV 2023 paper, TransTIC (arXiv).
This work aims to transfer a Transformer-based image compression codec from human vision to machine perception without fine-tuning the codec. We propose a transferable Transformer-based image compression framework, termed TransTIC. Inspired by visual prompt tuning, we introduce an instance-specific prompt generator that injects instance-specific prompts into the encoder and task-specific prompts into the decoder. Extensive experiments show that our method transfers the codec to various machine tasks and significantly outperforms the competing methods. To the best of our knowledge, this work is the first attempt to utilize prompting for the low-level image compression task.
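Below is a minimal, PyTorch-style sketch of the VPT-style prompting idea described above. It is illustrative only: the class, names, shapes, and prompt lengths are our own simplifications, and the actual TransTIC modules (the Swin-based codec and the instance-specific prompt generator) differ.

```python
import torch
import torch.nn as nn

class PromptedBlock(nn.Module):
    """Illustrative only: concatenate prompt tokens with image tokens before
    self-attention, then drop them so the output shape matches the frozen
    codec's original block. Names and shapes are hypothetical."""
    def __init__(self, dim, num_heads, num_prompts):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Task-specific prompts: learned parameters shared across all inputs
        # (decoder-side prompting in the paper).
        self.task_prompts = nn.Parameter(torch.zeros(1, num_prompts, dim))

    def forward(self, tokens, instance_prompts=None):
        # tokens: (B, N, dim) image tokens from the frozen codec.
        # instance_prompts: (B, P, dim) prompts predicted per input image by a
        # prompt generator (encoder-side prompting in the paper).
        prompts = (instance_prompts if instance_prompts is not None
                   else self.task_prompts.expand(tokens.size(0), -1, -1))
        x = torch.cat([prompts, tokens], dim=1)
        h = self.norm(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]
        # Discard the prompt tokens so downstream layers stay untouched.
        return x[:, prompts.size(1):]
```

Because only the small prompt parameters and the prompt generator are trained, the codec weights stay frozen, which is what allows the transfer without fine-tuning the codec.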
git clone https://github.com/NYCU-MAPL/TransTIC
cd TransTIC
pip install -U pip && pip install -e .
pip install timm tqdm click
Install Detectron2 for object detection and instance segmentation.
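For example, Detectron2 can be installed from source as in its official instructions (pick the build matching your PyTorch/CUDA versions):
pip install 'git+https://github.com/facebookresearch/detectron2.git'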
The following datasets are used and need to be downloaded:
- Flicker2W (download here, and use this script for preprocessing)
- ImageNet1K
- COCO 2017 Train/Val
- Kodak
Specify the data paths, target rate point, corresponding lambda, and checkpoint in the config file accordingly.
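For illustration only, a config entry might look like the sketch below; the key names and values here are hypothetical, so follow the fields that actually appear in the YAML files under config/.

```yaml
# Hypothetical sketch -- use the real keys in config/*.yaml
dataset:
  root: /path/to/ImageNet1K        # data path for the target task
quality: 2                         # target rate point (1-4)
lmbda: 0.0035                      # lambda paired with that rate point
checkpoint: ./checkpoints/base_codec_q2.pth.tar
```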
python examples/train.py -c config/base_codec.yaml
python examples/classification.py -c config/classification.yaml
Add the argument -T for evaluation.
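For example, to evaluate the classification setting (the same pattern applies to detection and segmentation below):
python examples/classification.py -c config/classification.yaml -T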
python examples/detection.py -c config/detection.yaml
Add the argument -T for evaluation.
python examples/segmentation.py -c config/segmentation.yaml
Add the argument -T for evaluation.
Pre-trained weights for the four rate points of each task:

| Tasks | Rate point 1 | Rate point 2 | Rate point 3 | Rate point 4 |
|---|---|---|---|---|
| Base codec (TIC) | 1 | 2 | 3 | 4 |
| Classification | 1 | 2 | 3 | 4 |
| Object Detection | 1 | 2 | 3 | 4 |
| Instance Segmentation | 1 | 2 | 3 | 4 |
If you find our project useful, please cite the following paper.
@inproceedings{TransTIC,
title={TransTIC: Transferring Transformer-based Image Compression from Human Visualization to Machine Perception},
author={Chen, Yi-Hsin and Weng, Ying-Chieh and Kao, Chia-Hao and Chien, Cheng and Chiu, Wei-Chen and Peng, Wen-Hsiao},
booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
pages={},
year={2023}
}
Our work is built on the CompressAI framework. The base codec is adopted from TIC/TinyLIC, and the prompting method is modified from VPT. We thank the authors for open-sourcing their code.