thuiar/TFR-Net


Python 3.6

This repository contains the official implementation code of the paper Transformer-based Feature Reconstruction Network for Robust Multimodal Sentiment Analysis, accepted at ACMMM 2021.

Note: We strongly recommend that you browse the overall structure of our code first. If you have any questions, feel free to contact us.

Supported Models

In this framework, we support the following methods:

Type         | Model Name         | From
Baselines    | TFN                | Tensor-Fusion-Network
Baselines    | MulT (without CTC) | Multimodal-Transformer
Baselines    | MISA               | MISA
Missing-Task | TFR-Net            | TFR-Net

Usage

  • Clone this repo and install requirements.
git clone https://github.com/Columbine21/TFR-Net.git
cd TFR-Net

Data Preprocessing

  1. Download datasets from the following links.
  • MOSI: download from CMU-MultimodalSDK
  • SIMS: download from Baidu Yun Disk [code: mfet] or Google Drive

Notes: Please download the new feature file unaligned_39.pkl from Baidu Yun Disk [code: mfet] or Google Drive, which is compatible with our new code structure. Its md5 checksum is a5b2ed3844200c7fb3b8ddc750b77feb (see the verification sketch after this list).

  2. Download Bert-Base, Chinese from Google-Bert.

  3. Convert the TensorFlow checkpoint into PyTorch format using transformers-cli (see the conversion sketch after this list).

  4. Install Python dependencies.

  5. Organize features and save them as pickle files with the following structure (a sanity-check sketch follows this list).

Notes: unaligned_39.pkl is already compatible with the following structure.

Dataset Feature Structure
{
    "train": {
        "raw_text": [],
        "audio": [],
        "vision": [],
        "id": [], # [video_id$_$clip_id, ..., ...]
        "text": [],
        "text_bert": [],
        "audio_lengths": [],
        "vision_lengths": [],
        "annotations": [],
        "classification_labels": [], # Negative(< 0), Neutral(0), Positive(> 0)
        "regression_labels": []
    },
    "valid": {***}, # same as the "train" 
    "test": {***}, # same as the "train"
}
  6. Modify config/config_regression.py to update dataset paths.
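
The md5 checksum given in step 1 can be verified before training; a minimal sketch using only the Python standard library (the file path is illustrative) is:

import hashlib

def md5sum(path, chunk_size=1 << 20):
    # Read the file in chunks so large feature pickles do not need to fit in memory.
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Expected value taken from the note in step 1; adjust the path to your download location.
assert md5sum("path/to/unaligned_39.pkl") == "a5b2ed3844200c7fb3b8ddc750b77feb"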
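
For step 3, transformers-cli convert is one route; an equivalent Python sketch is shown below, assuming the transformers and tensorflow packages are installed and using illustrative paths into Google's chinese_L-12_H-768_A-12 archive:

from transformers import BertConfig, BertForPreTraining

# Load the original Google BERT config and TensorFlow checkpoint shipped in the archive.
config = BertConfig.from_json_file("chinese_L-12_H-768_A-12/bert_config.json")
model = BertForPreTraining.from_pretrained(
    "chinese_L-12_H-768_A-12/bert_model.ckpt.index",
    from_tf=True,
    config=config,
)

# Writes config.json and pytorch_model.bin; copy vocab.txt alongside them for the tokenizer.
model.save_pretrained("pretrained_berts/bert_cn")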
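
To sanity-check an organized feature file against the structure above, a small inspection sketch (again with an illustrative path) could look like:

import pickle

with open("path/to/unaligned_39.pkl", "rb") as f:
    data = pickle.load(f)

# Per-split keys listed in the "Dataset Feature Structure" above.
expected_keys = {
    "raw_text", "audio", "vision", "id", "text", "text_bert",
    "audio_lengths", "vision_lengths", "annotations",
    "classification_labels", "regression_labels",
}

for split in ("train", "valid", "test"):
    missing = expected_keys - set(data[split].keys())
    n_samples = len(data[split]["regression_labels"])
    print(f"{split}: {n_samples} samples, missing keys: {missing or 'none'}")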

Run

sh test.sh

Paper

Please cite our paper if you find our work useful for your research:

@inproceedings{yu2020ch,
  title={CH-SIMS: A Chinese Multimodal Sentiment Analysis Dataset with Fine-grained Annotation of Modality},
  author={Yu, Wenmeng and Xu, Hua and Meng, Fanyang and Zhu, Yilin and Ma, Yixiao and Wu, Jiele and Zou, Jiyun and Yang, Kaicheng},
  booktitle={Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics},
  pages={3718--3727},
  year={2020}
}
@inproceedings{yuan2021transformer,
  title={Transformer-based Feature Reconstruction Network for Robust Multimodal Sentiment Analysis},
  author={Yuan, Ziqi and Li, Wei and Xu, Hua and Yu, Wenmeng},
  booktitle={Proceedings of the 29th ACM International Conference on Multimedia},
  pages={4400--4407},
  year={2021}
}
