Accepted to NAACL 2022, Workshop on Gender Bias in Natural Language Processing
👉 arxiv paper 👈
We freeze the weights of a pre-trained BERT and fine-prune it with a gender-debiasing loss. Only the pruning scores are optimized -- they act as a gate on BERT's weights. We use block movement pruning.
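For intuition, here is a minimal PyTorch sketch of this gating scheme. It is not the repository's implementation (which relies on nn_pruning); the class name, the plain sigmoid gate, and the zero-initialized scores are our own simplifications. The pre-trained weights are frozen, and one trainable score per block is expanded into a mask over the weight matrix.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BlockGatedLinear(nn.Module):
    """Sketch: a frozen linear layer gated by trainable block-level pruning scores."""

    def __init__(self, linear: nn.Linear, block_size: int = 32):
        super().__init__()
        self.linear = linear
        # Freeze the pre-trained weights; only the scores receive gradients.
        for p in self.linear.parameters():
            p.requires_grad = False
        out_f, in_f = linear.weight.shape
        assert out_f % block_size == 0 and in_f % block_size == 0
        self.block_size = block_size
        # One score per (block_size x block_size) block of the weight matrix.
        self.scores = nn.Parameter(torch.zeros(out_f // block_size, in_f // block_size))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Soft gate in [0, 1]; the real implementation uses nn_pruning's
        # thresholds and schedules rather than a plain sigmoid.
        gate = torch.sigmoid(self.scores)
        mask = gate.repeat_interleave(self.block_size, 0).repeat_interleave(self.block_size, 1)
        return F.linear(x, self.linear.weight * mask, self.linear.bias)
```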
Setup
conda env create -f envs/pruning-bias.yaml
conda activate debias
pip uninstall nn_pruning
pip install git+https://github.com/[anonymized]/nn_pruning.git@automodel
Block pruning
python run.py --multirun \
experiment=debias_block_pruning_frozen \
model.embedding_layer=last,all \
model.debias_mode=sentence,token \
prune_block_size=32,64
Pruning entire heads
python run.py --multirun \
experiment=debias_head_pruning_frozen_values_only \
model.embedding_layer=last,all \
model.debias_mode=sentence,token
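In the head-pruning variant, the gate operates at the granularity of whole attention heads rather than weight blocks. A hedged sketch, assuming one trainable score per head applied to the output of the (frozen) value projection; the function name and tensor layout are ours, not the repo's:

```python
import torch

def gate_value_heads(value_out: torch.Tensor, head_scores: torch.Tensor,
                     num_heads: int) -> torch.Tensor:
    """Gate the frozen value-projection output with one trainable score per head.

    value_out: (batch, seq_len, hidden) with hidden = num_heads * head_dim.
    head_scores: (num_heads,) trainable scores; a head with a gate near 0 is pruned.
    """
    b, s, hidden = value_out.shape
    head_dim = hidden // num_heads
    gate = torch.sigmoid(head_scores)               # (num_heads,) in [0, 1]
    v = value_out.view(b, s, num_heads, head_dim)   # split hidden dim into heads
    v = v * gate.view(1, 1, num_heads, 1)           # scale each head by its gate
    return v.view(b, s, hidden)
```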
Debiasing only
python run.py --multirun \
model.embedding_layer=first,last,all,intermediate \
model.debias_mode=sentence,token
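All of these runs optimize a Kaneko & Bollegala (2021)-style debiasing objective; `model.embedding_layer` chooses which layers' embeddings enter the loss and `model.debias_mode` whether only the target token or the whole sentence is debiased. A rough, hedged sketch of such an objective, with our own function and argument names:

```python
import torch

def debias_loss(target_embs: torch.Tensor, attribute_vec: torch.Tensor,
                orig_embs: torch.Tensor, reg_weight: float = 1.0) -> torch.Tensor:
    """Sketch of a contextual-embedding debiasing objective.

    target_embs : (n_layers, n_tokens, hidden) contextual embeddings to debias
                  (one token in `token` mode, all tokens in `sentence` mode;
                  one layer for `first`/`last`, several for `all`/`intermediate`).
    attribute_vec: (hidden,) fixed embedding of the protected-attribute word.
    orig_embs   : same shape as target_embs, from the frozen pre-trained model.
    """
    # Push contextual embeddings to be orthogonal to the attribute direction.
    bias_term = (target_embs @ attribute_vec).pow(2).sum()
    # Regularize towards the original embeddings so semantics are preserved.
    reg_term = (target_embs - orig_embs).pow(2).sum()
    return bias_term + reg_weight * reg_term
```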
- The first run will download, process, and cache datasets.
- By default, debiasing will run on a single GPU. For more options, see configs.
- This project uses Hydra for config management and PyTorch Lightning for training loops.
- All experiments are defined in configs/experiment/
- We use run_glue.py to evaluate on GLUE. To evaluate pruned models, we manually load the pruning-score state dicts (see the sketch below).
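A hedged sketch of that manual loading step, assuming the model has already been patched so that its modules expose pruning-score parameters, and that the checkpoint stores only those score tensors (function name and checkpoint layout are placeholders):

```python
import torch
from torch import nn

def load_pruning_scores(patched_model: nn.Module, ckpt_path: str) -> nn.Module:
    """Restore saved pruning scores into an already-patched model before GLUE eval."""
    scores = torch.load(ckpt_path, map_location="cpu")
    # strict=False leaves the frozen pre-trained weights untouched and only
    # fills in the pruning-score entries present in the checkpoint.
    missing, unexpected = patched_model.load_state_dict(scores, strict=False)
    assert not unexpected, "every score key should exist in the patched model"
    return patched_model
```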
- Block pruning:
@article{Lagunas2021BlockPF,
title={Block Pruning For Faster Transformers},
author={François Lagunas and Ella Charlaix and Victor Sanh and Alexander M. Rush},
journal={ArXiv},
year={2021},
volume={abs/2109.04838}
}
- The original debiasing idea:
@inproceedings{kaneko-bollegala-2021-context,
title={Debiasing Pre-trained Contextualised Embeddings},
author={Masahiro Kaneko and Danushka Bollegala},
booktitle={Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics (EACL)},
year={2021}
}
- Hydra + PyTorch Lightning template by ashleve.