
Learning to Edit: Aligning LLMs with Knowledge Editing (ACL 2024)


We introduce a novel Learning to Edit (LTE) framework for effective and efficient knowledge editing of large language models (LLMs). Inspired by the philosophy of "teach a man to fish," LTE focuses on teaching LLMs to apply updated knowledge to input questions.

As the figure below shows, LTE features a two-phase process: (i) the Alignment Phase, which fine-tunes LLMs on a meticulously curated parallel dataset to make reliable, in-scope edits while preserving out-of-scope information and linguistic proficiency; and (ii) the Inference Phase, which employs a retrieval-based mechanism for real-time and mass knowledge editing.



⚙️ How to implement

Requirements

Note: Please use Python 3.10 for LTE. To get started, simply install conda and run:

conda create -n LTE python=3.10
conda activate LTE
conda install pytorch==2.1.1 torchvision==0.16.1 torchaudio==2.1.1 pytorch-cuda=12.1 -c pytorch -c nvidia
pip install -r requirements.txt
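
Optionally, you can sanity-check that the environment matches the pinned versions before training. A minimal check, assuming you are inside the LTE conda environment:

import torch
print(torch.__version__)              # expect 2.1.1
print(torch.cuda.is_available())      # should be True on a machine with CUDA 12.1 drivers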

1. Alignment Phase

First, please download the LTE training data from HuggingFace and place it in data/.
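
If you prefer to fetch the data programmatically, a minimal sketch with huggingface_hub is shown below; the repo_id is a placeholder, so substitute the dataset id linked above:

from huggingface_hub import snapshot_download

# Download the LTE training data into data/ .
# NOTE: "YJiangcm/LTE-train-data" is a hypothetical id; use the dataset linked in this README.
snapshot_download(
    repo_id="YJiangcm/LTE-train-data",
    repo_type="dataset",
    local_dir="data",
)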

LLaMA2-Chat-7B

The code is based on FastChat. Standard fine-tuning was conducted on 4×A100 GPUs (80G) for about 9 hours.

cd LTE/
bash FastChat/ft_train.sh

To reduce the total memory footprint, LTE also supports LoRA, which fine-tunes low-rank adapters on the query, key, and value projections (a configuration sketch follows the command below).

cd LTE/
bash FastChat/lora_train.sh
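
For reference, the sketch below shows what a LoRA configuration targeting the query, key, and value projections typically looks like with the PEFT library; the rank, alpha, and dropout values are illustrative and may differ from those set in FastChat/lora_train.sh:

from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Illustrative hyperparameters only; the actual values are defined in FastChat/lora_train.sh.
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # only the low-rank adapters are trainable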

Qwen-Chat-7B

The code is based on Qwen. Standard fine-tuning was conducted on 4×A100 GPUs (80G) for about 9 hours.

cd LTE/
bash Qwen/finetune/finetune_ds.sh

To reduce the total memory footprint, LTE also supports LoRA, which fine-tunes low-rank adapters on the query, key, and value projections.

cd LTE/
bash Qwen/finetune/finetune_lora_single_gpu.sh

2. Inference Phase

The evaluation of our proposed LTE is based on EasyEdit. Please download multi-qa-mpnet-base-dot-v1 and place it under "LTE/SeqEdit/multi-qa-mpnet-base-dot-v1".
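
To illustrate the retrieval-based mechanism, the sketch below uses sentence-transformers with multi-qa-mpnet-base-dot-v1 to pick the most relevant stored edit for a query and prepend it to the prompt; the example edits and the prompt template are hypothetical, and the actual logic lives in the EasyEdit/ and SeqEdit/ scripts:

from sentence_transformers import SentenceTransformer, util

# Load the retriever from the local path given above.
retriever = SentenceTransformer("SeqEdit/multi-qa-mpnet-base-dot-v1")

# A toy edit memory of natural-language edit descriptors (hypothetical examples).
edits = [
    "The capital of Australia is Canberra.",
    "Lionel Messi plays for Inter Miami.",
]
query = "Which club does Lionel Messi play for?"

edit_emb = retriever.encode(edits, convert_to_tensor=True)
query_emb = retriever.encode(query, convert_to_tensor=True)

# multi-qa-mpnet-base-dot-v1 is trained for dot-product similarity.
scores = util.dot_score(query_emb, edit_emb)[0]
best_edit = edits[int(scores.argmax())]

# Prepend the retrieved edit so the aligned model can apply it; the real prompt
# template is defined in the repo's inference scripts.
prompt = f"[Updated Information]\n{best_edit}\n\n[Query]\n{query}"
print(prompt)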

Please run the following commands for the LLaMA2-Chat-7B experiments:

cd LTE/
bash EasyEdit/run_lte_llama.sh
bash SeqEdit/run_lte_llama.sh

Please run the following commands for the Qwen-Chat-7B experiments:

cd LTE/
bash EasyEdit/run_lte_qwen.sh
bash SeqEdit/run_lte_qwen.sh

📝 Citation

Please cite our paper if you use the data or code in this repo.

@inproceedings{jiang-etal-2024-learning,
    title = "Learning to Edit: Aligning {LLM}s with Knowledge Editing",
    author = "Jiang, Yuxin  and
      Wang, Yufei  and
      Wu, Chuhan  and
      Zhong, Wanjun  and
      Zeng, Xingshan  and
      Gao, Jiahui  and
      Li, Liangyou  and
      Jiang, Xin  and
      Shang, Lifeng  and
      Tang, Ruiming  and
      Liu, Qun  and
      Wang, Wei",
    editor = "Ku, Lun-Wei  and
      Martins, Andre  and
      Srikumar, Vivek",
    booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = aug,
    year = "2024",
    address = "Bangkok, Thailand",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.acl-long.258",
    pages = "4689--4705",
}