We propose an approach for zero- and few-shot re-ranking of multi-hop document paths for open-domain QA. PromptRank constructs a prompt consisting of (i) an instruction and (ii) the path, and uses an LLM to score each path by the probability of generating the question given the prompt.
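As a rough illustration of this scoring step, the sketch below (an assumed interface using the HuggingFace transformers API, not the repository's exact code) treats the sum of question-token log-probabilities under the prompt as the path score.

```python
# Minimal sketch of PromptRank-style path scoring; assumes the HuggingFace
# transformers API and is not the repository's exact implementation.
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("google/t5-base-lm-adapt")
model = AutoModelForSeq2SeqLM.from_pretrained("google/t5-base-lm-adapt")
model.eval()

def score_path(prompt: str, question: str) -> float:
    """Return log P(question | prompt): the sum of question-token log-probabilities."""
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True, max_length=600)
    labels = tokenizer(question, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(**inputs, labels=labels)
    # out.loss is the mean negative log-likelihood per question token;
    # multiply by the number of tokens to recover the total log-probability.
    return -out.loss.item() * labels.size(1)
```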
Download the TF-IDF retriever and database for HotpotQA provided by PathRetriever from this link and place its contents in path-retriever/models
pip install -r requirements.txt
The processed HotpotQA and 2WikiMQA data can be downloaded from [Google Drive](https://drive.google.com/file/d/1mI7XAdHWLhlW6fMOW3LJQMPipSmlnP67/view?usp=share_link). The data is preprocessed by retrieving the top 200 TF-IDF articles per question to speed up inference. Unzip the archive and place its contents in data/
python run.py \
--model google/t5-base-lm-adapt \
--eval_batch_size=50 \
--max_prompt_len 600 \
--max_doc_len 230 \
--tfidf_pool_size 100 \
--n_eval_examples 1000 \
--temperature 1.0 \
--eval_data data/hotpotqa/dev.json \
--prompt_template 'Document: <P> Review previous documents and ask some question. Question:'
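Here <P> marks where the concatenated path documents are inserted into the template. The following sketch uses a hypothetical build_prompt helper; the character-level truncation is only for illustration (the repository's --max_doc_len presumably operates on tokens).

```python
# Hypothetical helper illustrating how a prompt could be built from the template.
def build_prompt(template: str, path_docs: list, max_doc_len: int = 230) -> str:
    # Truncate each document (character-level here, for illustration only)
    # and join the path into one passage that replaces the <P> placeholder.
    passage = " ".join(doc[:max_doc_len] for doc in path_docs)
    return template.replace("<P>", passage)

template = "Document: <P> Review previous documents and ask some question. Question:"
prompt = build_prompt(template, ["First-hop article text ...", "Second-hop article text ..."])
# score_path(prompt, question) from the sketch above then ranks candidate paths.
```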
python run.py \
--model google/t5-base-lm-adapt \
--eval_batch_size=50 \
--max_prompt_len 600 \
--max_doc_len 230 \
--tfidf_pool_size 100 \
--n_eval_examples 1000 \
--temperature 1.0 \
--eval_data data/hotpotqa/dev.json \
--instruction_template_file instruction-templates/top_instructions.txt \
--ensemble_prompts
This uses the top 10 instructions found over HotpotQA, listed in instruction-templates/top_instructions.txt (a sketch of how their scores are combined follows the list):
Document: <P> Review previous documents and ask some question. Question:
Document: <P> Review the previous documents and answer question. Question:
Document: <P> Read the previous documents and write the following question. Question:
Document: <P> Search previous documents and ask the question. Question:
To analyze the documents and ask question. Document: <P> Question:
Document: <P> To read the previous documents and write a question. Question:
Document: <P> Read previous documents and write your exam question. Question:
Document: <P> Read the previous documents and ask this question. Question:
Read two documents and answer a question. Document: <P> Question:
Identify all documents and ask question. Document: <P> Question:
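A minimal sketch of how these instructions could be ensembled, assuming the per-instruction scores are simply averaged (the actual combination rule is implemented in run.py):

```python
# Sketch of instruction ensembling (assumed: average the per-instruction scores).
def ensemble_score(templates, path_docs, question):
    scores = [
        score_path(build_prompt(t, path_docs), question)  # helpers from the sketches above
        for t in templates
    ]
    return sum(scores) / len(scores)

with open("instruction-templates/top_instructions.txt") as f:
    templates = [line.strip() for line in f if line.strip()]
```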
4.(c) To run PromptRank with in-context learning (only two demonstrations: one bridge and one yes/no question)
python run.py \
--model google/t5-base-lm-adapt \
--eval_batch_size=50 \
--max_prompt_len 600 \
--max_doc_len 230 \
--tfidf_pool_size 100 \
--n_eval_examples 1000 \
--temperature 1.0 \
--eval_data data/hotpotqa/dev.json \
--prompt_template 'Document: <P> Review previous documents and ask some question. Question:' \
--demos_ids 0,1 \
--demos_file data/hotpotqa/in_context_demos.json
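The demonstrations are prepended to the test prompt. A rough sketch, assuming each demonstration is a (documents, question) pair; the real format of in_context_demos.json may differ:

```python
# Hypothetical sketch of building an in-context prompt from a few demonstrations.
def build_icl_prompt(template, demos, path_docs):
    # Each demo is rendered with the same template, followed by its gold question;
    # the test path comes last, leaving the question for the model to score.
    parts = [build_prompt(template, d_docs) + " " + d_question for d_docs, d_question in demos]
    parts.append(build_prompt(template, path_docs))
    return " ".join(parts)
```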
4.(d) To run PromptRank with in-context learning and demonstration ensembling (3 ensembles, each with two demonstrations, i.e., 6 demonstrations in total)
python run.py \
--model google/t5-base-lm-adapt \
--eval_batch_size=50 \
--max_prompt_len 600 \
--max_doc_len 230 \
--tfidf_pool_size 100 \
--n_eval_examples 1000 \
--temperature 1.0 \
--eval_data data/hotpotqa/dev.json \
--prompt_template 'Document: <P> Review previous documents and ask some question. Question:' \
--demos_ids 0,1 \
--demos_file data/hotpotqa/in_context_demos.json \
--n_ensemble_demos 3
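Demonstration ensembling scores the same path under several demonstration sets and combines the results. A sketch under the same assumptions as above (simple averaging; run.py defines the actual rule):

```python
# Sketch of demonstration ensembling: score the same path under several demo sets.
def demo_ensemble_score(template, demo_sets, path_docs, question):
    scores = [
        score_path(build_icl_prompt(template, demos, path_docs), question)
        for demos in demo_sets  # e.g. 3 sets of two demonstrations each
    ]
    return sum(scores) / len(scores)
```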
Note: The code supports either instruction ensembling or demonstration ensembling -- not both.
We reuse many components from PathRetriever; thanks to Akari Asai and her collaborators for providing their code and models.
If you use this code, please consider citing our paper:
@article{promptrank,
  title={Few-shot Reranking for Multi-hop QA via Language Model Prompting},
  author={Khalifa, Muhammad and Logeswaran, Lajanugen and Lee, Moontae and Lee, Honglak and Wang, Lu},
  journal={arXiv preprint arXiv:2205.12650},
  year={2023}
}