arxiv:2401.16380

Rephrasing the Web: A Recipe for Compute and Data-Efficient Language Modeling

Published on Jan 29
· Submitted by akhaliq on Jan 30
#3 Paper of the day
Authors:
He Bai, et al.

Abstract

Large language models are trained on massive scrapes of the web, which are often unstructured, noisy, and poorly phrased. Current scaling laws show that learning from such data requires an abundance of both compute and data, which grows with the size of the model being trained. This is infeasible both because of the large compute costs and duration associated with pre-training, and the impending scarcity of high-quality data on the web. In this work, we propose Web Rephrase Augmented Pre-training (WRAP) that uses an off-the-shelf instruction-tuned model prompted to paraphrase documents on the web in specific styles such as "like Wikipedia" or in "question-answer format" to jointly pre-train LLMs on real and synthetic rephrases. First, we show that using WRAP on the C4 dataset, which is naturally noisy, speeds up pre-training by ~3x. At the same pre-training compute budget, it improves perplexity by more than 10% on average across different subsets of the Pile, and improves zero-shot question answer accuracy across 13 tasks by more than 2%. Second, we investigate the impact of the re-phrasing style on the performance of the model, offering insights into how the composition of the training data can impact the performance of LLMs in OOD settings. Our gains are attributed to the fact that re-phrased synthetic data has higher utility than just real data because it (i) incorporates style diversity that closely reflects downstream evaluation style, and (ii) has higher 'quality' than web-scraped data.
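To make the recipe in the abstract concrete, below is a minimal Python sketch of a WRAP-style data pipeline: an off-the-shelf instruction-tuned model is prompted to rephrase a noisy web document in a target style (e.g. Wikipedia-like or question-answer), and the synthetic rephrases are mixed with the real text for pre-training. The model name, prompt wording, 1:1 mixing ratio, and the `rephrase`/`build_training_mix` helpers are illustrative assumptions, not the paper's exact setup.

```python
# Minimal sketch of the WRAP idea described in the abstract: prompt an
# off-the-shelf instruction-tuned model to rephrase noisy web text in a
# target style, then mix real documents with their synthetic rephrases.
# Model choice, prompt wording, and the 1:1 mixing ratio are assumptions.
from transformers import pipeline

rephraser = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.2",  # any instruction-tuned model
)

STYLE_PROMPTS = {
    "wikipedia": "Rewrite the following text in a high-quality, Wikipedia-like style:\n\n{doc}",
    "qa": "Convert the following text into a question-answer format:\n\n{doc}",
}

def rephrase(doc: str, style: str = "wikipedia") -> str:
    """Generate one synthetic rephrasing of a web document."""
    prompt = STYLE_PROMPTS[style].format(doc=doc)
    out = rephraser(prompt, max_new_tokens=512, do_sample=False)
    # The text-generation pipeline returns the prompt plus the completion;
    # keep only the newly generated rephrasing.
    return out[0]["generated_text"][len(prompt):].strip()

def build_training_mix(real_docs):
    """Interleave real documents with synthetic rephrases (1:1 here)."""
    mixed = []
    for doc in real_docs:
        mixed.append(doc)                          # keep the real web text
        mixed.append(rephrase(doc, "wikipedia"))   # add a synthetic rephrase
    return mixed
```

The key point of the recipe is that the rephraser only restyles existing content rather than generating new facts, which is why the mixed corpus retains the information of the original web scrape while gaining cleaner, more evaluation-like phrasing.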

Community

I love to see research like this, especially given how it used less compute. Intuition would suggest higher compute due to the heavy pre-pre-processing. (Can I call it that? 😂)

Been doing this for months. Highly effective. It's 10x more effective when reinforced with self-note tokens containing deductive logic, which I personally find to be more effective on its own, since you can retain the original salient representations in the source training samples. We all know OSS models' writing styles are pretty trash and repetitive. Great research though. The era of hybrid/synthetic data has FULLY arrived.

Isn't this just a roundabout way of distilling the LLM used to rephrase the data?


See Connected Papers for this paper: explore the graph of upstream and downstream papers and interact with it visually.

https://www.connectedpapers.com/main/2905dc5ad70b462f4f5543df3047dffadb5c0e4e/Rephrasing-the-Web:-A-Recipe-for-Compute-and-Data-Efficient-Language-Modeling/graph


Unlocking Faster AI: How WRAP Transforms Language Models with Synthetic Data!


By Arxflix


