arxiv:2402.19427

Griffin: Mixing Gated Linear Recurrences with Local Attention for Efficient Language Models

Published on Feb 29
· Submitted by akhaliq on Mar 1
#2 Paper of the day

Abstract

Recurrent neural networks (RNNs) have fast inference and scale efficiently on long sequences, but they are difficult to train and hard to scale. We propose Hawk, an RNN with gated linear recurrences, and Griffin, a hybrid model that mixes gated linear recurrences with local attention. Hawk exceeds the reported performance of Mamba on downstream tasks, while Griffin matches the performance of Llama-2 despite being trained on over 6 times fewer tokens. We also show that Griffin can extrapolate on sequences significantly longer than those seen during training. Our models match the hardware efficiency of Transformers during training, and during inference they have lower latency and significantly higher throughput. We scale Griffin up to 14B parameters, and explain how to shard our models for efficient distributed training.
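
Not part of the paper page, but as a rough illustration of the abstract's core idea: the sketch below shows a minimal gated linear recurrence layer in PyTorch. The class name `GatedLinearRecurrence` and the exact gating form are assumptions for illustration, not the paper's actual parameterization; Griffin additionally interleaves blocks like this with local (sliding-window) attention, which is not shown here.

```python
import torch
import torch.nn as nn


class GatedLinearRecurrence(nn.Module):
    """Illustrative sketch (not the paper's exact layer):
    h_t = a_t * h_{t-1} + (1 - a_t) * (i_t * x_t), with input-dependent gates."""

    def __init__(self, dim: int):
        super().__init__()
        self.recurrence_gate = nn.Linear(dim, dim)  # produces the per-step decay a_t
        self.input_gate = nn.Linear(dim, dim)       # produces the input gate i_t

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, dim)
        a = torch.sigmoid(self.recurrence_gate(x))  # decay in (0, 1), set per token
        i = torch.sigmoid(self.input_gate(x))
        h = torch.zeros_like(x[:, 0])               # recurrent state: (batch, dim)
        outputs = []
        for t in range(x.shape[1]):                 # naive sequential scan for clarity
            h = a[:, t] * h + (1 - a[:, t]) * (i[:, t] * x[:, t])
            outputs.append(h)
        return torch.stack(outputs, dim=1)


# At inference the state h is a fixed-size vector per sequence, which is why such
# layers decode quickly and scale to long sequences, unlike a growing attention cache.
x = torch.randn(2, 16, 64)
y = GatedLinearRecurrence(64)(x)  # shape: (2, 16, 64)
```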

Community

This is awesome. New architecture - new possibilities!

And are these architectures more optimised for TPUs than GPUs?

And are you going to release a comparison of Griffin 14B with Mixtral, which is almost a 13B model (2×7B MoE), though trained on far more than 300B tokens?

And why did you select Llama rather than Mistral 7B for comparison? Maybe because there is no information about how many tokens it was trained on?


Not an author or anything, but yeah, they use Llama because they can make a closer comparison with it (most papers do this, especially when they don't train a model fully).


Will these models and the code be open-sourced?

Hawk & Griffin: Revolutionizing Language Models with Efficient Architecture

Links 🔗:

👉 Subscribe: https://www.youtube.com/@Arxflix
👉 Twitter: https://x.com/arxflix
👉 LMNT (Partner): https://lmnt.com/

By Arxflix

Models citing this paper 11

Datasets citing this paper 0

Spaces citing this paper 4

Collections including this paper 25