Nous Research

The AI Accelerator Company.

About us

Nous Research is an applied research group focused on LLM Architecture, Data Synthesis, and Local Inference. We will soon launch Nous-Forge, a composer for AI orchestration.

Website
https://nousresearch.com/
Industry
Research Services
Company size
2-10 employees
Type
Privately Held
Founded
2023

Updates

  • Nous Research

    1,902 followers

    Introducing a new open dataset release: Hermes Function Calling V1, the datamix that gave Hermes 2 Pro its tool-use and structured-output capabilities. HuggingFace repo: https://lnkd.in/ex_smRtv

    The dataset includes single- and multi-turn Function Calling and Structured JSON Output datasets, plus an updated version of Glaive AI's function-calling dataset — perfect for training LLMs to be better agents! Also check out our Hermes Function Calling repo for more information on the format and how to use models trained with this data: https://lnkd.in/eqphH6fA For help or questions, join our Discord: https://lnkd.in/gnp7JnuQ

    This work is a culmination of the contributions of interstellar ninja, Teknium, Glaive AI, Theodore Galanos and many others who provided assistance along the way.

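As a rough illustration of what tool-use training data of this kind looks like, here is a hypothetical single-turn sample; the field names and tool (`get_weather`) are invented for illustration and are not the dataset's exact schema — see the linked repo for the real format.

```python
import json

# Hypothetical function-calling sample, loosely in the spirit of the
# dataset described above. Field names and the tool are illustrative only.
sample = {
    "tools": [{
        "name": "get_weather",  # hypothetical tool, not from the dataset
        "description": "Look up current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    }],
    "messages": [
        {"role": "user", "content": "What's the weather in Paris?"},
        {"role": "assistant",
         "tool_call": {"name": "get_weather",
                       "arguments": {"city": "Paris"}}},
    ],
}

# A model trained on such data learns to emit the call as structured JSON
# rather than free text, so a runtime can parse and execute it.
call = sample["messages"][-1]["tool_call"]
print(json.dumps(call))
```

The point of the structured-output half of the datamix is exactly this: the assistant turn is machine-parseable JSON, not prose.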
  • Nous Research

    What if you could use all the computing power in the world to train a shared, open source AI model? The preliminary report: https://lnkd.in/eXpXUv-M

    Nous Research is proud to release a preliminary report on DisTrO (Distributed Training Over-the-Internet), a family of architecture-agnostic and network-agnostic distributed optimizers that reduce inter-GPU communication requirements by 1,000x to 10,000x without relying on amortized analysis, and match AdamW All-Reduce in convergence rates. This enables low-latency training of large neural networks over slow internet bandwidths with heterogeneous networking hardware.

    DisTrO can increase the resilience and robustness of LLM training by minimizing dependency on any single entity for computation, and is one step towards a more secure and equitable environment for all participants involved in building LLMs. Without relying on a single company to manage and control the training process, researchers and institutions have more freedom to collaborate and experiment with new techniques, algorithms, and models. This increased competition fosters innovation, drives progress, and ultimately benefits society as a whole.

    This research is thanks to the hard work of Bowen Peng, Jeffrey Quesnelle, Ari Lotter and Umer Adil. We invite researchers interested in exploring this area to join us in our quest!

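To give a feel for where orders-of-magnitude communication savings can come from, here is a classic top-k gradient sparsification sketch. This is NOT DisTrO's actual optimizer (the linked report describes its own method); it only illustrates how transmitting a tiny fraction of an update shrinks inter-node traffic.

```python
import numpy as np

def topk_compress(grad: np.ndarray, k: int):
    """Keep only the k largest-magnitude entries; return (indices, values)."""
    flat = grad.ravel()
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    return idx, flat[idx]

def topk_decompress(idx, vals, shape):
    """Rebuild a dense (mostly zero) update from the sparse message."""
    out = np.zeros(int(np.prod(shape)), dtype=vals.dtype)
    out[idx] = vals
    return out.reshape(shape)

rng = np.random.default_rng(0)
grad = rng.normal(size=(1000, 1000))     # a 1M-parameter layer's gradient
idx, vals = topk_compress(grad, k=1000)  # transmit only 0.1% of entries

# Value payload shrinks 1000x (index overhead ignored for simplicity).
print(grad.size / vals.size)             # → 1000.0
```

Real systems pair such compression with error feedback to preserve convergence; DisTrO's claim of matching AdamW All-Reduce convergence at 1,000x-10,000x less communication is what distinguishes it from this naive sketch.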
  • Nous Research reposted this

    Lambda

    19,514 followers

    🔥 Meet Hermes 3: The First Full-Parameter Fine-Tune of Llama 3.1! 🧙‍♂️ ✍️ We’re excited to introduce Hermes 3, the first full-parameter fine-tune of Llama 3.1 405B, brought to you by Nous Research and trained on a Lambda 1-Click Cluster. From strategic decision-making to complex creative writing, role-playing, and agent building, Hermes 3 excels across the board. Try your prompts immediately at https://lambda.chat/ or use our Chat Completions API, for free.

  • Nous Research reposted this

    Mitesh Agrawal

    Head of Cloud/COO, Lambda

    The first Llama 3.1 405B fine-tuning was just completed by Nous Research on a Lambda 1-Click Cluster. The outcome is Hermes 3, an unlocked and uncensored instruct model, which (a) follows user directions better and (b) scores as well as or better than the base model on benchmarks, even after quantization. That enables it to be served on a single 8x H100 node, for which we have on-demand availability at an amazing $2.99 / GPU / hour. Anyone can prompt Hermes 3 for free at https://lnkd.in/dZAMPWcb and anyone can use the Lambda Completions API for free: https://lnkd.in/dCM_YEkQ We have a blog post out to bring our story together: https://lnkd.in/dvAf6RTT
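A quick back-of-envelope check of why quantization is what makes single-node serving of a 405B-parameter model feasible. Assumptions (mine, not from the post): roughly 1 byte per weight after quantization, 80 GB of memory per H100, and serving overheads such as the KV cache ignored.

```python
# Weight-memory footprint of a 405B-parameter model, before and after
# quantizing from 16-bit to ~8-bit weights.
params = 405e9
node_gb = 8 * 80                      # one 8x H100 node: 640 GB total

fp16_gb = params * 2 / 1e9            # 810 GB: does NOT fit on one node
int8_gb = params * 1 / 1e9            # 405 GB: fits, with headroom for KV cache

print(fp16_gb > node_gb, int8_gb < node_gb)  # → True True
```

In 16-bit precision the weights alone exceed the node's 640 GB; at ~8 bits they drop to ~405 GB, which is consistent with the post's claim that the quantized model can be served from a single 8x H100 node.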

  • Nous Research reposted this

    Yaqi Zhang

    Author, Influencer, Founder, Born to be Global, Strategic Content Development, Microsoft MVP, Speaker at ApacheCon/PKU/UPenn, beginner in Arabic, Super-Connector, passionate about observing new technologies in the world

    5 months ago, Andreessen Horowitz (a16z) announced its support for the Open Source AI Initiative, which includes Nous Research and a friend of mine. Half a month ago, Nous Research announced the closing of a $5.2 million seed financing round. Today, Nous Research is announcing its latest project: a new evaluation system for open source models.

    Traditional benchmarking leans heavily on public datasets, which can be easy to game and often lead to superficial score improvements that mask true model capabilities (or lack thereof), misdirecting effort towards meaningless score chasing. To address this, Nous Research built a subnet on Bittensor, a decentralized network for AI projects. The system lets people submit their fine-tuned models, and validators then evaluate all submitted models against freshly generated data from the Cortex subnet. The Cortex subnet is a dynamic source of fresh synthetic data, continuously generated by GPT-4, which provides fresh, unpredictable, high-quality data to test models against and ensures a fair and accurate consensus on which model most closely mirrors GPT-4's performance. Through the incentive structures built into Bittensor, this new system aims to celebrate and reward creators of public, open source models that genuinely meet user needs.

    Learn more on the following pages:
    Nous Subnet Leaderboard: https://lnkd.in/gHjM6Equ
    Nous Subnet Repository: https://lnkd.in/gSzcWVTH
    Bittensor: https://bittensor.com
    Cortex Subnet Repository: https://lnkd.in/guBmfuNu

    #ai #artificialintelligence #opensource #gpt #decentralized

  • Nous Research reposted this

    Maxime Labonne

    Head of Post-Training @ Liquid AI

    🎉 NousResearch just released Nous-Hermes-2-Mixtral-8x7B 🏆 Potentially the best open-source LLM: high-quality merges incoming! 🥇 First fine-tuned version of Mixtral 8x7B that outperforms the official Mixtral Instruct 📅 Trained on >1 million samples generated by GPT-4 and other open-source datasets. They released SFT and DPO versions of this model, plus GGUF versions. Impressive work; true MoEs like Mixtral are tricky to fine-tune. This is going to be a very popular model. 🤗 SFT: https://lnkd.in/g5ZT7B7b 🤗 DPO: https://lnkd.in/g8hPubpz

  • Nous Research

    Nous Research is excited to announce the closing of our $5.2 million seed financing round. We're proud to work with passionate, high-integrity partners who made this round possible, including co-leads Distributed Global and OSS Capital, with participation from Vipul Ved Reddy, founder and CEO of Together AI; Yonatan Ben Shimon, founder of Matchbox DAO; and several angel investors, including Balaji Srinivasan, entrepreneur and investor; Thibaud Zamora, entrepreneur and investor; Alex Atallah, founder of OpenRouter and OpenSea; Chris Prucha, investor and founder of Notion; Sahil Chaudhary, founder and CEO of Glaive AI; and Gavin Uberti, founder and CEO of Etched AI.

    Nous has historically been a volunteer project; with this investment, we can empower a small group of our most dedicated members to join us in bringing Nous-Forge, a composer for AI orchestration, to all in 2024!

    The importance of pursuing open source AI today cannot be overstated. Nous will continue open-source research in LLM Architecture, Data Synthesis, Simulation, & Agent Engineering, amongst other areas. We welcome your ideas and experiments, always (no matter how out-of-the-box they may be). Let's talk: https://lnkd.in/eQ3ZvJm4

    Join the Nous Research Discord Server!


  • Nous Research

    We’re extremely proud to contribute to post-transformer architecture research with the collaborative release of the StripedHyena family of models alongside Together AI.

    Together AI

    31,863 followers

    Announcing StripedHyena 7B, an open source model using an architecture that goes beyond Transformers, achieving faster performance and longer context. StripedHyena builds on the many lessons learned in the past year designing efficient sequence-modeling architectures. This release includes StripedHyena-Hessian-7B (SH 7B), a base model, and StripedHyena-Nous-7B (SH-N 7B), a chat model. StripedHyena 7B is a hybrid architecture based on our latest research on the scaling laws of efficient architectures inspired by signal processing. Full details in our blog post! https://lnkd.in/eGWQmXVs Both models are available to try now on the Together API! https://lnkd.in/enbeFqdM This work would not have been possible without our collaborators: Hazy Research, Nous Research, and hessian.AI

