SylphAI

Software Development

Mountain View, California 6,887 followers

Conversational people search engine

About us

Gaia, the copilot for people search. To start, we help founders automate investor outreach. Discord community: https://discord.gg/ezzszrRZvT

Website
https://sylph.ai/
Industry
Software Development
Company size
2-10 employees
Headquarters
Mountain View, California
Type
Privately Held
Founded
2023

Locations

Employees at SylphAI

Updates

  • SylphAI reposted this

    View profile for Li Yin

    Author of AdalFlow | AI researcher | x MetaAI

    Explore the free Llama 3.1 with the AdalFlow library on ChatBot, RAG, and ReAct Agent in a single notebook. Meta has released three models: 8B, 70B, and 405B. The 8B model is for efficient deployment and development on consumer-size GPUs, the 70B model is for large-scale AI-native applications, and the 405B model is for synthetic data, LLM-as-a-Judge, or distillation. The 70B and 405B models perform on par with GPT-4 and GPT-4o.

    Notebook scope:
    - Models: We will use Ollama, and Groq if you have an API key.
    - Use Cases: We will create a simple chatbot, a RAG pipeline, and a ReAct Agent.

    Agents require more reasoning capability, and we have observed that the new llama3.1-8b model breaks prompts that were well crafted for llama3-8b. This is the most frustrating part of working on LLM applications. AdalFlow is actively working on our optimizer to smooth this prompt-adaptation process. https://lnkd.in/gwgrmnwh #adalflow #artificialintelligence #machinelearning #llms
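For readers who want to poke at Llama 3.1 locally before opening the notebook, here is a minimal sketch of calling a model through Ollama's HTTP generate endpoint, plain stdlib with no AdalFlow; the model tag `llama3.1:8b` and a locally running Ollama server are assumptions.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint


def build_payload(prompt: str, model: str = "llama3.1:8b") -> dict:
    """Build a non-streaming generate request body for Ollama."""
    return {"model": model, "prompt": prompt, "stream": False}


def generate(prompt: str, model: str = "llama3.1:8b") -> str:
    """Send the request and return the model's text (requires a running Ollama server)."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(prompt, model)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Swapping `model` for `"llama3.1:70b"` is the only change needed to test a larger checkpoint, assuming you have pulled it with `ollama pull`.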

    Google Colab

    colab.research.google.com

  • SylphAI reposted this

    View profile for Li Yin

    Author of AdalFlow | AI researcher | x MetaAI

    Three Tips on Building LLM Applications

    1. 𝐀𝐥𝐰𝐚𝐲𝐬 𝐊𝐧𝐨𝐰 𝐭𝐡𝐞 𝐏𝐫𝐨𝐦𝐩𝐭
    The prompt, whether manually created or auto-generated, is the most straightforward way to understand any LLM application. Unfortunately, many libraries and APIs are becoming high-level without an easy way to see the prompts they use. We have always craved to understand models; while we can't read model weights, prompts are human-readable. It is bizarre that the community is moving towards hiding prompts from developers.

    2. 𝐄𝐯𝐚𝐥𝐮𝐚𝐭𝐞 𝐚𝐧𝐝 𝐃𝐞𝐯𝐞𝐥𝐨𝐩 𝐚𝐧 𝐎𝐩𝐭𝐢𝐦𝐢𝐳𝐚𝐭𝐢𝐨𝐧 𝐏𝐥𝐚𝐧
    Simply having something that works is not enough. Bootstrap a small evaluation set, run your application to observe issues manually, and decide on an optimization plan based on your findings.

    3. 𝐌𝐚𝐤𝐞 𝐚 𝐜𝐚𝐥𝐥 𝐛𝐞𝐭𝐰𝐞𝐞𝐧 𝐌𝐚𝐧𝐮𝐚𝐥 𝐚𝐧𝐝 𝐚𝐮𝐭𝐨-𝐩𝐫𝐨𝐦𝐩𝐭𝐢𝐧𝐠
    Auto-prompting can be effective but also very expensive due to the many API calls required, and manual prompting can still be beneficial. Auto-prompting isn't a complex concept: it essentially uses another LLM to suggest improvements to the system prompt, or to provide examples, based on your evaluation results and error messages. The prompts of these optimizers are still manually crafted and are called meta-prompts, a concept from meta-learning.

    #adalflow #artificialintelligence #machinelearning #llms
    ___________________________________________________
    AdalFlow: The PyTorch Library for LLM Applications. It is light and modular, with a 100% readable codebase. Follow and hit 🔔 to stay updated.
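The meta-prompt idea in point 3 can be sketched as a plain template that feeds the current system prompt and observed eval failures back to an optimizer LLM. Everything here, the template wording and the function name, is hypothetical for illustration, not AdalFlow's actual optimizer.

```python
# Hypothetical meta-prompt: manually crafted instructions for an optimizer LLM.
META_PROMPT = """You are a prompt engineer. Improve the system prompt below so the
failures no longer occur. Return only the revised system prompt.

Current system prompt:
{system_prompt}

Failing cases (input -> expected vs. got):
{failures}
"""


def build_meta_prompt(system_prompt: str, failures: list[tuple[str, str, str]]) -> str:
    """Fill the meta-prompt with the current prompt and observed eval failures."""
    lines = [f"- {q!r} -> expected {exp!r}, got {got!r}" for q, exp, got in failures]
    return META_PROMPT.format(system_prompt=system_prompt, failures="\n".join(lines))


# The filled template would be sent to another LLM, which is where the API cost comes from.
example = build_meta_prompt("Answer concisely.", [("2+2", "4", "5")])
```

Each optimization round costs one extra LLM call per candidate prompt, which is exactly the expense trade-off the post describes.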

  • SylphAI reposted this

    View profile for Li Yin

    Author of AdalFlow | AI researcher | x MetaAI

    We all kind of know what a system prompt is, but we also kind of don't. Although I have read many papers on in-context learning, I can't recall a formal definition being given. This ambiguity creates uncertainty in prompt engineering and LLM modeling. Here is my attempt at a precise definition: a system prompt can be reduced to the fixed instructions shared across LLM calls. It should not include user conversation memory or context, as these vary with each query, nor the steps of agent execution. The system prompt should describe the role of the assistant, the task description, task specifications, the output format, and fixed examples. It can be compressed using a prompt optimizer, or removed entirely if the model is fine-tuned. I wonder if following this approach in modeling could help us adapt LLMs more easily. What are your thoughts? #adalflow #artificialintelligence #machinelearning #llm #prompt
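The definition above maps naturally onto a small data structure: fixed, query-independent fields only, with no slot for conversation memory. A minimal sketch, with field names of my own choosing rather than any library's API:

```python
from dataclasses import dataclass, field


@dataclass
class SystemPrompt:
    """Fixed instructions shared across LLM calls, per the definition above.

    Deliberately has no field for conversation memory, retrieved context,
    or agent step history: those vary per query and belong elsewhere.
    """
    role: str
    task_description: str
    task_specifications: list[str] = field(default_factory=list)
    output_format: str = ""
    fixed_examples: list[str] = field(default_factory=list)

    def render(self) -> str:
        parts = [self.role, self.task_description]
        if self.task_specifications:
            parts.append("Specifications:\n" + "\n".join(f"- {s}" for s in self.task_specifications))
        if self.output_format:
            parts.append("Output format: " + self.output_format)
        if self.fixed_examples:
            parts.append("Examples:\n" + "\n".join(self.fixed_examples))
        return "\n\n".join(parts)


rendered = SystemPrompt(
    role="You are a sentiment classifier.",
    task_description="Classify the sentiment of the user's text.",
    task_specifications=["Return exactly one label."],
    output_format="one of: positive, negative, neutral",
).render()
```

Because every field is fixed, the rendered string is identical across calls, which is what makes it a candidate for compression or removal via fine-tuning.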

  • SylphAI reposted this

    View profile for Paul Iusztin

    Senior Machine Learning Engineer • MLOps • Founder @ Decoding ML ~ Posts and articles about building production-grade ML/AI systems.

    There is a new LLM & RAG framework in town that you must know about, as it might replace LangChain and LlamaIndex ↓

    The new RAG framework aims to be the PyTorch library for LLM applications. It prioritizes:
    - simplicity
    - modularity
    - robustness
    - a readable codebase
    ...over the out-of-the-box one-liners that are hard to understand and extend, which we often encounter when working with LangChain and LlamaIndex.

    𝗦𝗼, 𝘄𝗵𝗼 𝗶𝘀 𝘁𝗵𝗲 𝗻𝗲𝘄 𝗸𝗶𝗱 𝗶𝗻 𝘁𝗼𝘄𝗻? → 𝘓𝘪𝘨𝘩𝘵𝘙𝘈𝘎

    It was initiated by Li Yin, who started implementing LLM solutions before they were cool and realized how limiting existing tools are when building custom solutions. She highlighted that each use case is unique in its data, business logic, and user experience, so no library can provide out-of-the-box solutions. Ultimately, LightRAG aims to provide a robust and clean codebase you can 100% trust, understand, and extend, with the goal of quickly customizing your own LLM, RAG, and agent solutions.

    If you are into LLMs, RAG, and agents, consider checking it out on GitHub and supporting it with a ⭐️ (or even contributing) ↓
    🔗 𝘓𝘪𝘨𝘩𝘵𝘙𝘈𝘎: https://lnkd.in/d-fcDZ3A
    #machinelearning #mlops #datascience
    💡 Follow me for daily content on production ML and MLOps engineering.

  • SylphAI reposted this

    View profile for Li Yin

    Author of AdalFlow | AI researcher | x MetaAI

    Why you most likely don't need GraphRAG

    GraphRAG went viral, gaining 12k stars within two weeks of open-sourcing the code. But LightRAG, as a library, is not integrating or releasing a GraphRAG implementation right now, for the following reasons:

    1️⃣ First, we are sticking to our design philosophy: offering high-quality building blocks instead of chasing shiny demos. At this moment, we are working closely with early adopters to fix bugs and prioritize optimizations, to provide users with a frustration-free development experience for LLM applications. We prioritize fundamental building blocks over high-level demos.

    2️⃣ Second, according to 1littlecoder, there are three reasons GraphRAG is not the de facto choice over baseline RAG:
    - GraphRAG only compared itself with a very basic RAG without optimization; a well-tuned basic RAG does not necessarily perform worse.
    - GraphRAG is significantly more expensive than normal RAG.
    - It also has higher latency, making it hard to meet production standards.

    The conclusion: for only about 10% of RAG use cases might you need GraphRAG over normal RAG.

    👉 Links in comments
    #lightrag #artificialintelligence #machinelearning
    ____________________________________________
    ⚡ LightRAG: The Lightning Library for LLM Applications. It is light and modular, with a 100% readable codebase. 𝐆𝐢𝐯𝐞 𝐢𝐭 𝐚 𝐬𝐡𝐨𝐭, 𝐚𝐧𝐝 𝐲𝐨𝐮 𝐰𝐢𝐥𝐥 𝐛𝐞 𝐩𝐥𝐞𝐚𝐬𝐚𝐧𝐭𝐥𝐲 𝐬𝐮𝐫𝐩𝐫𝐢𝐬𝐞𝐝.

  • SylphAI reposted this

    View profile for Li Yin

    Author of AdalFlow | AI researcher | x MetaAI

    ⚡ Must-read: the most in-depth tutorial on retrievers. Since the scope of the retriever is as wide as the entire search and information-retrieval field, it took me two weeks just to settle on a first draft.

    LightRAG offers five high-precision retrievers that work hand in hand with databases and data pipelines. Beyond just applying them, you can easily read our source code to learn their implementations:
    - Semantic Retriever with FAISS (bi-encoder)
    - BM25
    - Reranker as Retriever (cross-encoder)
    - LLM as Retriever
    - Postgres Retriever (the database's built-in retrieval)

    Links in comments! #lightrag #artificialintelligence #machinelearning #llms
    _______________
    LightRAG: The Lightning Library for LLM Applications. It is light and modular, with a 100% readable codebase. Follow and hit 🔔 to stay updated.
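As a taste of what the lexical retriever in that list does under the hood, here is a self-contained Okapi BM25 sketch. It is not LightRAG's implementation; whitespace tokenization and the default k1/b values are simplifications.

```python
import math
from collections import Counter


class BM25:
    """Minimal Okapi BM25 over whitespace-tokenized, lowercased documents."""

    def __init__(self, docs: list[str], k1: float = 1.5, b: float = 0.75):
        self.k1, self.b = k1, b
        self.docs = [d.lower().split() for d in docs]
        self.N = len(self.docs)
        self.avgdl = sum(len(d) for d in self.docs) / self.N
        # Document frequency: in how many docs each term appears.
        self.df = Counter(t for d in self.docs for t in set(d))

    def _idf(self, term: str) -> float:
        df = self.df.get(term, 0)
        return math.log((self.N - df + 0.5) / (df + 0.5) + 1)

    def score(self, query: str, idx: int) -> float:
        doc = self.docs[idx]
        tf = Counter(doc)
        total = 0.0
        for t in query.lower().split():
            f = tf.get(t, 0)
            denom = f + self.k1 * (1 - self.b + self.b * len(doc) / self.avgdl)
            total += self._idf(t) * f * (self.k1 + 1) / denom
        return total

    def top_k(self, query: str, k: int = 1) -> list[int]:
        """Return indices of the k highest-scoring documents."""
        return sorted(range(self.N), key=lambda i: -self.score(query, i))[:k]


bm = BM25([
    "the cat sat on the mat",
    "dogs chase cats in the park",
    "llm retrieval with bm25",
])
```

Unlike the bi-encoder retriever, BM25 needs no embedding model at all, which is why it remains a strong, cheap baseline.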

  • SylphAI reposted this

    View profile for Li Yin

    Author of AdalFlow | AI researcher | x MetaAI

    🚀 Learn all about Ollama in a single Colab with LightRAG ⚡

    Ollama lets you run transformer models locally with optimized performance. We prepared a Google Colab notebook to show you how to use Ollama LLM and embedding models with LightRAG's Generator and Embedder. In particular, we test llama3 for the LLM and jina/jina-embeddings-v2-base-e for embeddings.

    This guide goes beyond just using it in LightRAG:
    - We also explore the performance of async calls.
    - We show how to modify an Ollama model file.
    - You will see llama3's model prompt.
    - You will learn how to use Google Colab's GPU to serve Ollama.

    https://lnkd.in/gnNEaWia #lightrag #artificialintelligence #machinelearning #llms
    _____________________________________________
    LightRAG: The Lightning Library for LLM Applications. It is light and modular, with a 100% readable codebase. Follow and hit 🔔 to stay updated.
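The async-calls point boils down to firing independent requests concurrently instead of sequentially. A sketch using a stubbed async call in place of a real Ollama client; the stub and its simulated latency are fabricated for illustration.

```python
import asyncio


async def acall_model(prompt: str) -> str:
    """Stand-in for an async LLM call (e.g. to a local Ollama server).

    The sleep mimics network/inference latency.
    """
    await asyncio.sleep(0.01)
    return f"response to: {prompt}"


async def abatch(prompts: list[str]) -> list[str]:
    """Fire all calls concurrently; wall time is roughly one call, not len(prompts) calls."""
    return await asyncio.gather(*(acall_model(p) for p in prompts))


results = asyncio.run(abatch(["q1", "q2", "q3"]))
```

With a sequential loop the three stubbed calls would take about 30 ms; `asyncio.gather` overlaps them so the batch finishes in about 10 ms, which is the effect the notebook measures against a real server.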

  • SylphAI reposted this

    View profile for Li Yin

    Author of AdalFlow | AI researcher | x MetaAI

    This video shows how easy it is to test GPT-4o-mini's performance on your existing LightRAG pipeline without any code changes. Great work, Brandon. Do follow him. #lightrag #artificialintelligence #machinelearning #llms ______________________________________________ LightRAG: The PyTorch Library for LLM Applications. It is light and modular, with a 100% readable codebase. Follow and hit 🔔 to stay updated.

    View profile for Brandon Phillips

    Data Solutions Engineer - Sharing lessons and Solving problems

    If you were wondering how GPT-4o-mini is doing on day one, check this out! I went ahead and ran a few tests on GPT-4o-mini against the good ol' Haiku. Overall, the quality and the price are definitely nice: it's close to 50% cheaper than Haiku, with similar performance. I tested it on semi-complex output in a multi-step pipeline. All I had to do was swap out a client and change a few parameters using Li Yin's LightRAG LLM application library, super simple. You can see if you can come up with some solutions to problems using https://offers.bpdata.io , which is running gpt-4o-mini on the backend right now. If you want to see what I did, check out my video! Have a great ThursYay, y'all!
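The "swap out a client and change a few parameters" workflow works because the pipeline depends only on a client interface, never on a specific vendor. A toy sketch of that pattern; all class names are hypothetical, not LightRAG's API.

```python
class FakeOpenAIClient:
    """Stand-in for an OpenAI-style client; hypothetical, no real API calls."""

    def call(self, model: str, prompt: str) -> str:
        return f"[{model}] {prompt}"


class FakeAnthropicClient:
    """Stand-in for an Anthropic-style client with the same interface."""

    def call(self, model: str, prompt: str) -> str:
        return f"[{model}] {prompt}"


class Generator:
    """Pipeline step that depends only on the client interface, not the vendor."""

    def __init__(self, client, model: str):
        self.client, self.model = client, model

    def __call__(self, prompt: str) -> str:
        return self.client.call(self.model, prompt)


# Switching vendors or models is a one-line change; the rest of the pipeline is untouched.
gen = Generator(FakeOpenAIClient(), model="gpt-4o-mini")
gen = Generator(FakeAnthropicClient(), model="claude-3-haiku")
```

This is the same inversion-of-control idea that lets you A/B a new model against an incumbent without touching downstream steps.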

  • SylphAI reposted this

    View profile for Li Yin

    Author of AdalFlow | AI researcher | x MetaAI

    Prompting is the new coding language. We went beyond the basic ReAct agent and taught it how to “divide and conquer”. The agent managed to use it well: whether it's GPT-3.5 or Llama3, it always gives clean, bare-minimum steps to achieve the user's goal. Links in comments. Let me know how well it works in your use case. #lightrag #artificialintelligence #machinelearning ______________________________________ LightRAG: The PyTorch Library for LLM Applications. It is light and modular, with a 100% readable codebase. Follow and hit 🔔 to stay updated.
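A divide-and-conquer agent of this kind can be sketched as an extra instruction prepended to a minimal ReAct loop. The instruction wording and the scripted model below are illustrative only, not the actual prompt from the post.

```python
# Hypothetical divide-and-conquer instruction prepended to a ReAct-style prompt.
DNC_INSTRUCTION = (
    "First split the user's goal into the minimum number of subtasks, "
    "then solve each subtask with one Thought/Action step."
)


def react_loop(question, llm, tools, max_steps=5):
    """Minimal ReAct loop: Thought -> Action -> Observation until the LLM says 'finish'."""
    transcript = f"{DNC_INSTRUCTION}\nQuestion: {question}\n"
    for _ in range(max_steps):
        thought, action, arg = llm(transcript)
        if action == "finish":
            return arg
        observation = tools[action](arg)
        transcript += f"Thought: {thought}\nAction: {action}({arg})\nObservation: {observation}\n"
    return "max steps reached"


# Scripted stand-in for a real model, just to exercise the loop deterministically.
def scripted_llm(transcript):
    if "Observation" not in transcript:
        return ("I need to add the numbers", "add", "2,3")
    return ("I have the answer", "finish", "5")


tools = {"add": lambda arg: str(sum(int(x) for x in arg.split(",")))}
answer = react_loop("What is 2 + 3?", scripted_llm, tools)
```

The "bare minimum steps" behavior in the post corresponds to the instruction steering the model to finish in as few Thought/Action iterations as the task allows.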

  • SylphAI reposted this

    View profile for Li Yin

    Author of AdalFlow | AI researcher | x MetaAI

    🤖 Here is a 10-minute quick-start Colab on LightRAG. It includes:
    1. A simple chatbot with OpenAI and Groq.
    2. A RAG pipeline with a batch embedder and a FAISS semantic retriever.
    https://lnkd.in/g23YDaHt
    Feedback is appreciated, as always. #lightrag #artificialintelligence #machinelearning ______________________________________________ LightRAG: The PyTorch Library for LLM Applications. It is light and modular, with a 100% readable codebase. Follow and hit 🔔 to stay updated.
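Under the hood, the semantic-retriever half of such a pipeline is embed-then-rank by similarity. Here is a dependency-free sketch that uses a toy character-count "embedding" in place of a real embedding model and a FAISS index, so the mechanics are visible.

```python
import math


def embed(text: str) -> list[float]:
    """Toy bag-of-letters 'embedding' so the sketch is self-contained.

    A real pipeline would call an embedding model (batched) and store the
    vectors in a FAISS index instead.
    """
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    return vec


def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0


def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank documents by embedding similarity, as a flat (exact) index would."""
    q = embed(query)
    return sorted(docs, key=lambda d: -cosine(q, embed(d)))[:k]


docs = ["Paris is the capital of France", "BM25 ranking function"]
top = retrieve("capital of France", docs)
```

The retrieved passages would then be stuffed into the generator's prompt, which is the second half of the quick-start's RAG pipeline.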

    Google Colab

    colab.research.google.com
