Weaviate is hiring! 🌟 Join our team and help shape the future of AI technology. Check out our open roles:
• Event Marketing Manager: https://lnkd.in/d4sK7kNq
• Senior Software Engineer Database: https://lnkd.in/daa2F-uM
• Revenue Operations Manager: https://lnkd.in/dSkHt2uh
• Client-Focused Site Reliability Engineer: https://lnkd.in/dAvgickM
• Machine Learning Engineer: https://lnkd.in/edunj5yu
Explore more roles and learn more about them on our Careers page: https://lnkd.in/dzibHnwZ
Stay tuned for more opportunities coming soon! 👀
Weaviate
Technology, Information and Internet
Amsterdam, North Holland · 20,017 followers
The AI-native database for a new generation of software.
About us
Weaviate is a cloud-native, real-time vector database that lets you take your machine-learning models to scale. There are extensions for specific use cases such as semantic search, plugins to integrate Weaviate into any application of your choice, and a console to visualize your data.
- Website
- https://weaviate.io
- Industry
- Technology, Information and Internet
- Company size
- 51–200 employees
- Headquarters
- Amsterdam, North Holland
- Type
- Privately held
- Founded
- 2019
Locations
Primary
Amsterdam, North Holland, NL
Employees of Weaviate
Updates
Weaviate reposted this
🚢 Today marks another big shipment day at Weaviate (v1.26)!
🙏 But first, I'm incredibly grateful for the invaluable insights we gain from working with the Weaviate community and our customers.
💡 One key insight: the need for flexibility when building AI-native applications, especially when developers scale AI prototypes into production.
💿 Today, we release flexible storage tiers (hot, warm, and cold storage) in the Weaviate Enterprise Cloud and Weaviate Open Source!
🛠️ We're also launching the Weaviate Workbench (accessible through the cloud console), including private beta signups for the Recommender Service and more!
👉 Read more in my release blog: https://lnkd.in/eNXdFYE3
We've been working hard behind the scenes to deliver new capabilities that help our users build, scale, and run AI applications more efficiently. 👩‍🔬 We're thrilled to share the latest Weaviate product updates, including:
⚙️ Controlled tenant offloading and flexible storage tiers for optimizing cost and performance.
🔎 New Tools to help manage and explore data within the Weaviate Cloud Console.
🛒 New Apps to accelerate specific AI-native use cases, starting with a Recommender Service.
Read the release here: https://lnkd.in/dYNAhieN
2 days left until ⏳ Advanced Retrieval-Augmented Generation (RAG) Tricks with Zain Hasan 💚 Join our hands-on session and learn to:
✅ Leverage vector databases for powerful RAG applications
✅ Explore advanced RAG use cases across different scenarios
✅ Improve each phase of RAG: indexing, retrieval, and generation
Get hands-on and ask all your questions: https://lnkd.in/dMFMZsiG
RAG is one of the most promising AI use cases for companies across all industries, but there are many challenges to deal with when scaling RAG use cases to production. 1️⃣ Data preparation, 2️⃣ query optimization, and 3️⃣ retrieval quality are some of the important aspects to consider. Join our upcoming online hands-on session with Zain Hasan and learn best practices, options, and strategies around indexing data, retrieval, and generation to build and effectively scale advanced RAG applications for production.
Learn Advanced RAG Tricks with Zain 💚
How can you optimize the performance of your AI application while minimizing costs? High-end models like GPT-4 are powerful but expensive. Here's a trick: use a smaller model to tweak responses from a generative model based on past Q&A, saving resources! Watch the full video on YouTube: https://lnkd.in/dJQvwH4y
In this blog, Zain Hasan breaks down RAG into its indexing, retrieval, and generation components and proposes two to three practical steps to improve each part of your RAG pipeline. It covers everything from chunking techniques, filtered search, and hybrid search to reranking, fine-tuning embedding models, and generating metadata for your text chunks! https://lnkd.in/dqfKWfiu
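As a small illustration of the chunking step mentioned in the post, here is a minimal fixed-size character chunker with overlap. The function name, sizes, and overlap value are illustrative choices, not taken from the blog — real pipelines often chunk by tokens or sentences instead:

```python
def chunk(text: str, size: int = 200, overlap: int = 50) -> list[str]:
    """Split `text` into windows of `size` characters, each overlapping
    the previous window by `overlap` characters."""
    assert 0 <= overlap < size, "overlap must be smaller than the chunk size"
    step = size - overlap
    chunks = []
    for start in range(0, len(text), step):
        chunks.append(text[start:start + size])
        if start + size >= len(text):
            break  # the last window already reaches the end of the text
    return chunks

print(chunk("abcdefghij", size=4, overlap=2))  # ['abcd', 'cdef', 'efgh', 'ghij']
```

The overlap ensures that a sentence cut at a chunk boundary still appears whole in the neighboring chunk, which tends to help retrieval quality.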
Weaviate reposted this
Learning & sharing ML | Summarized Papers | ML @ Weaviate | UofT ℕΨ Engineering, Data Scientist, Lecturer, Digital Health/Biomed
Using a cross-encoder model to re-rank retrieved results is a practical way to improve your RAG pipeline! In our upcoming RAG workshop I'll talk about:
🔹 how cross-encoders work
🔹 where to add them to your RAG pipeline
🔹 the difference between cross-encoders and multi-vector bi-encoders like ColBERT for reranking
Sign up here: https://lnkd.in/e4RnSV2t
Other cool stuff we'll cover:
🔸 Auto-Cut: remove irrelevant context from prompts by detecting semantic jumps between queries and documents.
🔸 Re-Rankers: use cross-encoder-based models to rescore retrieved candidates, aiming for improved recall.
🔸 Fine-tuning LLMs: customize your LLMs with domain-specific datasets for optimal task performance.
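The re-ranking step described above can be sketched in a few lines. Here `score_fn` stands in for a real cross-encoder (for example, the `predict` method of a sentence-transformers `CrossEncoder`); the word-overlap scorer below is only an illustrative placeholder, not a model the workshop uses:

```python
def overlap_score(query: str, doc: str) -> int:
    """Toy relevance scorer: count words shared between query and document.
    A real pipeline would feed the (query, doc) pair through a cross-encoder."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def rerank(query: str, docs: list[str], score_fn, top_k: int = 3) -> list[str]:
    """Rescore every retrieved candidate against the query and keep the best."""
    return sorted(docs, key=lambda d: score_fn(query, d), reverse=True)[:top_k]

docs = ["a vector database", "cooking recipes", "search a vector database fast"]
print(rerank("vector database search", docs, overlap_score, top_k=2))
```

The key property of a cross-encoder is that it sees the query and document together, so it can be far more precise than the bi-encoder that produced the initial candidates — at the cost of running the model once per candidate.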
Sifting through endless search results? Check out our Autocut feature! Autocut helps you get straight to the good stuff by detecting significant jumps (discontinuities) in your search results, so you only get the most relevant ones. Just set an autocut value, and let Weaviate do the rest. For example, if your search scores are [0.1899, 0.1901, 0.191, 0.21, 0.215, 0.23]:
👉 autocut: 1 gives you the first 3 objects.
👉 autocut: 2 gives you the first 5 objects.
👉 autocut: 3 gives you all 6 objects.
It's super useful for vector, BM25, and hybrid searches. Learn more in our docs: https://lnkd.in/dqddqyzy
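Weaviate computes Autocut server-side; as a rough sketch of the jump-detection idea, here is a pure-Python version that cuts after the N-th large gap between consecutive scores. The fixed 0.01 gap threshold is a made-up illustration (Weaviate's actual heuristic is different), chosen so the example scores from the post split into the same three groups:

```python
def autocut(scores: list[float], jumps: int, threshold: float = 0.01) -> list[float]:
    """Return the leading results up to the `jumps`-th discontinuity.

    `scores` is assumed sorted by relevance; a gap larger than `threshold`
    between neighbouring scores counts as one discontinuity."""
    seen = 0
    for i in range(1, len(scores)):
        if abs(scores[i] - scores[i - 1]) > threshold:
            seen += 1
            if seen == jumps:
                return scores[:i]
    return scores  # fewer discontinuities than requested: keep everything

scores = [0.1899, 0.1901, 0.191, 0.21, 0.215, 0.23]
print(autocut(scores, 1))  # [0.1899, 0.1901, 0.191] — the first 3 objects
```

In the Weaviate Python client (v4), this behaviour is exposed as the `auto_limit` argument on queries; see the linked docs for the exact usage.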
Ready to build a “Chat with your code” application with Ollama, Weaviate, LlamaIndex, and Streamlit? We’ve published a Lightning AI Studio template that you can copy. In minutes, you’ll be up and running with your Retrieval-Augmented Generation (RAG) pipeline. Start building today: https://lnkd.in/d7qxNGtW