The team here at Objective HQ is at it again this morning: Ranking Refinement is shipping into Private Beta! Ranking Refinement helps you remove bad vector search results by predicting great, OK, or bad relevance for each result at query time, adding a layer of human-like classification that vector databases on their own really struggle with.
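The idea of query-time relevance filtering can be sketched in a few lines. This is only an illustration of the concept: `classify_relevance` below is a hypothetical keyword-overlap stub standing in for a real learned classifier, not Objective's actual model or API.

```python
# Sketch of query-time relevance refinement: a classifier labels each
# vector-search hit "great", "ok", or "bad", and bad hits are dropped.
# `classify_relevance` is a toy stand-in for a real learned classifier.

def classify_relevance(query: str, result: dict) -> str:
    text = result["text"].lower()
    hits = sum(1 for term in query.lower().split() if term in text)
    if hits >= 2:
        return "great"
    return "ok" if hits == 1 else "bad"

def refine(query: str, results: list[dict]) -> list[dict]:
    return [r for r in results if classify_relevance(query, r) != "bad"]

results = [
    {"id": 1, "text": "Waterproof hiking boots for winter trails"},
    {"id": 2, "text": "Summer sandals on clearance"},
    {"id": 3, "text": "Insulated winter boots, waterproof leather"},
]
print([r["id"] for r in refine("waterproof winter boots", results)])  # [1, 3]
```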
Objective, Inc.’s Post
-
SEO Specialist | Driving Organic Growth Through SEO Audit, Content Strategy, Link Building, and Website Optimization | Bromatologist
The first time I successfully cracked the elusive code of a search engine algorithm, it felt like deciphering a hidden treasure map that only a few had ever seen. Months of meticulous adjustments, constant fine-tuning of keywords, and endless audits - all the small, invisible tasks - finally converged when a client's obscure blog post suddenly began to climb the rankings and attract attention. This wasn't merely a stroke of luck, nor a random occurrence. It was the result of understanding the intricate, ever-changing nature of the algorithm, almost as if I had trained my mind to think like the machine itself. The satisfaction of watching this post gain momentum was intoxicating, like standing at the helm of a ship, guiding it through a labyrinth of invisible currents, knowing you're on the right path but unsure when you'll reach the destination. Are you having trouble ranking for your keywords? Let's have a conversation.
-
🔍 Spent some time diving deep into hybrid search and reranking lately. LanceDB makes custom reranking and hybrid search effortless. In my latest blog, I experimented to see if tweaking query types and reranking could really boost retrieval performance. Spoiler alert: it does wonders! 😲 I compared ColBERT and Cohere reranking models across multiple datasets and embedding models. 🚀 I was particularly impressed by Cohere reranker models. 👉 Just by reranking the over-fetched vector search results, there was a significant improvement. 👉 And when you throw hybrid search into the mix, the results become even more impressive! I opted for LlamaIndex's QA datasets this time, as they're widely used and simple to get started with. Next I plan to run similar tests on structured and semi-structured datasets for a tougher challenge. 💪 Check out the blog for a deep dive into comparisons and metrics. Dive in here 👇 https://lnkd.in/gAh9RrMw
Benchmarking Cohere Rerankers with LanceDB
blog.lancedb.com
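The kind of retrieval metrics such benchmarks report can be computed in a few lines. This is a self-contained sketch of two standard metrics (hit rate and MRR), not LanceDB's or the blog's actual evaluation code; the ranked lists below are made up for illustration.

```python
# Two common retrieval metrics used when benchmarking rerankers:
# hit rate (is the gold doc anywhere in the top-k?) and MRR
# (how high up does the gold doc first appear?).

def hit_rate(ranked_ids, relevant_ids, k=5):
    hits = sum(1 for ranked, rel in zip(ranked_ids, relevant_ids) if rel in ranked[:k])
    return hits / len(relevant_ids)

def mrr(ranked_ids, relevant_ids, k=5):
    total = 0.0
    for ranked, rel in zip(ranked_ids, relevant_ids):
        for rank, doc_id in enumerate(ranked[:k], start=1):
            if doc_id == rel:
                total += 1.0 / rank
                break
    return total / len(relevant_ids)

ranked = [["d3", "d1", "d7"], ["d2", "d5", "d9"], ["d8", "d4", "d6"]]
relevant = ["d1", "d2", "d0"]  # one gold document per query
print(hit_rate(ranked, relevant, k=3))  # 2 of 3 queries hit
print(mrr(ranked, relevant, k=3))       # (1/2 + 1/1 + 0) / 3 = 0.5
```

Reranking an over-fetched candidate list "helps" precisely when it moves the gold document up the list, which shows up directly as a higher MRR at the same hit rate.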
-
This morning we’re stoked to ship Auto-Evaluation into your Console! You can’t improve what you don’t measure, especially in search. Auto-Evaluation lets you quantitatively — and comprehensively — measure the relevance of your search. All powered by Objective’s AI-native intelligence layer.
Get Quantitative and Comprehensive Insights into your Search Quality with Auto-Evaluations
objective.inc
-
🛠️ Unlock Precision in Search: A New Blog on Mastering UDF Reranking for Tailored Results 🛠️ Enterprise developers, looking for more control over your search results? Vectara's User-Defined Functions (UDF) give you the flexibility to rerank based on custom logic, from metadata to advanced filtering, whether it's recency, location, or other criteria unique to your application. In this post, David Oplatka & Ofer Mendelevitch walk you through:
🔷 How UDF reranking works
🔷 Tips for optimizing performance
🔷 Real-world examples using Airbnb data
Check out the full blog here: https://bit.ly/4eTuxcV
RAG with User-Defined Functions Based Reranking
https://vectara.com
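The general idea of UDF-style reranking can be shown in plain Python: blend the base relevance score with a boost computed from document metadata, here recency. Note that Vectara's actual UDFs are written in its own expression syntax, not Python; this sketch, including the field names and the decay formula, is purely illustrative.

```python
# Plain-Python analogue of UDF-style reranking: combine the base
# relevance score with a recency boost from document metadata.
# (Vectara's real UDFs use its own expression language; the weight
# and decay curve here are arbitrary illustrative choices.)
from datetime import date

def rerank_with_recency(results, weight=0.1, today=date(2024, 10, 1)):
    def score(r):
        age_days = (today - r["published"]).days
        recency_boost = 1.0 / (1.0 + age_days / 365)  # decays with age
        return r["score"] + weight * recency_boost
    return sorted(results, key=score, reverse=True)

results = [
    {"id": "a", "score": 0.80, "published": date(2022, 1, 1)},
    {"id": "b", "score": 0.78, "published": date(2024, 9, 1)},
]
# The slightly lower-scored but much fresher document "b" wins:
print([r["id"] for r in rerank_with_recency(results)])  # ['b', 'a']
```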
-
Exploring the principles of efficient navigation in Google Maps or any mapping tool has always intrigued me. As it turns out, the solution isn't as complicated as it seems: enter Dijkstra's Algorithm. Just published a new blog post where I dive into the world of weighted graphs and unveil the workings of Dijkstra's Algorithm, a powerful tool behind finding the shortest path in weighted networks → https://lnkd.in/dNcghRCy
Understanding Weighted Graphs: Dijkstra's Algorithm Revealed | João Melo
jopcmelo.dev
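The algorithm the post covers fits in a short sketch: keep a priority queue of (distance, node) pairs and relax each node's outgoing edges. The toy graph below is made up for illustration.

```python
import heapq

# Dijkstra's algorithm over a weighted graph given as an adjacency dict
# mapping node -> list of (neighbor, edge_weight) pairs.
def dijkstra(graph, source):
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry, a shorter path was found already
        for neighbor, weight in graph.get(node, []):
            nd = d + weight
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(heap, (nd, neighbor))
    return dist

graph = {
    "A": [("B", 4), ("C", 1)],
    "C": [("B", 2), ("D", 5)],
    "B": [("D", 1)],
}
print(dijkstra(graph, "A"))  # {'A': 0, 'B': 3, 'C': 1, 'D': 4}
```

Note that A→C→B (cost 3) beats the direct A→B edge (cost 4), which is exactly the kind of non-obvious shortest path the algorithm finds.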
-
Our Data team has just released an article showcasing how fine-tuning the open-source Llama3 model can significantly improve accuracy and scalability while reducing inference cost for domain-specific challenges like enhancing search result relevance. Dive into the team's firsthand experiences and observations on fine-tuning tasks with the newly released Llama3 model ⤵️
Fine-tuning Llama3 to measure semantic relevance in search
craft.faire.com
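Fine-tuning an LLM as a relevance judge starts with turning labeled (query, item, label) triples into prompt/completion pairs. The sketch below shows that data-preparation step only; the prompt template, labels, and examples are hypothetical illustrations, not Faire's actual training format.

```python
# Sketch of preparing instruction-tuning examples for an LLM relevance
# judge: each (query, product, label) triple becomes a prompt/completion
# pair. Template and labels are illustrative, not Faire's real pipeline.

PROMPT = (
    "Query: {query}\n"
    "Product: {product}\n"
    "Is this product relevant to the query? Answer Yes or No."
)

def to_example(query: str, product: str, relevant: bool) -> dict:
    return {
        "prompt": PROMPT.format(query=query, product=product),
        "completion": "Yes" if relevant else "No",
    }

examples = [
    to_example("ceramic coffee mug", "Handmade stoneware mug, 12 oz", True),
    to_example("ceramic coffee mug", "Stainless steel water bottle", False),
]
print(examples[0]["completion"], examples[1]["completion"])  # Yes No
```

Framing relevance as a short constrained completion is what lets a fine-tuned open model replace a larger general-purpose one at much lower inference cost.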
-
We extended our preprint on late chunking, a novel method for making chunk embeddings context-aware. We added:
- An algorithm for long documents
- A fine-tuning method to make late chunking more effective
- A comparison to Anthropic's contextual embedding
https://lnkd.in/ehWtzjBS
Part 2 of our exploration of Late Chunking, a deep dive into why it is the best method for chunk embeddings and improving search/RAG performance. https://lnkd.in/ewVspN-u
What Late Chunking Really Is (and What It’s Not): Part II
jina.ai
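The core mechanism of late chunking can be shown in miniature: embed the whole document at the token level first, then pool token vectors within each chunk's span, so every chunk embedding is conditioned on full-document context. In this sketch, random vectors stand in for a real long-context embedding model's token outputs, and the chunk spans are arbitrary.

```python
import numpy as np

# Late chunking in miniature: rather than embedding each chunk in
# isolation, take token-level embeddings of the *whole* document and
# mean-pool the tokens inside each chunk's span. Random vectors stand
# in for a real long-context model's token outputs.

rng = np.random.default_rng(0)
token_embeddings = rng.standard_normal((12, 4))  # 12 tokens, dim 4
chunk_spans = [(0, 5), (5, 9), (9, 12)]          # token ranges per chunk

def late_chunk(token_embeddings, spans):
    return np.stack([token_embeddings[a:b].mean(axis=0) for a, b in spans])

chunk_vectors = late_chunk(token_embeddings, chunk_spans)
print(chunk_vectors.shape)  # (3, 4)
```

Because the token embeddings are produced in one pass over the full document, a pronoun or abbreviation in chunk 3 can still be resolved by context from chunk 1, which naive per-chunk embedding loses.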
-
💡 Few-shot prompting to improve tool-calling performance I'm very bullish on few-shot prompting, but there aren't a ton of resources on the best strategies for doing it. We've done some exploration here - excited to share some initial results! Takeaways:
- Semantic search over examples can help if you have diverse inputs
- Few-shotting via messages is better than concatenating into a string
- A few good examples go a long way, and there are diminishing marginal returns
https://lnkd.in/gxRbnnH7
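The "few-shot via messages" takeaway can be sketched as follows: each example becomes a user/assistant message pair in the conversation rather than text concatenated into one prompt string. The role/content message shape and the toy tool calls below are illustrative assumptions, not the study's actual code.

```python
# Few-shot examples for tool calling, injected as chat messages rather
# than concatenated into one string, mirroring how the model sees real
# conversations. The role/content schema and tools are illustrative.

FEW_SHOT = [
    ("What's 3 plus 4?", {"name": "add", "args": {"a": 3, "b": 4}}),
    ("Multiply 6 by 7.", {"name": "multiply", "args": {"a": 6, "b": 7}}),
]

def build_messages(user_query: str) -> list[dict]:
    messages = [{"role": "system", "content": "Use the math tools when asked."}]
    for question, tool_call in FEW_SHOT:
        messages.append({"role": "user", "content": question})
        messages.append({"role": "assistant", "content": str(tool_call)})
    messages.append({"role": "user", "content": user_query})
    return messages

msgs = build_messages("What is 10 minus 2?")
print(len(msgs))  # 1 system + 2 examples x 2 messages + 1 user = 6
```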