SylphAI’s Post

SylphAI reposted this

Li Yin

Author of AdalFlow | AI researcher | ex-Meta AI

Manual prompting is a painful process; auto-prompt optimization is the future. Building the task pipeline accounts for only 10% of the work; the other 90% lies in optimizing it. LLM prompting is highly sensitive: the accuracy gap between top-performing and lower-performing prompts can be as high as 40%. It is also a brittle process that breaks the moment your model changes. Manual prompting is not the answer, but it is a good starting point for auto-prompt optimization.

Here are two papers on auto-prompting you can read:

- "Large Language Models as Optimizers" [DeepMind]: https://lnkd.in/gJSkbfn6
- "Automatic Prompt Optimization with Gradient Descent and Beam Search" [Microsoft Research]: https://lnkd.in/gsV3vFkK

#artificialintelligence #lightrag #llms

____________

This is part of the work of LightRAG, the "PyTorch" library for LLM applications. Follow and hit 🔔 to stay updated. We are going to public beta next week!
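The core loop behind both papers can be sketched in a few lines. This is a minimal, hedged illustration of an OPRO-style optimizer ("Large Language Models as Optimizers"), not LightRAG's actual API: `score_prompt` and `propose_prompts` are stand-in stubs I made up for the example — in a real system an LLM would propose candidates from the scored history, and a labeled dev set would score them.

```python
import random

def score_prompt(prompt: str) -> float:
    """Stub metric (assumption for this sketch): reward prompts that
    set a role, ask for step-by-step reasoning, and fix the output format.
    A real scorer would run the prompt on a dev set and measure accuracy."""
    score = 0.0
    if "You are" in prompt:
        score += 0.3
    if "step by step" in prompt:
        score += 0.5
    if "JSON" in prompt:
        score += 0.2
    return score

def propose_prompts(history, n=4):
    """Stub optimizer: mutate the best-scoring prompt seen so far.
    In OPRO, the scored history is shown to an LLM, which is asked to
    propose new, hopefully better candidates."""
    snippets = [
        "You are an expert assistant. ",
        "Think step by step. ",
        "Answer in JSON. ",
    ]
    best = max(history, key=lambda p: p[1])[0]
    return [best + random.choice(snippets) for _ in range(n)]

def optimize(seed_prompt: str, steps: int = 5, rng_seed: int = 0):
    """Propose-and-score loop: keep every (prompt, score) pair and
    return the best one found."""
    random.seed(rng_seed)
    history = [(seed_prompt, score_prompt(seed_prompt))]
    for _ in range(steps):
        for cand in propose_prompts(history):
            history.append((cand, score_prompt(cand)))
    return max(history, key=lambda p: p[1])

best, best_score = optimize("Summarize the text.")
```

The manual seed prompt is still the starting point — exactly the point of the post — but the search over variants is automated, so when the underlying model changes you re-run the loop instead of hand-tuning again.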

Khalil Adib

Data Scientist @ Firemind | Applied AI and Machine Learning Solutions

1mo

Anthropic has a prompt optimization feature. I got the prompts and built a tool internally; we use it, and it's really helpful!

Tom Keldenich

AI Engineer - Now building LLM-based app

1mo

On the same subject: Anthropic's Meta-Prompt is interesting! If you have a simple prompt → it turns it into an optimized mega-prompt. I copied Anthropic's Meta-Prompt here: https://tom-keldenich.notion.site/Anthropic-Prompt-Optimizer-8b5cf2ac881149c4b4baf5bd7704a6ed?pvs=74 What you shared also looks relevant. I'll take a look at those two papers! Thanks, Li.

David Hedderich

Senior Data Scientist & Architect @ CARIAD 🏎️ | RWTH

1mo

I think everyone has felt this when using any kind of LLM. A slight change in the prompt and the results are vastly different in quality. Looking forward to the public beta 🙂

Shahzaib Hamid

Client Oriented AI Innovations|[email protected]| 7 years in AI

1mo

This is quite an interesting take. Sometimes it takes us days of manual prompting to accomplish one task in an application, and in that process there is a chance of an efficiency decrease as well. Do you think auto-prompting can achieve the same efficiency?

Mana Agrawal

Software Engineer (AI accelerators) @Intel | MS CS @Indiana University, Ex-Arcesium (a DE Shaw company)

1mo

Really interesting. What metrics are you using to get this accuracy percentage?

Sildolfo Gomes

Data Scientist @ Eldorado Research Institute | Deep Learning | Data Science

1mo

Did you look into DSPy?


