Manual prompting is a painful process; auto-prompt optimization is the future. Building the task pipeline accounts for only 10% of the work; the other 90% lies in optimizing it. LLM prompting is highly sensitive: the accuracy gap between top-performing and lower-performing prompts can be as high as 40%. It is also brittle: a prompt can break the moment your model changes. Manual prompting is not the answer, but it is a good starting point for auto-prompt optimization.

Here are two papers on auto-prompting you can read:
- "Large Language Models as Optimizers" [DeepMind]: https://lnkd.in/gJSkbfn6
- "Automatic Prompt Optimization with Gradient Descent and Beam Search" [Microsoft Research]: https://lnkd.in/gsV3vFkK

#artificialintelligence #lightrag #llms
____________
This is part of the work of LightRAG, the "PyTorch" library for LLM applications. Hit follow 🔔 to stay updated. We are going to public beta next week!
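To make the idea concrete, here is a minimal sketch of the propose-score-keep-best loop that both papers build on. The `propose` and `score` functions are hypothetical stand-ins: in a real OPRO/APO setup, `propose` would ask an optimizer LLM for improved prompts given the scored history, and `score` would run the task pipeline against a labeled dev set and return accuracy.

```python
import random

# Hypothetical edit pool; a real optimizer LLM would generate these.
CANDIDATE_EDITS = [
    "Think step by step.",
    "Answer concisely.",
    "Show your reasoning before the final answer.",
]

def propose(prompt, history):
    # Stand-in for an optimizer LLM call (OPRO feeds it the scored history
    # and asks for better prompts). Here: append a random instruction.
    return [prompt + " " + random.choice(CANDIDATE_EDITS) for _ in range(3)]

def score(prompt):
    # Stand-in metric. A real run would evaluate the prompt on a dev set
    # and return task accuracy in [0, 1].
    return min(len(prompt) / 100.0, 1.0)

def optimize(seed_prompt, steps=5):
    """Greedy prompt search: propose variants, keep the best scorer."""
    best_prompt, best_score = seed_prompt, score(seed_prompt)
    history = [(best_prompt, best_score)]
    for _ in range(steps):
        for cand in propose(best_prompt, history):
            s = score(cand)
            history.append((cand, s))
            if s > best_score:
                best_prompt, best_score = cand, s
    return best_prompt, best_score
```

Because the loop only ever replaces the incumbent when a candidate scores strictly higher, the returned score is never worse than the seed prompt's; beam-search variants (as in the Microsoft paper) keep the top-k candidates per round instead of just one.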
On the same topic: Anthropic's Meta-Prompt is interesting! If you have a simple prompt → it turns it into an optimized mega-prompt. I copied Anthropic's Meta-Prompt here: https://tom-keldenich.notion.site/Anthropic-Prompt-Optimizer-8b5cf2ac881149c4b4baf5bd7704a6ed?pvs=74 What you shared also looks relevant. I'll take a look at both papers! Thanks, Li.
I think everyone felt this when using any kind of LLM. A slight change in the prompt and the results are vastly different in quality. Looking forward to the public beta 🙂
This is quite an interesting take. Sometimes it takes us days of manual prompting to accomplish a single task in an application, and efficiency can degrade along the way. Do you think auto-prompting can deliver the same efficiency?
Really interesting! What metrics are you using to measure this accuracy gap?
Did you look into DSPy?
Anthropic has a prompt optimization feature. I got the prompts and built a tool internally; we use it, and it's really helpful!