Focus is perhaps what makes agents work better than single LLMs
I like the phrase “goal drift”, introduced by Rohit Krishnan. It’s the idea that an LLM loses focus and drifts away from its objectives over time, or as the context grows more involved.
It also explains why agents make a lot of sense. You’re breaking a system down into more focused objectives: each agent does one thing and does it well, rather than trying to solve everything in a single prompt.
Add, of course, the ability for agents to access just the tools they need (for more determinism, and to take actions), and you’ve got something interesting.
Links:
- “What can LLMs never do?”, Rohit Krishnan, 23 April 2024.
- “Today’s AI models are impressive. Teams of them will be formidable”, The Economist, 13 May 2024.
I’ve been trying out crewAI, which feels like a layer on top of LangChain. But for what I’m doing, I think I’ll instead shift to coding more of it myself, following the pattern in ChemCrow, an example that pre-dates Empowering Biomedical Discovery with AI Agents.
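The “do one thing well” pattern above can be sketched as a tiny hand-rolled agent loop: one narrow objective, a small set of tools, and the model choosing which tool to call. Everything here is hypothetical illustration (the `Tool` class, the `run_agent` loop, and the stubbed `fake_llm` are my own names, not ChemCrow’s or crewAI’s APIs); a real implementation would replace `fake_llm` with an actual LLM call.

```python
# Minimal sketch of a focused, tool-using agent loop.
# All names here are illustrative, not from any real library.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Tool:
    name: str
    description: str
    run: Callable[[str], str]  # takes the objective, returns a result

def fake_llm(prompt: str) -> str:
    """Stand-in for a real LLM call: naively picks the first listed tool."""
    first_tool = prompt.split("Tools:\n- ")[1].split(":")[0]
    return f"USE {first_tool}"

def run_agent(objective: str, tools: List[Tool], llm=fake_llm) -> str:
    """One narrow objective, one small toolset: the 'do one thing well' idea."""
    tool_list = "\n".join(f"- {t.name}: {t.description}" for t in tools)
    prompt = f"Objective: {objective}\nTools:\n{tool_list}"
    decision = llm(prompt)
    for tool in tools:
        if decision == f"USE {tool.name}":
            return tool.run(objective)  # the agent acts via its tool
    return decision  # fall back to the raw model answer

word_count = Tool("word_count", "count words in the objective",
                  lambda text: str(len(text.split())))
print(run_agent("count the words here", [word_count]))  # -> 4
```

Constraining the agent to a small, explicit toolset is what buys the determinism: the model’s job shrinks to choosing among a few known actions rather than free-form generation.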