> DeepResearch is a cosmetic enhancement that wraps the results in a "report" - it looks impressive but IMO is much more likely to lead to inaccurate or misleading results.
I think that if done well deep research can be more than that. At a minimum, I would say that before "deep search" you'd need some calls to an LLM to figure out what to look for, what places would be best to look for (i.e. sources, trust, etc), how to tabulate the data gathered and so on. Just as deep search is "rag w/ tools in a loop", so can (should) be deep research.
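A minimal sketch of that planning-first loop (the llm() and web_search() helpers here are stand-ins for whatever client and search tooling you actually use):

```python
# Plan before you search: one LLM call to decide what to look for, where, and
# how to tabulate it, then the usual "RAG with tools in a loop" guided by that plan.
import json

def llm(prompt: str) -> str:
    raise NotImplementedError("call your model of choice here")

def web_search(query: str, site: str | None = None) -> list[str]:
    raise NotImplementedError("call your search tool here")

def deep_research(question: str, max_rounds: int = 3) -> str:
    # 1. Planning call: sub-questions, preferred sources, and a table layout up front.
    plan = json.loads(llm(
        "Return JSON with keys 'subquestions', 'preferred_sources', 'table_columns' "
        f"for researching: {question}"
    ))

    findings = []
    for sub_q in plan["subquestions"][:max_rounds]:
        # 2. Search loop, steered by the plan rather than the raw question.
        for source in plan["preferred_sources"]:
            findings.extend(web_search(sub_q, site=source))

    # 3. Tabulate and synthesize according to the columns chosen in step 1.
    return llm(
        f"Tabulate these findings into columns {plan['table_columns']} "
        f"and write a short report:\n{findings}"
    )
```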
Think of the analogy of using aider to go straight to code versus using it to first /architect and then code, but for any task that lends itself to (re)searching. At the very least it would catch useless tangents faster.
At the end of the day, what's fascinating about LLM based agents is that you can almost always add another layer of abstraction on top. No matter what you build, you can always come from another angle. That's really cool imo, and something Hassabis has hinted lately in some podcasts.
Right - I'm finding the flawed Deep Research tools useful already, but what I really want is much more control over the sources of information they use.
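Even something as blunt as an allowlist in front of the search tool would go a long way. A rough sketch (domains here are just placeholders, and the raw_search callable is whatever client you already have):

```python
# Restrict the agent's search results to an allowlist of domains before it ever
# sees them, so the "sources" knob is in your hands rather than the vendor's.
from urllib.parse import urlparse

ALLOWED_DOMAINS = {"arxiv.org", "docs.python.org"}  # illustrative only

def filtered_search(query: str, raw_search) -> list[dict]:
    # Results are assumed to be dicts with a "url" key.
    results = raw_search(query)
    return [r for r in results
            if any(urlparse(r["url"]).netloc.endswith(d) for d in ALLOWED_DOMAINS)]
```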
Sadly, I think that’s why non-open-source commercial deep (re)search implementations are going to be largely useless. Even if you’re using a customized endpoint for search like Kagi, the sources are mostly garbage, and no one except maybe Google Books has the resources and legal cover to expand that deep search into books, which are much better sources.
So I’ve started a thing with Jim’s, and the first effort is setting the “tone”: I’m building a project template that will keep the bots focused.
I think that one part of the deep loop needs to be a check-in on expectations and goals…
So instead of throwing it one deep task, I find that bots work better on small, iterative chunks of objectives.
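A rough sketch of what that check-in between chunks could look like (assuming a hypothetical llm() helper and a plain list of objectives):

```python
# Work through small objectives one at a time, with a check-in against the goal
# after each chunk so the loop can bail out instead of chasing a tangent.
def run_with_checkins(objectives: list[str], llm) -> list[str]:
    results = []
    for obj in objectives:
        results.append(llm(f"Work on only this objective, nothing else: {obj}"))
        # Check-in: compare the result with the stated goal before moving on.
        verdict = llm(
            f"Goal: {obj}\nResult: {results[-1]}\n"
            "Does the result match the goal? Answer ON_TRACK or OFF_TRACK with a reason."
        )
        if verdict.startswith("OFF_TRACK"):
            break  # stop early rather than compounding the drift
    return results
```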
I haven’t formulated it completely yet, but as an example, I’ve been working extensively with Cursor’s whole Anthropic abstraction (AI as a service):
So many folks get stuck in the “generating…” quagmire,
And I found that telling the bot to “break any response into smaller chunks to avoid context limitations” works incredibly well…
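For what it’s worth, that instruction is easy to bake into a system prompt; the continuation convention below is my own addition, not part of the original phrasing:

```python
# The chunking rule as a reusable system-prompt suffix. The "CONTINUED"/"next"
# convention is an assumed extension to make the chunk boundaries explicit.
CHUNKING_RULE = (
    "Break any response into smaller chunks to avoid context limitations. "
    "End each chunk with 'CONTINUED' and wait for me to say 'next' before continuing."
)

messages = [
    {"role": "system", "content": "You are a coding assistant. " + CHUNKING_RULE},
    {"role": "user", "content": "Generate the full project scaffold."},
]
```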
So when my scaffold is complete, the goal is to use Fabric Patterns for nursery assignments to the deep bots, whereby they constantly check in.
Prior to “deep” things, I found this worked really well by telling the bots to obsessively track their actions in development_diary.md and a .json log. Even so, their memory is super small, so I envisioned a multi-layer setup where the initial agent’s actions feed the context of the agents that follow, giving you a waterfall of context between agents and avoiding context loss on super deep, iterative research.
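Something like this sketch of the waterfall idea: each agent appends to the diary and action log, and the next agent is seeded with a distilled summary rather than the full history (file names follow what I described; the llm interface is a placeholder):

```python
# Each agent logs what it did; the next agent starts from a summary of the diary
# instead of the raw (too large) transcript.
import json
from pathlib import Path

DIARY = Path("development_diary.md")
ACTIONS = Path("actions.json")

def log_step(agent_name: str, note: str, action: dict) -> None:
    # Append a human-readable note and a machine-readable action record.
    with DIARY.open("a") as f:
        f.write(f"## {agent_name}\n{note}\n\n")
    history = json.loads(ACTIONS.read_text()) if ACTIONS.exists() else []
    history.append({"agent": agent_name, **action})
    ACTIONS.write_text(json.dumps(history, indent=2))

def handoff_context(llm) -> str:
    # Summarize the diary so the follow-on agent inherits decisions and open
    # questions without blowing its context window.
    diary = DIARY.read_text() if DIARY.exists() else ""
    return llm(f"Summarize the key decisions and open questions:\n{diary}")
```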
(I’ll type out something more salient when I have a KVM.)