Hacker News

You know, aside from this being a blatant feature-length advertisement for what they're selling, I almost thought this was a clever idea.

I thought it involved prompting the LLM to write SQL queries against a knowledge base of documents and index into them, so that it could point you to where in the original documents the authoritative answer lives. In other words, a meta-search agent.

But apparently, they intend the queried documents to feed back into training the LLM? That's just gasoline on a dumpster fire.
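To make the "meta-search agent" reading concrete, here is a minimal sketch. The schema and the `generate_sql()` stub are assumptions for illustration, not the article's actual design: the LLM's only job would be to emit SQL against a document index, and the authoritative answer is then read from the original document rather than from the model's weights.

```python
# Sketch of a meta-search agent: the LLM (stubbed out here) translates a
# question into SQL; the query returns pointers into the corpus, so the
# authoritative answer is looked up in the source documents themselves.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs(id INTEGER PRIMARY KEY, title TEXT, body TEXT)")
conn.execute("INSERT INTO docs VALUES (1, 'RFC 2616', 'HTTP/1.1 status codes...')")
conn.execute("INSERT INTO docs VALUES (2, 'Postgres manual', 'VACUUM reclaims storage...')")

def generate_sql(question: str) -> str:
    # Stand-in for the LLM call: in the imagined design it would translate
    # the natural-language question into SQL. Hard-coded for this sketch.
    return "SELECT id, title FROM docs WHERE body LIKE '%VACUUM%'"

def answer_pointer(question: str):
    # Run the model-generated query and return (id, title) pointers,
    # telling you where in the original corpus to look.
    return conn.execute(generate_sql(question)).fetchall()

print(answer_pointer("What does VACUUM do in Postgres?"))
```

The point of the design is that the model never asserts the answer; it only narrows the search, so hallucination risk is confined to query generation.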



I cannot figure out why LLMs are relevant to their solution. This whole thing comes down to a similarity search via vectors.

The LLM layer seems completely unnecessary. Why does your schema require an LLM to decide which column to query (which is the only unique value the LLM adds in this proposal)? Why not normalize into a single column?
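The point above can be sketched directly: once embeddings live in a single normalized column, retrieval is a plain cosine-similarity lookup with no LLM in the loop. The toy 3-d vectors below stand in for real sentence embeddings.

```python
# Plain vector similarity search over a single embedding column.
# No LLM needed: normalize rows, take dot products, sort.
import numpy as np

def cosine_top_k(query, corpus, k=2):
    """Return indices of the k corpus rows most similar to the query."""
    corpus = np.asarray(corpus, dtype=float)
    query = np.asarray(query, dtype=float)
    # Normalize so the dot product equals cosine similarity.
    corpus_n = corpus / np.linalg.norm(corpus, axis=1, keepdims=True)
    query_n = query / np.linalg.norm(query)
    sims = corpus_n @ query_n
    return np.argsort(-sims)[:k].tolist()

docs = [
    [1.0, 0.0, 0.0],   # doc 0
    [0.9, 0.1, 0.0],   # doc 1: close to doc 0
    [0.0, 0.0, 1.0],   # doc 2: orthogonal to the query
]
print(cosine_top_k([1.0, 0.05, 0.0], docs))  # docs 0 and 1 rank first
```

A real system would swap the toy vectors for model-produced embeddings and an ANN index, but the retrieval step itself stays LLM-free.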


> so that you'd know where to look in the original documents

Oh, we have something similar: perplexity.ai

It lists a number of sources alongside the textual result of a prompt.



