
So the article is about what humans vs. LLMs can do, except in the article "LLM" is taken to mean just a single-output autoregressive model (no chain of thought). Since such an LLM performs a constant amount of computation per generated token, no, it cannot do everything a human can. Humans can choose when to think and can ponder the next action indefinitely. That's my point: when we force LLMs to commit to a particular answer by forcing an output at each token generation, the class of problems they can solve is strictly smaller than what the equivalent human can solve.
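To make that concrete, here's a minimal sketch of greedy autoregressive decoding (my own illustration with GPT-2 via Hugging Face transformers, not something from the article). Each token costs exactly one fixed-depth forward pass, and the model must commit to a token at every step; there is no way for it to "spend longer" on a hard one:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    ids = tok("2+2=", return_tensors="pt").input_ids
    with torch.no_grad():
        for _ in range(10):
            logits = model(ids).logits          # one fixed-depth forward pass
            next_id = logits[0, -1].argmax()    # must commit to a token right now
            ids = torch.cat([ids, next_id.view(1, 1)], dim=1)
    print(tok.decode(ids[0]))

The compute budget per token is identical whether the next token is trivial or pivotal, which is exactly the asymmetry with human deliberation being described.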


I agree that a raw autoregressive LLM limited to a single output is (almost necessarily) less capable than a human. Not only can we ponder (chain-of-thought style), we also have various means of checking our work. For a coding problem, say, we can write the code, see whether it compiles, runs, and passes our tests; if it doesn't, we can read the error messages, add debugging, try some changes, and iterate until we hopefully reach a solution (or give up). The "single output" constraint denies all of that.
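A minimal sketch of that check-and-retry loop (the revise() helper, standing in for a model call, is hypothetical):

    import subprocess

    def revise(source: str, stderr: str) -> str:
        # Stand-in for an LLM call that rewrites the code given the error text.
        raise NotImplementedError("model call goes here")

    def solve(source: str, max_tries: int = 5) -> str | None:
        """Iteratively run a candidate program and feed its errors back."""
        for _ in range(max_tries):
            with open("candidate.py", "w") as f:
                f.write(source)
            result = subprocess.run(
                ["python", "candidate.py"], capture_output=True, text=True
            )
            if result.returncode == 0:
                return source                       # ran cleanly: accept this attempt
            source = revise(source, result.stderr)  # feed the error messages back
        return None                                 # give up, as humans sometimes do

A single-output model gets exactly one pass through this loop, with no error feedback, which is the limitation at issue.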

I don't think anyone is actually expecting "AGI" to be achieved by a model labouring under limitations as extreme as a single-output autoregressive LLM's. If instead we are talking about an AI agent with not just chain of thought, but also function calling to invoke various tools (including to write and run code), the ability to store and retrieve information via retrieval-augmented generation (RAG), etc. – well, current versions of that aren't "AGI" either, but it seems much more plausible that they might eventually evolve into it.
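The core of such an agent is really just a loop like the following (all the names here – call_llm, TOOLS, retrieve – are made up for illustration):

    from dataclasses import dataclass

    @dataclass
    class Step:
        kind: str           # "final" or "tool"
        text: str = ""
        tool: str = ""
        args: str = ""

    def call_llm(transcript: list[str]) -> Step:
        # Stand-in for a chain-of-thought model call with function-calling support.
        raise NotImplementedError

    TOOLS: dict = {}        # e.g. {"run_code": ..., "search": ...}

    def retrieve(task: str) -> str:
        # Stand-in for a RAG lookup over stored documents.
        return ""

    def agent(task: str, max_steps: int = 20) -> str | None:
        transcript = [f"Task: {task}", f"Context: {retrieve(task)}"]
        for _ in range(max_steps):
            step = call_llm(transcript)           # model may reason or request a tool
            if step.kind == "final":
                return step.text                  # the agent decides when it is done
            result = TOOLS[step.tool](step.args)  # e.g. write and run code
            transcript.append(f"{step.tool} -> {result}")
        return None

Unlike the single-output case, the model here chooses when to stop, which is the key capability the parent comment says humans have and bare LLMs lack.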

I don't think we need to invoke Turing or Gödel to make the point I just made, and I think doing so distracts with irrelevancies more than it enlightens.



