Hacker News

The notes it generates are too verbose for most medical notes even with all the customization enabled.

I've noticed that seems to be a common trend for any AI-generated text in general.



I think this might be because of what GP said later:

> it treats all the information equally as important when that’s not really the case

In the general case (and, I imagine, in GP's specific case), the model has no prior with which to weigh the content - people usually just prompt it with "summarize this for me please <pasted link or text>"[0], without telling it what to focus on. More importantly, you probably have extra preferences that aren't consciously expressed - the overall situational context, your particular ideas, etc. imply a weighting different from the model's default, and you can't fully communicate that via the prompt.

Without a more specific prior, the model has to treat every piece of information equally, which also means erring on the side of verbosity, so as not to omit anything the user might care about.
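To make the point concrete, here's a minimal sketch of the difference between a generic prompt and one that supplies a prior; the function name and wording are purely illustrative, not from any real summarizer tool:

```python
def build_summary_prompt(text, focus=None):
    """Build a summarization prompt.

    Without a focus, the model has no prior and must treat everything
    as potentially important, so it errs toward verbosity.  With a
    focus, it can safely omit the rest.
    """
    if focus:
        return (
            f"Summarize the following, focusing only on {focus}. "
            f"Omit details unrelated to that focus.\n\n{text}"
        )
    # The generic prompt most people actually use:
    return f"summarize this for me please\n\n{text}"
```

The only difference is whether the user's implicit weighting is spelled out; everything the focus leaves unstated, the model has to hedge on.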

--

[0] - Or such prompt is hidden in the "AI summarizer" feature of some tool.


Are they charging per token?


Same goes for AI coding assistants: most tools generate far too much unnecessary code. The scary part is that the code seems to run OK.



