What I don't understand is why everyone is throwing money at LLMs for language when there are much simpler use cases that would be more useful.
For example, has anyone ever attempted an image -> html/css model? Seems like it would be great if I could draw something on a piece of paper and have it generate a website view for me.
Perhaps if we think of LLMs as search engines (Google, Bing, etc.), then there's more money to be made by being the top generic search engine than the top specialized one (code search, papers search, etc.)
This is the real value proposition of LLMs for me. Compressing Google search AND the internet into an 8 GB download is something that would have been unfathomable to me two decades ago.
My hope now is that someone will figure out a way to separate intelligence from knowledge - i.e., train a model that knows how to interpret the weights of other models - so that training new intelligent models wouldn't require training them on a petabyte of data every run.
> has anyone ever attempted image -> html/css model?
I had a discussion with a friend about doing this, but for CNC code. The answer was that a model trained on a narrow dataset underperforms one trained on a large dataset and then fine-tuned on the narrow one.
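The pretrain-broad-then-fine-tune-narrow effect can be illustrated at toy scale. This is a hedged sketch, nothing like CNC or LLM training: a one-parameter linear model fit by plain SGD, where the "narrow" dataset covers such a small input range that it under-constrains the fit on its own, while pretraining on the "broad" dataset first lands near the right answer. All numbers and helper names are made up for illustration.

```python
# Toy illustration of pretrain-then-fine-tune vs. narrow-only training.
# We fit y = w*x (true w = 3.0) with stochastic gradient descent.
# The "narrow" set only has tiny x values, so gradients are weak and
# narrow-only training barely moves w; pretraining on the "broad" set
# first gets w close to the truth before fine-tuning.

def sgd(w, data, lr, epochs):
    """Plain SGD on squared error for the model y ≈ w*x."""
    for _ in range(epochs):
        for x, y in data:
            grad = 2 * (w * x - y) * x  # d/dw of (w*x - y)^2
            w -= lr * grad
    return w

TRUE_W = 3.0
broad  = [(x / 10, TRUE_W * (x / 10)) for x in range(-50, 51)]   # x in [-5, 5]
narrow = [(x / 100, TRUE_W * (x / 100)) for x in range(1, 4)]    # x in {0.01..0.03}

narrow_only = sgd(0.0, narrow, lr=0.1, epochs=50)                # narrow data alone
pretrained  = sgd(0.0, broad, lr=0.01, epochs=5)                 # "large dataset"
fine_tuned  = sgd(pretrained, narrow, lr=0.1, epochs=50)         # then "narrow"

print(f"narrow-only:        w = {narrow_only:.3f}")
print(f"pretrain+fine-tune: w = {fine_tuned:.3f}  (true w = {TRUE_W})")
```

The analogy is loose (real models overfit rather than under-move), but it shows the shape of the argument: the broad data supplies structure the narrow data can't recover by itself.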
That's a thought I had too. For example, could a model be trained to take a description and create a Blender (or whatever other software) model from it? I have no idea how LLMs really work under the hood, so please tell me if this is nonsense.