I’m not an economist, but I know that people only invest in stocks if they think they will be worth more tomorrow than today.
As long as people are convinced that this tech will result in AGI someday, they will keep investing.
And the game plan for convincing people is not to build tech that is as useful as possible, or as good at fact-checking as possible, but as human-like as possible. The more people anthropomorphize LLMs, the more it seems like they can do things they actually can’t (reason, understand, empathize, etc.).
OpenAI, Anthropic and others exploit this to the fullest. And I think breaking that spell is key.
There have been a lot of articles coming out recently (as in, in the last 24 hours) that indicate that spell might be breaking:
That’s interesting. Better sooner rather than later!
What will happen to all the datacenters? Will they go the way of crypto mining?