The AI in Her was actually AI - a full person in most respects. That’s not what’s happening now.
The AI in Her was able to pass as a full person. But what we’re seeing now is that humans are not good at telling the difference between a real person and a program designed to simulate one.
IMO it’s like the mirror test, which is designed to see whether an animal recognizes itself in the mirror or thinks it’s another animal. The LLM breakthrough is basically that we now have a computer program that is in no way intelligent or self-aware, but can simulate those things well enough to fool many, if not most, humans.
Sam and the other AGIs in Her not only passed as persons, they transcended the human experience to a whole other level we couldn’t grasp.
But I realize your point is that the problem is more on the human side and how easily we personify anything that seems even close to human. It’s possible that we may even miss machine intelligence if it comes about, because it will be so alien to us. Look at dolphin research and how little we understand their communication, and those are still biological entities that have some things in common with us.
The fact that humans are becoming increasingly more stupid and brain-dead helps too.
Humans are no more stupid than they have always been. It’s more that in the US the education system is becoming less and less effective, and the government is regulating less and less, allowing companies to get better at bypassing people’s critical faculties.
AI doesn’t have to become smarter if everyone just becomes stupid.
AGI that quickly transitioned to ASI (since that’s theoretically what would happen once the former exists). The term “AI” has been misused and marketed so much that it’s lost its connection to the actual meaning of “Artificial Intelligence”.
yeah according to people who also say that idiot plagiarism machines are gonna be machine gods one day, you will all see, and also coincidentally the same people who make them
I inferred in the next sentence that LLMs are not AI. If you want to debate if AGI is even possible, by all means, but I’m not sure you understand the different definitions since you missed my first point.
Implied, not inferred
AI is the subset of math that concerns automated processes. AI has never meant AGI; it’s always been “stuff people make up to make the computer seem smart”. Everything from chess computers to elevators to LLMs is, and always has been, included under the term AI.
openai/anthropic crowd/cult (the people that work there, not fans) will with a straight face claim both that llms will form agi and that agi can self-improve recursively this way, and about nobody else does that
And yet plenty of other ML experts will say that LLMs can’t be the path to AGI simply because of the limitations of how they’re built. Meaning that experts in the field do think AGI could be possible, just not in the way it’s being done with those products. So if you’re ranting against the marketing of LLMs as some Holy Grail that will come alive… again, that was my initial point.
The interesting thing is that you went after my line about AGI>>>ASI, so I’m curious why you think a machine that could do anything a human can do, thinking or otherwise, would stop there. I’m assuming AGI happens, of course, but once that occurs, why is that the end?
well i don’t assume agi is a thing that can feasibly happen, and the well deserved ai winter will get in the way at any rate
i’ll say more, if you think that it’s remotely possible you’ve fallen for openai propaganda
The concept of AGI has existed long before OpenAI. Long before Sam Altman was born even.
It’s a dream that people have been actively working on for decades.
While I will say that binary architecture is unlikely to get us there, AGI itself is not a pipe dream.
Humans will not stop until we create a silicon mind.
That said, a silicon mind would have a lot of processing power, but it would not have any sort of special knowledge. If it pulls its knowledge from the Internet, it might end up dumber for it.
That you won’t even discuss the hypotheticals or AGI in general indicates you’ve got a closed mind on the subject. I’m totally open to the idea that AGI is impossible, if it can be demonstrated that intelligence is strictly a biological phenomenon, i.e. that it has to be biological in nature. Where does intelligence come from? Can it be duplicated in other ways? Such questions led to the development of ML and AI research, and yes, even LLM development, trying to copy the way brains work. That might end up being the wrong direction, and silicon intelligence may come from other methods.
Saying you don’t believe it can happen doesn’t prove anything except your own disbelief that something else could be considered a person. I’ve asked many questions of you that you’ve ignored, so here’s another: if you think only humans can ever have intelligence, why are they so special? I don’t expect an answer, of course; you don’t seem to want to actually discuss it, only deny it.
no, i’m gonna stop you right there. llms weren’t made to mimic the human brain or anything like that; llms were made as tools to study language. it’s categorically impossible for llms to provide anything leading to agi. these things don’t think, don’t research, don’t hallucinate, don’t have agency or cognition, don’t have working memory the way humans do; these things do one thing and one thing only: generate the string of tokens that was most likely to follow the given prompt, given what was in the training data. that’s it; that’s all there is to it. i know you were promised superhuman intelligence in a box, but if you’re using a chatbot, all the intelligence there is your own. if you think otherwise, you’re falling for a massive ELIZA effect, a thing that has been around for fifty years now, augmented by a blizzard of openai marketing propaganda, helped by tech journalists who never questioned these hypesters, funded by fake nerd billionaires of silicon valley who misremembered old scifi and went around building torment nexii, but i digress
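To make the “most likely next token” description above concrete, here is a minimal sketch of a greedy decoding loop. It assumes the Hugging Face transformers library and uses the small gpt2 checkpoint and a toy prompt purely as illustrative placeholders, not as anything the posters here were describing:

```python
# Minimal greedy next-token loop: at every step the model scores all vocabulary
# tokens and we append the single most likely one. This is the "generate the
# string of tokens most likely to follow the prompt" behaviour described above.
# Assumes: pip install torch transformers; "gpt2" is just an illustrative model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

input_ids = tokenizer("The AI in Her was", return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(20):                        # generate 20 tokens
        logits = model(input_ids).logits       # scores for every vocabulary token
        next_id = logits[0, -1].argmax()       # pick the single most likely next token
        input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
```

Production chatbots layer sampling, instruction tuning, and system prompts on top, but the core loop is this next-token prediction.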
i’m not saying that intelligence is always an exclusively biological thing, but i do think that the state of neuroscience, psychology, and also the computational side of research is woefully short of anything resembling a pathway to solving this problem. instead, this is what i think is going to happen:
llms are a dead end in this sense, but they also take the bulk of ai/ml funding now, so all the other approaches are starved of funding. historically, after every period of intense hype of this nature comes an ai winter; this one is bound to happen too, and it might be worse, since it looks like the hype also fueled an investment bubble propping up a large part of the american economy. so when the bubble pops, on top of the historically usual negative sentiment stemming from overpromising and underdelivering, there’s gonna be resentment about aibros worming their way into management and causing mass layoffs, replacing juniors with idiot boxes and lobotomizing any future pipeline of seniors, etc etc.
what typically happened next is that a steady supply of research in the cs/math departments of many universities accumulated over low tens of years, and when some good enough new development happened and everyone had forgotten the previous failures, the hype train started again. this time that step will be slowed down both by the current american administration cutting off funding to many types of research, and by the incoming bubble crash that will make people remember for a long time what kind of thing aibros are up to.
when, not if, the most credulous investors’ money thrown into openai, including softbank’s, gets burnt through, which i think might take a couple of years tops (i would be very surprised if any of these overgrown startups isn’t a smoking crater within five years), very few people will want to have anything to do with any of this. and when the next ai spring happens, it might be well into the 40s or 50s, and by then i guess the effects of climate change will be too strong to just ignore and try to catch another hype train; there are gonna be much more pressing issues. this is why i think that anything resembling agi won’t show up during my lifetime, and if you want to discuss gpt41 overlords in the year 3107, feel free to discuss it with someone else.
The movie shows the inevitability. I’ve heard of people giving ChatGPT personal names, like John. We are heading towards better AI that could, for better or worse, fill the gap of human loneliness.
Not with LLMs we’re not… it’s only A, no I.
True, but LLMs will evolve to become AI.
Nope.
And Verbiest’s steam-powered cart did not lead to cars.
I’ve yet to see us approaching AGI.
The people falling in love with large language models still have breakdowns when the version gets updated or something they communicated shifts out of the context window.
And yet we have people treating a chatbot as a therapist or even a romantic partner. It’s going to get worse as AI technology develops,
or even as a pharmacist or doctor.
AI doesn’t stand for artificial personhood. One of the first big AI projects was teaching a computer to play chess.
The word AI as a technical term refers to a broad category of algorithms; what you’re talking about is AGI.