The thing is, it's simply impossible for the kinds of statistics-based models OpenAI is working on to turn into anything like intelligence, let alone the general intelligence they keep promising. So every time they make that claim, they are 100% lying to pump the stock, which is fraud.
Look, right now, ChatGPT and similar products are really nothing more than complicated and energy-intensive auto-complete. That's it.
It's mathematical models of "What should the next word be?" based on scraped data.
It's not going to make the leap to a general intelligence from that.
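The "autocomplete" framing above can be made concrete with a toy sketch. This is a minimal bigram model, assuming a tiny made-up corpus; real LLMs use neural networks with billions of parameters over vastly more data, but the training objective is the same shape: predict the next token from what came before.

```python
from collections import Counter, defaultdict

# Tiny made-up "training data" for illustration only.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" (seen twice, vs. "mat"/"fish" once each)
```

Scaling this idea up (longer context, learned representations instead of raw counts) is, roughly, what the thread means by "complicated and energy-intensive auto-complete."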
I actually do understand how ML works, thanks.
I don't claim to know what's required for general intelligence, and as such I'm not confident declaring an approach "impossible" or accusing its proponents of fraud. People here seem to think they understand how to build AGI well enough to call them liars.
Pretty sure there isn't enough language data on earth to build an LLM, as currently implemented, that exhibits general intelligence, particularly given that plenty of the training data is idiocy/sarcasm/shitposting and the algorithm has no way to distinguish them.
I think there needs to be something like a fact/logic checker layered on top of an LLM before its output could be useful. Think of how a brain is structured: lots of separate sections doing their own things and interacting, with the frontal lobe constructing a coherent narrative and doing the fact and reasoning checks.
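The "checker layered over a generator" idea can be sketched as a two-stage pipeline. This is a hypothetical illustration only: `draft_answer` and `check_claim` are made-up stand-ins for an LLM call and a retrieval/logic verifier, not real APIs.

```python
def draft_answer(question: str) -> list[str]:
    # Stand-in for an LLM that returns its answer as separate claims;
    # hard-coded here purely for illustration.
    return ["Paris is the capital of France.",
            "Paris has a population of 90 million."]

def check_claim(claim: str) -> bool:
    # Stand-in for a check against trusted sources or a logic engine.
    known_facts = {"Paris is the capital of France."}
    return claim in known_facts

def answer(question: str) -> list[str]:
    """Return only the drafted claims the checker can verify."""
    return [c for c in draft_answer(question) if check_claim(c)]

print(answer("Tell me about Paris"))  # ['Paris is the capital of France.']
```

The hard part, of course, is that nobody has a reliable `check_claim` for open-ended text; the sketch only shows where such a component would sit.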
Arguably, minds are mathing, too. But a human brain is wildly more complicated than what we’re building with machines rn. Even individual neurons are significantly more complicated than our models.
Russell & Whitehead’s Principia set out to found mathematics on a basis of formal logic (cf. TTL gates) & proved 1+1=2 after several hundred pages
Lakoff & Núñez took a fraction of the space to construct exp(iπ)=-1 from experiences of embodied toddlerhood via cognitive metaphor
Still “arguable” I guess
Sure, but “minds” are doing more than math! Machines are *only* doing math, regardless of the complexity. Machines will only ever do math, and to think one can reduce what goes on in a human brain to math is hubris.
I don't know what a mind is or whether a computer or humans have one. I know that humans have brains that do something physical that seems to interact with data and I know that computers also do that, but I could not tell you in what ways they are the same and in what ways they are different.
That a human brain seems to perform significantly better than an LLM on tasks such as “know the correct answer to a question, or positively know when it doesn’t,” and manages to do this after being exposed to roughly 10^-8 as much text as an LLM, seems to suggest an LLM is not a good model for a brain.
The fact that so many of the people falling for it/propagating it are the same people who fell for/promoted crypto is going to make its inevitable fallout so chaotic, funny, and just a tiny bit tragic.
i teach 8th grade and can usually tell immediately if my students have used AI because it's much, much worse than their usual writing (longer, less specific, and wayyyy less on topic)
This person has a Ph.D. His intelligence has been the subject of much debate (I, myself, am skeptical). Just for an example.
"Ph.D.-level intelligence" is a meaningless term that conflates a signifier of academic effort with whatever intelligence is.
Like my guess is she just means they got a license to toss a bunch of PhD theses into it, so it'll be able to churn out a bunch of incorrect garbage that sort of looks like that level of academic writing
She is delusional. She has no idea what goes into a PhD. For one thing, it involves adding *new* knowledge, not just chewed-up mush. I hope Dartmouth rescinds her honorary doctorate.
I've said for years: don't make roadmaps beyond 18 months in tech, too much will change.
Sure, set a vision to aim for of what you want to achieve as a company... but not the tech that's gonna do it.
Anything beyond that is a lie.
I believe you. I still think of AI as just a fancy search engine.
Why won't they revive Ask Jeeves, but use AI to make it interact with you like an actual butler or something. That sounds fun.
Stop thinking of more ways to make money, and just have fun for a change.
I think you hit the nail perfectly on the head here. "A fancy search engine" is basically what it is.
Worst thing is that it won't show you WHERE it collected the data from. It may be a proper scientific post, it may be the ramblings of a raving lunatic, or it may be my trolling posts on LinkedIn.
And evidently it can also just make shit up, as happened in that court case where a lawyer was caught citing a case that does not exist, all thanks to ChatGPT.