So Hinton is taking exactly 1 component of the *potential* frameworks of human understanding, & claiming that 1) that is all of human understanding, 2) that generative "AI" understands in exactly the same way, & thus 3) that generative "AI" understands like humans.
Honestly that's just bad logic.
I still firmly maintain that we don't know what makes consciousness consciousness & that we have way too many examples of things we think "couldn't," "shouldn't," or "mustn't" be conscious on some definition, Being Conscious, for us to ever firmly claim machines "can't be conscious." But i will say…
…if large language models are conscious, then they're like a depressed, nihilistic toddler raised on a steady diet of 8chan, bad wikipedia edits, and youtube autoplay rabbit holes: a mind with no motivation or desire, without the ability to either discern or care about consensus reality or truth.
I'm no carbon-/bio-chauvinist, but the number of things "AI" guys try to reduce to "data architectures" or "Formalized Axioms Only," even in the face of literally decades of work showing those are representations, At Best, & in no way necessarily causal structures, is really shocking & disheartening
Just a note to say that trying to label as "straw man" an argument against claims about "intelligence" and "consciousness" Explicitly made by technologists at major corporations is to both a) be woefully underinformed about the state of beliefs within "AI", and b) misunderstand what a "straw man" is
2004: we are monsters because we do something called autistic thinking that also means we are incomprehensibly stupid
2024: everyone thinks the same and a Macintosh ][ could emulate any brain
Why would consuming large amounts of data lead to consciousness? This is not a pre-requisite for any animal or thing to be conscious. The phenomenon occurs in the natural world without any external inputs.
I don't think it does; we take in external inputs from very early on in the process. Which is kind of my point: what we think of as the "right kinds" of inputs (the ones we can datafy, formalize, and systematize) are by no means the only kinds of inputs which can, will, or Do lead to consciousness.
Yeah, seems like. There's also some rampant anthropocentrism and supremacist thought throughout (even as he claims we're about to be surpassed by "AI")
No kidding! ChatGPT has NO experiences. It’s faster at writing because it’s a program being run on billion-dollar systems consuming a town’s worth of electricity to brute-force sentences into shape via statistical manipulation of symbols. There’s no knowledge or intent—just pattern-matching.
And it's google's lead VP making an ad about LLMs and consciousness. Like, to call arguing against claims made by lead technologists at major corporations a "straw man" is to both be woefully underinformed about the state of beliefs w/in the field, & misunderstand what a "straw man" fallacy is. Bye.
The West has been extremely bad at researching consciousness because the first chapter of Genesis separates man above animals - “Animals = instinct.”
That suppressed research up until the modern era and led to the idea that it was “anthropomorphism” to recognize emotions and cognition in animals.
A perfect exhibition of why computer scientists should shut the absolute fuck up about any other science.
This is like a freshman psych student's understanding of cognition.
I think that's how he likes to portray himself, but he's credentialed as a computer scientist, he publishes in CS conferences, worked in a CS department, works for a big tech company, and is primarily cited in CS work.
I've also never heard a cognitive scientist say his name without exasperation
Computer scientist here. You're 100% correct. All of Hinton's serious work (some of it quite interesting and important) is in computer science, which is basically applied mathematics in a hat and trenchcoat.
A cognitive scientist he ain't.
BTW, I now work on problems in cancer research for a living, which has given me a profound respect for the difficulty of other sciences in general and human biology in particular.
I get your point, but his speciality in CompSci has always been an aspect of CogSci. Cognitive *psychologist* he ain't, but cognitive scientist, sure, unfortunately
I get what you’re saying but I’d still push back on that. He doesn’t publish science in cognitive science fields, therefore he is not a cognitive scientist of any stripe.
He doesn’t study brains, cognition, neurons, etc. He studies big non-linear function generators.
IRL, I've only ever heard one person rather timidly ask whether Hinton might have a point and that person rapidly backtracked when enlightened. I wish credulous journalists would stop giving him the time of day. He is a serious danger to public understanding of computing.
Researchers who have studied how the brain works for large portions of their lives: "we're not 100% certain how this works but we have a good idea"
Ai programmer: "don't worry fam, check this pizza recipe"
The fact that LLMs need to analyze all human literature and the entire public internet in order to make mostly comprehensible answers implies that it is absolutely not the same mechanism humans use to understand things.
Hinton has turned into AI companies’ favorite science mascot. I’m convinced he knows better but he got himself a lucrative gig in the lecturing circuit and he’s riding that wave until it crashes.
Once again, we meet Upton Sinclair's observation:
"It is difficult to get a man to understand something, when his salary depends on his not understanding it."
Neural net modeler and psychologist, here. Your ANN mechanisms are not really the same mechanisms as the ones in human brains, Geoff. They're not even good enough to simulate neural processes in fruit flies. And I am ready to fight anyone about this any time any place.
I really really want to know what he means by "hasn't worked". The models haven't worked? There's no science to back up that theory? It was too hard to model on a computer so he gave up? This guy should shut up forever but also I really REALLY want to know.
Have...have any of these language people read Saussure. It really feels like they haven't read Saussure. "The alternative is we have arbitrary strings in our head and rules to manipulate them" yes. Yes we do, that's what language is. Also no mention of theory of mind, which is critical to understanding.
Lol, is that what they teach you in those useless meeja degrees? 🤪
I work in translation – we've had clients who demanded that the German text for software buttons be no longer than the English original. FYI, Outlook has a setting called "Desktopbenachrichtigungseinstellungen". 😭