So Hinton is taking exactly 1 component of the *potential* frameworks of human understanding, & claiming that 1) that is all of human understanding, 2) that generative "AI" understands in exactly the same way, & thus 3) that generative "AI" understands like humans.
Honestly that's just bad logic.
I still firmly maintain that we don't know what makes consciousness consciousness & that we have way too many examples of things we think "couldn't," "shouldn't," or "mustn't" be conscious on some definition, Being Conscious, for us to ever firmly claim machines "can't be conscious." But i will say…
…if large language models are conscious, then they're like a depressed, nihilistic toddler raised on a steady diet of 8chan, bad wikipedia edits, and youtube autoplay rabbit holes: a mind with no motivation or desire, without the ability to either discern or care about consensus reality or truth.
Yeah, seems like. There's also some rampant anthropocentrism and supremacist thought throughout (even as he claims we're about to be surpassed by "AI").
No kidding! ChatGPT has NO experiences. It’s faster at writing because it’s a program being run on billion-dollar systems consuming a town’s worth of electricity to brute-force sentences into shape via statistical manipulation of symbols. There’s no knowledge or intent—just pattern-matching.