So Hinton is taking exactly 1 component of the *potential* frameworks of human understanding, & claiming that 1) that is all of human understanding, 2) that generative "AI" understands in exactly the same way, & thus 3) that generative "AI" understands like humans.
Honestly that's just bad logic.
I still firmly maintain that we don't know what makes consciousness consciousness & that we have way too many examples of things we think "couldn't," "shouldn't," or "mustn't" be conscious on some definition, Being Conscious, for us to ever firmly claim machines "can't be conscious." But i will say…
…if large language models are conscious, then they're like a depressed, nihilistic toddler raised on a steady diet of 8chan, bad wikipedia edits, and youtube autoplay rabbit holes: a mind with no motivation or desire, without the ability to either discern or care about consensus reality or truth.
And it's google's lead VP making an ad about LLMs and consciousness. Like, to call arguing against claims made by lead technologists at major corporations a "straw man" is to both be woefully underinformed about the state of beliefs w/in the field, & misunderstand what a "straw man" fallacy is. Bye.