So Hinton is taking exactly one component of the *potential* frameworks of human understanding, & claiming that 1) that component is all of human understanding, 2) that generative "AI" understands in exactly the same way, & thus 3) that generative "AI" understands like humans. Honestly, that's just bad logic.
🫠
I still firmly maintain that we don't know what makes consciousness consciousness & that we have way too many examples of things we think "couldn't," "shouldn't," or "mustn't" be conscious on some definition, Being Conscious, for us to ever firmly claim machines "can't be conscious." But i will say…
…if large language models are conscious, then they're like a depressed, nihilistic toddler raised on a steady diet of 8chan, bad Wikipedia edits, and YouTube autoplay rabbit holes: a mind with no motivation or desire, without the ability to either discern or care about consensus reality or truth.
Why would consuming large amounts of data lead to consciousness? This is not a prerequisite for any animal or thing to be conscious. The phenomenon occurs in the natural world without any external inputs.
I don't think it does; we take in external inputs from very early on in the process. Which is kind of my point: what we think of as the "right kinds" of inputs (the ones we can datafy, formalize, and systematize) are by no means the only kinds of inputs which can, will, or Do lead to consciousness.
Yes, because LLMs are just fancy dictionaries; they only resemble thought through mimicry of human creativity.