during grad school in the 90s, some cog sci folk built computer models in the hopes of understanding human learning. i saw an early version of LLMs, which is what "artificial intelligence" amounts to even now. even as a novice, i saw the fundamental distinction between human and machine learning.
So Hinton is taking exactly one component of the *potential* frameworks of human understanding, & claiming that 1) that is all of human understanding, 2) that generative "AI" understands in exactly the same way, & thus 3) that generative "AI" understands like humans. Honestly, that's just bad logic.
during the Q&A at a lecture where the researcher made hinton-like claims, i said, "but your model undergeneralizes categories and then expands. humans overgeneralize and then refine. those are completely different processes." the guy paused sheepishly and said, "yeah true."