Post

Avatar
I just can’t stop thinking about how much less data a human brain needs to build the necessary patterns for complex language and creativity. Like how can a machine read every scrap of digitized human existence and still not know how to write a coherent paragraph?
I don't know how solid this 2022 paper was, but it seems to be working out as predicted. They already scraped most of the non-garbage data on the planet back then, and it still sucked. www.technologyreview.com/2022/11/24/1... The obvious conclusion should be that these are not logic machines.
We could run out of data to train AI language programs
www.technologyreview.com
Researchers may have to get creative to make training data stretch further.
Avatar
If I am super generous about how much reading I have done, I probably have a few thousand (?) text references in my life between articles and books. I have a fraction of the data available to ChatGPT and yet I am so much more adept. It’s insulting to call it “intelligent.”
Avatar
Yeah, LLMs are statistical models predicting the likely next word, and they do really pretty well at simulating realistic grammar and plausible-sounding phrasings, but they can’t reason or say smart stuff because that isn’t a function of the statistically likely next word. 🤷‍♀️
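(To make the “likely next word” point concrete, here’s a toy sketch: a simple bigram counter over a made-up ten-word corpus, vastly cruder than any real LLM, but the same basic move of picking whatever most often came next in the training text.)

```python
# Toy sketch of "predict the statistically likely next word".
# Not any real model's internals; the corpus is made up for illustration.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

# Count which word followed each word in the training text.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def most_likely_next(word):
    # Return whatever most often followed `word` -- pure frequency, no meaning.
    return following[word].most_common(1)[0][0]

print(most_likely_next("the"))  # -> "cat", only because of the counts
```

A real model swaps the counts for a huge neural network, but the output criterion is still “a plausible continuation,” not a checked claim.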
Avatar
The problem isn’t the tech or the science behind it—it’s how it’s being marketed and described, and the things the absolute vultures running the companies are trying to convince people it can be applied to. Trying to patch the fundamental incompatibility of method and application with more data is 😬
Avatar
It amuses me when people are like "haha it doesn't know basic mathematics". Neither does my vegetable peeler, because it's not designed to, and trying to trick it only makes me look stupid.
Avatar
YES! But it knows what sentences about mathematics look like and does a very good job of spitting some out when you ask it to! It is actually doing EXACTLY what it was designed to do, and very well! It’s just that it’s being MARKETED as doing something completely different.
Avatar
Which is why I think it’s the most perfect scam machine ever built… it tricks so many people into thinking it’s intelligent because it does it so damn well…
Avatar
They just keep shoveling more garbage in, and are astounded that it keeps outputting garbage. I've pointed out before that the core theory behind this kind of AI is irreparably flawed. It can never do what they want it to do.
Avatar
Most 5 year olds come up with comprehensible narratives and can construct natural language sentences, and they do that with a minimum of sight words. LLMs can’t even reliably, repeatably manage tasks we expect from preschoolers.
Avatar
One of my favorite stoned thoughts happened when I was lying in a dark room, trying to go to sleep, and I suddenly became aware of how many things I *knew*. The wind picked up and, with no view, I knew which tree outside was making which sound; what light was being blocked to make leaf shadows dance
Avatar
on the closed shades. I was understanding so much about the world I could not see from shadows and sounds and smells from outside. Anyway I think of this because I think the AI people really don’t understand how much of cognition begins in the body, and how many kinds of information we synthesize
Avatar
CONSTANTLY to create understanding, and then from that, meaning. An LLM could never. Like asking a music box to compose.
Avatar
That mode of thinking requires introspection, and self-awareness, and those are terribly frightening modes of thought for many who end up in the techbro society. It’s hard to get there; to quote Timothy Leary and the Beatles, “Turn off your mind, relax and float downstream. It is not dying.”
Avatar
I often wonder how LLMs perform in languages other than English. Would a language with more regular rules make it easier for the machine to assemble sentences? It probably doesn't even matter, since the machine can never form a sentence with intention.
Avatar
Because human brains can connect words to *other things that are not words*: the word CAT to actual cats, the word HUNGRY to feeling hungry. LLMs only connect tokens to other tokens; they are fundamentally disconnected from the existence of meaning.
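(A toy sketch of that point, with made-up token IDs: to a language model, “cat” is just an index into a vocabulary table, and nothing in the table points back at an actual cat.)

```python
# Toy illustration, not any real tokenizer: in a model's vocabulary table,
# "cat" is just an arbitrary integer. These IDs are invented for the example.
vocab = {"cat": 1742, "hungry": 893, "the": 5}

token_id = vocab["cat"]
print(token_id)  # 1742: an index into a table, not a pointer to any animal

# The model only ever sees sequences of IDs and learns which tend to co-occur;
# the link from "cat" to a furry animal exists in us, not in the table.
print([vocab[w] for w in "the hungry cat".split()])  # [5, 893, 1742]
```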