Avatar
i will never stop being furious at large language models being successfully branded as "artificial intelligence" they're not conscious & functionally can't be; they don't "hallucinate," they fail to generate appropriate outputs
Avatar
when i'm in a particularly bad mood, i'm reminded that public relations speech functions much as cult speech: available language is reduced in order to trap you within certain ideological claims and culture has thus far accepted the tech bro cult's terms
Avatar
This is a great frame. I have difficulty with the destruction of meaning by software marketing in particular.
Avatar
half the time they're not even artificial! they're just underpaid workers in the Philippines or Kenya staring at a spreadsheet and telling the model what to think!
Avatar
"Artificial Intelligence" either means "stuff we don't have deterministic algorithms for" or else "full implementation of an equivalent to human intelligence and personality" and people are having a lot of fun Mott and Bailey-ing those two right now.
Avatar
It's The Hoople, all the way down.
Avatar
It's not how brains work. No matter how many machines you strap onto it, it won't simulate a brain.
Avatar
Desperate push to get them rebranded "plausible bullshit generators"....
Avatar
They aren't performing anything like human cognition, which has so much to do with granular neurochemical processes and how mammals evolved to be open systems mediating one another's perception of reality. Consciousness is relational, and these programs have NEITHER an "I" nor a "we" to stand on.
Avatar
The idea of AI replacing human input at human capacity is a marketing tactic (lie) by LLM companies. They hit all the hallmarks that excite corporate decision makers who are quick to deploy the latest in bleeding-edge stupidity, with hype & fast promises that seem to be built on a house of cards.
Avatar
The term itself was coined in 1955 by university comp sci professors. An aspirational term. Modeled after the human brain. Various mathematicians had been experimenting with ‘neurons’ for years. They’re logic gates that fire if the input is above a certain level.
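To make that "fires if the input is above a certain level" idea concrete, here is a minimal sketch in Python of such a threshold unit: a weighted sum of inputs that outputs 1 only if it clears a threshold. The weights and threshold are made-up illustrative values, not taken from any real model.

```python
# A McCulloch-Pitts-style threshold "neuron": inputs are weighted, summed,
# and the unit fires (outputs 1) only if the total clears a threshold.
# Weights and threshold here are made-up values for illustration only.

def neuron(inputs, weights, threshold):
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With these weights and a threshold of 1.0, the unit behaves like an AND gate.
print(neuron([1, 1], [0.6, 0.6], 1.0))  # 1 (fires)
print(neuron([1, 0], [0.6, 0.6], 1.0))  # 0 (does not fire)
```

Modern networks use smoother activation functions and learned weights rather than a hard cutoff, but this threshold rule is the lineage being referenced.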
Avatar
Early uses proved various mathematical theorems and were useful in academia, where they still are. Commercial models followed and now we’re here. Agree it’s a weird term, see also ‘machine learning.’ But to me it’s really industry ruining a valuable technology by over-promising.
Avatar
Generative algorithms gonna generate, no intelligence required (or actually present).
Avatar
Most accurate term I’ve seen in a paper is “confabulation”. Hallucination makes it seem aberrant when it’s an expected part of the process.
Avatar
Mapping sentences into n-dimensional retrievable vectors is not “thought”, no matter how “random” the connections. In fact it could be argued that randomization is definitely NOT intelligence because it lacks sensory context and motivation.
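For readers unfamiliar with the "retrievable vectors" framing, here is a toy sketch of what it means: sentences reduced to vectors (here just word counts) and ranked by cosine similarity. Real systems use learned embeddings with far more dimensions; the tiny corpus and counting scheme below are illustrative assumptions, not any particular model.

```python
# Toy "sentences as retrievable vectors": bag-of-words counts + cosine similarity.
# Real embedding models are learned, not word counts; this only shows the mechanics.
import math
from collections import Counter

def embed(sentence):
    # Represent a sentence as a vector of word counts.
    return Counter(sentence.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

corpus = ["the cat sat on the mat", "stock prices fell sharply", "a cat slept on a rug"]
query = embed("cat sat on the mat")

# Nearest-vector lookup, not "thought": just geometry over counts.
best = max(corpus, key=lambda s: cosine(query, embed(s)))
print(best)  # "the cat sat on the mat"
```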
Avatar
It's why I find it weird that anyone treats them as "apocalyptic" threats when we are nowhere near them being close to human-like intelligence. They're just good at looking like it, and that's not intelligence lol
Avatar
Artificial Intelligence as a term existed long before LLMs. It's not meant to mean a program thinks or acts like a human, or that it has any kind of consciousness or thought process. Instead, it refers to programs that can act rationally, i.e. take the correct steps to accomplish some given task.
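In that textbook sense, "take the correct steps to accomplish some given task" covers things as mundane as a search algorithm. A minimal sketch, assuming a made-up grid world: breadth-first search reliably finds a path to a goal, with no cognition or consciousness anywhere in it.

```python
# Textbook-sense "AI": a program that takes correct steps toward a given goal.
# Plain breadth-first search over a tiny made-up grid; no cognition involved.
from collections import deque

def shortest_path(grid, start, goal):
    queue = deque([(start, [start])])
    seen = {start}
    while queue:
        (r, c), path = queue.popleft()
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), path + [(nr, nc)]))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],   # 1 = wall
        [0, 0, 0]]
print(shortest_path(grid, (0, 0), (2, 0)))  # routes around the wall to the goal
```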
Avatar
No one in the field uses AI as any claim to consciousness or human-like intelligence. It's a term of art now that refers to a particular branch of computer science.
Avatar
The CEOs of the biggest AI companies claim exactly that
Avatar
Do they? Can you point me to an example of them doing that?
Avatar
No one in that article is claiming LLMs are currently conscious or can reason like humans. They think we will eventually create AGI and that it will be an existential threat, which is very stupid but a different claim
Avatar
People are absolutely saying this. Read interviews with Hinton. Here’s a nice academic paper quote: “LLM-based assistants start to exhibit artificial general intelligence across diverse tasks.” I come across this BS in many of the academic papers I read: arxiv.org/pdf/2306.05685
Avatar
I agree that's bullshit but I guess I should clarify that I'm not saying no one is making those claims at all, just that it's not what they mean when they say AI. In that paper they use AGI, which is what people in the industry use when they're actually referring to human-like intelligence.
Avatar
There already was a word for that: algorithm
Avatar
What does it actually say about humans when "non-intelligent" machines perform many cognitive tasks so much better than "intelligent" humans?
Avatar
It's getting tiring, isn't it? Neural networks have been included as part of the field of AI for decades, and these people think that the term being applied to them is a new marketing gimmick. They refuse to listen to people who have been studying and working in this and related fields for decades.
Avatar
These people know fuck all about the field. It reminds me of antivaxxers who call mRNA technology "gene therapy." They are so confident in their understanding, and for no good reason. They fail at even basic understanding.
Avatar
That's any program. Everyone writes programs that take correct steps to accomplish a task. The difference with AI is that it often generates total fucking horseshit.
Avatar
Correction: making shit up is in fact the entire point of LLMs. Plagiarizing Wikipedia is an inappropriate output for an LLM. A plausibly constructed sentence that has never been written before is an LLM behaving as expected.
There is a meaningful sense in which hallucination isn't "a problem" with generative AI, it's *the whole frakking point of it*. If you don't want a tool whose sole purpose is making shit up, generative AI is not the tool for you!
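A toy sketch of why that is the expected behaviour: generation is sampling a plausible continuation from a probability distribution, and nothing in that step checks whether the result is true. The prompt, tokens, and probabilities below are invented for illustration, not taken from any real model.

```python
# Toy next-token sampling: pick a plausible continuation from a distribution.
# Nothing here consults facts; it only weighs how likely each token "sounds".
# Prompt, tokens, and probabilities are made up for illustration.
import random

next_token_probs = {
    "the capital of australia is": {"canberra": 0.55, "sydney": 0.35, "melbourne": 0.10},
}

def generate(prompt):
    probs = next_token_probs[prompt]
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights)[0]

# Often right, sometimes confidently "sydney" -- the mechanism is identical either way.
print(generate("the capital of australia is"))
```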
Avatar
I think the 'hallucinate' part is even worse: they generate outputs the user accepts. If it generates a wrong output and you point that out, it will just generate something different. Like a very expensive and elaborate yes-man.
Avatar
LPRM - Large Pattern Recognition Models
Avatar
Well, I always preferred Artificial Ignorance as a more correct term of art for this today, but clearly the other possibility is that we are over-rating human intelligence ;). Many politicians and CEOs seem little more than defective LLMs...
Avatar
Developing humans learn to produce intelligent and intelligible language outputs with repeated exposure to a relatively small chunk of just their native language. LLMs have been fed the entire Internet and still can't do it. AI it is not.
Avatar
I’ll never forgive Minsky for swiping all the analogue computational funding!
Avatar
The media are at best credulous clowns, willing to promote anything the silicon vultures (or their wall street simulacra) want them to.