We need to stop calling it AI. It's not a fact bot. It's a generative language machine, just making things that look like an answer. No fact-checking involved. That's not how they're programmed. People need to learn this.
“Stochastic parrot” is my go-to, or “spicy autocorrect” for a more everyday vocabulary.
Both terms indicate that the LLM doesn’t actually know what it’s saying.
Oh yeah, “spicy autocomplete” was either something I overheard and forgot the source for, or was a parallel invention by like a few thousand people at a minimum.
Not an original observation on my part, but a useful turn of phrase regardless.
THIS. AI does not exist yet, and grifters are buzzwording multiple corporations into the dirt based on vaporware and illegal overvaluations. It's a classic business technique called 'lying to steal money'.
There are so many people who are convinced this garbage can think. It cannot think. It also seems that a lot of humans cannot think, because they would rather BELIEVE these things to be real AI, despite all evidence (and DESIGN) to the contrary. It’s maddening. Worse: corporate execs are believers.
True, it is really an elaborate autocorrect. But I think encouraging people to see it that way is dangerous. Like saying that a gun is just a way of moving metal objects.
AI can produce real, factual answers. Paired with fact-checking, awesome.
It is also dangerous, but mostly because it helps *people* lie.
Government is so thirsty to offload its responsibilities... can't wait until someone gets thrown in jail after following a chatbot's business advice... then has to rely on an AI public defender... before going in front of an AI judge.
I can assure you, the majority of bodega cats are beloved and spoiled. (Most are neighborhood mascots.)
(I mistyped "mascats" and maybe I should have left that.)
I asked how to apply for a permit and it directed me to apply for a "Small Animal Boarding Establishment Permit," which is for "a facility other than an animal shelter where animals not owned by the proprietor are sheltered, harbored, maintained, groomed, fed, or watered in return for a fee." Hm!
That’s not why they do that, though.
It’s not a training-data problem; it’s inherent to what the technology does, which is to predictively generate text.
I also think current text-prediction “AI” just has no capacity for ethos, at least in the way it’s being trained, and maybe fundamentally.
People naively assume it could be “instructed” to simulate one, but without having actually internalized an ethos, values, it ain’t gonna be reliable at all.
Exactly. It's so many steps further down than 'having an ethos' - this technology *does not work with concepts*, just text.
It doesn't know that the word 'apple' has *meaning*; it just knows that that series of characters is statistically often followed by words like 'pie' or 'laptop'.
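To make that concrete, here's a toy sketch of what "statistically often followed by" means. The word-level table and the counts are invented purely for illustration; a real LLM learns statistics over sub-word tokens across billions of documents:

```python
from collections import Counter
import random

# Toy illustration: the "knowledge" is nothing but co-occurrence counts.
# These numbers are made up for the example.
follows = {
    "apple": Counter({"pie": 40, "laptop": 25, "juice": 20, "tree": 15}),
}

def next_word(word: str) -> str:
    """Pick a continuation purely by frequency. No meaning is consulted."""
    counts = follows[word]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

print("apple", next_word("apple"))  # e.g. "apple pie" or "apple laptop"
```

Nothing in that table distinguishes fruit from hardware; 'apple' is just a key.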
Even if it could “think”, the current models are pre-trained in a generic way, and on top of that instructed (not trained) to behave in a certain way for applications (like “you work for the city hall, and help small businesses” and “always answer accurately”).
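As a minimal sketch of what "instructed, not trained" looks like in practice (the role names follow the common chat-message convention; no specific vendor's API is implied):

```python
# The behavioral rules are just more text prepended to the conversation;
# nothing in the model's weights changes, and nothing enforces them.
messages = [
    {"role": "system",
     "content": "You work for the city hall and help small businesses. "
                "Always answer accurately."},
    {"role": "user",
     "content": "Can I keep a cat in my bodega?"},
]

# Under the hood the whole exchange is flattened into one token stream
# for the model to continue with likely-looking text.
prompt = "\n".join(f"{m['role']}: {m['content']}" for m in messages)
print(prompt)
```

"Always answer accurately" is just another string in that stream; the model has no mechanism for checking its output against it.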
…