tbh i think it works if you treat *all* of its output as hallucinations
they're convincing themselves they're talking to some clockwork god when they're reading a text generator
When you use chatGPT you can tell exactly what it is, and exactly what it isn't. It's a nifty tool for a very specific digital task. It is not smart and it can break.
The irony of this is the one thing LLMs should be handling is auto-complete in spellcheckers, and it's not. Instead, I wrestle every day with a trickster ghost that swaps "it's" or "its" for the opposite of what I said, or substitutes perfectly entered basic words for others of a different meaning.
IMO those terms are less accurate. It's not a glitch or error, because the program is performing the function the programmers intended, it's just not the result we as human users expect from the input given.
Sort of? Obviously it's not intentional, but computer scientists and programmers are well aware that the approaches they're using will lead to "hallucinations." They try their best to minimize them, but AFAIK it's impossible to get rid of them completely with the algorithms we have
There is zero difference between when it fucks up versus when it doesn't. That's a qualitative assessment you're applying to the statistical output it's generating. I think you're giving it too much credit for getting some things right
difference between a calculator and a random number sayer. put 2+9 in a calculator and 3 is "wrong" because duh. put 2+9 in a random number sayer and it still did its job to spit a random worthless "3" at you. and whoever made it made a shit program for bad toddlers, but it works. that's chatgpt.
I guess my point is if you had a million monkeys on a million typewriters, there's no point in celebrating the one monkey that banged out Shakespeare. He did not do anything different in his own mind than the other monkeys
It’s really irritating to me that people use the word “hallucinate” with ChatGPT. It’s throwing out inaccurate shit because no one involved in this stupid technology is going to put in a condition in which it says “I don’t know/have an answer”
It’s interesting to me how nobody ever talks about how to program these bots to be discerning about information. Instead they just feed in as much data as possible indiscriminately and pretend what comes out the other side is supposed to be useful.
I can see a possibility of so-called AI being useful if it’s fed a vetted closed corpus but to just throw the kitchen sink of info at it and pretend you’re gonna get anything useful out of it strikes me as madness
This still won’t work. All the LLM algorithms right now are just trying to output words that look like they “come next.” Some interesting things fall out of that! Sometimes the words are correct/appear insightful! But there is NO reasoning happening about meaning. Only about the likelihood of the next word
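To make "only about likelihood" concrete, here's a toy sketch (far simpler than a real transformer, and every name in it is made up for illustration): a bigram model that picks whichever word most often followed the previous one in its training text. There is no notion of truth anywhere in it, yet it still emits plausible-looking word sequences.

```python
# Toy bigram "language model": picks the next word purely by how often
# it followed the previous word in the training text. No meaning, no
# fact-checking -- just counting and likelihood.
from collections import Counter, defaultdict

training_text = (
    "the cat sat on the mat the cat ate the fish "
    "the dog sat on the rug"
).split()

# Count which word follows which.
follows = defaultdict(Counter)
for prev, nxt in zip(training_text, training_text[1:]):
    follows[prev][nxt] += 1

def next_word(word):
    """Return the statistically most likely next word -- right or wrong."""
    return follows[word].most_common(1)[0][0]

print(next_word("the"))  # "cat" -- it followed "the" most often
```

Real LLMs do this over tokens with billions of learned parameters instead of raw counts, but the output criterion is the same kind of thing: likelihood given context, not correctness.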
One thing I’ve heard folks use it for is non-native speakers running draft paragraphs through it and having it rephrase to sound natural. Apparently it does a better job than many grammar checkers. It does a shit job of writing from scratch but a decent job of smoothing.
Exactly. But even then there’s a lot of complexity. Even the “legit” press prints quotes from various sources, including ones that are clearly false, for context. A bot would have to get subtlety, nuance, and implication. That’s not really in the digital wheelhouse.
Yeah, these systems have nothing to do with information other than ingesting it as raw material to excrete sentence-shaped outputs. I think a system designed to evaluate data would have to be structured completely differently than an LLM.
The PR machine need not be well-oiled when the media greedily consumes any tripe they put out without anything resembling a fact check or even a thorough listen
I don't even think it's capable of saying "I don't know" - it doesn't *know* anything, including the fact that it doesn't know anything! it just strings together words that seem to make people satisfied.