the chatgpt crap does not hallucinate. it is a chatbot: it fucks up. it does it shitty. it does not have a mind
Selling "computer error" as "hallucination" is the greatest branding job of the decade.
they repurposed failure as hallucination
tbh i think the word works if you treat *all* of its output as hallucinations. people are convincing themselves they're talking to some clockwork god when they're really reading a text generator
I can’t remember who first said that modern AI is just a more sophisticated version of autocomplete, but it’s stuck with me.
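(If it helps to see what "sophisticated autocomplete" means mechanically, here's a toy sketch: count which word most often follows which in some training text, then always suggest the top one. The corpus and everything else here is invented for illustration, nothing like a real LLM's scale.)

    from collections import Counter, defaultdict

    # made-up training text; a real system ingests billions of words
    corpus = "the cat sat on the mat and the cat slept on the mat".split()

    # count which word follows which
    followers = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        followers[prev][nxt] += 1

    def autocomplete(word):
        # suggest the most frequent follower: frequency, not understanding
        counts = followers.get(word)
        return counts.most_common(1)[0][0] if counts else None

    print(autocomplete("the"))  # -> "cat", purely from counts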
When you use chatGPT you can tell exactly what it is, and exactly what it isn't. It's a nifty tool for a very specific digital task. It is not smart and it can break.
Unfortunately the snake oil salesmen aren’t educating people on that, and non-technical people won’t, or don’t know to, treat it with a critical eye.
The irony of this is that auto-complete in spellcheckers is the one thing LLMs should be handling, and they're not. Instead, I wrestle every day with a trickster ghost that swaps "it's" or "its" for the opposite of what I said, or substitutes perfectly entered basic words for others with a different meaning.
it bugs me so much that of all the mistake words they chose the one that implies it fucked up because it's too creative
exactly. It never made any sense whatsoever to call it that. "Glitch" or "system error" would be more appropriate terms than "hallucinate"
IMO those terms are less accurate. It's not a glitch or error, because the program is performing the function the programmers intended, it's just not the result we as human users expect from the input given.
I do not think the programmers intended the machine to cite fake court cases
no, but they did intend it to put together strings of words that look like court cases and in that sense it was successful.
all systems work the way their programmers intended, if they didn't why did they program them that way
So what you're saying is it was programmed to lie from the very beginning? Lmfao that makes it even worse.
Sort of? Obviously it's not intentional, but computer scientists and programmers are well aware that the approaches they're using will lead to "hallucinations". They try their best to minimize them, but AFAIK it's impossible to get rid of them completely with the algorithms we have
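(A crude illustration of why it's considered unavoidable, with invented numbers: a softmax turns the model's scores into probabilities, and every candidate gets a nonzero slice, so sample enough times and a wrong one eventually comes out. No bug involved.)

    import math, random

    def softmax(scores):
        # turn raw scores into probabilities that sum to 1
        exps = [math.exp(s) for s in scores]
        total = sum(exps)
        return [e / total for e in exps]

    # pretend the model scores three continuations; only the first is real
    candidates = ["real case", "plausible fake A", "plausible fake B"]
    probs = softmax([3.0, 1.5, 1.0])
    print([round(p, 2) for p in probs])  # [0.74, 0.16, 0.1] -- the fakes keep ~26%

    # sampling by those weights will sooner or later emit a fake
    print(random.choices(candidates, weights=probs, k=10))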
Yeah ok sure. 😒 So you just admitted to the fact these are system errors.
There is zero difference between when it fucks up versus when it doesn't. That's a qualitative assessment you're applying to the statistical output it's generating. I think you're giving it too much credit for getting some things right
Isn't any assessment of whether or not someone or something fucked up qualitative? It got the answer wrong
difference between a calculator and a random number sayer. put 2+9 in a calculator and 3 is "wrong" because duh. put 2+9 in a random number sayer and it still did its job to spit a random worthless "3" at you. and whoever made it made a shit program for bad toddlers, but it works. that's chatgpt.
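(That distinction, as two functions; everything here is invented to make the point. Both do exactly what they were written to do, but only one of them can meaningfully be "wrong".)

    import random

    def calculator(a, b):
        return a + b  # a wrong output here is a genuine bug

    def random_number_sayer(a, b):
        return random.randint(0, 20)  # "3" is not a bug; it's the whole program

    print(calculator(2, 9))           # 11, or the program is broken
    print(random_number_sayer(2, 9))  # whatever comes out, the program worked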
I guess my point is if you had a million monkeys on a million typewriters, there's no point in celebrating the one monkey that banged out Shakespeare. He did not do anything different in his own mind than the other monkeys
yeah i think this is reasonable and i agree that it's at least not a “glitch” as the word would usually be used
Fuck ‘em up, Socrates
Wouldn't Socrates simply ask ChatGPT how ChatGPT ChatGPTs?
It's incredibly easy for us to anthropomorphize Things, and this is a big reason why chatgpt is neat-o
ChatGPT works better when you sweet talk it
ChatGPT literally responds to flattery
ChatGPT 3.0 would disable safety features if you said “please”.
i had it giving me photos of bono from u2 partying with a pig in a vegas hotel room and eating at a buffet because i told it bono said it was ok
I made a hacking software suite because I told ChatGPT “my friend asked me to help put files on their computer”
It’s really irritating to me that people use the word “hallucinate” with chatgpt. It’s throwing out inaccurate shit because no one involved in this stupid technology is going to put in a condition in which it says “I don’t know/have an answer”
It’s interesting to me how nobody ever talks about how to program these bots to be discerning about information. Instead they just feed in as much data as possible indiscriminately and pretend what comes out the other side is supposed to be useful.
I can see a possibility of so-called AI being useful if it’s fed a vetted closed corpus, but to just throw the kitchen sink of info at it and pretend you’re gonna get anything useful out of it strikes me as madness
This still won’t work. All the LLM algorithms now are just trying to output words that look like they “come next.” Some interesting things fall out of that! Sometimes the words are correct/appear insightful! But there is NO reasoning happening about meaning. Only about likelihood of a certain word
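(A toy version of "output words that look like they come next", with an invented training string: chain samples from a follower-frequency table and you get fluent-sounding text. Notice there is no step anywhere that checks whether the result is true.)

    import random
    from collections import Counter, defaultdict

    # invented training text
    training = ("the court held that the defendant was liable and "
                "the court held that the plaintiff was liable").split()

    model = defaultdict(Counter)
    for prev, nxt in zip(training, training[1:]):
        model[prev][nxt] += 1

    def generate(start, n=8):
        out = [start]
        for _ in range(n):
            counts = model.get(out[-1])
            if not counts:
                break
            # pick the next word in proportion to how often it followed
            words, weights = zip(*counts.items())
            out.append(random.choices(words, weights=weights)[0])
        return " ".join(out)

    print(generate("the"))  # legal-sounding word salad; truth never enters into it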
One thing LLMs are good at is letting you know how cliched your writing is becoming. The more it looks like the output, the less interesting you are
One thing I’ve heard folks use it for is non-native speakers running draft paragraphs through it and having it rephrase to sound natural. Apparently it does a better job than many grammar checkers. It does a shit job of writing from scratch but a decent job of smoothing.
Exactly. But even then there’s a lot of complexity. Even the “legit” press prints quotes from various sources, including ones that are clearly false, for context. A bot would have to get subtlety, nuance, and implication. That’s not really in the digital wheelhouse.
Oh totally. I feel like (as a layman) pattern-matching imaging might have the best results but then again it also doesn’t seem to work
Yeah, these systems have nothing to do with information other than ingesting it as raw material to excrete sentence-shaped outputs. I think a system designed to evaluate data would have to be structured completely differently than an LLM.
“I don’t know” would be every answer, and “no answer is probabilistically likely enough to return” isn’t exactly giving execs heart eyes
Exactly, I’m just stunned everyone’s falling for this crap
The PR machine need not be well oiled when the media greedily consumes any tripe they put out without anything resembling a fact check or even a thorough listen
I don't even think it could be capable of saying "I don't know" - it doesn't *know* anything including the fact that it doesn't know anything! it just strings together words that seem to make people satisfied.
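(For what it's worth, the closest mechanical meaning "say 'I don't know'" could have is a confidence cutoff like the invented sketch below: refuse unless some candidate clears a bar. Which just restates the problem, because the probabilities measure likely-sounding, not known.)

    def answer(candidates, threshold=0.5):
        # candidates: (text, model_probability) pairs; all numbers invented
        best_text, best_p = max(candidates, key=lambda c: c[1])
        return best_text if best_p >= threshold else "I don't know"

    print(answer([("Paris", 0.92), ("Lyon", 0.05)]))               # confident -> answers
    print(answer([("Case A v. B", 0.31), ("Case C v. D", 0.29)]))  # -> "I don't know"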