
"In this paper, we formalize the problem and show that it is impossible to eliminate hallucination in LLMs." Hallucination is Inevitable: An Innate Limitation of Large Language Models arxiv.org/pdf/2401.11817
The word 'hallucination' is itself a 'hallucination'. The LLM got it wrong full stop. Please stop giving software human attributes. It is not human and will never be human. It may get better and better at 'understanding' and 'predicting' conclusions but it will always be software.
would be cool if the tech world stopped using the word hallucinate. hallucinations are perceptual. humans who experience hallucinations can learn to recognize them as such, whereas LLMs never will be able to do that. these models are producing output exactly as they are programmed to
And thus LLMs have no value to ordinary people, and in fact, they are toxic and a critical new problem for humanity.
"Toxic"? No. They have uses, but not the ones some companies are putting them to.
They are toxic in their material demands for endless water and power.
AI is created by humans so yes.
Though it would be a mistake to blame the sins of the elite on an entire species.
Wikipedia hallucinates, as does journalism, as do humans. We have skepticism, a defense against hallucinations. If your mother says she loves you, check it out, said a wise philosopher. quoteinvestigator.com/2021/10/27/c...
The key point in the paper is that LLMs don't know how to say they don't know (some lovely recursion there, eh?)
And some pretty good ideas have sprung from hallucinations. So there's that.