Brilliant. Fixing LLMs' inaccuracy by using existing search technology to look up the real answer, then blending that answer into whatever the LLM generates, so the generated answer has a better chance of resembling the real answer, which was looked up but never shown directly to the querent.
“The LLM is still guessing answers, but with RAG [retrieval augmented generation], it's just that the guesses are often improved because it is told where to look for answers…” “There's still no deep understanding of words and the world…”
Can a technology called RAG keep AI models from making stuff up? (arstechnica.com) The framework pulls in external sources to enhance accuracy. Does it live up to the hype?
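For anyone who hasn't seen the loop spelled out, the retrieve-then-generate cycle described above is roughly this: run an ordinary search over a document store, paste the top hits into the prompt, and let the model generate over them. A minimal Python sketch follows; `tiny_corpus`, the keyword-overlap scorer, and `call_llm` are all placeholders made up for illustration, not any real RAG library's API.

```python
# Minimal sketch of retrieval augmented generation (RAG).
# All names here are illustrative stand-ins, not a real library's API.

tiny_corpus = [
    "The Eiffel Tower is 330 metres tall.",
    "Mount Everest is 8,849 metres above sea level.",
    "The Great Wall of China is over 21,000 km long.",
]

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Naive keyword-overlap scoring, standing in for a real search index."""
    q_words = set(query.lower().split())
    ranked = sorted(
        corpus,
        key=lambda doc: -len(q_words & set(doc.lower().split())),
    )
    return ranked[:k]

def build_prompt(query: str, passages: list[str]) -> str:
    """Stuff the retrieved text into the prompt; the model sees it, the user doesn't."""
    context = "\n".join(f"- {p}" for p in passages)
    return (
        f"Answer using only this context:\n{context}\n\n"
        f"Question: {query}\nAnswer:"
    )

def call_llm(prompt: str) -> str:
    """Placeholder for an actual model call (an API, a local model, etc.)."""
    return "<model output conditioned on the retrieved passages>"

query = "How tall is the Eiffel Tower?"
answer = call_llm(build_prompt(query, retrieve(query, tiny_corpus)))
print(answer)
```

Note that the retrieved passages go into the model's prompt, not to the user, which is the point the post is making: the model is still generating freely, just conditioned on text it was handed.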
Just like NFTs before them, all the ways people come up with to make LLMs useful involve stapling them to functional programs that don't need them and could do the job better without the unnecessary layer of overhyped scamware.
"Can we make randomized text output tend towards resembling the truth" would be an interesting experiment if it weren't environmentally ruinous, but the best outcome is never going to be intelligence, nor as effective and efficient as a search engine