
computer scientists: we have invented a virtual dumbass who is constantly wrong tech CEOs: let's add it to every product
I Make Things Up! strangely omitted from the list.
having no relation to or concept of objective reality limits the usefulness of current "AI" offerings
And yet, when I ask a question with what appears to be good context (a properly engineered prompt, if you will), I get apparently useful results. For example: this question was about what damp leasing means in the context of airline operations. I had no idea. Now I do.
chatgpt.com
I've subscribed to Aviation Week for over two decades. My experience is that one of the biggest barriers to understanding the nuances of a specialty is not knowing what the various technical terms mean, i.e., the jargon.
I had never heard the terms damp leasing or wet leasing before. ChatGPT explained them well, given a sufficient prompt. I argue that makes ChatGPT useful.
By contrast, here is the non-AI search via DuckDuckGo: You will notice that the results are much more verbose. ChatGPT made an accurate and succinct summary of the various sources about airline industry damp leases, and extended the discussion to wet leases.
damp leasing at DuckDuckGo (duckduckgo.com)
I cannot argue that ChatGPT works well for all scenarios. Folks I respect here have reported that its performance for code generation has apparently gotten worse recently. But I have found it valuable in understanding the sometimes obscure language in many technical texts.
I dunno... It took a massive amount of energy to return similar information, but without citing any sources or adding significant value to the existing set of data. The definition wasn't hiding; it was readily available from myriad existing sources.
Beyond that, LLMs continue to pose a very real risk of assembling a sequence of words that looks like a valid answer, but which is very much not one. With no cited sources or context, verification is nearly impossible. Your one example is accurate. But how can you be confident that's the case?
I would like LLMs to have the ability to cite sources. But how do I know that what is posted on a website is true and accurate? One way is the reputation of the source. Another is the plausibility of the answer, which depends on common sense and lateral knowledge. A third is cross-checking.
I haven't studied LLMs deeply enough to know all the reasons why they can confabulate. But one reason I know to be real is training on contradictory information. Low-quality information and misinformation are deadly to an LLM. That's a policy and management issue.
One LLM I use is QuantConnect's Mia. Mia is trained on the documentation and codebase of QuantConnect, not on the general web. Because its training dataset is restricted, Mia's tendency to confabulate is small enough that I haven't observed it in several months of use.
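The idea of restricting an assistant to a known corpus can be sketched as toy retrieval. This is only an illustration of the principle; the corpus, scoring, and refusal behavior here are my assumptions, not how Mia actually works:

```python
# Hedged sketch: answer only from a fixed corpus, and refuse when no
# passage matches, rather than generating an unsupported answer.
# The corpus entries and matching rule are illustrative assumptions.
corpus = {
    "damp lease": "A damp lease provides the aircraft and flight crew, but not the cabin crew.",
    "wet lease": "A wet lease provides aircraft, crew, maintenance, and insurance.",
}

def answer(question):
    """Return the passage whose key best overlaps the question, or refuse."""
    q_terms = set(question.lower().split())
    best_key = max(corpus, key=lambda k: len(q_terms & set(k.split())))
    if not q_terms & set(best_key.split()):
        return "No answer in corpus."  # refusing beats confabulating
    return corpus[best_key]
```

The point of the design is the refusal branch: a restricted system can say "I don't know" instead of inventing text.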
You don't know that a website is accurate. But you have a much greater ability to figure it out. As for power, yes, search also uses a lot. But AI training adds massive compute, meaning massive incremental power consumption.
Can you quantify or describe how humans determine the accuracy of a website? I have an intuition about how I do this, but no formal description. I think this is a cognitive science research question.
No, the information wasn't hiding. But extracting the succinct explanation is a noticeable cognitive load. As for the massive amount of energy, are you familiar with how large-scale search engines are implemented? If not, you might want to take a look.
Google's search architecture requires a massive number of individual servers scattered around the globe, constantly recomputing the inverted index. That inverted index is then available to a similarly massive number of individual servers, which is why they can return results so quickly.
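For anyone unfamiliar with the term, an inverted index maps each term to the set of documents containing it, so a query touches only the relevant documents instead of scanning everything. A minimal toy sketch (the documents and query are made up for illustration):

```python
from collections import defaultdict

# Toy documents standing in for a web corpus.
docs = {
    1: "damp lease includes aircraft and flight crew",
    2: "wet lease includes aircraft crew maintenance insurance",
    3: "dry lease is aircraft only",
}

# Build the inverted index: term -> set of document ids containing it.
index = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.lower().split():
        index[term].add(doc_id)

def search(query):
    """Return ids of documents containing every term in the query."""
    terms = query.lower().split()
    if not terms:
        return set()
    result = index[terms[0]].copy()
    for term in terms[1:]:
        result &= index[term]  # intersect posting sets
    return result
```

Real engines add ranking, sharding, and constant re-crawling on top of this structure, which is where the server fleets come in.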
DuckDuckGo doesn't have its own data centers; it uses AWS, but AWS is a massive collection of servers in its own right. When disparaging the energy use of AI, I'd like to see it compared to the energy use of, say, Amazon (all of it) or Google search. Generalities don't tell us much.