Avatar
Of course it did. All these bots were trained on the Internet, where *confidently wrong* is an ethos.
Avatar
That’s not why they do that, though. It’s not a training-data problem; it’s inherent to what the technology does, which is just predictively generating text.
Avatar
I also think current text-prediction “AI” just has no capacity for an ethos, at least in the way it’s being trained, maybe fundamentally. It’s naively assumed it could be “instructed” to simulate one, but without having really internalized an ethos, values, it ain’t gonna be reliable at all.
Avatar
It can’t have an internalized ethos because it doesn’t have a mind. It doesn’t think.
Avatar
Exactly. It's so many steps farther down than 'having an ethos' - this technology *does not work with concepts*, just text. It doesn't know that the word 'apple' has *meaning*; it just knows that that series of characters is statistically often followed by words like 'pie' or 'laptop'.
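As a toy illustration - a made-up two-word frequency counter, nowhere near a real model's scale or mechanics - here's what "statistically often followed by" means in code:

```python
# A minimal sketch (not any real model): next-word prediction as pure
# co-occurrence statistics, with no concept of what an "apple" is.
from collections import Counter, defaultdict

corpus = (
    "i baked an apple pie . she opened her apple laptop . "
    "he ate an apple pie . the apple laptop rebooted ."
).split()

# Count which word follows which. Nothing here represents meaning,
# only how often one string happens to come after another.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

print(follows["apple"].most_common())
# -> [('pie', 2), ('laptop', 2)]: "apple" is just characters with statistics.
```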
Avatar
even if it could “think”, the current models are pre-trained in a generic way, and on top of that instructed (not trained) to behave in a certain way for applications (like “you work for the city hall, and help small businesses” and “always answer accurately”). …
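To make that concrete, here's a minimal sketch with entirely hypothetical names (not any vendor's real API) of how such "instructions" are just text prepended to whatever a generic model continues:

```python
# A minimal sketch of the point above. All names here are hypothetical:
# `generate` stands in for a generic pretrained next-token predictor.

def generate(text: str) -> str:
    """Placeholder for a pretrained model: just continues the input text."""
    return text + " [most statistically likely continuation]"

# The application's "instructions" are nothing more than text prepended
# to the user's input. Nothing is retrained, and nothing checks facts.
SYSTEM_INSTRUCTIONS = (
    "You work for the city hall and help small businesses. "
    "Always answer accurately.\n"
)

def answer(user_question: str) -> str:
    return generate(SYSTEM_INSTRUCTIONS + user_question)

print(answer("What permits do I need for a food truck?"))
```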
Avatar
So even if there is any sort of “intelligence” in general-purpose LLMs, for specific applications it’s that of a professional actor playing a character described in a couple of instructions, vs. a trained professional who has agency, stake, reputation, ethos, experience, and so on.
Avatar
It doesn’t think. It cannot follow the instruction “always answer accurately”.
Avatar
the marketing term "AI" really messes with people's ability to think about these machines. Never mind that the term "accurate" presupposes a fixed world where truth is easily discerned and facts never change.
Avatar
Yes, and it's programmed so that if it can't find a specific answer in its training data, it just makes shit up instead of saying "I don't know". That's a specific programming choice that they absolutely did not have to make.
Avatar
That is still not entirely correct. It uses an algorithm to guess what the next word should be, based on probabilities drawn from a massive dataset. It doesn’t know things or not know things; therefore it is incapable of knowing that it doesn’t know something.
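As a toy sketch with invented numbers and a fictional place name: picking the next word is a weighted random draw, and there is no branch where the system could notice that it "doesn't know" the answer.

```python
# Toy numbers, hypothetical vocabulary - not any real model's output.
import random

# Imagined distribution for the word after "The capital of Freedonia is":
next_word_probs = {
    "Paris": 0.40,
    "London": 0.25,
    "Freedonia": 0.20,
    "unknown": 0.15,  # just another word with a probability, not an admission
}

words = list(next_word_probs)
weights = list(next_word_probs.values())

# Some plausible-looking word always comes out, whether or not any true
# answer ever existed in the training data.
print(random.choices(words, weights=weights, k=1)[0])
```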
Avatar
That dataset, in a case like this, is pretty limited - it isn't the entire internet. Which means that when the bot's search returns a null result, it's programmed to use its "predict the next word" algorithm based on nothing, instead of "if search result = null, then 'I'm sorry Dave, I don't know'".
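Rendered literally, with hypothetical names, the guard being described would look like this - though whether real chatbots contain any such search-and-check step is exactly what the next reply disputes:

```python
# A literal rendering of the "if search result = null" logic above.
# All names are hypothetical; this is not how any particular product works.

def lookup(question: str):
    """Stand-in for a search over a limited dataset; None if nothing matches."""
    return None  # simulate a null result

def answer(question: str) -> str:
    result = lookup(question)
    if result is None:
        return "I'm sorry Dave, I don't know."  # the branch being asked for
    return result

print(answer("some question outside the dataset"))
```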
Avatar
That’s not how it works. You keep assuming it works like people. It doesn’t. It doesn’t search. It cannot and will never think. It’s a sentence generator running on complex statistics. That’s it. That’s all. Nothing else.
Avatar
No, I'm not assuming it works like a human. I'm assuming it works like a search engine with increased predictive complexity - search engines have incorporated predictive text for AGES to help people refine their searches, but this applies the predictive function to the results.
Avatar
And here's the problem: when it's building its answer, it allows answers that are entirely predictive text, without requiring any foundation in the training data set. So you get bullshit that's not based in fact, without it being identified as such. It's shocking it works at all.