That’s not why they do that, though.
It’s not a training-data problem; it’s inherent to what the technology does, which is just predictively generating text.
I also think current text-prediction “AI” just has no capacity for ethos, at least in the way it’s being trained, and maybe fundamentally.
It’s naively assumed it could be “instructed” to simulate that, but without having really internalized an ethos and values, it ain’t gonna be reliable at all.
Exactly. It's so many steps farther down than 'having an ethos' - this technology *does not work with concepts*, just text.
It doesn't know that the word 'apple' has *meaning*, it just knows that that series of characters is statistically often followed by words like 'pie' or 'laptop'.
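To make that concrete, here's a toy sketch of next-word prediction as pure co-occurrence statistics (a bigram counter over a made-up three-sentence corpus; real LLMs use learned weights over subword tokens, but the point stands: there is no concept of "apple", only which strings tend to follow it):

```python
# Toy bigram model: "knowledge" of a word is nothing but counts of
# what has followed it in the training text. Corpus is invented.
from collections import Counter, defaultdict

corpus = (
    "i baked an apple pie . "
    "she bought an apple laptop . "
    "he ate an apple pie . "
).split()

# Count which word follows each word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    """Return the statistically most frequent continuation."""
    return follows[word].most_common(1)[0][0]

print(predict("apple"))  # 'pie' follows 'apple' twice, 'laptop' once -> 'pie'
```

Nothing in there represents fruit or computers; swap the corpus and "apple" would just as happily be followed by anything else.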
Even if it could “think”, the current models are pre-trained in a generic way, and on top of that instructed (not trained) to behave in a certain way for specific applications (like “you work for the city hall, and help small businesses” or “always answer accurately”).
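For anyone unfamiliar with what “instructed (not trained)” looks like in practice, here's a hypothetical chat-API payload in the common role/content message format (field names follow the widely used chat-completion convention; the endpoint and prompt text are made up): the whole “persona” is a few lines of text sent along with every request, while the model's weights are untouched.

```python
# Hypothetical request payload: the "city hall" role exists only as
# prompt text prepended to the conversation, not as anything trained
# into the model. No guarantee the "answer accurately" line is obeyed.
system_prompt = (
    "You work for the city hall and help small businesses. "
    "Always answer accurately."
)
messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "What permits do I need for a food truck?"},
]
print(len(messages))  # the entire "job description" is 2 short messages
```

Delete that first message and the “city hall employee” ceases to exist, which is rather different from how a trained professional works.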
…
So even if there is any sort of “intelligence” in general-purpose LLMs, for specific applications it’s that of a professional actor playing a character described in a couple of instructions, vs. a trained professional who has agency, stake, reputation, ethos, experience and so on.
the marketing term "AI" really messes with people's ability to think about these machines
never mind that the term "accurate" presupposes a fixed world where truth is easily discerned and facts never change
In the city hall example, accurate answers do exist, but nobody would expect, say, a 5-year-old to answer small-business questions accurately just by demanding it strongly enough.