The thing that’s broken is capitalism. Most of the shit sorta worked ok 10 years ago (after several decades of pretty shoddy function).
But the tech overlords can’t leave it alone. They must chase “innovation”, i.e. the next big thing. Thus forced upgrades and planned obsolescence and AI and 🐂💩
OpenAI would love for people to believe AI is saturating things, but outside this smarter-autocorrect use case, there has been no penetration. No one is adding this to their app to do things behind the scenes; it's all theater.
Just a little splash, absolutely no saturation
And TBF, there’s a *lot* of hidden AI in background business services, and it’s growing rapidly. Stuff users never see, that may not even have an interface or produce long-form text output. It’s pretty huge.
Yes, of course many machine learning methods are used extensively, are useful and effective, and can often use less energy than a declarative program!
LLMs are not one of them.
I’m not defending the practice, but LLMs are used in quite a number of areas in backend systems now and continue to grow. Vector embedding w/ semantic matching on its own is pretty huge. I’ve been involved w/ a few large scale LLM projects where no consumer would ever know they were there.
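For anyone who hasn’t seen it in practice, here’s a rough sketch of what “vector embedding w/ semantic matching” can look like in a backend. This assumes the sentence-transformers library; the model name, corpus, and query are made up for illustration, not taken from any project mentioned above:

```python
# Rough sketch of semantic matching with vector embeddings.
# Assumes the sentence-transformers library; the model, corpus,
# and query are illustrative only.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

# Backend documents a user never sees directly, e.g. support tickets.
corpus = [
    "Customer cannot reset their password",
    "Invoice totals are off by one cent",
    "App crashes when uploading large files",
]
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)

# An incoming item gets matched by meaning, not by keyword overlap.
query = "the reset link in the password email doesn't work"
query_embedding = model.encode(query, convert_to_tensor=True)

# Cosine similarity ranks the corpus; the top hit is the semantic match.
scores = util.cos_sim(query_embedding, corpus_embeddings)[0]
best = int(scores.argmax())
print(corpus[best], float(scores[best]))
```

The point being: the user only ever sees the routed ticket (or search result, or recommendation), never the embedding step underneath.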
That's one possibility. You'd have to repeat it with other sentences and have it consistently come up with one extra count to have stronger evidence, though. LLMs have been found to be very bad at basic math, so it could be another sort of error.
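If you wanted to gather that stronger evidence, something like the sketch below would do it. `ask_model` is a hypothetical stand-in for however you'd actually query the LLM; here it just simulates the suspected off-by-one so the script runs end to end:

```python
# Rough sketch of the "repeat it with other sentences" test.
def ask_model(sentence: str) -> int:
    # Hypothetical stand-in for an actual LLM call ("how many words
    # are in this sentence?"). Here it simply simulates the suspected
    # off-by-one error so the sketch runs end to end.
    return len(sentence.split()) + 1

sentences = [
    "The quick brown fox jumps over the lazy dog",
    "Colorless green ideas sleep furiously",
    "I before E except after C",
]

for s in sentences:
    true_count = len(s.split())          # ground truth: split on whitespace
    model_count = ask_model(s)
    # A consistent difference of +1 across many sentences supports the
    # "one extra count" idea; scattered differences would point to the
    # generic bad-at-arithmetic failure mode instead.
    print(f"{s!r}: actual {true_count}, model {model_count}, diff {model_count - true_count}")
```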
It took several tries, but I finally found a way to ask Copilot why it fucked up. It gave a pretty good answer – but an answer that implies that you just can't rely on an LLM to answer even basic questions.
There is no truth or falsity baked anywhere into the thing whatsoever. What it has learned is "grammar and naturalness of language". Which is, don't get me wrong, an enormous technical achievement. But the thing doesn't know anything at all, and especially with mathematical things, it really sucks.
And that's worth getting at, because math's grammar, especially, is very stripped down. It exists to sort out truth from falsity in a lot of ways, so "grammatically correct false things" are easy to pose in formal math. Exactly the task that ChatGPT is bad at.
I think you're both wrong. LLMs are actually good at math: feed them a complicated math problem, and they'll spit out an answer. But a lot of LLMs are bad at *counting*, which is the thing you do to get numbers. Counting isn't math; it's a precursor to math.
I'm very much in the "it's a thing with a purpose, that can accomplish many things" camp, but yeah, people whose main interest is marketing and sales oversold it, and then the tech cheerleaders got involved.
And yeah, then people (rightly) started dunking on those two groups.
It's also worth contrasting the other technical task that ChatGPT is surprisingly good at: coding assistance. There, creativity is less valuable than consistently following dominant conventions; repeatable code is more valuable than hyperoptimized code. Programmers favor naturalness.
Does the emoji mean it knows it's getting it wrong and just likes messing with us? And if it knows that, does it know some of us will ask that very question?