Post

Avatar
LLMs are useless for lawyering and other tasks involving actual analysis and understanding not because the technology hasn't gotten there yet, but because that's fundamentally not what statistical word association is capable of doing. Not only can't it understand, it's not even trying to understand.
Avatar
It doesn't matter how much better LLM chatbots get at the thing they're actually doing, the thing they're doing is categorically not what hype merchants in this vein are claiming. It's like claiming Photoshop is going to tell us how to reform the tax system. It's a nonsensical premise.
Avatar
Why is it so hard to get this across to people? So frustrating.
Avatar
To be fair, if this is really what’s happening, it is very easy to be fooled even if you’re smart. Yesterday Claude basically wrote me a novel Rust module via a step-by-step walkthrough, proposed improvements, and implemented them when asked. It can be very hard to square this with dismissals.
Avatar
Maybe I should look at Claude, because ChatGPT, even with the latest models, often produces nonsense when I ask it to help with real-life software dev problems.
Avatar
You can really tell there is no thinking or logic behind the solutions AI offers. Just lucky guesses.
Avatar
Mostly unlucky guesses.
Avatar
We need a better name for AI because it’s anything but intelligent
Avatar
Yes, that is a very good way of putting it.
Avatar
I don't know! And I'm not trying to steer us all into a tired debate or anything; I'm sure we're all sick of it. But it's like, a stochastic parrot as opposed to what? How do humans prove understanding more deeply than by doing what LLMs do, e.g. producing reiterations via parallel semantic associations?
Avatar
There are no semantic associations inside an LLM though. There’s no mechanism for associating words with meaning. It’s tokens all the way down and all the way back.
Avatar
I guess I don't understand the difference, then. If "frog" means "a small amphibian creature commonly found in ponds", don't the many-dimensional vector relationships needed to associate those ideas with each other constitute a semantic mapping that's at least akin to what's in our minds?
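(A minimal sketch of the idea in that last post, for anyone curious: the word vectors below are toy four-dimensional values made up for illustration, not real model embeddings, but they show how "relatedness" can fall out of vector geometry alone, with no dictionary of definitions anywhere in the system.)

```python
from math import sqrt

# Hypothetical toy embeddings: each word is just a list of numbers.
# Real models learn thousands of dimensions from text; these 4 are hand-picked
# purely to illustrate the geometry.
embeddings = {
    "frog":        [0.9, 0.8, 0.1, 0.0],
    "amphibian":   [0.8, 0.9, 0.2, 0.1],
    "pond":        [0.6, 0.5, 0.1, 0.0],
    "spreadsheet": [0.0, 0.1, 0.9, 0.8],
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: near 1.0 means 'pointing the same way'."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b)))

frog = embeddings["frog"]
for word in ("amphibian", "pond", "spreadsheet"):
    print(f"frog vs {word}: {cosine_similarity(frog, embeddings[word]):.2f}")

# Prints roughly 0.99, 1.00, 0.12: "frog" sits near "amphibian" and "pond"
# and far from "spreadsheet", purely as a matter of vector geometry.
```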
Avatar
Sometimes, sure! I’ve had somewhat more mixed experiences. But of course I do avoid swinging for the fences with it, because I want it to actually help me. The #1 thing I use it for lately is parsing TypeScript errors, which can sometimes be almost impossible for a human but is trivial for an LLM.
Avatar
When I do really reach, it does tend to fall short. The Rust module doesn’t quite run yet (and I haven’t had time to really try to figure out why), so yeah, it actually failed my test of “can it do this for me for free,” but it did offer some of the learning opptys of a good pair programming sesh.