Avatar
'LLMs to send birthday messages' is the latest fascinating example of 'a thing that is useful for governments and organisations being weirdly packaged as useful for households'.
Avatar
So much of, e.g., anti-fraud is going to become easier and more effective the more of it we can automate (a lot of it is just 'we got this bit of the state to talk to this bit of a bank, which talked to this mobile phone company', which obviously takes a lot more time the less automated it is).
Avatar
It is not going to usefully replicate the same person who works at the anti-fraud bit of a bank wishing their partner a happy birthday, come on.
Avatar
They say it’s the thought that counts, but thanks to AI, you no longer have to think at all!
Avatar
as a developer, i do not see how an llm would even enter this picture; all of this can be automated without it. the key problem with automating these is always that you still need human supervision for cases where automation goes wrong, and if anything, llms make this task harder
Avatar
Agree that you are never going to, nor should you want to, automate humans out of the process. But turning the job of someone who works in anti-fraud into one where they spend more of their time making decisions and less time translating their own work for multiple audiences is a time-saver.
Avatar
I suspect that requires a cultural shift as much as a technological one. I’ve had some very surreal conversations with my building society’s fraud section, often centring on their contention that basically any external (cheaper) transfer service is “prone to scams”.
Avatar
here's the thing though: AIs have been in use for many years now to process INCOMING information, so what you will probably end up with is machines producing untold shittons of slightly faulty paperwork in a human-ish language, which other machines will then process (once again, with errors)
Avatar
at which point it's like using voice commands to write a text to someone who uses text-to-voice to "read" it: you might as well just leave a voice message. you don't need llms in inter-machine communication; their only possible use is in human-machine interaction, but the human side limits scale
Avatar
I mean - in this case, we're talking about a thing that real hospitals and government departments have done: they don't produce slightly faulty paperwork, because the documents are still being checked and finalised by clinicians or HUD officials; it just cuts down the amount of time the paperwork takes.
Avatar
(I'm also a little bit...dubious about this idea that all paperwork done by organisations is flawless and didn't need a human eye looking over it to go 'hmm, is *that* right?' before.)
Avatar
i'm not saying that at all. what i'm saying is that producing more paperwork isn't going to solve the problem of too much paperwork. what it would produce is machines pretending to be humans writing stuff to be read by other machines pretending to be humans, which is highly inefficient
Avatar
Will the machine still write the weirdly complimentary "Mr Jefferson was a pleasant patient to see" stuff that I assume is some sort of code?
Avatar
It sounds like you have better access than a lot of the people currently writing about AI. I don't recall any of your newsletters being about it. Have I missed one? If not, that's definitely something I hope you cover in the future. I've never seen concrete info on real-world uses of generative AI outside tech companies or consulting groups.
Avatar
This and many similar tasks. For example, technical documentation for safety-critical systems. Are the AI bros confident it can safely do that?
Avatar
"as a developer, I do not see how llm would even enter this picture" through the marketing department, as per 🤷
Avatar
Right - 'machine learning for government bureaucracy' is not sexy, which is why the marketing is 'LLMs to write birthday messages'.
Avatar
As a good example of this, read up on what the Dutch tax service did a decade ago to families making use of 'fraudulent' child care services, and how dodgy algorithms and deliberately limited human intervention meant tens of thousands of people got ruined.
Avatar
Or come to think of it, the Post Office scandal where 'computer says you're guilty' was accepted without question.
Avatar
Contrary to many people's imagination, LLMs aren't only used for producing new text; they're also used for processing incoming human-generated text. You could very well feed an LLM with a sufficiently long context window a number of documents and ask "do you see any inconsistencies between them?"
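
For what it's worth, here is a minimal sketch of that idea (using the OpenAI Python client purely as an illustration; the file names, model choice and prompt wording are assumptions, and anything it flags would still need a human to verify):

    # Minimal sketch: hand an LLM several documents and ask it to flag
    # inconsistencies. File names and model are placeholders for illustration.
    from openai import OpenAI

    client = OpenAI()  # expects OPENAI_API_KEY in the environment

    documents = {
        "intake_form.txt": open("intake_form.txt").read(),
        "case_notes.txt": open("case_notes.txt").read(),
    }

    prompt = "Do you see any inconsistencies between these documents?\n\n"
    for name, text in documents.items():
        prompt += f"--- {name} ---\n{text}\n\n"

    response = client.chat.completions.create(
        model="gpt-4o",  # any model with a long enough context window
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)  # output still needs human review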
Avatar
Agree, I can't see how they improve things or add any value beyond novelty.