'LLMs to send birthday messages' is the latest fascinating example of 'a thing that is useful for governments and organisations being weirdly packaged as useful for households'.
So much of, e.g., anti-fraud is going to become easier and more effective the more of it we can automate (a lot of it is just 'we got this bit of the state to talk to this bit of a bank, which talked to this mobile phone company', which obviously takes a lot more time the less automated it is).
as a developer, i do not see how an llm would even enter this picture; all of this can be automated without one
the key problem with automating these is always that you still need human supervision for cases where automation goes wrong, and if anything, llms make this task harder
Agree that you are never going to, nor should you want to, automate humans out of the process. But turning the job of someone who works in anti-fraud into one where they spend more of their time making decisions and less time translating their own work for multiple audiences is a time-saver.
here's the thing though: AIs have been in use for many years now to process INCOMING information
so what you will probably end up with is machines producing untold shittons of slightly faulty paperwork in a human-ish language, which other machines will process (once again, with errors)
at which point it's like using voice commands to write a text to someone who uses text-to-voice to "read" them
you might as well just leave a voice message
you don't need llms in inter-machine communication, their only possible use is in human-machine interaction, but the human side limits scale
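The point above in miniature: machine-to-machine exchange works fine as plain structured data, with no language model in the loop. (A toy sketch; the field names here are invented for illustration.)

```python
import json

# One system serialises a fraud flag as structured data...
report = {"case_id": "F-1042", "flagged": True, "reason": "velocity_check"}
wire = json.dumps(report)

# ...and the other parses it back with zero ambiguity to resolve.
received = json.loads(wire)
assert received["flagged"] is True
```

No prose generation, no prose interpretation, and nothing for a model to get slightly wrong in either direction.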
I mean - in this case, we're talking about a thing that real hospitals and government departments have done: the outputs don't end up as slightly faulty paperwork, because they are still being checked and finalised by clinicians or HUD officials; the tools just cut down the amount of time the paperwork takes.
(I'm also a little bit...dubious about this idea that all paperwork done by organisations was flawless and didn't need a human eye looking over it to go 'hmm, is *that* right?' before.)
i'm not saying that at all
what i'm saying is that producing more paperwork isn't going to solve the problem of too much paperwork
what it would produce is machines pretending to be humans writing stuff to be read by other machines pretending to be humans
which is highly inefficient
but here's an idea
we could use AI (the real thing, not the bullshit machine) to observe and map out the crisscrossing networks of paper trails in complex government scenarios, and optimise the structure
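To make the idea concrete, here's a toy sketch of one way to start: model who sends which paperwork to whom as a directed graph, then look for "relay" hops that duplicate a route that already exists directly. The agency names and flows are entirely made up; a real mapping would be mined from actual document logs.

```python
from collections import defaultdict

# Hypothetical paper trails: (sender, receiver) pairs.
flows = [
    ("hospital", "council"),
    ("council", "benefits_office"),
    ("hospital", "benefits_office"),  # a direct route already exists
    ("gp", "hospital"),
]

def redundant_relays(edges):
    """Return (a, b, c) triples where a->b->c runs alongside a direct a->c."""
    out = defaultdict(set)
    for a, b in edges:
        out[a].add(b)
    triples = []
    for a in out:
        for b in out[a]:
            for c in out.get(b, set()):
                if c in out[a] and c != a:
                    triples.append((a, b, c))
    return triples

print(redundant_relays(flows))
# the hospital -> council -> benefits_office hop duplicates a direct route
```

Spotting a redundant hop is the easy part; deciding whether it *should* be collapsed is exactly where the interesting research would be.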
that would be a much better way of utilising research and computing resources in this area
Well, it's not a 'too much paperwork' thing. Discharge notes are actually important! It's just that they can be, and are being, changed in a way that reduces what we ask of clinicians when they write them.
I assume so. (And yeah, it's a code by omission: there's no risk to you seeing 'Ed Jefferson is a kind person who you absolutely will not want to have an orderly on hand for, unlike the patient who does NOT have this in their notes'.)
It sounds like you have better access than a lot of the people currently writing about AI. I don't recall any of your newsletters being about it. Have I missed one? If not that's definitely something I hope you cover in the future.
I've never seen concrete info on real-world uses of generative AI
I suspect that requires a cultural shift as much as a technological one. I’ve had some very surreal conversations with my building society’s fraud section, often centring on their contention that basically any external (cheaper) transfer service is “prone to scams”.
As a good example of this, read up on what the Dutch tax services did a decade ago to families making use of 'fraudulent' child care services, and how dodgy algorithms and deliberately limiting human intervention in same meant tens of thousands of people got ruined.
Contrary to many people's imagination, LLMs aren't only used for producing new text; they're also used for processing incoming human-generated text. You could very well feed an LLM with a sufficiently long context window a number of documents and ask "do you see any inconsistencies between them?"
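A minimal sketch of what that looks like in practice: bundle several documents into one prompt and ask for contradictions. `build_consistency_prompt` is a hypothetical helper, and the actual model call is left as a commented stub since it depends on whichever API and credentials you have.

```python
def build_consistency_prompt(documents):
    """documents: dict of name -> text. Returns a single prompt string."""
    parts = [
        "Do you see any inconsistencies between the documents below?",
        "Answer with a list of specific contradictions, or 'none'.\n",
    ]
    for name, text in documents.items():
        parts.append(f"--- {name} ---\n{text}")
    return "\n".join(parts)

# Invented example documents with a deliberate date mismatch.
docs = {
    "discharge_note": "Patient discharged 12 March, no follow-up needed.",
    "gp_letter": "Please arrange follow-up for the 14 March discharge.",
}

prompt = build_consistency_prompt(docs)
# prompt would then go to whichever long-context model you have access to,
# e.g. (pseudocode): reply = client.chat(model=..., input=prompt)
```

The human still adjudicates what the model flags; the win is that nobody had to read both documents side by side to notice the dates disagree.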