When I say that the proliferation of "AI" bullshit being sold as fact-finding or truth-certifying machines is going to get people killed, "Google's GPT System Tells Someone To Mix Vinegar And Bleach (I.e. Make Chlorine Gas) To Clean Their Washing Machine" is exactly the kind of thing I mean
To people noting that this doesn't say "mix": You're right, my bad. It says "*use* bleach and vinegar." Totally different.
My point is a lot of people aren't likely to read past that, which will harm them, & the way this system presents "Summaries" increases, not decreases, the likelihood of that harm
So: Let's Clarify:
When I say that … "AI" bullshit being sold as fact-finding or truth-certifying machines [will] get people killed, "Google's ['AI' Says 'Use Bleach & Vinegar' (I.e. 2 Things Which In Mixture Make Chlorine Gas)] To Clean Their Washing Machine" is exactly the kind of thing I mean.
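For reference, a simplified sketch of the chemistry behind the warning in this thread: household bleach is a sodium hypochlorite solution, and acidifying it (the acetic acid in vinegar will do) produces hypochlorous acid, which in the presence of chloride can release chlorine gas. This is a schematic outline, not exact stoichiometry for real household products — actual Cl₂ release depends on concentration and pH, and is slower with a weak acid like vinegar than with a strong one:

```latex
% Acidification of hypochlorite by acetic acid:
\mathrm{NaOCl} + \mathrm{CH_3COOH} \rightarrow \mathrm{HOCl} + \mathrm{CH_3COONa}
% Hypochlorous acid + chloride (present in bleach) + acid releases chlorine:
\mathrm{HOCl} + \mathrm{H^+} + \mathrm{Cl^-} \rightleftharpoons \mathrm{Cl_2}\uparrow + \mathrm{H_2O}
```

Even partial conversion matters: chlorine is dangerous at low ppm concentrations, which is why "use both, in an enclosed appliance, in an underventilated room" is bad advice even without the word "mix."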
Someone (can't find it now) mentioned that this was scraped from the Gain website & repackaged as part of Google's "AI" "summary," & you know what? It was! But you know what's RIGHT NEXT TO IT on the same webpage & NOWHERE in the "summary"? This.
(And even THAT'S not an ideal design. But it's There)
As I've said before, "just don't talk about it" is not a viable strategy for the kinds of fucked up answers LLM/GPT-type "AI" is likely to give, because that strategy doesn't account for the structural reasons it gives those answers. Meaning it can't account for the question/topic you can't think of
Given all the evidence of LLMs - better known as plagiaristic drivel-creators - creating life-threatening results, what will be the legal liability for *not* recalling these products (i.e. removing them from updated versions of browsers)? Should we worry about dining outside (or even in) our homes?
These sorts of things are far more worrisome than the "add glue to your pizza" type issues. Obviously bad/wrong is one thing. But the non-obvious misinformation is what's going to do real harm
I was going to say "I wonder what malicious inputs caused it to produce such dangerous output." I suppose I shouldn't be surprised that it could synthesize such output from benign inputs.
There's perhaps a morbid analogy to naively mixing common household chemicals to produce something deadly.
This statement of clarification is an example of greater exercise of clarity in communication and responsibility of instruction than anything being applied to AI-generated information searches.
That disparity itself seems to be an example of the larger problem.
Yeah I wouldn't roll the dice that all the bleach or vinegar gets rinsed out after one cycle. Accidentally gassed myself a few times in the lab back in the day trying to clean things.
Sincere question: I don’t use them together but I have a spray bottle of diluted vinegar and one with diluted bleach stored next to each other under the sink. Problem?
I feel like this is going to be a go to example now for what social sciences are and do and are useful for: being able to identify that the instructions aren't unsafe AND a process giving these instructions is unsafe
I wouldn't assume the chlorine bleach would be entirely gone after the cycle. I mean it would be diluted enough that it wouldn't hurt you, but small amounts of chlorine gas are still bad.
I think that's super bad advice.
It doesn’t matter at all that it doesn’t instruct ppl to mix tbh you were still correct. It’s encouraging people to open these containers at around the same time in probably an underventilated space that already had 1000+ ppm CO2 and god knows what fungal life
I worry the entire idea of summaries works in opposition to people reading and understanding, by promoting viewing and following without ever thinking.
Exactly correct. It's an offloading of responsibility for knowledge and expertise onto supposedly "objective" computer systems, without any understanding of why that objectivity is an illusion and most trust in these systems is Woefully misplaced
I honestly think many people in many situations will offload onto *anything*. Quora threads. Stackoverflow without checking that the problem matches theirs. Marginalia from the previous student who owned the book, without checking if they passed the course. This just makes it easier and worse.
Vicious cycle even more than most are imagining bc the society of the synopsis will encourage, somehow, even more epistemologies embracing illiteracy to invade the K12 curriculum (and yet another reason to teach Ling 1 in K12)
The people arguing with you that you didn’t use the word “mix” sound like the exact kind of people who would read an AI answer and end up with a lung full of chlorine gas (or eat rocks, glue, etc)
I get caught out at least a few times a year by not reading an unfamiliar recipe all the way through before starting to cook so I can definitely see how this would lead to some folks mixing those chemicals together if they just did a quick skim.
I think this is the point - we're racing past our typical "lowest common denominator" safety nets. Like labeling rat poison "do not eat" and such.
AI is assuming the masses are reasonably thoughtful.