Avatar
Recently, I was peer reviewing a paper, and it cited one of my papers. Except... it wasn't anything I had written. The title sounded like something I'd write. It included coauthors I work with and was in a journal I've published in. But it wasn't real. AI is not good for science.
Avatar
1. We need firm policy on this. 2. There need to be consequences. 3. The journals need to lead the way.
Avatar
Yup, yup. And we can't even get journals to agree on whether the volume number in citations should be bold or italicized, so I'm not holding my breath for a unified policy.
Avatar
Bold? BOLD? Does ink grow on trees??
Avatar
But also: a research misconduct announcement followed by a K99 award to the same person the following week. So, like, it's not all the journals' fault. Nature is basically a preprint server with a $30K processing fee nowadays; bioRxiv puts more effort into typesetting than Nature does lately.
Avatar
I've discovered "lazy" referencing several times while editing/proofing manuscripts, from grad students to PIs. Many errors in author names, journal names, etc. Citations garner the least attention in pre-submission manuscripts.
Avatar
Ok, but you get how a typo in the author name of a real paper is not the same thing as citing a completely made-up paper, right?
Avatar
In principle we all ought to use DOIs or PMIDs and have the reference manager or typesetter format it. Likewise ORCIDs for authors. It would save everyone a lot of time and embarrassment. Too bad academia is such a disastrous mess; dinosaurs will come up with some ridiculous excuse for doing it wrong 🙄
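The plumbing for this already exists, by the way: the doi.org resolvers support content negotiation, so a pipeline can pull a formatted reference straight from the DOI instead of trusting hand-typed (or hallucinated) text. A minimal sketch in Python; the DOI and citation style here are purely illustrative:

```python
# Minimal sketch: resolve a DOI to a formatted reference via DOI
# content negotiation (supported by the doi.org resolvers).
import requests

def fetch_reference(doi: str) -> str:
    """Ask the DOI resolver for a formatted bibliography entry."""
    resp = requests.get(
        f"https://doi.org/{doi}",
        headers={"Accept": "text/x-bibliography; style=apa"},
        timeout=10,
    )
    resp.raise_for_status()  # a fabricated DOI fails here instead of slipping through
    return resp.text

print(fetch_reference("10.1000/xyz123"))  # hypothetical DOI, for illustration only
```

A nice side effect: a made-up citation has no resolvable DOI, so it fails loudly at this step rather than sailing into the reference list.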
Avatar
Ink is one of the few things the publisher actually pays for
Avatar
Well, actually, if you're using soot, i.e. burnt organic material, as your black base, it does grow on trees, aaaaand I'll just see myself out immediately.
Avatar
I don't understand why the author should have so much control over formatting anyway. Submissions should just be plain text, plus, like, LaTeX for formulas and XML fields for citations. The idea that it's even possible to submit something that can "format incorrectly" is wild.
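Something like this, say; a minimal sketch of a structured citation field, with element names loosely modeled on JATS (treat the exact schema as an assumption for illustration):

```python
# Sketch: a citation carried as structured data (an ID plus a DOI),
# so the publisher's typesetter renders the formatted reference and
# there is nothing free-form to "format incorrectly".
import xml.etree.ElementTree as ET

def citation_field(ref_id: str, doi: str) -> ET.Element:
    ref = ET.Element("ref", attrib={"id": ref_id})
    pub_id = ET.SubElement(ref, "pub-id", attrib={"pub-id-type": "doi"})
    pub_id.text = doi
    return ref

elem = citation_field("B1", "10.1000/xyz123")  # hypothetical DOI
print(ET.tostring(elem, encoding="unicode"))
# <ref id="B1"><pub-id pub-id-type="doi">10.1000/xyz123</pub-id></ref>
```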
Avatar
Half my job as a production editor in the 00s on sci/medical journals was checking every single reference, hunting them down and correcting them. (Chinese ones were fun.) Automated shit was just kicking in when I left, and it caused so many obvious mistakes.
Avatar
Reminds me of law students using ChatGPT only for it to cite court cases that don't even exist. We're surrendering to the dissolution of the body of human knowledge. We need bans on this, and AI companies need to stop lying about what LLMs even do.
Avatar
Bans taking what form? As in, treat using these systems as a form of academic dishonesty just like plagiarism?
Avatar
Bans on using AI to create works in other people's names. This is as bad as deepfakes; it harms trust in scientists and public figures.
Avatar
I think this is referring to citations, not whole submissions with fake authors
Avatar
I know. It's just as bad because it still lies on someone's behalf. A bunk paper attaching a legitimate individual's name to a paper or study that doesn't exist erodes that person's credibility. The only solution is an absolute ban on any part of a paper coming from genAI.
Avatar
But can people innocently cite a nonexistent paper? There would have to be a fake paper in a fake journal, because of course the citer would have to have read the paper. amirite?
Avatar
The most unsettling one of these created a paper I definitely hadn't written, co-authored by someone who at the time I had never published with or even worked with, but who was senior author on a paper we'd just submitted. It was really weird.
Avatar
I am choosing to believe that this is dumb luck
Avatar
I *think* the explanation is that I had previously published with an undergrad who went on to be their grad student, but it is still unnerving!
Avatar
I've had students bring me citations to "check" as part of their research. I do the checking and can't find them. When I ask the student: "Oh, I got them on the (insert AI machine here)." But yeah, my campus is going all in on AI, sending people to "take classes" on it, and so on. Fuck!
Avatar
Should be an immediate 'Reject. No resubmission'
Avatar
I see this as a librarian all the time. Doctoral students bringing me citations they can’t find the full text for. “Where did you find the citation?” “ChatGPT.” Ugh. GenAI is a tool, but not for this. It’s like trying to drive a nail with a screwdriver.
Avatar
This is very bad, but isn't the problem also that they shouldn't cite something they haven't read? Or that they haven't at least downloaded and added to the pile of things they swear they're going to read? They obviously didn't even try to do that.
Avatar
You should not cite something you haven't read. And it's impossible to have read this, because it isn't real. If you cite something real that you haven't read, someone else can at least trace the citation and find the info in it.
Avatar
Right, but if they hadn't tried to get away with not reading it, they'd've found out the citation was fake. I mean, assuming they were guilty of that lesser crime and weren't deliberately citing a fake paper.
Avatar
Can you elaborate on your response? This is basically submitting falsified research for publication. This is how the next Wakefield will get into print.
Avatar
The masses were becoming too informed and unruly. AI was specifically developed to destroy human culture and knowledge. It's a huge disinformation machine, tearing at society's fabric. A Tower of Babel.
Avatar
This happened to a colleague of mine in an application he and I were shortlisting. People do not, DO NOT, think when they reach for these LLM tools.
Avatar
Had the privilege of sharing this with my students: I gave ChatGPT the same assignment I gave them. In 2/4 cases, ChatGPT chose to make up references that seem plausible (authors who indeed study that organism; real journals; etc.) but that do not actually exist, or exist somewhere entirely different.
Avatar
Cagey (KG) Human? Cheeky!
Avatar
I know universities are tough on plagiarism. But what about false information? Any punishment?
Avatar
AI is great for science, if used correctly. The problem is that rather than focusing on teaching tech literacy around it, there is irrational opposition to the technology.
Avatar
How is it "great for science"? The bulk of this thread is all about how it delivers false information, which is the opposite of "great for science".
Avatar
Garbage in -> garbage out. If you don't learn how to use a search engine or a calculator (esp. a complicated graphing calculator like a TI-89), you're going to have a lot of issues.
Avatar
Use it properly and you're fine. First and foremost, if you're trying to "find" something and a traditional search would suffice, then you're using it wrong.
Avatar
If you're searching against a large amount of data, you might want to pipe the results INTO an LLM, but you should not be using an LLM for the search itself. So if you're looking for publications on X then you're not going to be just using an LLM, and if you try, you've used it incorrectly.
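Something like this pattern, in other words; a minimal sketch where `search_index` and `llm` are hypothetical stand-ins for whatever search API and model client you actually use:

```python
# Sketch of retrieval-then-LLM: the search index, not the LLM, decides
# what exists; the LLM only summarizes records it was actually handed.
def find_and_summarize(query: str, search_index, llm) -> str:
    # 1. Real search: returns verifiable records, not plausible text.
    hits = search_index.search(query, limit=10)
    # 2. Hand only those records to the model, so every citation in the
    #    output traces back to a real hit.
    context = "\n".join(f"{hit.title} ({hit.doi})" for hit in hits)
    return llm.complete(
        f"Summarize these publications about {query!r}, "
        f"citing only the DOIs listed:\n{context}"
    )
```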
Avatar
But can you include it in your next annual/promotion review?
Avatar
And it means the person (?) who wrote the paper clearly did not read it, undermining the credibility of the paper as a whole.