Recently, I was peer reviewing a paper, and it cited one of my papers.
Except... it wasn't anything I had written. The title sounded like something I'd write, it listed coauthors I work with, and it was in a journal I've published in.
But it wasn't real.
AI is not good for science.
Yup
Yup
and
We can't even get journals to agree on whether or not the volume number in citations should be bold or italicized, so I am not holding my breath on a unified policy
But also, research misconduct announcement followed by a K99 award to the same person the following week
So, like, it’s not all journals’ fault. Nature is basically a preprint server with a $30K processing fee nowadays; biorxiv puts more effort into typesetting than Nature does lately
Have discovered “lazy” referencing several times during editing /proofing manuscripts by grads to PIs. Many errors in author names, journal names, etc.
Citations garner the least attention in pre-submitted manuscripts.
In principle we all ought to use DOIs or PMIDs and have the reference manager or typesetter format it. Likewise ORCIDs for authors. Would save everyone a lot of time and embarrassment. Too bad academia is such a disastrous mess; dinosaurs will come up with some ridiculous excuse for doing it wrong🙄
I don't understand why the author should have so much control over formatting anyway. Submissions should just be plain text plus like LaTeX for formulas with XML fields for citations. The idea that it's even possible to submit something that can "format incorrectly" is wild
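A minimal sketch of that idea: the author submits structured citation fields, and the journal's pipeline renders whatever house style it likes (bold vs. italic volume numbers included). The XML field names here are hypothetical, not a real schema like JATS, just to show that formatting becomes a rendering choice rather than something the author can get wrong.

```python
# Hypothetical structured citation: the author submits fields, not formatting.
import xml.etree.ElementTree as ET

CITATION_XML = """
<citation>
  <author>Smith, J.</author>
  <title>On hallucinated references</title>
  <journal>J. Hypothetical Res.</journal>
  <volume>42</volume>
  <pages>1-10</pages>
  <year>2024</year>
  <doi>10.0000/example.doi</doi>
</citation>
"""

def render(xml_text: str, bold_volume: bool) -> str:
    """Render one structured citation in a journal-specific style."""
    c = ET.fromstring(xml_text)
    get = lambda tag: c.findtext(tag, default="")
    vol = f"**{get('volume')}**" if bold_volume else f"*{get('volume')}*"
    return (f"{get('author')} ({get('year')}). {get('title')}. "
            f"{get('journal')} {vol}, {get('pages')}. doi:{get('doi')}")

# Same submitted data, two house styles -- the author never touches either.
print(render(CITATION_XML, bold_volume=True))
print(render(CITATION_XML, bold_volume=False))
```

In a real pipeline the DOI alone would be enough: the typesetter could pull verified metadata from the registry, which also makes a hallucinated reference fail loudly instead of slipping through.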
Half my job as a production editor in the 00s on sci/medical journals was checking every single reference, hunting them down and correcting them. (Chinese ones were fun.) Automated shit was just kicking in when I left, and it caused so many obvious mistakes.
Reminds me of law students using ChatGPT only for it to cite court cases that don't even exist. We're surrendering to the dissolution of the body of human knowledge. We need bans on this, and AI companies need to stop lying about what LLMs even do.
I know. It’s just as bad because it still lies on someone’s behalf. A bunk paper citing a legitimate individual’s name on a paper or study that doesn’t exist erodes that person’s credibility.
The only solution is an absolute ban on any part of a paper being from genAI.
But can people innocently cite a nonexistent paper? There would have to be a fake paper in a fake journal, because of course the citer would have had to read the paper.
amirite?
the most unsettling one of these created a paper I definitely hadn't written, co-authored by someone who, at the time, I had never published with or even worked with, but who was senior author on a paper we'd just submitted
it was really weird
I've had students come and bring me citations to "check" as part of their research. I do the checking and can't find them. When I ask the student: "oh, I got them on the (insert AI machine here)."
But yea, my campus is going all in on AI, sending people to "take classes" on it, so on.
Fuck!
I see this as a librarian all the time. Doctoral students bringing me citations they can’t find the full text for. “Where did you find the citation?” “ChatGPT.” Ugh. GenAI is a tool, but not for this. It’s like trying to drive a nail with a screwdriver.
This is very bad but isn't the problem also that they shouldn't cite something they haven't read? Or that they haven't at least downloaded and added to the pile of things they swear they're going to read? they obviously didn't even try to do that
You should not cite something you haven't read
And it's impossible to have read this, because it isn't real.
If you cite something real that you haven't read, someone else can at least trace the citation and find the info in it
right but if they hadn't tried to get away with not reading it, they'd've found out the citation was fake. I mean, assuming they were guilty of that lesser crime and weren't deliberately citing a fake paper.
Can you elaborate on your response?
This is basically submitting falsified research for publication.
This is how the next Wakefield will get into print.
The masses were becoming too informed and unruly.
AI was specifically developed to destroy human culture and knowledge.
It's a huge disinformation machine, tearing at society's fabric.
A Babel Tower.
Had the privilege of sharing this with my students:
I gave ChatGPT the same assignment I gave them.
In 2/4 cases, ChatGPT chose to make up references that seem plausible (authors that indeed study that organism; real journals; etc) but do not actually exist or exist somewhere entirely different.
AI is great for science, if used correctly. The problem is that rather than focusing on teaching tech literacy on a topic, there is irrational opposition to the technology.
Garbage in -> garbage out. Don't learn how to use a search engine, a calculator (esp. a complicated graphing calculator like a TI-89) and you're going to get a lot of issues.
Use it properly and you're fine. First and foremost, if you're trying to "find" something and a traditional search would suffice, then you're using it wrong.
If you're searching against a large amount of data, you might want to pipe the results INTO an LLM, but you should not be using an LLM for the search itself.
So if you're looking for publications on X then you're not going to be just using an LLM, and if you try, you've used it incorrectly.
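The pattern described above can be sketched as: an ordinary keyword search finds the documents, and the LLM only ever sees the real text it was handed, so it has nothing to invent citations from. The `summarize` function below is a hypothetical stand-in for an actual model call, and the corpus entries are invented placeholders.

```python
# Sketch of "search first, then pipe the results INTO an LLM".
# CORPUS and all DOIs here are made-up placeholders.

CORPUS = {
    "doi:10.0000/aaa": "CRISPR screening in zebrafish identifies stress-response genes.",
    "doi:10.0000/bbb": "A survey of reference-manager interoperability problems.",
    "doi:10.0000/ccc": "Thermal tolerance limits in alpine beetle populations.",
}

def search(query: str) -> dict:
    """Ordinary keyword search over a local corpus -- no LLM involved."""
    terms = query.lower().split()
    return {doi: text for doi, text in CORPUS.items()
            if any(t in text.lower() for t in terms)}

def summarize(context: dict, question: str) -> str:
    """Stand-in for an LLM call: build a prompt grounded in real hits.

    A real implementation would send `prompt` to a model and return its
    answer; here we just return the prompt to show what the model sees.
    """
    prompt = "Answer using ONLY these sources:\n"
    for doi, text in context.items():
        prompt += f"[{doi}] {text}\n"
    prompt += f"Question: {question}"
    return prompt

hits = search("zebrafish")           # retrieval step: deterministic, checkable
print(summarize(hits, "What did the zebrafish study find?"))
```

Doing it in the other order, asking the model to "find publications on X," is exactly where the plausible-but-nonexistent references in this thread come from.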