The omnipresence of AI and some of its more concerning outputs was inevitable, but could scholarly publishing be a David to some of its Goliath tendencies? One might envision it through the lens of SSP’s 45th annual conference theme: Transformation, Trust, and Transparency.
For the last couple of decades, open access has been scholarly publishing’s core theme and steady drumbeat. Along with its many iterations and the push for ever-faster publication came some uninvited guests: bad actors in the form of bogus research, predatory journals, and paper mills, churning out and selling false content in ever more sophisticated ways. Layer on top of all of this the global topic du jour: artificial intelligence.
The theme of SSP’s conference could not have been better timed.
The opening keynote was apt. Elisabeth Bik, PhD, a science consultant and image investigator whose work is entirely crowd-funded, shared an eye-opening, impassioned look at image fraud in biomedical publications. She presented alarming data and visual examples and explored the many reasons for what feels like an ongoing epidemic of falsified content.
As Dr. Bik described her single-handed efforts to identify falsified scientific images, bring her findings to the publisher’s attention, and ask for retraction, correction, or clarification, her frustration at being unable to reach the proper contact directly and consistently – presumably a journal’s editor-in-chief or a university press’s research integrity officer – was palpable. Even when the correct contact was identified, her findings and requests were often ignored. As an example, she pointed to a set of papers she had reported to publishers as having suspect imagery; 65% of them had not been corrected or retracted five years later. An “expression of concern” (an acknowledgment of the problem) would, at a minimum, be a step in the right direction. Keep in mind that image falsification is but one aspect of nefarious content manipulation, as Dr. Bik pointed out. AI-generated text and made-up citations are out there, spreading like weeds.
Among the examples of falsified images shared was one that had appeared in roughly 600 papers – likely generated by the same paper mill – featuring the same blot background across all of them. It is suspect because the background is homogeneous and the blots appear to float above it, whereas a genuine blot image shows some bleeding around the edges of the bands. These bands appear to have been generated through an early form of generative AI: generative adversarial networks (GANs).
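For the technically curious, the “floating blots” observation can be made concrete. The minimal Python sketch below is purely my illustration – not a tool Dr. Bik named – and the file path and brightness threshold are placeholder assumptions. The idea: a genuine scan carries sensor noise and subtle gradients, while a synthetic composite can have a suspiciously flat background.

```python
# A toy heuristic, not a production forensic tool: genuine blot scans
# carry sensor noise, so a near-zero background standard deviation is
# one possible red flag for a pasted or generated image.
import numpy as np
from PIL import Image

def background_std(path: str, bright_threshold: int = 200) -> float:
    """Return the standard deviation of the bright background pixels."""
    gray = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    background = gray[gray > bright_threshold]  # assume a light background
    return float(background.std()) if background.size else float("nan")

if __name__ == "__main__":
    # "blot.png" is a placeholder path for illustration only.
    print(f"background std: {background_std('blot.png'):.2f}")
```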
Analyzing the three components of the conference theme – Transformation, Trust, and Transparency – Dr. Bik underpinned each with real-world activity, acknowledgment, and advice:
A recent issue of Nature reports that AI is “complicating publishers’ efforts to tackle the growing problem of paper mills…” and that “[g]enerative AI tools, including chatbots such as ChatGPT and image-generating software, provide new ways of producing paper mill content, which could prove particularly difficult to detect.”
Dr. Bik did acknowledge that some AI tools can spot image duplication, such as ImageTwin, Proofig, FigCheck, and Forensically (a sketch of the general idea follows below). So while AI can certainly be applied as a tool for good, it can also be put to nefarious use, as we are seeing with alarming speed. Many leading sources (the NYT, The Guardian, and others) report almost daily on fake papers generated by AI chatbots fooling scientists – some filled with wildly inaccurate data. Scientific paper mills are indeed selling authorships on already-accepted papers, selling fake papers written by ghostwriters with fabricated data, and, most concerning, using AI to create false content entirely from the ground up – practices that have been labeled “the invisible foe” and “huge, organized fraud.”
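To give a rough sense of how duplicate-image spotting can work under the hood, here is a generic sketch using the open-source Python imagehash library. It is emphatically not the method of ImageTwin, Proofig, FigCheck, or Forensically (their approaches are surely more sophisticated), and the file names are placeholders. A perceptual hash reduces each image to a short fingerprint that survives resizing and recompression, so near-duplicates show up as small Hamming distances:

```python
# A generic near-duplicate check via perceptual hashing; an
# illustration of the concept, not any named commercial tool.
from itertools import combinations
from PIL import Image
import imagehash  # pip install imagehash

def find_near_duplicates(paths, max_distance=5):
    """Yield pairs of images whose perceptual hashes nearly match."""
    hashes = {p: imagehash.phash(Image.open(p)) for p in paths}
    for a, b in combinations(paths, 2):
        distance = hashes[a] - hashes[b]  # Hamming distance between hashes
        if distance <= max_distance:
            yield a, b, distance

if __name__ == "__main__":
    # Placeholder file names for illustration only.
    for a, b, d in find_near_duplicates(["fig1.png", "fig2.png", "fig3.png"]):
        print(f"{a} ~ {b} (hash distance {d})")
```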
Said Dr. Bik, “I’m very worried about what this technology could do. I think as a scientific publishing society we should be extremely worried about that and think about ways to detect that. In the hands of the wrong person, it could lead to a lot of damage.”
At the heart of her work is a bedrock belief: publications are the foundation of science, and science is about finding the truth. This is a singularly compelling point when one considers that scientists build upon each other’s work; if information is fake or fraudulent, everything built upon it is likely compromised. Dr. Bik’s presentation stood out in that it exposed a very ugly problem in the publishing community in a new way and challenged the audience to participate in the solution.
Is it both hopeful and realistic to think that this challenge will herald an era of revitalized content and research integrity – what publishers owe to the general public and to their industry?
Is AI an unstoppable tsunami – rolling over original human-crafted content, music, imagery, art, legal documentation, etc. – or will there be a reckoning, brought about by conscientious publishers, creators, and lawmakers, putting painstaking measures in place to identify and reject false content in the name of authentic work?
The 2023 SSP conference offered a stunning array of studies, solutions, and points of view on topics ranging from influencing policy to career development to sustainable and equitable publishing to building trust with underrepresented groups – and, of course, the good, the bad, and the ugly of AI. There was even a highly entertaining musical about Metadata. (Yes, really.) The common thread – this year’s theme – was skillfully woven throughout. I found it an aspirational theme that will surely inspire all stakeholders to work toward greater integrity and trust in publishing.
Bonus reading: if you find yourself intrigued by the good-or-bad AI debate from a scholarly publishing perspective, don’t miss the fascinating Scholarly Kitchen post summarizing a lively conference session on the topic.