Is There a Place for Artificial Intelligence in Your Peer Review Process?

Peer review. It’s the cornerstone of scientific publishing and the process that ensures rigorous review and validation of research results and methodology. A journal editor sends a researcher’s manuscript to a panel of experts who examine the research methodology, results, analysis, and conclusions to assess the paper’s accuracy, relevance, and importance, and to decide whether it is a good fit for the publication.

But peer review has its issues. Some scientists advocate abandoning the process altogether, but most believe it is a necessary practice that could be improved. And some are looking to science itself for answers: the science of artificial intelligence.

What is artificial intelligence?

According to ComputerWorld, AI is a subfield of computer science whose goal is to enable computers to do the kinds of things people do when they act intelligently. Goals for AI technology range from getting systems to behave like humans to actually simulating human reasoning and modeling how humans think. Most current research takes the middle road: it uses human reasoning as a guide without aiming to model it perfectly.

Can AI improve the peer review process?

The peer review process has several inherent challenges, from the potential for bias among peer reviewers to questions of process integrity to the length of time to publication. Some researchers and publishers suggest AI could eliminate or reduce some of these challenges. But can a computer algorithm do a better job of evaluating a research paper than a person? In some cases, maybe.

  • Peer reviewer selection — Journal editors typically select half a dozen potential reviewers based on academic credentials, fields of expertise, prior reviews, possible conflicts of interest, workloads, and any other criteria the editor considers relevant. AI tools can also search online sources to identify potential new peer reviewers, tracking and storing the relevant data on which to base their selections; a minimal sketch of this kind of matching follows this list.
  • Peer reviewer bias — In some cases, reviewers may trade favorable reviews of one another’s work, or may hold personal biases for or against certain authors. AI can address reviewer bias by applying specific criteria to select reviewers and by screening them for bias toward a particular author or topic. Journal publishers can also counter reviewer bias by publishing the review itself along with the reviewer’s name.
  • Pre-screening — Publishers can use AI algorithms to screen submissions before review: checking for plagiarism, verifying authorship, predicting likely impact, and flagging improper methodology, fabricated data, data gaps, and faulty analysis or conclusions. A simple overlap check of the kind used in plagiarism screening is sketched after this list.
  • General automation — In addition to improving reviewer selection and data checking, AI can help with general tasks such as maintaining databases of authors, manuscripts, and reviewers; running workflows such as sending reviewer-author communications, notifying authors of paper status, sending thank-you notes, selecting alternate reviewers, and resending manuscripts; and handling other basic recordkeeping.
  • Time to publish — Science can advance quickly, but only if it is not bogged down by a slow publication process. The time from submission of a manuscript to final publication can run to months, even years. The back-and-forth between reviewers and author, with its resulting corrections and modifications, is time consuming, and the number of submissions keeps growing. By streamlining reviewer selection; pre-screening papers for objectivity, improper methodology, and incorrect or poor results; and automating many recordkeeping tasks, AI could whittle the time to publication down to days or weeks, speeding up subsequent research that builds on the findings.
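
To make the reviewer-selection idea concrete, here is a minimal sketch, in Python with scikit-learn, of how a system might rank candidate reviewers by the similarity of their published abstracts to a submitted manuscript while screening out obvious conflicts of interest. The candidate records, the co-authorship check, and the TF-IDF approach are all illustrative assumptions, not a description of any particular publisher’s system.

```python
# Hypothetical reviewer matching: rank candidates by the text similarity
# of their recent abstracts to the manuscript, excluding past co-authors.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

manuscript = "Deep learning methods for protein structure prediction"
manuscript_authors = {"A. Smith", "B. Jones"}

# Invented candidate records: name, concatenated abstracts of recent
# papers, and past co-authors (a crude conflict-of-interest signal).
candidates = [
    {"name": "Dr. Ortiz", "abstracts": "protein folding with neural networks",
     "coauthors": {"A. Smith"}},
    {"name": "Dr. Patel", "abstracts": "machine learning models for protein structure prediction",
     "coauthors": set()},
    {"name": "Dr. Chen", "abstracts": "satellite imagery for crop yield estimation",
     "coauthors": set()},
]

def rank_reviewers(text, candidates, authors, top_n=6):
    # Drop anyone who has co-authored with the manuscript's authors.
    eligible = [c for c in candidates if not (c["coauthors"] & authors)]
    if not eligible:
        return []
    corpus = [text] + [c["abstracts"] for c in eligible]
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(corpus)
    # Cosine similarity of each candidate (rows 1..n) to the manuscript (row 0).
    scores = cosine_similarity(tfidf[0], tfidf[1:])[0]
    ranked = sorted(zip(eligible, scores), key=lambda p: p[1], reverse=True)
    return [(c["name"], round(float(s), 3)) for c, s in ranked[:top_n]]

# Dr. Ortiz is excluded for the shared co-author; Dr. Patel ranks first.
print(rank_reviewers(manuscript, candidates, manuscript_authors))
```

In practice a system would also fold in credentials, workload, and review history, as the bullet above notes; the point here is only that topical matching and conflict screening are straightforward to automate.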
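
The plagiarism portion of pre-screening likewise comes down to measuring textual overlap. Below is a minimal sketch using word n-gram Jaccard similarity; production plagiarism detectors are far more sophisticated, and the documents and threshold here are invented for illustration.

```python
import re

def ngrams(text, n=5):
    """Return the set of word n-grams, lowercased and stripped of punctuation."""
    words = re.findall(r"[a-z']+", text.lower())
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(submission, published, n=5):
    """Jaccard similarity between the two documents' n-gram sets (0.0 to 1.0)."""
    a, b = ngrams(submission, n), ngrams(published, n)
    return len(a & b) / len(a | b) if a and b else 0.0

submission = "We measured the effect of light intensity on algal growth rates"
prior_work = "The effect of light intensity on algal growth rates was measured"

score = overlap_score(submission, prior_work)
print(f"overlap: {score:.2f}")
if score > 0.2:  # illustrative threshold; a human editor makes the final call
    print("flag for editorial review")
```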

Will AI replace human peer reviewers?

Despite the promise of AI overcoming or minimizing failings in the peer review process, AI will never entirely replace the human element. There are some things humans do better — or can at least oversee.

  • Computers can and do make mistakes and malfunction. A human should be involved at various stages to prevent poor research from being accidentally published or good research from being wrongly denied publication.
  • Hacking poses dangers. As with any computer-based system, hacking is an ever-present threat and can compromise the review process.
  • Authors may change their writing style. Authors may learn how AI evaluates research and adjust their writing to match what they believe the algorithms are looking for.
  • Scoring can be biased. An AI algorithm will weight its scoring criteria, but someone must still determine which criteria it should weight more heavily; the toy example after this list shows how much that choice matters.
  • Quantity versus quality is an important factor. AI handles most quantitative analyses well, but qualitative analysis is more problematic. The quality of an article depends on more than the validity and analysis of its data; it also depends on significance to existing research, innovation, impact on the field, writing style, and other nuances that are not easy for a machine to discern.
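
To see why the weighting question matters, consider this toy scoring example: the same criterion scores produce opposite rankings depending on which weights are chosen. All numbers are invented for illustration.

```python
# Two invented manuscripts scored on three criteria (0.0 to 1.0).
papers = {
    "Paper A": {"methodology": 0.9, "novelty": 0.4, "writing": 0.8},
    "Paper B": {"methodology": 0.6, "novelty": 0.9, "writing": 0.7},
}

def score(criteria, weights):
    # Weighted sum of the criterion scores.
    return sum(criteria[k] * w for k, w in weights.items())

rigor_first = {"methodology": 0.6, "novelty": 0.2, "writing": 0.2}
novelty_first = {"methodology": 0.2, "novelty": 0.6, "writing": 0.2}

for weights in (rigor_first, novelty_first):
    ranked = sorted(papers, key=lambda p: score(papers[p], weights), reverse=True)
    print(weights, "->", ranked)
# rigor_first ranks Paper A first; novelty_first ranks Paper B first.
```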

Certainly, AI can improve certain aspects of traditional peer review, and publishers are already using it for some basic tasks within the process. Publishers will need clear policies and protocols for deciding which aspects of the review process can and should be automated and where human guidance or oversight must apply. As the technology matures, more and more parts of the peer review process will likely benefit. For the foreseeable future, however, there will always be a need for human input and final decision-making.

No matter what the future holds, we’re here for you. Contact your Sheridan representative for a consultation or visit our contact page to learn how Sheridan experts can help streamline and simplify your journal publishing processes.
