AI’s Place in a Broader Discussion
Reviewer fatigue in scholarly publishing represents a complex challenge that extends beyond mere workload concerns. This opinion piece examines the multifaceted nature of reviewer burnout, arguing that the issue stems from systemic biases in reviewer selection, lack of meaningful incentives, and inequitable distribution of review requests. We analyze how historical biases in academic publishing have contributed to a concentrated reviewer pool, primarily in Western countries, despite increasing global research output. The paper explores how artificial intelligence (AI) can address these challenges while emphasizing the importance of maintaining human judgment in peer review. We propose a comprehensive approach that combines AI-enabled efficiency improvements, bias awareness, expanded reviewer networks, and reformed incentive structures. By reimagining peer review as a human-AI collaboration, we suggest pathways to create a more efficient, equitable, and engaging review process that benefits the entire scholarly community.
Keywords: peer review, bias in peer review, scholarly publishing, artificial intelligence (AI), generative AI, diversity, citation gap, gender gap, reviewer burnout, reviewer fatigue, reviewer selection bias, reviewer training, peer review quality assurance, reviewer incentives and motivation, reviewer engagement, innovation in peer review
The peer review process is fundamental to maintaining the quality and integrity of scholarly publications. However, there is growing concern about the sustainability of this system, particularly regarding reviewer engagement. Incentives for peer review are essential for motivating scholars to participate actively in this critical process.1,2
One significant challenge in peer review is reviewer burnout, a phenomenon that leads many scholars to decline requests to review. Goring (2021) discusses the reasons behind this trend, noting that the increasing demands on researchers’ time, coupled with a lack of recognition for their efforts, contribute to a decline in available reviewers.3 Addressing these issues is crucial for maintaining the integrity of the peer review process.
What is burnout, really? Is it more cognitive than physical? Is it merely a result of long working hours, or does it stem from the repetitive, mechanical, and tedious nature of certain tasks? Could it be tied to engaging in activities that feel devoid of meaning? Reviewer burnout isn’t just about the sheer volume of requests; at its core, it often stems from a deeper issue: a lack of motivation. Without clear incentives, the act of reviewing can feel burdensome, leading to delays, declining acceptance rates, and lower-quality reviews.4,5 Reviewer fatigue occurs when individuals feel overwhelmed, lose interest, procrastinate, and become less engaged in the process.
If we agree that anyone who has published a paper has gone through the peer review process, preparing a revised version of their manuscript and carefully addressing each reviewer’s comments, then there should not be a shortage of reviewers. Scopus data6 show that the annual growth rate of authors has outpaced that of published papers in recent years. So why do we keep hearing about reviewer fatigue? Would this be a problem if published authors were not selected as peer reviewers but instead randomly assigned to a manuscript to verify and validate it? This question brings us to the continuous and unconscious biases within all of us as human beings, specifically in roles like editorship that require decision-making.
Historically, peer review in its modern form, which requires at least two expert opinions, started in the UK, Europe, and North America.7-10 Yet 70 years on, the majority of journal editors across different disciplines and impact factor quartiles are White middle-aged men.11-15 One of the functions of journal editors is to bring their network along as peer reviewers. As a result, while most journals have an international authorship representation, their peer reviewer network composition hardly reflects it. In the authors’ opinion, there is no global reviewer fatigue; there is just an imbalance of the load, mostly due to editor choice.16-18
Not only does editor selection bias play a role; article citation bias also influences who receives an invitation to review. Many academics consider an author’s citation count an indicator of their level of expertise, so historical biases19-21 enter the reviewer selection process through citation bias as well. Studies show that marginalized authors receive fewer citations and less visibility for their papers. As a result, any tool (AI or not) that relies on such counts will systematically lower the chance that marginalized authors are listed as potential candidates to review a given manuscript.
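To see how citation-based selection skews outcomes, consider the minimal sketch below; the candidates, relevance scores, and weighting are our own invented assumptions, not any vendor’s actual algorithm. Ranking by raw citation counts buries the most topically relevant candidate, while damping the citation term (here with a logarithm) restores relevance as the deciding factor.

    # Illustrative sketch only: invented candidates and weights, not a real
    # reviewer-suggestion product. Shows how raw citation counts dominate a
    # ranking and how log-damping the citation term reduces that distortion.
    import math

    candidates = [
        # (name, topical relevance in [0, 1], total citations) -- invented values
        ("Reviewer A", 0.90, 12000),
        ("Reviewer B", 0.95, 800),    # most relevant, but far less cited
        ("Reviewer C", 0.60, 30000),
    ]

    def rank(damp_citations: bool):
        scored = []
        for name, relevance, citations in candidates:
            weight = math.log1p(citations) if damp_citations else citations
            scored.append((relevance * weight, name))
        return [name for _, name in sorted(scored, reverse=True)]

    print(rank(damp_citations=False))  # ['Reviewer C', 'Reviewer A', 'Reviewer B']
    print(rank(damp_citations=True))   # ['Reviewer A', 'Reviewer B', 'Reviewer C']

This is only one possible mitigation; the broader point is that any scoring rule built on biased counts needs an explicit correction step.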
And then there is the problem of ghost reviewing, for which we as publishers and journal editors do not seem to have a good solution. Ghost reviewing happens when an invited reviewer agrees to review but passes the manuscript to a lab or team member and then submits that person’s review back to the journal. Conducted in full transparency, this practice could naturally extend a journal’s reviewer network and reduce the burden on the system; without transparency and recognition, however, it may demotivate the unseen “ghost” reviewers and turn an opportunity into a risk.
The peer review system is facing a critical bottleneck due to over-reliance on a small, predominantly Western reviewer base. A concentrated group of reviewers, primarily from North America and Europe, currently shoulders the majority of review responsibilities, with 80% of review requests directed toward researchers in these regions. This imbalance not only contributes to reviewer burnout but also limits the diversity of perspectives in scientific discourse. Emerging research powerhouses such as China, India, and Brazil remain underutilized, and language barriers often exclude qualified reviewers from non-English-speaking countries. Furthermore, time zone differences present coordination challenges for global participation.
Several systemic barriers complicate this issue, including limited visibility of researchers from developing nations, implicit bias in reviewer selection, and a lack of formal reviewer training programs. Disparities in technology and resource access, along with payment and compensation challenges across different regions, exacerbate the situation.
To address these challenges, several strategies can be implemented:
Continuous Education and Training: Regular education and evaluation checkpoints for journal editors can help mitigate both conscious and unconscious biases.
Data Collection and Transparency: Publishers and journal editorial offices have a responsibility to collect accurate data on the composition of their editorial teams and reviewers. By fostering education, transparent communication, proactive decision-making, and monitoring progress, they can work towards closing representation gaps; a minimal monitoring sketch follows this list.
Bias Awareness in Tools: Any tool built on historical data must acknowledge the inherent biases in its training datasets. It should raise awareness about these biases in its user interface, continually seek user feedback, and implement corrections as necessary.
Outreach and Workshops: Journal editors and publishers should organize more peer review workshops in the Global South and actively engage researchers from non-Western countries in the peer review process. Inviting recently accepted authors from these regions and providing structured guidance in peer review can be a valuable investment in expanding the journal network.
Encouraging Co-Reviewing: Journal editors should promote co-reviewing practices, encouraging reviewers to be transparent about their collaborations and proactively seek out partnerships.22
By implementing these strategies, the peer review system can expand its network, address biases, and ultimately enrich scientific discourse with diverse perspectives.
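To make the data collection and monitoring strategies concrete, the sketch below compares the regional composition of a journal’s reviewer pool against its authorship to surface representation gaps. All counts and region labels are invented for illustration; in practice the inputs would come from a journal’s own submission-system records.

    # Illustrative sketch with invented figures: compares reviewer-pool and
    # authorship composition by region to surface representation gaps.
    authors_by_region = {"North America": 320, "Europe": 280, "Asia": 410,
                         "Africa": 60, "South America": 90}
    reviewers_by_region = {"North America": 210, "Europe": 190, "Asia": 70,
                           "Africa": 5, "South America": 10}

    def shares(counts):
        total = sum(counts.values())
        return {region: n / total for region, n in counts.items()}

    author_share = shares(authors_by_region)
    reviewer_share = shares(reviewers_by_region)

    print(f"{'Region':<15}{'Authors':>9}{'Reviewers':>11}{'Gap':>7}")
    for region in authors_by_region:
        gap = reviewer_share[region] - author_share[region]
        print(f"{region:<15}{author_share[region]:>9.0%}"
              f"{reviewer_share[region]:>11.0%}{gap:>+7.0%}")

Run periodically, such a report turns the representation gap from an impression into a number that editors can be asked to act on.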
As the volume of academic publications increases, reviewers often experience fatigue and declining motivation to accept review requests.3 This phenomenon, known as reviewer fatigue, can undermine the quality of the peer review process and lead to delays in publication. Therefore, implementing collaborative and innovative incentive structures could enhance reviewer participation and improve overall publication quality.1
A foundational challenge scholarly publishing faces is encouraging reviewers to accept invitations. Peer review is often perceived as a distraction from core academic tasks that drive recognition and career advancement. Researchers, especially those at the peak of their careers, may view peer review as a thankless task with little to no benefit to their professional journey. For too long, we’ve relied on the outdated notion of “good karma”—the belief that reviewing for journals is an obligatory act of community service. But “good karma” alone isn’t enough anymore. There needs to be a stronger sense of recognition and prestige associated with peer review. We must position it as a key professional milestone, where the act of reviewing is as prestigious as being invited to speak at a conference or write an editorial.
To address this, we should rethink the incentives for reviewers. While financial rewards are an option, recognition—tangible and meaningful—may go further. Could peer review be tied to formal recognition systems that lead to career benefits? By integrating it into career growth and making it clear that reviewing is integral to professional advancement, reviewers can feel appreciated and see a direct benefit in their professional lives.
Recent discussions have explored new approaches to incentivizing peer review, including recognition programs, financial compensation, and integrating peer review into performance evaluations.2 By acknowledging the contributions of reviewers and providing tangible rewards, the academic community can foster a more supportive environment that values the essential role of peer review. Concrete mechanisms could include the following:
Formal Credit Systems:
Establish a "Peer Review Credit" (PRC) system where reviews earn points (a minimal scoring sketch follows this list)
Make PRCs count towards tenure and promotion decisions
Include review contributions in annual performance evaluations
Weight reviews based on journal impact factor and complexity

Reviewer Certification:
Create tiered reviewer certification levels (e.g., Associate, Professional, Expert)
Require minimum review quality scores to advance
Offer special privileges at each level (priority publishing, reduced APCs)
Partner with academic institutions to recognize certifications in hiring

Public Recognition:
Develop public profiles showcasing review contributions
Integrate with ORCID and other academic identity systems
Present annual "Excellence in Peer Review" awards at institutional and publisher levels
Feature reviewer spotlights in journal issues

Career Opportunities:
Offer priority access to editorial board positions for top reviewers
Establish mentorship programs pairing experienced reviewers with early-career researchers
Extend invitations to join grant review panels based on review track record
Give special consideration for conference speaking opportunities

Institutional Support:
Include peer review metrics in university ranking criteria
Develop department-level review contribution targets
Create dedicated funding pools for active reviewers
Establish peer review fellowships for outstanding contributors

Resources and Benefits:
Provide free access to publisher resources and databases
Sponsor conference attendance for top reviewers
Offer professional development workshops and training
Grant priority access to research tools and services

Community Building:
Create reviewer networks and communities of practice
Organize reviewer-specific conferences and workshops
Establish mentor-mentee relationships through review activities
Develop collaborative review projects and special issues
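To ground the Peer Review Credit idea, here is a minimal sketch of how points and certification tiers might be computed. The base points, quartile weights, and tier thresholds are hypothetical choices made for illustration, not an existing standard.

    # Hypothetical sketch of a Peer Review Credit (PRC) system: points per review
    # weighted by journal impact-factor quartile and manuscript complexity, with
    # certification tiers gated by both points and average review quality.
    QUARTILE_WEIGHT = {1: 1.5, 2: 1.25, 3: 1.1, 4: 1.0}   # assumed weights
    TIERS = [("Expert", 300, 4.0), ("Professional", 120, 3.5), ("Associate", 30, 3.0)]

    def review_points(quartile: int, complexity: float, base: float = 10.0) -> float:
        """Points for one review; complexity is an editor-assigned 1.0-2.0 factor."""
        return base * QUARTILE_WEIGHT[quartile] * complexity

    def certification(total_points: float, avg_quality: float) -> str:
        """Highest tier whose point and quality thresholds are both met."""
        for name, min_points, min_quality in TIERS:
            if total_points >= min_points and avg_quality >= min_quality:
                return name
        return "Contributor"

    reviews = [(1, 1.4), (2, 1.0), (1, 2.0)]   # (quartile, complexity) per review
    total = sum(review_points(q, c) for q, c in reviews)
    print(round(total, 1), certification(total, avg_quality=3.6))  # 63.5 Associate

Gating tiers on both accumulated points and average review quality keeps volume alone from buying advancement.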
Enhancing the peer review process requires a systematic approach across multiple dimensions. First, robust quality metrics must be established to evaluate review effectiveness, including reviewer performance, timeliness, author/editor satisfaction, and the review's ultimate impact on published work. Second, technological infrastructure needs to seamlessly integrate automated tracking systems, intuitive portfolio management interfaces, and existing academic platforms. Advanced technologies can also create more engaging review environments that promote focus and efficiency.
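A composite quality metric of the kind described above might be assembled as in this sketch; the component weights, rating scales, and the word-count proxy for depth are illustrative assumptions only.

    # Illustrative composite quality score for a single review, combining
    # timeliness, editor and author satisfaction ratings (1-5), and a simple
    # depth proxy. Weights are assumptions for illustration, not a standard.
    def review_quality(days_taken: int, days_allowed: int,
                       editor_rating: float, author_rating: float,
                       word_count: int) -> float:
        timeliness = min(1.0, days_allowed / max(days_taken, 1))  # 1.0 if on time
        satisfaction = (editor_rating + author_rating) / 2 / 5    # normalize to [0, 1]
        depth = min(1.0, word_count / 600)                        # crude depth proxy
        return round(100 * (0.3 * timeliness + 0.5 * satisfaction + 0.2 * depth), 1)

    print(review_quality(days_taken=18, days_allowed=21,
                         editor_rating=4.5, author_rating=4.0, word_count=750))  # 92.5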
Industry-wide standardization is essential for establishing universal review recognition systems, quality metrics, and portable credentials that work across publishers. This standardization ensures consistent evaluation of review contributions and their impact across the academic ecosystem.
Perhaps most importantly, transforming peer review requires a fundamental cultural shift within academia. This can be accomplished by showcasing successful implementations, actively involving stakeholders in system development, and cultivating institutional champions. By elevating peer review from an obligation to a valuable professional activity, institutions can create a self-reinforcing cycle where quality reviewing enhances academic careers and, in turn, strengthens the scholarly publishing process.
Through this multi-faceted approach, reviewer recognition can address both short-term motivational challenges and long-term cultural change, ultimately elevating peer review to its rightful place as a cornerstone of academic contribution.
Another key challenge in peer review is ensuring the timely completion of reviews, and here AI can make a transformative impact. By alleviating the more tedious aspects of peer review, AI can reduce friction, making the process less daunting for reviewers who have agreed to participate. AI-enabled checks conducted at earlier stages, before the manuscript reaches reviewers, can help ensure the integrity and reproducibility of research.
For example, AI can assist with manuscript-reviewer matching, ensuring that reviewers are assigned papers relevant to their expertise, thus speeding up the acceptance process. AI can also help with administrative tasks such as data validation, error checking, and even language translation for non-native English speakers. AI-powered Research Integrity Reports can further support reviewers, allowing them to focus solely on engaging with the research.
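As a toy illustration of manuscript-reviewer matching (the reviewer profiles are invented, and production systems would rely on semantic embeddings and real publication records rather than word overlap), expertise similarity can be sketched as follows:

    # Toy expertise matching: cosine similarity between word-count vectors of a
    # manuscript abstract and invented reviewer interest profiles. Production
    # systems would use semantic embeddings and real publication databases.
    import math
    from collections import Counter

    def cosine(a: Counter, b: Counter) -> float:
        dot = sum(a[t] * b[t] for t in a)
        norm = (math.sqrt(sum(v * v for v in a.values()))
                * math.sqrt(sum(v * v for v in b.values())))
        return dot / norm if norm else 0.0

    def profile(text: str) -> Counter:
        return Counter(text.lower().split())

    abstract = profile("deep learning models for protein structure prediction")
    reviewers = {
        "R1": profile("protein folding structure prediction molecular dynamics"),
        "R2": profile("deep learning computer vision image segmentation"),
        "R3": profile("labor economics wage inequality panel data"),
    }
    for name, prof in sorted(reviewers.items(), key=lambda kv: -cosine(abstract, kv[1])):
        print(name, round(cosine(abstract, prof), 2))  # R1 ranks first

Even this crude similarity separates a plausible match (R1) from a mismatch (R3), which is the essence of what richer AI matching automates at scale.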
Additionally, AI can provide initial summaries or key points of a manuscript, giving reviewers a head start. This doesn’t replace human reviewers; rather, AI acts as a support system—a “JARVIS to Iron Man”—handling the administrative details so reviewers can focus on higher-order analysis.
AI’s role in improving the review process can make it more efficient and enjoyable by fostering distraction-free, immersive environments for reviewers. If we can make reviewing enjoyable, procrastination and delays will diminish. The goal is for reviewers to immerse themselves in a flow state of attention and creativity, actively participating in the review process.
The most valuable outcome of integrating AI into peer review could be transforming the reviewer experience, making it more efficient and enjoyable while increasing engagement. Reviewer notes, which are insights, questions, and observations that reviewers make while reading a paper, are an important part of this process. Imagine an AI-driven system that analyzes these notes in real-time, organizing them into an initial review structure that highlights key insights and raises relevant questions, setting a foundation for reviewers to build upon. This approach would allow reviewers to focus on the critical aspects of their analysis rather than spending time on organizational tasks, ultimately enhancing the quality and depth of their feedback.
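A deliberately simplified, rule-based stand-in for such a system is sketched below; a real implementation would use a language model, and the keyword rules and section names here are our own assumptions.

    # Simplified stand-in for AI note organization: routes free-text reviewer
    # notes into draft review sections via keyword rules. A real system would
    # use an LLM; the keywords and section names are illustrative assumptions.
    SECTIONS = {
        "Methods": ("sample", "method", "control", "statistic"),
        "Results": ("figure", "table", "result", "effect"),
        "Clarity": ("unclear", "typo", "define", "wording"),
    }

    def organize(notes: list[str]) -> dict[str, list[str]]:
        draft = {section: [] for section in SECTIONS}
        draft["General"] = []
        for note in notes:
            lowered = note.lower()
            target = next((s for s, keys in SECTIONS.items()
                           if any(k in lowered for k in keys)), "General")
            draft[target].append(note)
        return draft

    notes = ["Sample size seems too small for subgroup claims",
             "Figure 2 axis labels are missing",
             "The term 'robustness' is never defined"]
    for section, items in organize(notes).items():
        print(section, items)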
AI could further support the peer review process by creating interactive training environments for new reviewers. For example, AI could provide feedback on practice reviews and guide new reviewers in identifying and evaluating key aspects of a manuscript. By integrating gamified systems that reward progress and skill development, AI could help foster a new generation of engaged, skilled reviewers, making the process of peer review both a valuable professional experience and an enjoyable one.
The challenge of reviewer fatigue necessitates a holistic solution that addresses both systematic biases and operational inefficiencies in the peer review process. While AI offers promising opportunities to streamline workflows and reduce administrative burdens, its implementation must be approached thoughtfully, in conjunction with human-centered reforms. Revitalizing peer review hinges on three critical areas:
Addressing inherent biases in reviewer selection.
Creating meaningful incentives beyond “good karma.”
Leveraging AI as an enabler rather than a replacement for human judgment.
To achieve sustainable change, publishers and journals should:
Actively diversify their reviewer pools, particularly by engaging researchers from the Global South.
Implement transparent recognition systems that integrate peer review into career advancement.
Deploy AI solutions that enhance rather than replace human expertise.
Foster inclusive practices such as co-reviewing and structured peer review training.
Regularly monitor and address biases in reviewer selection processes.
The future of peer review relies on our ability to create a more equitable, efficient, and engaging process that values diverse perspectives while upholding the highest standards of academic rigor. By combining human wisdom with technological innovation, we can establish a peer review system that not only addresses current challenges but also adapts to the future needs of the scholarly community.
Nature Research. (2018). Incentivizing peer review: A collaborative approach. Research Integrity and Peer Review, 3, 49.
Science Editor. (2021). Incentivizing peer review: Exploring new approaches.
Goring, H. (2021). Reviewer fatigue: Why scholars decline to review their peers’ work. PS: Political Science & Politics, 54(3), 612-616.
O’Brien, D., & Warne, H. (2023). Incentives for peer review: A roadmap for scholarly publishing. Eon, 1(1), Article 2. https://doi.org/10.5322/eubrjb76
Gunn, E., Ketelhut, A., & Webster, A. (2024). Early engagement of journal editors and reviewers: Two case reports. Eon. https://eon.pubpub.org/pub/ce1mb1oq
Scopus. (n.d.). Search form.
Zhang, J., & O’Brien, J. (2019). The effectiveness of peer review: A systematic review. Australian & New Zealand Journal of Psychiatry, 53(3), 245-256.
Kuehn, B. M. (2017). The surprising history of peer review. American Council on Science and Health.
Editage Insights. (n.d.). The evolution of peer review: A timeline.
Van Noorden, R. (2016). The science behind peer review. Nature, 532(7599), 306-307.
Jubb, M. (2022). Peer review in a time of change. Nature Human Behaviour, 6(2), 159-165.
Fenton, N., & Muir, M. (2020). The role of peer review in academic publishing: A review of the literature. Ecology and Evolution, 10(21), 12561-12569.
Kearney, P. (2019). The impact of peer review on clinical practice: A systematic review. The Lancet, 393(10191), 2226-2235.
Elsevier. (n.d.). Editorial policies and guidelines for reviewers.
Springer Nature. (n.d.). Journal editor diversity: A necessity for academic publishing.
Demeter, M. (2020). Academic knowledge production and the Global South: Questioning inequality and under-representation. Palgrave Macmillan. https://link.springer.com/book/10.1007/978-3-030-52701-3
Golder, S., et al. (2021). Understanding citation practices: A systematic review. Philosophical Psychology, 34(6), 927-949.
Wang, Y., et al. (2022). The future of peer review in academic publishing. Proceedings of the National Academy of Sciences, 119(10), e2113067119.
Zhang, L. (2022). The evolution of peer review: Past, present, and future. Nature Reviews Physics, 4(9), 522-535.
IOP Publishing. (2022). IOP Publishing extends co-review policy to entire owned journal portfolio.
BM is an employee of Elsevier, the publisher of the journals discussed in this paper, and the owner of the submission system used for piloting structured peer review and collecting the data and reviewer responses analyzed in this study. Elsevier also owns the Scopus database, which was utilized to select journals from various impact factor quartiles and subject areas.
AG represents Integra, a publishing services and technology company that provides peer review management and editorial office services.