Peer Review in the Age of Large Language Models – Call for Abstracts

We are inviting you to submit an abstract for ‘Peer Review in the Age of Large Language Models’, an interdisciplinary workshop taking place on 14th May 2025 at the University of Bath. Dr. Harish Tayyar Madabushi from the University of Bath and Dr. Mark Carrigan from the University of Manchester will be giving keynotes. Abstracts are due by Friday 14th February 2025; please see the details below.

Call for Abstracts

With the emergence of large language models (LLMs), some scholars have begun to experiment with the use of these tools in various academic tasks, including peer review (Hosseini et al., 2023; Wedel et al., 2023; Liang et al., 2024). Recent studies have suggested LLMs could play some legitimate role in peer review processes (Dickinson and Smith, 2023; Kousha and Thelwall, 2024). However, significant concerns have been raised about potential biases; violations of privacy and confidentiality; insufficient robustness and reliability; and the undermining of peer review as an inherently social process.

Despite the relatively large volume of literature on peer review (Bedeian, 2004; Batagelj et al., 2017; Tennant and Ross-Hellauer, 2020; Hug, 2022), we still know relatively little about key issues, including: the decision-making practices of editors and reviewers; how understandings of the purposes and qualities of peer review vary between journals, disciplines, and individuals; and the measurable impact of peer review in advancing knowledge. Tennant and Ross-Hellauer (2020) suggest there is “a lack of consensus about what peer review is, what it is for and what differentiates a ‘good’ review from a ‘bad’ review, or how to even begin to define review ‘quality’.” Many commentators have also noted the negative effects of time and productivity pressures on the quality and integrity of peer review in practice. LLMs thus enter a peer review context fraught with both ambiguity and (time) scarcity.

Recently, several organisations, including the Committee on Publication Ethics (COPE), have published specific guidance on the use of AI tools in decision-making in scholarly publishing. These frameworks address issues such as accountability, transparency, and the need for human oversight. The adoption of such guidance raises important questions about whether and how LLM technologies can be used responsibly to promote knowledge production and evaluation. Can these policies and guidelines, for example, fully address the technical limitations of LLMs? Can the use of LLMs ever be compatible with the purposes and qualities of academic research, writing, and authorship? What oversight responsibilities should editors have?

The aim of this workshop is to provide an opportunity to collectively and critically explore these possibilities and limitations from various disciplinary vantage points. We welcome scholars at all career stages, particularly doctoral researchers and early-career academics, and invite contributions from across all disciplines on a wide range of topics related to the use of LLMs in the peer review process. Topics may include, but are not limited to:

  • Empirical studies examining the nature and extent of LLM adoption and
    use in peer review;
  • Studies examining variations in disciplinary orientations towards LLMs
    in peer review;
  • Theoretical discussions of the limits and compatibility of LLMs as tools
    in peer review;
  • Papers considering LLMs in peer review from the perspective of social
    epistemology;
  • Papers considering LLMs in peer review from the perspective of Science
    and Technology Studies (STS);
  • Critical reflections on the ethics of LLM adoption and use in peer review;
  • Papers engaging with the politics and political economy of LLM use in
    peer review;
  • Sociotechnical evaluations of LLM systems used for peer review;
  • Value-sensitive design of peer review LLM tools;
  • Methods for audit and assurance of LLMs in peer review;
  • Proposals for the development of policies and standards for ethical and
    responsible use of LLMs in peer review.

Selected authors will be invited to present on a panel at the workshop. Each panel will have a chair (who will introduce the panel and lead the audience Q&A) and a discussant (who will ask questions, having read the papers in advance). Following the acceptance of their abstracts, participants will be asked to send draft papers no later than three weeks before the workshop, to give the panel discussants enough time to prepare questions and feedback. Draft workshop papers should be approximately 5,000 words; we do not expect papers to be in final, publishable format. The aim of the workshop is to provide constructive and timely feedback on draft papers. Following the workshop, and in consultation with participants, the organisers will consider the most suitable options for future collaboration (e.g., a network, a Special Issue, or an edited volume).

A travel stipend is available for participants who do not have access to funding for conference attendance through their own institutions. If you wish to apply for this stipend, please state this in your application and indicate where you would be travelling from. Unfortunately, we cannot cover the costs of major international travel.

Please submit your title, abstract (200-300 words) and a short bio (~150 words) to [email protected] by Friday 14th February 2025. The organising committee will communicate decisions by Friday 7th March 2025. Workshop papers (approx. 5,000 words) should be sent by Wednesday 23rd April 2025.

Event Info

Date 14.05.2025
Start Time 9:00am
End Time 5:00pm
