Based on recent research regarding the detection of AI-generated content, 109989 refers to a set of 109,989 possible watermarks used to identify peer reviews written by Large Language Models (LLMs).

Overview of Topic 109989

The topic originates from a 2025 study on Detecting LLM-Generated Peer Reviews. Researchers developed a watermarking system that uses fabricated citations to flag reviews created by AI instead of human experts.

Mechanism: The system prompts an LLM to start its review with a specific phrase, such as: "Following [Surname] et al. ([Year]), this paper...". Each surname/year combination yields a distinct opening, which is where the pool of 109,989 possible watermarks comes from (see the detection sketch below).

Robustness: The watermark has proven effective even against common "reviewer defenses," such as light editing or rephrasing.
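To make the mechanism concrete, here is a minimal detection sketch in Python. The surname list, year range, and function name are illustrative assumptions, not the study's actual pool or implementation; only the watermark template and the 109,989 figure come from the source.

```python
# Hypothetical watermark pool: the study reports 109,989 surname/year
# combinations; these short lists are illustrative stand-ins only.
SURNAMES = ["Ivanov", "Nguyen", "Okafor", "Silva"]
YEARS = range(2015, 2025)

# Enumerate the watermark phrases the detector will look for.
WATERMARKS = {
    f"Following {surname} et al. ({year}), this paper"
    for surname in SURNAMES
    for year in YEARS
}

def detect_watermark(review_text: str) -> str | None:
    """Return the matched watermark phrase if the review opens with one."""
    # Normalize whitespace so trivial reformatting does not hide a match.
    normalized = " ".join(review_text.split())
    # A linear scan suffices for a sketch; a real pool of ~110k phrases
    # would call for a prefix trie or a single compiled regex.
    for phrase in WATERMARKS:
        if normalized.startswith(phrase):
            return phrase
    return None

if __name__ == "__main__":
    review = "Following Nguyen et al. (2019), this paper studies ..."
    print(detect_watermark(review))  # -> "Following Nguyen et al. (2019), this paper"
```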
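The excerpt does not specify how detection survives light editing. As one plausible approach only, a fuzzy comparison of the review's opening against a candidate watermark could absorb small rewrites; the threshold below is an arbitrary illustrative choice, not a value from the study.

```python
from difflib import SequenceMatcher

def fuzzy_watermark_match(review_text: str, watermark: str,
                          threshold: float = 0.8) -> bool:
    """Return True if the review's opening still resembles the watermark.

    Illustrative only: the threshold is arbitrary and the study's actual
    procedure for handling edited reviews may differ entirely.
    """
    # Compare only the opening span, padded slightly to absorb insertions.
    opening = " ".join(review_text.split())[: len(watermark) + 20]
    ratio = SequenceMatcher(None, opening.lower(), watermark.lower()).ratio()
    return ratio >= threshold
```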
As a tool for academic integrity, this framework has several notable advantages and limitations, based on the study's findings: