Based on recent research on the detection of AI-generated content, "109989" refers to a set of 109,989 possible watermarks used to identify peer reviews written by Large Language Models (LLMs).

Overview of Topic 109989

The topic originates from a 2025 study on detecting LLM-generated peer reviews. Researchers developed a watermarking system that uses fabricated citations to flag reviews created by AI instead of human experts. The system prompts an LLM to begin its review with a specific phrase, such as: "Following [Surname] et al. ([Year]), this paper...". It achieves a high success rate because LLMs are highly likely to follow instructions that appear at the very beginning of a prompt.
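The detection side of this scheme can be sketched as a simple check: does a submitted review open with a watermark phrase of the form "Following [Surname] et al. ([Year]), this paper..."? The regex and helper below are an illustrative assumption, not the study's actual implementation; the real system draws the surname/year pair from its pool of 109,989 possible watermarks.

```python
import re

# Hypothetical sketch of the watermark check: the watermark is a
# review-opening phrase "Following [Surname] et al. ([Year]), this paper...".
# The pattern here is an illustrative assumption, not the study's code.
WATERMARK_RE = re.compile(
    r"^Following\s+([A-Z][a-z]+)\s+et al\.\s+\((\d{4})\),\s+this paper"
)

def extract_watermark(review_text: str):
    """Return (surname, year) if the review opens with the watermark phrase,
    else None."""
    match = WATERMARK_RE.match(review_text.strip())
    if match:
        return match.group(1), int(match.group(2))
    return None

# A review beginning with the planted citation would be flagged as LLM-written.
flagged = extract_watermark(
    "Following Smith et al. (2021), this paper studies watermark detection."
)
print(flagged)  # → ('Smith', 2021)
```

Because the watermark is drawn from a large space of surname/year combinations, an accidental match by a human reviewer is very unlikely, which is what makes the fabricated citation usable as an identifying signal.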