
If reviewers are merely skimming papers and relying on LLMs to generate substantive reviews rather than using it to clarify their original thoughts, it opens the door for a new cheating method known as indirect prompt injection, which involves inserting hidden white text or other manipulated fonts that tell AI tools to give a research paper favorable reviews. The prompts are only visible to machines, and preliminary research has found that the strategy can be highly effective for inflating AI-generated review scores.








