Artificial intelligence tools such as OpenAI’s ChatGPT have been promoted as powerful productivity assistants capable of boosting creativity and speeding up workflows. However, a new study published on April 27 by the editors of Organization Science suggests that AI may be increasing the quantity of academic research while its quality declines.
The analysis, led by Lamar Pierce, Editor-in-Chief of Organization Science and professor at Washington University in St. Louis, is the first major attempt to measure AI’s impact on academic journal submissions and peer reviews.
Researchers used the AI-detection platform Pangram to examine nearly 7,000 manuscript submissions over five years. The study reviewed 6,957 submissions written by 11,887 authors and evaluated through more than 10,000 peer reviews from over 2,500 reviewers. The years before ChatGPT’s late-2022 launch were used as a baseline for comparison.
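The paper itself does not publish its code, but the kind of pre/post baseline comparison described above can be sketched in a few lines. The launch date, function name, and synthetic data below are illustrative assumptions, not details from the study:

```python
from datetime import date

# Assumed cutoff: ChatGPT's public launch (November 30, 2022).
CHATGPT_LAUNCH = date(2022, 11, 30)

def pct_change_in_monthly_rate(submission_dates):
    """Percent change in average monthly submissions after the cutoff,
    relative to the pre-cutoff baseline (hypothetical helper)."""
    pre = [d for d in submission_dates if d < CHATGPT_LAUNCH]
    post = [d for d in submission_dates if d >= CHATGPT_LAUNCH]

    def months_spanned(dates):
        # Count calendar months from earliest to latest date, inclusive.
        lo, hi = min(dates), max(dates)
        return (hi.year - lo.year) * 12 + (hi.month - lo.month) + 1

    pre_rate = len(pre) / months_spanned(pre)
    post_rate = len(post) / months_spanned(post)
    return 100.0 * (post_rate - pre_rate) / pre_rate
```

Run on a real submission log, a function like this would yield the kind of percentage increase the study reports; the actual analysis also weighted AI-detection scores per manuscript, which this sketch omits.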
Surge in Submissions, Decline in Quality
According to the findings, journal submissions increased by 42% after ChatGPT became widely available. Alongside this sharp rise, the quality of writing declined significantly. Editors reported that many AI-assisted papers were rejected during the initial screening stage because they lacked originality, clarity, or a strong academic contribution.
The study also revealed that manuscripts with very low AI-generated content scores were the most likely to be accepted for publication. Editors fear that the rapid increase in AI-written research is putting serious pressure on the peer-review system.
In addition to manuscript submissions, the analysis found that more than 30% of peer reviews contained some level of AI-generated writing. These reviews were often harder to understand and focused more on theoretical discussion while paying less attention to data analysis and evidence.
Researchers warned that unclear AI-generated reviews may make it more difficult for editors and authors to improve manuscripts effectively.
Editors Concerned About “Publish or Perish” Culture
Pierce explained that many editors already suspected AI was influencing academic publishing, but until now there had been little evidence showing how widespread the issue had become.
He argued that universities’ “publish or perish” culture is partly responsible for the growing dependence on AI tools. Young researchers face intense pressure to produce more papers in order to secure promotions, funding, and career advancement.
According to Pierce, academic institutions should stop relying heavily on publication counts and instead focus on evaluating the quality and long-term impact of a scholar’s best work. However, he admitted that judging research quality fairly is more difficult and time-consuming than simply counting publications.
AI Can Help — But Human Oversight Remains Essential
Although the study highlights major concerns, the authors emphasized that they are not against AI itself. Pierce acknowledged that AI tools can be extremely useful for coding, research assistance, literature searches, and challenging ideas during the writing process.
However, he stressed that researchers must fully understand and verify the work produced by AI systems. Without human oversight, scholars risk losing valuable opportunities to build expertise, improve writing skills, and develop deep subject knowledge.
The study also raised concerns about junior researchers relying too heavily on AI without learning the underlying research methods themselves.
A Call to Rethink Peer Review
Pierce believes academic publishing may need a complete redesign rather than small policy adjustments. Instead of simply modifying current peer-review systems to handle AI-generated content, journals and universities should rethink how high-quality research is evaluated and promoted in the modern era.
The discussion has already attracted global attention. Within its first week, the article was downloaded more than 10,000 times and sparked conversations across major publications including Nature, Forbes, and the Financial Times.
FAQs
What did the new study discover about AI in academic research?
The study found that academic journal submissions increased by 42% after ChatGPT’s release, but the overall writing quality declined significantly.
Who led the research?
The analysis was led by Lamar Pierce, Editor-in-Chief of Organization Science and professor at Washington University in St. Louis.
How are researchers using AI tools?
Researchers are using AI for coding, literature searches, writing assistance, and generating peer reviews.
Why are editors worried about AI-generated papers?
Editors believe many AI-assisted papers lack originality and quality, leading to higher rejection rates and more pressure on the peer-review process.
Can AI still be useful in academic work?
Yes. Experts say AI can save time and improve workflows when used responsibly with proper human oversight.
What is the “publish or perish” problem?
It refers to the pressure on academics to constantly publish research papers in order to secure promotions, funding, and career advancement.
How does AI affect peer reviews?
The study found that AI-generated reviews are often harder to understand and focus less on data and evidence.
What solution do researchers suggest?
Experts recommend redesigning the academic evaluation system to focus more on research quality rather than publication quantity.
Conclusion
The rapid rise of AI tools like ChatGPT is transforming academic research at an unprecedented pace. While these technologies can improve efficiency and assist researchers in many ways, the study from Organization Science highlights the growing risks of prioritizing quantity over quality. Editors warn that excessive dependence on AI-generated writing may weaken peer review, reduce originality, and place long-term pressure on the academic publishing system. Experts believe the future of research will depend on finding the right balance between AI assistance and human expertise.
