Abstract
This pilot study explores the potential role of artificial intelligence (AI) technologies in enhancing the academic manuscript-to-journal matching process, focusing on Large Language Models (LLMs). Through a focused evaluation of LLM-based recommendation systems, the study analyzes their performance across 40 papers from four distinct disciplines: law, psychology, exact sciences, and engineering. The research uniquely compares LLM-generated journal suggestions with expert human evaluations, providing insights into the strengths and limitations of LLMs. Findings reveal that while LLMs excel in fields with well-established publishing norms, such as psychology and the exact sciences, they struggle with interdisciplinary research, niche topics, and emerging fields, particularly in law and engineering. The study contributes new evidence by identifying specific patterns in LLM performance across disciplines and highlighting critical challenges, such as regional journal biases and an inability to fully account for innovative or complex methodologies. These insights establish a foundation for improving AI systems and emphasize the importance of integrating AI capabilities with human expertise for a balanced, efficient, and effective approach to journal selection.