New research from the Wharton School at the University of Pennsylvania indicates that artificial intelligence (AI) can significantly enhance learning, but its effectiveness depends on how assistance is delivered. The study, reported by Knowledge at Wharton, concludes that structured, system-regulated AI support leads to superior long-term skill development compared to unrestricted, on-demand help.
The research involved a three-month experiment with over 200 chess learners. Participants were divided into two groups, each receiving the same total amount of AI assistance. One group received guidance at predetermined, system-controlled intervals, while the other could request AI help at any time.
Results showed a marked difference in outcomes. Learners with structured support improved their performance by approximately 64% during the training period, compared to a 30% improvement for those with on-demand access. Follow-up assessments conducted weeks later revealed that the advantage for the structured group persisted, suggesting deeper knowledge retention rather than temporary gains.
Researchers attribute the stronger results to the educational principle of “productive struggle”—the process of grappling with challenging problems before receiving assistance. When learners attempted difficult positions independently and received calibrated guidance, they were more likely to internalize strategies and strengthen decision-making skills.
In contrast, participants with immediate, self-directed access often solved problems more quickly but engaged less deeply with the underlying reasoning. Interviews indicated that many learners recognized frequent assistance could hinder development, yet still opted for help when faced with challenges. The study suggests that frictionless, instantly available support can undermine self-regulation.
The implications extend beyond chess to academic, professional, and corporate training environments where AI tutors and copilots are increasingly integrated. The findings provide guidance for developers and institutions, indicating that systems featuring calibrated prompts, staggered hints, and bounded access to solutions may better preserve the cognitive effort required for durable learning while still leveraging AI’s adaptive capabilities.
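The design levers described above (calibrated prompts, staggered hints, and bounded access to solutions) could be sketched as a simple gating policy. The class, method names, and thresholds below are hypothetical illustrations of the general idea, not the system used in the study:

```python
from dataclasses import dataclass, field

@dataclass
class StructuredHintPolicy:
    """Hypothetical gating policy for AI assistance: hints unlock only
    after a minimum number of independent attempts ("productive struggle"),
    and access to full solutions is capped."""
    attempts_before_hint: int = 2   # independent attempts required per problem
    max_solutions: int = 1          # bounded access to complete solutions
    _attempts: int = field(default=0, init=False)
    _solutions_used: int = field(default=0, init=False)

    def record_attempt(self) -> None:
        """Log one independent attempt at the current problem."""
        self._attempts += 1

    def hint_available(self) -> bool:
        """A staggered hint is offered only after enough struggle."""
        return self._attempts >= self.attempts_before_hint

    def request_solution(self) -> bool:
        """Grant a full solution only while the bounded quota remains."""
        if self._solutions_used < self.max_solutions:
            self._solutions_used += 1
            return True
        return False
```

In this sketch, the learner must attempt the problem at least twice before any hint is surfaced, and can view at most one complete solution, preserving the cognitive effort the research associates with durable learning.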
A related perspective is offered by Anthropic’s AI Fluency Index, which measures how individuals develop proficiency with AI tools in daily tasks. Early data from the index indicates that the most productive interactions involve users iterating and building upon AI responses, rather than accepting initial outputs without scrutiny.