The excitement surrounding artificial intelligence tutors is palpable, yet the evidence calls for a balanced view. Research indicates that students often lean too heavily on chatbot tutors, short-circuiting genuine understanding, and some AI tutors produce no better outcomes than traditional instruction.
Despite the skepticism, researchers are exploring ways to refine AI tutoring systems. One notable approach focuses not on how the tutor explains concepts but on the type of practice it assigns. Researchers at the University of Pennsylvania tested this method in a study of roughly 800 Taiwanese students learning Python.
Students interacted with the same AI tutor, yet they were split into two groups: one followed a fixed sequence of problems from easy to hard, while the other experienced a personalized progression based on their interaction with the AI. This personalized approach echoes the “zone of proximal development,” where tasks are challenging yet achievable to maintain student engagement.
Results showed that students who received personalized sequences performed better on exams, a gain the researchers equated to 6 to 9 months of additional schooling. However, Angel Chung, the AI tutor's inventor, cautioned that this statistical estimate was imprecise. The finding suggests that personalizing difficulty levels can meaningfully improve learning outcomes.
Chung noted that while AI responses feel personal, chatbots are poor at steering students toward the right next question. To address this, Chung's team added a machine-learning algorithm that analyzed each student's interactions and told the AI which problem to serve next.
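The core idea, keeping each student in the "zone of proximal development" by adjusting difficulty to recent performance, can be sketched in a few lines of Python (the language the students in the study were learning). This is an invented rule-based heuristic for illustration only; the study's actual system used a machine-learning model trained on student interactions, and the function name and thresholds here are assumptions.

```python
# Illustrative sketch only: a simple mastery heuristic for choosing the next
# problem difficulty. The study's real system used a trained ML model; this
# rule-based stand-in just shows the adaptive-sequencing idea.

def next_difficulty(current: int, recent_results: list[bool],
                    max_level: int = 10) -> int:
    """Nudge difficulty so problems stay challenging but achievable."""
    if not recent_results:
        return current                      # no data yet: stay put
    accuracy = sum(recent_results) / len(recent_results)
    if accuracy > 0.8:                      # too easy: step up
        return min(current + 1, max_level)
    if accuracy < 0.4:                      # too hard: step back
        return max(current - 1, 1)
    return current                          # in the "zone": hold steady

# Example: a student at level 3 who solved all 5 recent problems moves to 4.
print(next_difficulty(3, [True] * 5))  # → 4
```

A fixed easy-to-hard sequence, by contrast, ignores `recent_results` entirely, which is the difference between the study's two groups.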
How different students interact with the chatbot tutor

Personalization here goes beyond tailored explanations; it means customizing the learning path, a concept that predates generative AI. Earlier intelligent tutoring systems offered hints and feedback and proved effective, but students found them less engaging.
Today’s conversational AI tools may boost student interest. The study found that students in the personalized group spent more time working through practice problems. New Python learners and students from less elite schools benefited most from the personalized sequences.
How students’ background affected results

Participants were motivated students aiming to enhance college applications, often with coding experience. The study leaves open questions about the AI’s effectiveness for less motivated students needing additional support. A potential solution involves blending AI with human intervention.
Ken Koedinger from Carnegie Mellon University is exploring AI models to signal remote human tutors to assist struggling students. He notes, “We are having more success,” underscoring the ongoing necessity of human involvement in education.
Contact staff writer Jill Barshay at 212-678-3595, jillbarshay.35 on Signal, or barshay@hechingerreport.org.
This story about AI tutors was produced by The Hechinger Report, a nonprofit, independent news organization focusing on education.