Leigh Ann Sudol-DeLyser

AbstractTutor: Increasing Algorithm Implementation Expertise for Novices Through Algorithmic Feedback

Degree Type: Self-defined
Advisor(s): Mark Stehlik, Sharon Carver
Graduated: December 2014

Abstract:
The translation of algorithms and abstractions to formalisms, most often code, is a fundamental task of a computer scientist. Both novices and experts use development environments to provide feedback about the code they have written as part of the iterative process of solving a problem. Almost all of the environments available were designed for the expert and his or her feedback needs while writing code. In this thesis, I begin with an analysis of the feedback needs of the novice, and discuss the need for explicit feedback focused on the algorithmic components novices should use when practicing the construction of simple array algorithms. As a proof of concept, I pilot test a model of algorithmic components and feedback for four easily generalizable problems using think-aloud protocols. After validating the model and feedback, and showing initial gains after practice, I discuss the implementation details of AbstractTutor, a pedagogical IDE that uses static analysis techniques to give pre-compilation feedback to students using the system. The implementation details include a series of case studies highlighting the successful evaluation of student code and discussing the few situations in which student mistakes produce code that the system temporarily cannot parse correctly. To increase the granularity, accuracy, and specificity of analysis, I present two metrics for evaluating student code from a body of submissions. These metrics represent a new way of thinking about novice progression through a coding problem, and allow analysis to focus on individual learning components within the larger algorithm students are constructing. Finally, I detail the results of two online, multi-institutional studies in which novice programmers used the AbstractTutor system. Students who received the algorithmic feedback were more likely to make productive edits, and less likely to repeat errors on subsequent problems. This thesis offers contributions to the learning sciences, computer science, cognitive psychology, and computer science education.

Thesis Committee:
Mark Stehlik (Co-Chair)
Sharon Carver (Co-Chair)
Ken Koedinger
Frank Pfenning
Carsten Schulte (Freie Universität)

Frank Pfenning, Head, Computer Science Department
Andrew W. Moore, Dean, School of Computer Science

Keywords: Computer Science Education, Pedagogical IDE, Algorithmic Abstraction, Novice Feedback

CMU-CS-14-145.pdf (1.75 MB) (183 pages)
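
For readers unfamiliar with the general idea of pre-compilation, component-level feedback, the following minimal sketch illustrates the flavor of such a check; it is an illustration only and is not drawn from the thesis or the AbstractTutor codebase. It parses a hypothetical student submission for an array-summation exercise with Python's ast module and reports which expected algorithmic components (accumulator initialization, a loop over the array, an accumulator update inside the loop) appear to be missing. The function name component_feedback, the choice of Python, and the specific component checks are all assumptions made for this example.

# Illustrative sketch (not the AbstractTutor implementation): a static check that
# looks for the algorithmic components of a simple array-summation exercise and
# reports feedback before the code is ever compiled or run.
import ast

def component_feedback(source: str) -> list[str]:
    """Return feedback messages about missing algorithmic components."""
    try:
        tree = ast.parse(source)
    except SyntaxError as exc:
        # Submissions that cannot yet be parsed still get a gentle message.
        return [f"Could not parse submission yet: {exc.msg} (line {exc.lineno})"]

    # Component 1: an accumulator initialized to a constant before the loop.
    has_init = any(
        isinstance(node, ast.Assign) and isinstance(node.value, ast.Constant)
        for node in ast.walk(tree)
    )
    # Component 2: a loop that visits the array.
    loops = [n for n in ast.walk(tree) if isinstance(n, (ast.For, ast.While))]
    # Component 3: an assignment or augmented assignment inside the loop body.
    has_update = any(
        isinstance(inner, (ast.AugAssign, ast.Assign))
        for loop in loops
        for inner in ast.walk(loop)
        if inner is not loop
    )

    feedback = []
    if not has_init:
        feedback.append("Initialize your accumulator before the loop.")
    if not loops:
        feedback.append("Add a loop that visits every element of the array.")
    elif not has_update:
        feedback.append("Update the accumulator inside the loop body.")
    return feedback or ["All expected components found."]

# Example: a submission that loops over the data but forgets to update the accumulator.
print(component_feedback("total = 0\nfor x in data:\n    pass\n"))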