Current Projects

Exploiting keystroke logging and eye-tracking to support the learning of writing

Much is now known about the psychology of writing processes, yet most writing instruction focuses on the products of writing, with limited ability to support the moment-to-moment enactment of writing. This NSF-funded project seeks to integrate technologies that can tell where a writer is looking (reading) and what they are typing in a scaffolded, interactive online writing environment, so as to capture information about writing processes and render it as feedback to student writers. Through a co-development process with English as a Second Language (ESL) students in introductory college writing courses, we are exploring the pedagogical strategies that real-time analysis of eye-tracking and keystroke logging facilitates, the constraints and affordances of using such technology in the context of writing instruction, and the types of formative feedback that have the greatest impact on writing performance and outcomes.
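As a rough sketch of the kind of process data involved (not the project's actual implementation; the element id and event format here are hypothetical), a browser-based writing pane can log each keystroke with a timestamp and caret position, against the same clock used to record eye-tracking samples:

```typescript
// Minimal keystroke-logging sketch for a browser-based writing pane.
// The "#writing-pane" id and the event shape are assumptions for illustration.
interface KeystrokeEvent {
  key: string;    // key pressed (e.g., "a", "Backspace")
  timeMs: number; // milliseconds since page load (performance.now())
  caret: number;  // caret position in the text when the key was pressed
}

const keystrokeLog: KeystrokeEvent[] = [];
const pane = document.querySelector<HTMLTextAreaElement>("#writing-pane");

if (pane) {
  pane.addEventListener("keydown", (e: KeyboardEvent) => {
    keystrokeLog.push({
      key: e.key,
      timeMs: performance.now(),
      caret: pane.selectionStart, // where in the text the writer is typing
    });
  });
}
// Eye-tracking samples recorded against the same performance.now() clock can
// then be time-aligned with these events to relate reading to typing.
```

Time-aligning keystroke events with gaze samples in this way is what makes it possible to infer, for example, that a writer paused to reread an earlier sentence before revising.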

Effects of real-time AWE feedback on cognitive writing processes and writing quality

The purpose of this study is to investigate how feedback provided in real time by a tool for automated writing evaluation (AWE) influences the duration and timing of cognitive writing processes during the act of writing, as well as the quality of the final written product. Most AWE tools provide feedback in response to the submission of a draft, but some can provide feedback as learners write: a simple case is the spell checker in Microsoft Word; a more sophisticated case is Grammarly, an AWE tool that claims to check a writer’s adherence to more than 250 grammar rules. Among the research questions addressed in this project are: (1) When real-time AWE feedback is provided, how much time do students spend interacting with it? (2) How are the duration and timing of other writing processes affected by the provision of real-time AWE feedback? and (3) How does the provision of real-time AWE feedback affect writing quality according to general and discrete measures? The study uses think-aloud, keystroke-logging, screen-capture, and questionnaire data in addition to expert raters’ judgments.
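As an illustration of how the duration and timing of writing processes can be derived from keystroke logs (a hypothetical sketch, not the study's analysis pipeline; the 2-second pause threshold is a common convention in keystroke-logging research rather than a value taken from this study):

```typescript
// Sketch: derive inter-keystroke intervals from timestamped key events and
// summarize pausing behavior above a threshold (2000 ms by convention).
function pauseSummary(timesMs: number[], thresholdMs = 2000) {
  let pauseCount = 0;
  let pauseTimeMs = 0;
  for (let i = 1; i < timesMs.length; i++) {
    const gap = timesMs[i] - timesMs[i - 1]; // interval between consecutive keys
    if (gap >= thresholdMs) {
      pauseCount++;
      pauseTimeMs += gap;
    }
  }
  return { pauseCount, pauseTimeMs };
}

// Example: keys at 0 ms, 300 ms, 2800 ms, 3000 ms -> one pause of 2500 ms.
console.log(pauseSummary([0, 300, 2800, 3000])); // { pauseCount: 1, pauseTimeMs: 2500 }
```

Comparing such pause measures between feedback and no-feedback conditions is one way to ask whether real-time feedback displaces time from other writing processes.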

Accuracy, usefulness, and efficiency of AWE feedback

This study evaluates feedback provided by Criterion, an AWE tool widely used in writing classrooms, from two perspectives: argument-based validity and instructional design. The evaluations were based on a task that replicated authentic Criterion feedback on error types frequently flagged by the program in a corpus of ESL student writing collected at Iowa State University. The task required learners to use the feedback for error correction while also rating its clarity, its usefulness, and the mental effort they perceived in using it. Additional measures included expert ratings of the quality of participants’ corrections. Results showed generally high ratings for clarity and usefulness and low ratings for perceived mental effort, but an error-correction success rate of only about 60%. A measure of instructional efficiency (Paas & Van Merriënboer, 1993) combining the mental-effort and performance data showed that lower-level participants made more efficient use of the feedback than a higher-level group.
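For reference, the instructional efficiency measure standardizes the performance and mental-effort scores and takes their scaled difference, so that positive values indicate performance higher than would be expected from the reported effort:

```latex
% Instructional efficiency (Paas & Van Merriënboer, 1993), where Z_P and Z_E
% are the standardized (z-score) performance and mental-effort ratings:
E = \frac{Z_P - Z_E}{\sqrt{2}}
```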

[See related presentation given at the 2014 Teachers College, Columbia University Roundtable in Second Language Studies, which addressed the theme Learning-Oriented Assessment.]