
Learning what works in educational technology with a case study of EDUSTAR

March 28, 2016

The Problem

In the current educational technology environment, there is no convincing, cost-effective way to determine which digital learning activities work best, and for which students. Not only is this problematic for students, teachers, and parents, but it also presents challenges for developers of digital learning activities, who cannot demonstrate the value of their products, and for researchers, who can conduct only limited research on educational technology tools, often at great cost.

The Proposal

The authors offer five key principles to guide the development of effective evaluation tools for educational technology: (1) randomized controlled trials are an essential means of rigorously evaluating learning tools; (2) evaluation of learning technologies must be rapid and continuous; (3) evaluation systems built on existing user-friendly content platforms have substantial advantages; (4) scale unlocks transformative opportunities; and (5) the evaluator must be trusted and must report results transparently. Following these five principles will improve the evaluation of digital learning activities and help students, parents, and teachers identify the tools best suited to their needs. It will also make it easier for developers to improve new learning activities and for researchers to conduct low-cost, rapid experiments. The authors also offer an update on EDUSTAR, a web-based platform for evaluating digital learning activities that they first proposed in their 2012 Hamilton Project paper.

Abstract

Despite much fanfare, new technologies have yet to fundamentally advance student outcomes in K–12 schools or other educational settings. The system that supports the development and dissemination of educational technology tools is falling short, and the key missing ingredient is rigorous evaluation: no one knows what works and for whom. This policy memo articulates general principles that should guide the evaluation of educational technology; such evaluations promise to fill critical information gaps and leverage the potential of new technologies to improve learning. Aaron Chatterji and Benjamin Jones also present a case study of a new platform, EDUSTAR, which they conceived and implemented with a national nonprofit organization. Results from the platform's pilots reveal several lessons for the future of educational technology.