
Eight Lessons Learned From Piloting the Rapid Cycle Evaluation Coach

  For the last 18 months, the Office of Educational Technology at the U.S. Department of Education, in partnership with the Institute of Education Sciences (IES) at the Department, has been working with Mathematica Policy Research and SRI International to build the Rapid Cycle Evaluation Coach (the RCE Coach). The RCE Coach is a free, open-source, web-based platform to help schools and districts generate evidence about whether their educational technology apps and tools are working to achieve better results for students. The platform was released in Beta in October 2016 and updated in January 2017. The RCE Coach currently includes two types of evaluation designs:

  • matched comparison, which creates two similar groups of users and non-users of an ed tech application already in use at a school site; and
  • randomized pilot, which randomly assigns participants to groups of users and non-users of an ed tech application that has not yet been implemented (a simplified sketch of both designs follows this list).
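
To make the distinction concrete, here is a minimal sketch of both designs in Python using hypothetical data. It is a conceptual illustration only, not the RCE Coach's code or methodology; the dataset, the column names, and the single-variable nearest-neighbor matching rule are illustrative assumptions.

```python
# Conceptual sketch only: this is not the RCE Coach's code or its actual methodology.
# All data, column names, and the one-variable matching rule are illustrative assumptions.
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=0)

# Randomized pilot: the app has not been rolled out yet, so access can be assigned
# by lottery, which makes the user and non-user groups comparable by construction.
roster = pd.DataFrame({"student_id": range(200),
                       "prior_score": rng.normal(50, 10, 200)})
roster["assigned_to_app"] = rng.permutation([True] * 100 + [False] * 100)

# Matched comparison: the app is already in use, so usage was not assigned at random.
# Here, observed usage is simulated as loosely related to prior achievement.
roster["used_app"] = rng.random(200) < roster["prior_score"] / 100

# Pair each user with the non-user whose prior score is closest (nearest-neighbor
# matching on a single baseline variable, a deliberately simplified stand-in for
# the richer matching a real evaluation would use).
users = roster[roster["used_app"]].sort_values("prior_score")
non_users = roster[~roster["used_app"]].sort_values("prior_score")
pairs = pd.merge_asof(users, non_users, on="prior_score",
                      direction="nearest", suffixes=("_user", "_comparison"))
print(pairs[["student_id_user", "student_id_comparison", "prior_score"]].head())
```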

While the tool is free and open for any school or district to use (and many have done so already, with over 1,700 individuals registered), we worked closely with 12 districts to pilot the RCE Coach, and six of the pilots are already complete. The pilots spanned the two evaluation designs and studied how selected math or literacy products affected student academic achievement. Below are eight lessons we’ve learned from these initial pilots.

Lesson 1. The central problem addressed by the RCE Coach has broad resonance in the field: schools and districts need better evidence on whether ed tech is working in order to inform implementation of best practices and procurement.

In conversations with district staff, we heard repeatedly that people want to know whether the technology they use is making a difference for students and is worth the cost, and that evidence should be more rigorously and systematically generated.

Lesson 2. Moving from broad to narrow research questions is an important part of the process.

Rapid-cycle evaluations — rigorous, scientific approaches to research that provide decision makers with timely and actionable evidence of whether operational changes improve program outcomes — work best for narrow questions that address specific implementations of technology, but most districts start from a different point.

Many of our pilot districts stated their research questions in very broad terms. For example, they wanted to know whether technology in general is moving the needle or whether a schoolwide technology-based intervention is working. Rapid-cycle evaluations can be most useful in examining whether components of a school improvement plan are having the desired effect on student outcomes or whether the desired effect is coming from one particular technology for a targeted group of students.

Lesson 3. The RCE Coach needs to have the flexibility to meet districts where they are.

Many districts want to know if the technology they are already using is helping students, but they lack the ideal conditions for a causal impact study. For example, a school may have rolled out a new app to all students, but only certain teachers actually used it with their students.

Therefore, it is important that the RCE Coach help users determine what types of analyses are possible and appropriate given their unique circumstances. It’s also important to be clear about the strength of the evidence provided under these different cases so that districts can use the information appropriately.

Lesson 4. Having a champion in the right role at the school or district is crucial.

Rapid-cycle evaluations can fall into the tricky space of being perceived as important but not urgent. Thus, they are susceptible to delays when more pressing tasks arise.

We hypothesize that districts are most likely to complete the evaluations when there are staff dedicated to data analysis or curriculum directors who have less exposure to the pressures of day-to-day school operations. Over the next year, we hope to learn more about the skill sets necessary to successfully navigate the RCE Coach independently and how the RCE Coach can best be embedded into existing operations.

Lesson 5. Large systems may see the RCE Coach as a resource for local capacity building.

A large district with a central data analysis, program evaluation or research unit may choose to train staff in schools to use the RCE Coach in order to build local capacity and enable the study of more technologies than one central team could test alone. Several state departments of education also expressed interest in disseminating use of the RCE Coach to their districts.

Lesson 6. The RCE Coach can support common approaches to evaluation.

At present, within a district, people may use inconsistent approaches to evaluating the effectiveness (and cost-effectiveness) of ed tech. Consequently, weighing the relative effectiveness of different technologies and prioritizing use of resources can be challenging. One pilot district views the RCE Coach as a tool for supporting common approaches to evaluation across a school, district, or state so that decisions can be made based on more comparable information.

Lesson 7. Practices associated with collecting, reporting, and interpreting usage data are still emergent.

In theory, detailed information about whether and how students, teachers, and other users interact with systems, as well as about their performance on embedded evaluations, should be a treasure trove for ed tech evaluations. In practice, several obstacles impede routine use of these data for evaluation purposes.

Key obstacles include the following: (1) developers vary substantially in which types of user interactions they capture and how they present them; and (2) usage data vary widely in their availability and ease of use.

From a policy perspective, it may make more sense to encourage developers to invest in standardized reporting functionality than to encourage responsiveness to requests for customized reports. For the RCE Coach, we have developed several templates with common indicators of use, progress, and performance. However, we recognize that the development of standards for system data reporting is likely to be a long-term, more organic process.
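
As a rough illustration of the kind of roll-up such templates imply, the sketch below aggregates a hypothetical event-level usage export into per-student indicators of use and progress. The log schema, the column names, and the 30-minutes-per-week flag are assumptions for illustration, not an RCE Coach standard or a developer requirement.

```python
# Illustrative only: the event-log schema, column names, and the 30-minute-per-week
# flag below are hypothetical assumptions, not an RCE Coach standard.
import pandas as pd

# Hypothetical export of raw usage events from an ed tech product.
events = pd.DataFrame({
    "student_id": [1, 1, 2, 2, 2, 3],
    "week": ["2017-W01", "2017-W02", "2017-W01", "2017-W01", "2017-W02", "2017-W01"],
    "session_minutes": [20, 35, 10, 15, 5, 50],
    "lessons_completed": [1, 2, 0, 1, 0, 3],
})

# Roll the event log up to per-student indicators of use and progress.
indicators = events.groupby("student_id").agg(
    total_minutes=("session_minutes", "sum"),
    weeks_active=("week", "nunique"),
    lessons_completed=("lessons_completed", "sum"),
)
indicators["avg_minutes_per_active_week"] = (
    indicators["total_minutes"] / indicators["weeks_active"]
)
# Flag students who met a hypothetical minimum-use bar of 30 minutes per active week.
indicators["met_usage_threshold"] = indicators["avg_minutes_per_active_week"] >= 30

print(indicators)
```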

Lesson 8. Ed tech developers are important partners for RCE.

Districts can in theory conduct RCEs without developer assistance, provided that they have information about who is using the technology and who is not. However, RCEs will often provide more meaningful insights about effectiveness and strength of implementation with the cooperation of developers. Moreover, a productive partnership can facilitate the process of assembling data sets and make best use of usage data.

A number of developers have shown interest in getting involved with the RCE Coach in order to demonstrate the value of their products and deepen engagement with districts. However, we have also encountered reluctance from other developers to participate in RCEs, primarily due to the risk of unfavorable results, the potential drain on time and staff, and their lack of control over implementation.

We hope that these fears will abate as RCEs become more established. We also hope that developers will come to view RCEs as an opportunity to learn how and when their products are most effective and to build their evidence base.

In the coming months, we are soliciting more districts to pilot with us. We are also collecting and building resources aimed at helping schools and districts determine concrete outcome measures for ed tech applications that fall outside the realm of student academic achievement. These include non-academic student outcomes (such as grit, motivation, self-awareness, and engagement) and outcomes for ed tech that facilitates teacher professional development and staff productivity.

Additionally, as we continue to pilot the RCE Coach, we are planning to document, in detailed case studies, the areas that cause the most confusion in the rapid-cycle evaluation process. For example, be on the lookout for an upcoming resource, embedded directly in the RCE Coach, that details how to design a successful pilot. This resource covers topics like randomization, number of participants, unit of assignment, data availability, and selecting meaningful probability thresholds. We have also added a facilitator's guide on how to demonstrate the platform for school and district leaders who would like to lead their own trainings on the RCE Coach.
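
As a purely hypothetical illustration of what selecting a probability threshold can involve, the sketch below bootstraps simulated pilot outcomes to estimate the probability that users outscored non-users and compares it to a pre-chosen bar for acting on the result. The simulated data, the bootstrap approach, and the 0.75 threshold are all assumptions for illustration, not the RCE Coach's actual analysis.

```python
# Hypothetical illustration only: the simulated data, the bootstrap approach, and the
# 0.75 decision threshold are assumptions, not the RCE Coach's actual analysis.
import numpy as np

rng = np.random.default_rng(seed=1)

# Simulated end-of-pilot assessment scores for app users and non-users.
user_scores = rng.normal(72, 12, size=80)
non_user_scores = rng.normal(70, 12, size=80)

# Bootstrap the difference in mean scores to estimate how often users come out ahead.
n_boot = 10_000
boot_users = rng.choice(user_scores, size=(n_boot, user_scores.size), replace=True)
boot_non_users = rng.choice(non_user_scores, size=(n_boot, non_user_scores.size), replace=True)
diffs = boot_users.mean(axis=1) - boot_non_users.mean(axis=1)

prob_positive = (diffs > 0).mean()
threshold = 0.75  # a pre-chosen, hypothetical bar for acting on the result
print(f"Estimated probability that users outscored non-users: {prob_positive:.2f}")
print("Meets decision threshold" if prob_positive >= threshold else "Below decision threshold")
```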

We hope to see more schools and districts pilot the RCE Coach and continue to help us learn and grow from the lessons we’ve already gleaned. For those interested, you can fill out a brief survey here.

Jackie Pugh was a research fellow in the Office of Educational Technology at the U.S. Department of Education for a year, through May 2017.

Alexandra Resch is an associate director and deputy director of state and local education partnerships at Mathematica Policy Research, specializing in rapid-cycle evaluation and evidence-based decision making.

Rebecca Griffiths is a senior researcher in SRI International’s Center for Technology in Learning.

Cross-posted at: https://medium.com/@OfficeofEdTech/eight-lessons-learned-from-piloting-the-rapid-cycle-evaluation-coach-1f7f681af96f