With more than $3 billion spent on ed tech products last year and more than 2,500 programs on the market, educators across the country need to know which products are likely to make a difference in teaching and learning. In a research study on this issue, Dr. Ryan S. Baker, Director of the Penn Center for Learning Analytics, focused on what aggregate data can tell us about the programs we use and how effective the apps used for math, ELA, and science are.
Join us for What Ed Tech Apps Work Best for Learning?
Monday, November 5, 2018, 12:00pm Pacific / 3:00pm Eastern
He used aggregate data from BrightBytes Learning Outcomes module to learn what programs students are using, how much they’re using them, and how the investment in programs correlates to improved student learning.
After looking across 48 school districts with 392,603 learners using 258 apps, he found sufficient test data to analyze student improvement for 177 of these apps. He then focused on the 150 apps that learners used for a total of 1.48 million hours.
These districts varied in type: large cities (3), medium cities (6), small cities (7), suburbs (24), and rural/small towns (12). They also varied in student population: more than 30,000 students (5), 10,000-30,000 (9), 5,000-10,000 (13), 1,000-5,000 (19), and fewer than 1,000 students (4).
The team focused their efforts in three areas: adoption, engagement, and impact. Each plays a role in understanding the overall effectiveness and financial sustainability of apps that students are using.
Adoption tracks how many licenses were purchased, the cost of licenses, how many active users there are, and who at the school is using the app. Engagement measures student usage (how often students are using a particular app and to what degree) and student perceptions (how much students like or dislike a particular app). Impact tracks the link between student usage of a particular app and how that may or may not influence performance on assessments.
Student improvement was measured as standardized test score gains from one test administration (fall or winter) to the next (winter or spring). Scores came from standardized tests in math, science, and ELA.
They defined student usage in two ways: the total number of minutes that students used the system and the total number of distinct days that students used the software. They determined the average cost of an app in terms of the cost per license, the cost per user, or the cost per intensive user.
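The two usage measures and three cost metrics described above can be sketched in a few lines of Python. The session log, license counts, prices, and the "intensive user" threshold below are all made up for illustration; the study's actual data schema and definitions are not published here.

```python
from datetime import date

# Hypothetical usage log: (student_id, day, minutes) -- illustrative only.
sessions = [
    ("s1", date(2018, 9, 10), 25),
    ("s1", date(2018, 9, 10), 15),
    ("s1", date(2018, 9, 12), 30),
    ("s2", date(2018, 9, 11), 40),
]

# Usage measure 1: total minutes of use across all students.
total_minutes = sum(m for _, _, m in sessions)

# Usage measure 2: distinct days of use per student.
days_per_student = {}
for student, day, _ in sessions:
    days_per_student.setdefault(student, set()).add(day)

# Cost metrics: license count, price, and intensity cutoff are assumptions.
licenses_purchased = 10
price_per_license = 12.00
total_cost = licenses_purchased * price_per_license

users = {s for s, _, _ in sessions}
intensive_users = {s for s, days in days_per_student.items() if len(days) >= 2}

cost_per_license = total_cost / licenses_purchased
cost_per_user = total_cost / len(users)
cost_per_intensive_user = total_cost / len(intensive_users)
```

With this toy data, two students used the app but only one used it on multiple days, so the cost per intensive user is double the cost per user, the same pattern (narrowing denominators driving up per-head cost) that produces the wide cost ranges reported below.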
Results of the Study
1.8 million licenses were purchased. The average number of licenses per student was 5.02. When a district purchases licenses, it is typically with the expectation (or at least hope) that they will be widely used.
However, the study showed that not all licenses are used, and the share of unused licenses varies by app. The study also looked at licenses that were employed in classrooms but not used intensively. There was a wide range in the degree of adoption of different apps, and limited overlap between the apps with the largest number of licenses purchased and those with the largest number of intensive users.
There is a wide range of costs, even when not including free apps. Cost per license varies from $0.14 to $367.00. There is an even larger range when looking at cost per user, which varies from under $0.10 to $1,000.00. The largest range is cost per intensive user, which varies from $0.20 to $5,000.00.
Impact on Math and ELA Learning
Perhaps the most important question to answer from the data is which apps had the most impact on learning. The study examined use of apps in math, English Language Arts, and science. The data on time indicated what one would expect: the more minutes students used certain apps, the more their test performance improved between assessments. The data on number of days of use followed the same logic: the more days students used certain apps, the greater the improvement between assessments.
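The relationship described above, usage correlating with score gains, can be illustrated with a small Pearson correlation sketch. The minutes and score-gain values below are invented for illustration and are not the study's data.

```python
# Hypothetical per-student usage minutes and fall-to-winter score gains.
minutes = [30, 60, 90, 120, 150]
score_gain = [1.0, 2.5, 3.0, 4.5, 5.0]

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson(minutes, score_gain)  # positive r: more minutes, larger gains
```

A strongly positive r on data like this is the shape of the finding reported above; the study's actual effect sizes would of course be smaller and vary by app.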
Impact on Science Learning
While fewer apps and fewer assessments were available for the study in science, correlations still existed between the time and number of days certain apps were used and test improvement. Apps also varied geographically in where they were most effective.
General data on use is valuable but even more relevant is an analysis of what apps were most widely adopted, used, used intensively, and were successful in promoting student learning in math, ELA, and science.
Read the previous entries in this blog series:
As the costs for devices and applications decrease, technology use is increasing in classrooms across the country. Growth is particularly high in web-based devices. Read more.
School districts across the country spent more than $3 billion on ed tech products last year and chose from an assortment of 2,500 apps on the market. With so much money at stake and so many choices, educators need to know which products have a positive impact on student outcomes and the conditions under which impact is the greatest. Read more.