Does educational software make a difference?

A summary of: Campuzano, L., Dynarski, M., Agodini, R., and Rall, K. (2009). Effectiveness of Reading and Mathematics Software Products: Findings From Two Student Cohorts—Executive Summary (NCEE 2009-4042). Washington, DC: National Center for Education Evaluation and Regional Assistance, Institute of Education Sciences, U.S. Department of Education.
S. Krashen, May, 2014
Hat-tip: Susan Ohanian
Campuzano et al. investigated whether software for reading and math had any effect on achievement, as measured by standardized tests. The reading software was used in grades 1 and 4; the math software was used in grade 6 (pre-algebra) and in algebra classes (mostly grade 9).
The study lasted one year and was replicated in a second year. In the first year, 16 software products were tested in 33 districts and 132 schools, with 428 teachers teaching either classes that used the software or comparison classes that did not. The second year included 10 products, 23 districts, 77 schools, 176 teachers, and 3,280 students. A variety of well-known standardized tests were used: for reading, the SAT-9 in grade 1 and the SAT-10 in grade 4; for math, the SAT-10 in grade 6 and the Educational Testing Service's End-of-Course Algebra Assessment in grade 9 (algebra I). Other tests were also used to confirm that the groups had similar levels of competence in reading and math before the treatment began.
With the exception of scores on the ETS algebra test, scores were converted to normal curve equivalent (NCE) units. Algebra I scores for the ETS test are reported as percent correct.
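For readers unfamiliar with NCE units, the sketch below illustrates the standard normal curve equivalent conversion: an equal-interval rescaling of national percentile ranks with mean 50 and a standard deviation of about 21.06. This is the usual definition, not code or figures from the report itself; the example scores are the grade 1 reading means reported below.

# A minimal sketch of the usual NCE <-> percentile-rank conversion,
# assuming the standard definition (mean 50, SD ~21.06, chosen so that
# percentile ranks 1, 50, and 99 map to NCEs of 1, 50, and 99).
from statistics import NormalDist

STD_NORMAL = NormalDist()
NCE_SD = 49 / STD_NORMAL.inv_cdf(0.99)  # ~21.06

def percentile_to_nce(percentile_rank: float) -> float:
    """Convert a national percentile rank (strictly between 0 and 100) to an NCE."""
    return 50 + NCE_SD * STD_NORMAL.inv_cdf(percentile_rank / 100)

def nce_to_percentile(nce: float) -> float:
    """Convert an NCE score back to a national percentile rank."""
    return 100 * STD_NORMAL.cdf((nce - 50) / NCE_SD)

# The grade 1 reading means reported below, expressed as percentile ranks:
print(round(nce_to_percentile(50.2), 1))  # ~50.4
print(round(nce_to_percentile(49.5), 1))  # ~49.1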
The results
NO IMPACT OF USING SOFTWARE
At the end of the first year, there was no significant difference between the test scores of students who used the software and those who did not.
First year results

READING      software   comparison
grade 1      50.2       49.5
grade 4      42.1       41.7

MATH         software   comparison
grade 6      52.2       50.8
grade 9      37.3       38.1
Results are presented in normal curve equivalent units, except for grade 9 (algebra I), which is reported as percent correct.
NO CONSISTENT EFFECT OF EXPERIENCE WITH SOFTWARE

Some teachers (n = 115) taught both years with the software. For reading, these teachers' scores were not significantly higher the second year. For math, the 6th grade scores got significantly worse, and the 9th grade scores got significantly better.
Difference between first and second year scores for teachers who used software for both years

READING      year 1   year 2
grade 1      0.86     -1.28
grade 4      2.65     4.67

MATH         year 1   year 2
grade 6      -0.44    -1.24 *
grade 9      -0.34    2.58 *

* statistically significant difference
LITTLE DIFFERENCE AMONG THE PROGRAMS

A third analysis examined the impact of each of the ten software programs used in either the first or second year. Only one of the ten differences was statistically significant, and in that case the software group's score was only about two NCEs (roughly three percentiles; see the quick check after the tables below) higher than the comparison group's.
Impact of reading programs

grade 1
Destination Reading        1.91
Headsprout Early Reading   0.29
PLATO                      0.50
Waterford                  0.42

grade 4
Academy of Reading         -0.16
LeapTrack                  1.97 *

* statistically significant difference
Impact of math programs

grade 6
PLATO/Achieve Now          -0.58
Larson Pre-Algebra         2.37

grade 9
Larson Algebra I           -0.10
Cognitive Tutor            -1.28
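As a quick check of the "about three percentiles" gloss above, the snippet below reuses the assumed NCE definition from the earlier sketch to show what a gap of roughly two NCEs amounts to in percentile terms. The score levels chosen here are illustrative (roughly the grade 4 reading averages), not figures from the report.

# Rough check: how large is a ~2 NCE gap in percentile terms near the
# grade 4 reading averages? (Same assumed NCE definition as above.)
from statistics import NormalDist

STD_NORMAL = NormalDist()
NCE_SD = 49 / STD_NORMAL.inv_cdf(0.99)  # ~21.06

def nce_to_percentile(nce: float) -> float:
    return 100 * STD_NORMAL.cdf((nce - 50) / NCE_SD)

low, high = 42.0, 44.0  # roughly the grade 4 score level, two NCEs apart
print(round(nce_to_percentile(high) - nce_to_percentile(low), 1))  # ~3.6 percentile points

The exact figure depends on where the scores sit in the distribution, but near the middle a two-NCE gap works out to roughly three to four percentile points.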
CONCLUSION
Out of 18 comparisons, 15 showed no significant difference, two showed a significant but small effect in favor of the software, and one showed a small negative effect. At the conventional .05 level, we would expect about one comparison in 20 to be significant by chance.
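A back-of-envelope version of that point, assuming independent comparisons and a conventional .05 significance threshold (the summary does not state the exact level used):

# Expected number of chance-significant results among 18 comparisons,
# assuming independence and a .05 significance level.
alpha = 0.05
comparisons = 18
print(round(comparisons * alpha, 2))  # 0.9, i.e. about one spurious "significant" result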
The results are consistent with the conclusion that these educational software programs did not enhance learning. The results also indicate that experience with the programs did not improve their effectiveness. Nine of the ten programs investigated had no significant positive effect, and the one program with a significant effect produced scores only slightly higher than the comparison group's.