The need for objective measures of student achievement to assess the
effectiveness of schools has crossed over from K-12 academic
institutions into the realm of higher education. The number of college
students being asked to take skills tests is growing, and the results
are having an increasing impact on educational policy.
However, the results of these exams rarely have any direct impact on the
students themselves, and a new paper from the Educational Testing
Service looks at whether that has a bearing on the relevance of the exam
results. In other words, if the students don’t care about the outcomes
of the exams, does that mean that the exams don’t serve as an accurate reflection of their knowledge?
The study, designed and led by Lydia Liu, Brent Bridgeman and Rachel
Adler, seems to point to that conclusion. The authors recruited 757
students to take an abbreviated version of the multiple-choice ETS
Proficiency Profile, an essay portion of the same exam, and a brief
survey to determine how motivated they were to do well on their exams.
Before the tests were administered, students were told that their scores
would be used to assess the quality of the school they were attending,
and might, as a result, influence how their diploma was perceived.
The students were then randomly assigned to one of three groups. One
was assured that their scores would remain anonymous and would be used
only for further research. The second was told that faculty at their
university would be privy to their scores. A third was told that the
results would be provided to potential employers to allow them to judge
the students’ level of knowledge.
It turned out that being extra motivated did wonders for one’s test
scores. Even when other factors were controlled for, students who
thought that their scores might have a real impact on their academic
careers received scores that were 0.86 of a standard deviation higher than
those who thought their scores would remain anonymous.
Most strikingly, after controlling for SAT scores, the score differences between sophomores and seniors, namely the value-added learning that could be attributed to students’ college education, varied dramatically from negative gain (-0.23 SD) to substantial gain (0.72 SD) across motivational conditions and tests.
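For context, a difference expressed in standard-deviation units is an effect size. A common way to compute one is Cohen's d (a minimal sketch; the paper's actual covariate-adjusted estimates may be computed differently):

$$ d = \frac{\bar{x}_1 - \bar{x}_2}{s_{\text{pooled}}}, \qquad s_{\text{pooled}} = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}} $$

where $\bar{x}_1$ and $\bar{x}_2$ are the two groups' mean scores, $s_1$ and $s_2$ their standard deviations, and $n_1$ and $n_2$ their sizes. On that scale, and assuming roughly normal score distributions, a 0.86 SD difference means the average student in the highest-stakes condition outscored about 80 percent of the anonymous group.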
The students’ assessment of their own motivations proved to be fairly
accurate as well. Those whom the short survey identified as being more
motivated tended to perform better. These conclusions have led the
researchers to offer several recommendations to schools that hope to use
these kinds of tests to assess student knowledge.
First, the limited college learning reported in prior research is likely an underestimate of student learning if students received test instructions similar to those used in the study's control condition. Second, there are practical strategies that institutions can use to enhance students' test-taking motivation. Finally, institutions that employ different motivational strategies should be compared with one another only with great caution, especially when the comparison is made for accountability purposes.