AAUP Survey Suggests The Role of Student Evaluations Needs to Change
Posted By Abby Perkins on June 17, 2015 at 8:45 am
The American Association of University Professors recently concluded its annual meeting in Washington on June 14, 2015. One of the most anticipated presentations addressed a recently completed study on the merits of student evaluations. Do they measure what they intend to measure? Are they of any value? How do they affect instructors’ careers?
In the fall of 2014, the AAUP’s Committee on Teaching, Research and Publication invited 40,000 tenure-track and non-tenure-track faculty members to take an online survey about teaching evaluations. Approximately 9,000 professors responded.
While 69 percent of respondents saw a need for student feedback, only 47 percent felt that teaching evaluations were effective, primarily due to:
- Diminishing student response rates for course evaluations. Institutions using online evaluations reported a 20-40 percent return rate, versus 80 percent or higher for paper evaluations.
- Over-emphasis on evaluations in personnel decisions, especially for non-tenure-track faculty.
- A spillover of the sometimes abusive personal comments seen on teacher rating websites into formal evaluations.
The low rate of return is especially problematic, since the results skew to extremes: students very happy with their grade and course experience, and those very unhappy. Most evaluations are done in the last weeks of the semester, according to the survey, sometimes even after students have received their final grade, making objectivity questionable.
Punishing tough professors
The survey responses indicate that faculty who do not challenge students, who do not hold them to high standards, and who merely “teach to the test” consistently receive higher evaluations. This is no surprise; the pattern was clearly illustrated in a 2010 study of Air Force Academy cadets cited by Psychology Today.
The 2010 study compared student evaluations and exam performance for cadets taught introductory calculus by less experienced professors versus more experienced ones. Students of the less experienced, less qualified instructors did better on the introductory course exams, and those instructors received the highest student evaluation scores. However, students of the more experienced and qualified professors performed better in subsequent advanced calculus classes.
The authors’ takeaway was that the professors who instilled the deepest learning, and who did not teach to the test, fared worse both in student evaluations and in introductory-course results. Because these professors tended to “broaden the curriculum and produce students with a deeper understanding of the material,” their students performed better in the long run, yet the professors were punished with poor evaluations.
Penalizing women and minorities
Another discouraging finding in the survey was the kind of gender bias and abusive bullying that occurs when online anonymity is allowed. According to the report, “Women faculty and faculty of color report negative comments on their appearance and qualifications, and it appears that anonymity may encourage these irrelevant and inappropriate comments and attacks, which are sometimes overtly discriminatory.”
These findings echo another recent study, “What’s in a Name: Exposing Gender Bias in Student Ratings of Teaching,” published in the journal Innovative Higher Education in December 2014. Students in online classes rated male instructors higher than their female counterparts, describing male faculty as “brilliant, awesome and knowledgeable.” Female instructors were described as “bossy and annoying,” and as either beautiful or ugly.
Changing roles for student evaluations
While studies show that student evaluations do have some value, the results of the most recent AAUP survey led the committee to recommend that faculty, not administrators, develop their own, more holistic teaching evaluations. The committee also called for an end to student anonymity and favored paper evaluations over online formats. Most importantly, it argued, student evaluations should be only one indicator of teaching quality, and it strongly urged administrations “to stop the lazy practice of making contract renewals on the basis of such partial, biased and unreliable data.”