1. The professor gets to choose the time and day of administration. It can be very important to pick a day when his lecture has been particularly fun, and not to pick a day when a hard assignment was returned. But not all professors game the system in this way.
2. Students do not realize the importance of their evaluations, and take them quite casually.
3. Every student's opinion is weighted equally. It does not matter whether a student is smart or stupid, has come to class often or seldom (though if he skips the day of the evaluation his opinion counts for zero), or has a grievance against the professor.
4. Average evaluations are high. The average score is somewhere around 5.5 out of 7. Thus, a vengeful student who puts down a 1 can have quite an impact.
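The arithmetic behind that last claim is easy to check. A minimal sketch: the 5.5 baseline is the average stated above, but the class size of 20 is an assumption made purely for illustration.

```python
# How far does one retaliatory "1" drag down a class average?
# The 5.5 baseline is from the text; the class size of 20 is an assumption.
class_size = 20
scores = [5.5] * (class_size - 1) + [1.0]  # nineteen typical ratings plus one grudge
mean_with_grudge = sum(scores) / len(scores)
print(round(5.5 - mean_with_grudge, 3))  # → 0.225
```

In a class of twenty, a single 1 pulls the mean down by roughly a quarter point, which can easily move a professor several deciles in a school's rankings.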
I would far prefer to have a committee of three randomly chosen students evaluate the course.
Then they would take it seriously, and have some accountability-- if only via shame for
doing a bad job and being publicly named. But of course there are obvious problems with
having any kind of student evaluation at all, especially when the students are ones who
are currently taking the course:
1. It is hard to be unbiased when you have been graded by the professor you are evaluating.
2. Students have little basis for comparison-- they have taken only IU courses, and there is a serious danger that they will be hostile to any unfamiliar sort of teaching.
3. Students have no idea what they have missed by bad teaching. Suppose a teacher has chosen to only cover 20% of what is covered by the typical college course in the subject. How would the students know? I remember hearing complaints about this at Chicago back in 1990. Chicago students were much less happy with their finance course than Northwestern students-- but using the same book, the Chicago students were covering far more material.
4. Students don't like feeling ignorant. Thus, a course that teaches hard material will leave them unsatisfied. If they understand 9 of 10 ideas, they will feel they have been better taught than if they understand 18 of 40 ideas, even though the second course has taught them twice as much. This is a particular problem in MBA programs, in which business schools flatter incoming students outrageously, knowing that it is happy students who give them high magazine ratings. A flattered student who discovers how ignorant he is on his first midterm will not be happy.
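The 9-of-10 versus 18-of-40 comparison can be laid out explicitly: the fraction of ideas understood drives satisfaction, while the raw count of ideas understood measures what was actually learned.

```python
# Satisfaction tracks the fraction understood; learning tracks the raw count.
easy_understood, easy_taught = 9, 10    # figures from the text
hard_understood, hard_taught = 18, 40
print(easy_understood / easy_taught)      # → 0.9  (feels well taught)
print(hard_understood / hard_taught)      # → 0.45 (feels poorly taught)
print(hard_understood / easy_understood)  # → 2.0  (yet twice the ideas learned)
```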
These problems are much the same as you would get if you tried rating doctors by asking their patients how good their doctors are. By this method, quacks would do very well, since they specialize in "bedside manner" and few patients know any science. You would indeed learn something about bedside manner, which is a good thing, but bedside manner is a trivial part of being a good doctor-- if by "doctor" we include anybody who might provide health care, including chiropractors, homeopaths, faith healers, and natural remedy druggists. I would guess that an M.D. degree, for example, would be a much better indicator of quality than a high score from patients, and I certainly would prefer to be treated by an M.D. heavily criticized by patients for being cold and snobbish than by a college dropout turned New Age healer with rave ratings from his patients. The analogy in academia is the disliked professor with a Ph.D. versus the popular lecturer.
This is the time for me to write these things, because I had rave ratings this spring in my two small undergraduate courses, something around 6.5 out of 7 in both. I have to admit that these high ratings somewhat undercut my argument, because I did make the students feel ignorant by giving them pass-fail tests on basic grammar and economics that most of them flunked on the first try. But in another way, they confirm the silliness of ratings. Just two years before, I had ratings in one course that were close to half of these. I jumped from being in the bottom 10% of teaching quality in my business school to being in the top 10%, if one believes the evaluations. But of course my teaching quality did not change that much, if at all. It is just that the mood of the classes was different, and I perhaps have a teaching style out of the ordinary, which depending on student mood can be viewed as good or as bad.
Why, then, do we rely so heavily on student evaluations? It is hard to believe that professors and administrators do not realize how weakly they measure the amount a teacher has taught his students. Even if they did not, if good teaching were the objective, surely we would pay some attention to the syllabi and what kind of tests were given, and use objective evaluators-- students or faculty observing single class sessions-- which we do not do in any serious way. Rather, I think that "good teaching" means "contented students" for the people who rely on student evaluations. Student evaluations are indeed a good way to measure this. And it is a reasonable objective. Administrators are trying to sell a product, and if you view the student as a customer rather than as someone to whom you have a moral obligation, you want to design a product that he wants. The student will likely want a course that has a low workload and gives him a pleasant feeling of accomplishment while being described as a difficult course on an advanced topic. Professors have incentives similar to administrators'-- it is more fun teaching contented students, and while it is quite difficult to know how to make students learn (I know that after 20 years I still don't know when I have succeeded and when I have failed, or even whether I, as opposed to the students' own efforts, make much difference), it is much easier to figure out how to make students pleased.
This question will have growing importance. Why, indeed, do we have people with PhD's, or people who have scholarly credentials, teaching at all? If student satisfaction is the key, universities should hire cheaper teachers who know more about presentation than they do about substance. And, indeed, maybe teacher quality is unimportant, and this would work out fine.
There is one dimension, however, in which I think it is clear that teacher quality is tremendously important, though it is a dimension that barely enters the student evaluation process: course planning. If the right material isn't on the syllabus, we can be sure the students won't learn it. So if we do move to nonscholarly teaching, it will be important to have scholars still designing the courses.