
Student Evaluations

How my ASU students viewed my performance.

ASU provides its professors with evaluation data in a raw format. That is, the data doesn’t lend itself to interactivity.

I decided to change that.

A few of my rock-star analytics students (including Scott Fitzgerald) created a neat set of data visualizations under my supervision. This allows me to see how I am doing across a number of dimensions: class, semester, year, etc. I can easily see areas of strength and opportunities for improvement.
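For the curious, here is a minimal sketch of the kind of aggregation a dashboard like this performs under the hood. It assumes the raw ASU export is a flat CSV; the file name (evaluations.csv) and the column names (course, semester, year, score) are hypothetical stand-ins, not the actual export format.

```python
import pandas as pd

# Load the raw evaluation export (hypothetical file and columns).
evals = pd.read_csv("evaluations.csv")

# Mean score along each dimension of interest: class, semester, and year.
by_course = evals.groupby("course")["score"].mean().sort_values()
by_semester = evals.groupby(["year", "semester"])["score"].mean()

# The lowest-rated courses flag opportunities for improvement;
# the highest-rated ones flag areas of strength.
print(by_course.head(3))   # weakest courses
print(by_course.tail(3))   # strongest courses
print(by_semester)         # trend over time
```

The students' visualizations do the same thing interactively, of course; the point is simply that each view boils down to grouping the raw responses by one dimension and summarizing.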

Phil should be the role model for expected behavior from staff at W. P. Carey. He is very respectful to students. He engages his students and communicates well. He has been my favorite and most exemplary professor throughout my undergraduate curriculum.

—Former student

Notes

My feelings on student evaluations are decidedly mixed. My primary objection stems from Goodhart’s Law: when a measure becomes a target, it ceases to be a good measure. Sure, I’m happy with the trends in my evaluations, but being an effective professor entails more than kowtowing to students and making them happy. Handing out A’s might help my evals, but students who don’t learn how to think critically ultimately suffer.

Next, there’s anything but unanimity on whether student evaluations work. There’s strong evidence that students’ responses to questions about teaching effectiveness don’t actually measure it. One of my colleagues believes that student evaluations offer little if any value to universities.

On a more technical level, the default view below contains both in-person and online courses, although you can filter in whatever way you like. As Suzanne Young and Heather E. Duncan have demonstrated, for most professors there’s about a one-point ratings gap between the former and the latter. That is, students consistently rank online professors lower than their face-to-face (F2F) counterparts. I suspect that this is simply a limitation of online courses.
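If you wanted to check that gap in the underlying data yourself, the computation is straightforward. Here’s a rough sketch, again assuming a hypothetical CSV export; the modality column and its values ("F2F" and "Online") are stand-ins for illustration.

```python
import pandas as pd

evals = pd.read_csv("evaluations.csv")

# Mean rating per modality. Young and Duncan's finding suggests the
# online mean typically sits about a point below the F2F mean.
means = evals.groupby("modality")["score"].mean()
print(means)
print("F2F - Online gap:", means["F2F"] - means["Online"])

# Filtering to a single modality, as the dashboard's filters allow.
f2f_only = evals[evals["modality"] == "F2F"]
print(len(f2f_only), "F2F responses")
```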

Student evaluations take place in a vacuum. That is, I can’t see how students view my performance as a professor relative to my peers. I strongly suspect, though, that I compare favorably to my colleagues. See the Slack poll on the right for more on this point.

Finally, the data visualization looks best on a desktop or laptop. You can view it on a tablet but there’s just too much data to see it all.

Version History

12.31.19: Updated with Fall 2019 evaluations.

06.06.19: Flipped bar charts on a few tabs; the data looks better horizontally than vertically. Added some color. Added a tab for student response rates. I find it interesting that online students are less likely to fill out evals, making their numbers more susceptible to outliers.

06.05.19: Updated with Spring 2019 evals. Added tab at end comparing in-person and online courses. As I suspected, the differences are pretty stark.

01.02.19: Updated with Fall 2018 evals. Fixed sorting issue. It’s interesting to see the differences between teaching 400-level classes, over which I have a fair degree of discretion, and 200-level ones, in which I don’t.

09.01.18: Updated with Summer 2018 evals. Note that not enough students filled out the 450 evaluation to present results.


07.13.18: Updated with Spring 2018 evals.

10.03.17: Fixed issues with decimals.

05.01.17: Initial version with 2016 and 2017 evals.