Dear Ohio State Mansfield Colleagues,
A few weeks ago, I attended an Ohio State workshop on peer evaluation of teaching (PET). This is a critically important topic for any institution of higher learning, of course. Appraisal of teaching is even more salient for the Ohio State regional campuses, however, as a significant portion of our mission rests on the delivery of high-quality instruction in the classroom and beyond. I therefore want to focus this Bi-Weekly Report on the PET process in order to spark renewed dialogue on our campus about this important topic. While that conversation will become more formal in Autumn Semester when Professor Jones returns from his sabbatical, I would welcome the opportunity to join more informal discussions about PET in the meantime.
The facilitators posed the workshop's central guiding question as this: why bother to evaluate? The emerging answer from the panelists and participants centered on two needs: to document the areas of instruction that are presently being done well, and to identify areas for future improvement. It was readily acknowledged, however, that current PET activities do much of the former (documenting present effective practices) and little of the latter (identifying areas for future improvement). Outsiders might well wonder how an evaluative process has devolved to this imbalanced state of affairs, but the answer is common knowledge among faculty members. Simply put, the pressures surrounding promotion and tenure decisions have played a decisive role in undermining this process for the majority of participants, both as evaluators and as those being evaluated.
So the fundamental question may best be recast as follows: how do we create a PET process that documents both present effective practices and areas for future improvement without unduly raising anxieties among faculty members, especially those who are untenured? Importantly, several colleges and units are adopting innovative practices that have gained the full support of the Office of Academic Affairs (OAA). For instance, the College of Law now uses a discussion format between evaluators and faculty members that allows for the co-creation of the documentation placed in the core dossier for purposes of promotion and tenure (thus granting more balanced influence over what material does and does not go into the dossier). The College of Veterinary Medicine takes this rebalancing of control even further by forming a mentor committee that guides a candidate's reflection on evaluative feedback over time. Perhaps most interestingly, the only written material that goes into the dossier is the faculty member's own reflections on this process.
At the very least, this should provide some food for thought about the way we handle the PET process on the Ohio State Mansfield campus. I am interested in giving this more balanced approach to evaluation some focused yet informal attention during the annual review process. Accordingly, when Associate Dean Tovey and I hold our annual review meetings later this Spring Semester, we will ask each faculty member to comment on the areas of instruction in which they excel as well as the areas they will be working to improve in the year ahead.
Quite a number of additional interesting points were made during the workshop. For instance, there is a body of research literature indicating that direct observation is the least effective way to evaluate teaching. This of course raises questions about how to ensure that the PET process links classroom observation with other evaluative methods (student evaluation of instruction [SEI], syllabus review, etc.). Further, the relatively modest student participation rate in evaluation of instruction has called into question the validity of current practices, something that OAA is now attending to in partnership with Faculty Senate. I anticipate that we will receive at least a preliminary report from the working group tasked by OAA and the Senate with addressing the SEI issue by the beginning of Summer Semester.
In turn, fundamental questions are also being raised about how best to evaluate hybrid and online courses. What are the right questions to ask students about their experiences in and with these sorts of classes? Are they the same questions we ask of students in traditional classrooms? How does a faculty peer “observe” an online class? And how do we get our arms around the evaluation of MOOCs (Massive Open Online Courses)? We are faced with an exciting new frontier regarding these evolving forms of instruction, to say the least.
My reflections following the workshop lead me to a final set of questions, which I pose in closing this report. Do we know what teaching excellence looks like in actual practice? Is there a shared definition that enjoys a high degree of consensus among our faculty members? Do definitions vary by scholarship area? Have we ever had discussions about what good teaching is on our campus? Can we have them now, and would they be productive or divisive conversations? And finally, should regional campuses be leading the charge on the issue of PET, given our mission? I look forward to hearing from you about this important topic in the days, weeks, and months ahead!