Best Practices in Using Student Feedback for Teaching Evaluation

When used appropriately, student feedback has a place in summative evaluation. Whereas formative evaluation is focused on improvement, summative ratings can provide indirect evidence of effective teaching. Ratings should be collected from every course, but not necessarily every semester, so that evaluators can look for global trends in the data, such as steady performance, declines, or improvements, as well as certain courses that may need attention.

In using student feedback for personnel decisions, the following best practices are recommended. These apply within SU's policy that student feedback serves first to empower individual lecturers to improve their own teaching, and only thereafter, and with great circumspection, any other purpose.

Use Multiple Measures
Any evaluation of teaching effectiveness should incorporate multiple measures, such as peer ratings of course goals, design, and assessments; direct student outcome measures (e.g. creations, projects, papers); lecturer self-reflections; and so forth (Hoyt & Pallett, 1999; Halonen, Dunn, McCarthy, & Baker, 2012).

  • Peer review is a credible source for evaluating course goals and objectives, intellectual content, teaching methods and materials, the quality and appropriateness of evaluation practices, and evidence of student learning. The validity of peer review improves, however, when faculty undergo some training: ratings by untrained observers who take an unsystematic approach to the evaluation can be unreliable and less valid (Marsh, 2007; Marsh & Dunkin, 1997).
  • External recognition from outside experts can provide evidence that the faculty member’s teaching is exemplary. Nominations for a teaching award, invitations to write a chapter or book about teaching, and requests to speak about teaching practices or share course materials are possible indicators that a lecturer’s teaching is praiseworthy.
  • Participation in professional development activities demonstrates the desire to improve when evidenced by lecturer self-reflection about how the activity led to modifications in the course or approaches to teaching.
  • Exemplary contributions to the department are shown when teaching large sections, developing curriculum and aligning it with accreditation standards, and helping colleagues improve their teaching.
  • Embedded assessments (i.e. student completion of class assignments and activities aligned with learning outcomes) signal whether specific learning outcomes were accomplished. Examples include student writing samples, self-reflection on service-learning projects, comparisons between students’ subject matter knowledge before and after teaching and learning, and student portfolios of completed work.

Vary Feedback Schedules
How frequently, and for how many courses, student feedback is administered ought to depend on the purpose of the evaluation and the employment status of the lecturer (Hoyt & Pallett, 1999). For new lecturers and those on contract, it may make sense to collect student rating data for every course and section. If student feedback is considered when making employment recommendations, at least two sets of ratings should have been collected for the lecturer beforehand, with a fair opportunity after the first set for the lecturer to improve his/her teaching.

Use Written Comments Only Formatively
The chief value of written comments lies in the contributions they can make to improving teaching or the course (Braskamp et al., 1981). Comments can be interpreted more systematically when subjected to rigorous qualitative data analysis. It is advisable to separate thoughtful comments that represent the majority sentiment of the class from the attitudes of a vocal minority or of those with personal biases.

Protect Student Confidentiality
Just as faculty must have confidence in the system, students must be assured their responses will remain confidential. Inform students that data will be held in a secure environment, will be analyzed only at the class level, and will not be associated with any identifying information in the results presented to the lecturer.

Encourage Good Response Rates
Lecturers can signal that student feedback is valued by placing feedback-related objectives alongside specific course objectives in the syllabus, informing students about modifications made to the course based on previous student feedback, encouraging them to complete the ratings, distributing a copy of a sample report given to lecturers, and assuring confidentiality of responses. Institutions can communicate reminders through social media, university portals, learning management systems, department web sites, student publications, radio, flyers, and posters. Ongoing feedback should be championed as part of the institutional culture of enhancing student learning and ensuring program quality.

Employ Standardized Administration Procedures
Faculty must have confidence that ratings are collected in the same way across courses and lecturers. Written instructions to students should be standardized. Lecturers should leave the room, because ratings tend to be higher when the lecturer is present (Braskamp & Ory, 1994; Centra, 1993; Feldman, 1979; Marsh & Dunkin, 1992). At SU, ratings are collected by a neutral party and taken to a location where they remain unavailable to the lecturer until after grades have been submitted (Cashin, 1999). To ensure these procedures are monitored, students should be informed of the policies and given the means to report violations; any participant in the feedback system can report breaches of the policy and its rules to the Student Feedback Office.