Preparing Preservice Music Teachers for Teacher Evaluation During Student Teaching

 

Joseph Michael Abramo, Neag School of Education, University of Connecticut

joseph.abramo@uconn.edu

Cara Faith Bernard, Neag School of Education, University of Connecticut

cara.bernard@uconn.edu

 

With the implementation of No Child Left Behind (NCLB), Race to the Top (RttT), and the Common Core State Standards (CCSS), assessment of student outcomes and evaluation of teachers' behaviors to determine teacher quality have become increasingly common in public schools (Chambliss, Alexander, & Price, 2012; Kohn, 2011). As of September 2013, 35 states and the District of Columbia require that student achievement data be a significant, or the most significant, factor in teacher evaluations (National Council on Teacher Quality, 2013). As a result, while evaluation systems vary, they require that teachers track and document student growth through data and that teachers be observed by administrators.

 

In light of this climate of public education shaped by RttT, this presentation describes a flagship public university's efforts to prepare preservice music education students for teacher evaluation systems during the student teaching experience through a) observation protocols that emulate those found in current evaluation systems, and b) data collection that mimics "Student Learning Objectives" (SLOs). These small changes to the curriculum better prepare students for in-service teacher evaluation and help them leverage the system to increase student learning.

 

For student teaching observation protocols, faculty use a professional practices rubric that mimics, and prepares preservice teachers for, the observation and dialogue they must engage in with administrators under current teacher evaluation systems. A modified version of the Danielson Framework for Teaching (2013), which is commonly used in in-service evaluation systems, facilitates dialogue on pedagogy, reflection, and dispositions. This dialogue is similar to the post-observation conferences that in-service teachers must complete with administrators.

For student data, student teachers are required to devise a unit, administer pre- and post-assessments, modify lesson plans based on those assessments, and document student growth through the collection of artifacts. Students must then display student growth through a visual or aural demonstration that is intelligible to non-music administrators and the general public. This process prepares students for the SLOs commonly employed in teacher evaluation systems.

 

Through these processes, students questioned the role of assessment, observation, and evaluation in music education. For example, in ensembles, students began to question current music education practice by examining the difficulty of tracking individual progress; creating assessments that were not time consuming; creating authentic assessments that captured the "messiness" of student growth; and documenting student growth in accessible ways. Conversely, they questioned current concepts of teacher evaluation by noting that mimetic conceptions of learning (Jackson, 1986), in which acquisition of content is the measure of growth, do not fit the multifaceted nature of growth in music education. As a result, through "approximations of practice" (Grossman et al., 2009), which adapt the evaluation practices of professional teaching through scaffolding from teacher education faculty and cooperating teachers, preservice teachers negotiated different agendas and foci and created compromises that worked for their unique contexts, forming "personal practical knowledge" (Connelly, Clandinin, & He, 1997) within the current climate of "teacher accountability."

 

 

References

Chambliss, M. J., Alexander, P. A., & Price, J. (2012). Epistemological threads in the fabric of pedagogical research. Teachers College Record, 114, 1-35.

Connelly, F. M., Clandinin, D. J., & He, M. F. (1997). Teachers' personal practical knowledge on the professional knowledge landscape. Teaching and Teacher Education, 13(7), 665-674.

Danielson, C. (2013). The Framework for Teaching Evaluation Instrument, 2013 Edition: The newest rubric enhancing the links to the Common Core State Standards, with clarity of language for ease of use and scoring. The Danielson Group.

Grossman, P., Compton, C., Igra, D., Ronfeldt, M., Shahan, E., & Williamson, P. (2009). Teaching practice: A cross-professional perspective. Teachers College Record, 111(9), 2055-2100.

Jackson, P. W. (1986). The practice of teaching. New York, NY: Teachers College Press.

Kohn, A. (2011). Feel-bad education and other contrarian essays on children and schooling. Boston, MA: Beacon Press.

National Council on Teacher Quality. (2013). State of the states 2013: Connect the dots: Using evaluations of teacher effectiveness to inform policy and practice. Retrieved May 15, 2014, from http://www.nctq.org/dmsStage/State_of_the_States_2013_Using_Teacher_Evaluation_NCTQ_Report.