& Rothstein, 2012; Koretz, 2008; Lavigne & Good, 2014). Depending on their specific use, VAMs also may engender competition and stress among teachers instead of collaboration (Darling-Hammond, 2013; Lavigne, 2014). These problems notwithstanding, the MCEE suggests that VAMs offer more reliable measures than local districts are able to design on their own.


It is difficult to tell how VAMs would apply to music educators under the new recommendations. For example, the report notes that, “In subject areas for which there are no VAM data available, educators should be evaluated based on alternate measures of student growth and achievement” (p. 20). The MCEE also encourages the state to “develop or select assessments aligned to state-adopted content standards in high-volume non-core content areas where state-adopted content standards exist (e.g., arts, health and physical education, career and technical education, and many high school electives)” (p. 1). This seems to suggest that music teachers would be exempt from VAMs. However, other language in the recommendations is less clear on this point. The MCEE suggests teachers could receive value-added scores for their students’ work in classes outside their purview, as well as a building-level VAM score (no more than 10% of their overall student growth score) based on students they never teach. Because the language in these latter recommendations does not expressly exclude music teachers, final legislative adoption of these recommendations must be watched closely.
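To make the 10% ceiling concrete, consider one hypothetical weighting (the MCEE report specifies only the cap; the 90/10 split below is illustrative, not a figure from the report):

\[
\text{overall student growth score} = 0.90 \times \text{teacher-specific growth measures} + 0.10 \times \text{building-level VAM}
\]

Even under this split, a portion of a music teacher’s growth rating would rest on the tested performance of students he or she never teaches.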


There are other concerns for music educators who teach part-time in other subject areas. These professionals may receive a value-added score for those classes and not for music, leaving their work in music underrepresented in their evaluations. If VAMs do become a reality, the uneven student contact time that music teachers experience may also prove problematic.


Recommendation 3: Use a three-category rating system


Though PA 102 (2011) required school districts in Michigan to rate “teachers as highly effective, effective, minimally effective, or ineffective” (p. 2), the MCEE has recommended that the state change to a three-category system of “professional,” “provisional,” and “ineffective.” MCEE Chair Deborah Ball explained that two factors prompted this choice. First, the MCEE felt the measurement error in a four-category rating system was too great. In other words, four categories made misclassifying a teacher (especially in the middle categories) far too likely. Second, the change was intended to create a system based on improvement for all teachers. Ball noted that the previous “highly effective” category suggested that professional growth was unnecessary for those teachers (MCEE, 2013b).




Besides simply moving from four to three categories, it is important to understand how these new ratings would function. In terms of certification, teachers with provisional certificates would be able to obtain professional certification only if they receive “professional” ratings for three years in a row (or three non-successive “professional” ratings and a principal’s recommendation). In terms of job security, a teacher who receives an “ineffective” rating two years in a row, or a “provisional” rating for three consecutive years, “should be counseled out of his or her current role” (MCEE, 2013a, p. 3).


Use of three categories is rare across the country. Only five states currently use three categories, and an additional three states require “at least 3” categories (NCTQ, 2013). All other states currently use four or five categories. While the MCEE’s goal in reducing the number of categories is to lessen the chances of misclassification, it is also important to note that other factors are likely to be involved in classifying teachers into categories. For example, a Michigan Department of Education report by Keesler and Howe (2012) noted that in 2011-2012, 98% of Michigan teachers were rated as “highly effective” or “effective,” and less than 1% were rated as “ineffective.” They suggest probable causes including evaluators’ lack of experience in assigning ratings and principals’ concerns over litigation for assigning “ineffective” ratings (pp. 8-9).


Recommendation 4: Observation framework and observers


Four observation tools/frameworks were investigated and piloted by the MCEE: Charlotte Danielson’s Framework for Teaching, the Marzano Teacher Evaluation Model, The Thoughtful Classroom, and 5 Dimensions of Teaching and Learning. The council declined to recommend a single tool to the state, noting that among the thirteen local education agencies (LEAs) that piloted the various tools, “little significant difference” was found (MCEE, 2013a, p. 10). The council has recommended that one tool be chosen by the state through a competitive bidding process. The state would then provide base funding for support, training, and data analysis for the chosen observation tool; LEAs that chose a different observation tool would have to cover such costs themselves.


What does this mean for music educators? In a survey of 330 band and orchestra teachers (Shaw, 2013), I found that many music educators felt any generic observation tool was somewhat inappropriate for addressing the particularities of music instruction. Music educators, therefore, may not fully support any of the four options presented to the state for adoption. Keesler and Howe (2012) noted that in 2011-2012, 50% of districts were using the

