Danielson rubric and many others were using some local adaptation of that model. Danielson's model also has traction with music education groups at the national level; the National Association for Music Education (NAfME) recently released workbooks for music teacher evaluation modeled on Danielson (NAfME, 2013a; NAfME, 2013b). If the state adopts a different observation tool, it could require new training for the majority of the state's music teachers and their evaluators.


One stipulation that could be especially promising for music teachers is the MCEE's recommendation to include qualified peers as observers. The council suggests that "When possible, at least one of a teacher's multiple observations should be completed by someone who has expertise in the subject matter/grade level or the specialized responsibility of the teacher" (MCEE, 2013a, p. 11). Because past research has found that music educators support evaluations by people with music expertise (Maranzano, 2002; Shaw, 2013; Taebel, 1980), this recommendation is an especially encouraging step.


Conclusions


While it is important to scrutinize the MCEE's final report, one must remember that these are only recommendations. How and when these recommendations may move into legislation is unknown, though the frantic pace of change over the last few years suggests final legislation will come quickly. So far, the recommendations have received the support of the Michigan Board of Education, which drafted a resolution urging full adoption. Teachers unions, including the Michigan Education Association and the Michigan chapter of the American Federation of Teachers, have expressed general support for the MCEE's plan while also noting some concerns. As reported elsewhere in this issue, a new Michigan "think tank," the Partnership for Music Education Policy Development (PMEPD), has also issued a mixed review of the recommendations.


Even if adopted, questions remain about certain aspects of the new system's implementation. The system is intended to focus on teacher growth rather than punishment, and the MCEE report repeatedly stresses this point. First, however, it is unclear how professional growth would be mandated and measured in the new system. Experts stress that changes to evaluation should be tied to changes in professional development for teachers (Darling-Hammond, 2013), but the MCEE report does not discuss what mechanisms would guarantee meaningful growth. Second, significant funding would be required to train administrators to conduct evaluations and to design databases that collect and report evaluation results. The MCEE report gives some guiding principles but does not specify how this work would be approached or the source of funding.


Until recently, music educators had been able to remain just outside the spotlight of NCLB-era accountability reforms. The lack of standardized tests has kept music from figuring prominently into school-level accountability data such as yearly progress results. However, as accountability has shifted to educators in the form of revised teacher evaluation, music educators have found themselves squarely under the microscope. Many questions remain about how administrators will ask music teachers to demonstrate growth. Will music teachers be given the opportunity to set student learning objectives? Will their growth ratings be tied to math and reading test scores? Are there options for music assessments that are rigorous, valid, and reliable?


There is currently a project underway to develop voluntary, model arts assessments for the state of Michigan. The Michigan Arts Education Instruction and Assessment program (MAEIA) is a joint venture of the Michigan Department of Education, the Michigan Assessment Consortium, and the Data Recognition Corporation (MAEIA, 2012). In 2013, teams from the four arts areas (music, visual arts, dance, and theatre) drafted assessment specifications: guidelines for how to design model assessments. In 2014, teams of developers will draft and field-test sample assessment items. At the national level, the forthcoming National Core Arts Standards will include embedded cornerstone assessments. It is unclear at this time whether the state will formally adopt any of these assessments for Michigan arts teachers. While educators may prefer the flexibility of student learning objectives, such assessments would certainly be significant improvements over tying music teacher evaluations to MEAP scores.


Music educators should closely examine the MCEE report and follow the actions of the Michigan legislature in adopting its recommendations. If proposed legislation includes provisions that are unfair or invalid, music educators should speak up, both individually and through their professional organizations.


References


Darling-Hammond, L. (2013). Getting teacher evaluation right: What really matters for effectiveness and improvement. New York: Teachers College Press.

Darling-Hammond, L., Amrein-Beardsley, A., Haertel, E., & Rothstein, J. (2012). Evaluating teacher evaluation. Phi Delta Kappan, 93(6), 8-15.


Keesler, V. A., & Howe, C. (2012). Understanding educator evaluations in Michigan: Results from year 1 of implementation. Report of the Michigan Department of Education Bureau of Assessment and Accountability.



