Addendum: Tools for Assessing Student Growth and Achievement
1. PA 102 (2011) calls for the use of assessment tools that measure student growth through pre- and post-tests (p. 6). Possible choices include:
a. Teacher-designed tests, rating scales, and rubrics.
b. Psychometric tests.
c. Assessment functions accompanying software such as SmartMusic (2011) or Music Ace (Harmonic Vision, 2010).
d. Assessment tests or units included as part of music textbook series.
2. PA 102 (2011) defines one aspect of teacher pedagogical knowledge as the ability to check and build higher-level understanding (p. 2). Assessments of higher-order thinking skills in music may include:
a. Assessment of students’ musical compositions or arrangements written within specific guidelines and graded with a criterion-based rubric.
b. Assessment of student portfolios, in which student work is evaluated using specific, objective criteria. Portfolios could include samples of student compositions, worksheets, written essays, self-evaluations, and performances in written, audio, or multimedia format.
For additional ideas, consult the publication Performance Standards for Music: Assessment Strategies for Music (MENC, 1994). This free online resource is available on the National Association for Music Education website (see References for link) and provides strategies for assessing students in each of the nine national content standards.
Festival Ratings and Music Teacher Evaluation
The National Association for Music Education (NAfME) states the following regarding the use of festival ratings in teacher evaluation:

Successful music teacher evaluation must, where the most easily observable outcomes of student learning in music are customarily measured in a collective manner (e.g., adjudicated ratings of large ensemble performances), limit the use of these data to valid and reliable measures and should form only part of a teacher’s evaluation (NAfME, 2011).
The Michigan SMTE agrees with this statement, adding that:
1. All organizations that sponsor rated festivals should establish and periodically calculate statistical reliability (consistency) for ratings generated at these events, and provide data indicating the average rating and frequency counts for each final rating (I-V) issued within a particular classification, and for all participants combined. These data will serve as norms used to compare individual results with those of similar groups. This effort may require the assistance of college faculty or others knowledgeable in statistics and education research. (A brief illustrative sketch of such a reliability and norm calculation appears after this list.)
2. Festival ratings are valid to the extent that they measure an ensemble’s performance of two or three selections, and sight-reading ability, at one point in time. Furthermore, they assess only one of the five Michigan Music Standards and related benchmarks (Michigan Department of Education, 2011). A complete assessment of student growth requires multiple and varied measures of musicianship and musical understanding.
3. Teachers should never be required to attend a particular festival or to use the results of these events as value-added data in their annual evaluations. Music educators who choose to use these data as part of their evaluation should do so voluntarily and as one of multiple measures of student growth.
4. Teachers, administrators, and other stakeholders in music education should be aware of the numerous factors that can influence performance adjudication. According to the extant research, these might include (a) conductor and performer appearance, (b) performance order, (c) repertoire selection, (d) adjudicator experience and background,
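As a rough illustration of the reliability and norm calculations described in item 1 above, the following Python sketch computes one common consistency estimate (Cronbach's alpha) across a panel of adjudicators, along with the average and frequency counts of final ratings. The data, the choice of statistic, and the rounded-mean method of deriving a final rating are all assumptions made for demonstration, not a prescription for festival sponsors.

    from collections import Counter
    from statistics import mean, variance

    def cronbach_alpha(scores):
        """Consistency (reliability) of an adjudication panel across ensembles.

        scores: one row per ensemble, one column per adjudicator, with
        festival ratings coded numerically (I-V as 1-5).
        """
        k = len(scores[0])  # number of adjudicators on the panel
        judge_vars = [variance([row[j] for row in scores]) for j in range(k)]
        total_var = variance([sum(row) for row in scores])
        return (k / (k - 1)) * (1 - sum(judge_vars) / total_var)

    # Hypothetical panel: three adjudicators rating five ensembles in one class.
    scores = [
        [1, 1, 2],
        [2, 2, 2],
        [1, 2, 1],
        [3, 3, 2],
        [2, 1, 2],
    ]

    # Final rating per ensemble, here taken as the rounded panel mean.
    finals = [round(mean(row)) for row in scores]

    print(f"Reliability (Cronbach's alpha): {cronbach_alpha(scores):.2f}")
    print(f"Average final rating: {mean(finals):.2f}")
    print(f"Frequency counts by final rating: {dict(sorted(Counter(finals).items()))}")

Sponsoring organizations might instead report inter-rater correlations or another agreement statistic; the point is simply that, once ratings are archived consistently, the reliability estimates and frequency norms called for in item 1 are straightforward to compute.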