As we plan for this school year, most of us administrators are revising our observation schedules and developing plans to help teachers set growth goals for the academic year. To complete all evaluative requirements in a timely manner—and to help teachers end the year with effective scores (or higher)—we should provide relevant information to our staff members now.
Most teachers, even when provided with the rubric for their state evaluations, do not understand how the system works. Even when they score effective or higher, they are often frustrated rather than pleased, because they don't know why they scored well or how to keep scoring well in the future.
Under value-added systems, sophisticated formulas are used to make evaluations equitable regardless of student populations. According to Dyarski (2014), "Kids learn from others beside their teacher, including: other kids and teachers, parents and siblings, friends, churches, the local YMCA, and social media, to name a few. And some kids live in less affluent households, or have a problem or disability that affects learning." Value-added systems are designed to normalize these variables (outside factors) and provide equitable scoring for what teachers add to the system (inside factors). Unfortunately, most teachers are never told that their system works in this manner.
Most state licensing sites provide no information on how their evaluation system works, and when asked, most state employees do not know how the system works, either. David Gardner, a New Mexico principal, respects the modern teacher evaluation system and considers that component of teachers’ scoring quite reliable. He worries, however, about using value-added systems to measure teacher success through student test scores: “Value-added does not take [outside] factors out of students’ test scores. We receive NO training on how the formula works—no one in PED in NM can actually answer that question because they don’t know, either, and most of them are honest about it. So teachers have no idea how that 50% of their evaluation happens at all. It’s supposedly based on a PROJECTION of how much students should have grown, not on how much they actually grew from where they entered the room, and the data that comes back is not reflective of the growth of actual students at all—it’s just not, and it’s not fair” (2018).
When we can't change the system, which is often unfair, we must ensure that teachers have the information (and training) to be successful, regardless of poorly designed systems. To help our teachers, we should provide the rubric for their formal observations and require them to list the specific practices we should see under each area of the rubric if they want to receive an effective or advanced rating. If they must break the rubric down themselves, they will understand it better than if we lecture to them about what it means. Second, we need to provide checklists of research-based practices proven to raise students' scores, and we need to require teachers to implement those practices (Wilson, 2015). A good checklist to begin with is Goodwin and Hubbell's The 12 Touchstones of Good Teaching (2013). Staff members could easily implement this checklist, revise it for their unique needs, and raise students' scores in the process. Finally, we need to help teachers understand that their job is to use research-based practices, not to worry about which system their state has adopted; focusing on their practice will ultimately give them the improved scores they need.
If we can help our teachers focus on research-based instruction, better lesson planning, and using their data to modify plans for students' needs, then we will help them raise not only their students' test scores but also their own scores on evaluations. In the process, we'll also teach them how evaluation systems work—and how to replicate their successes.