MAP test manufacturer warns: MAP test should NOT be used to evaluate teachers. — So why is Seattle Public Schools doing just that?

Scrap the MAP

The manufacturer of the Measures of Academic Progress (MAP®) student assessment test, the Northwest Evaluation Association (NWEA), sent a memo to the Charleston County School Board in Oct. 2010, warning that it should NOT use the test to evaluate its teachers.

The Seattle School Board needs to get this memo too.

Center warned Charleston school board about using test score data

By Diette Courrégé
Wednesday, October 27, 2010

A research center warned Charleston school officials that it might be a mistake to gauge teachers’ effectiveness with a test that was designed to measure students’ progress.

Before the school board abandoned a proposal that would have evaluated teachers on their students’ performance, it received a two-page memo from the Kingsbury Center cautioning it against moving forward with the draft policy because it could “expose the district to potential legal liability.”

Here are some excerpts from the memo, by John Cronin of the Kingsbury Center, an affiliate of NWEA (bold emphasis mine):

Recently, it came to our attention that a database was published by the Charleston Post and Courier that reported on the academic growth of students taught by Charleston County School District teachers.
In this database, CCSD teachers were ranked on the basis of the percentages of their students whose growth matched or exceeded the growth of Virtual Comparison Groups of students that were created by the Kingsbury Center for the district. As a result of that publication, the CCSD school board is now considering implementation of a teacher accountability policy in which 60% of a teacher’s evaluation would be dependent on student growth and achievement data of this type. When we learned of this, we asked the district administration for permission to send our comments on the proposed policy to the board, and we were encouraged to do so.

The Kingsbury Center supports efforts to implement accountability for both schools and teachers. We believe that Charleston students deserve no less. We also believe that student achievement data can inform the teacher evaluation process. But nearly all experts in our field agree that test results should not supersede the principal’s role in teacher evaluation.

The memo goes on to list seven reasons why the school board should not use the MAP® test in this manner, and then offers the following recommendations:


We would recommend that the board refrain from adopting a policy in this area until the district is able to study and propose concrete strategies for addressing these issues which, unaddressed, may expose the district to potential legal liability:

a. To address the question of exactly what data would be used for evaluation of teachers who are not teaching in subjects in which value‐added measurements are currently used, particularly social studies, history, science, art, vocational education, and music.

b. To establish explicit criteria for performance on the tested measures that are validated as reasonable by using data from test publisher norms, the district or state’s past performance, or other legally defensible standards.

c. The policy should also require that statistical error be considered when applying these criteria.

d. We would recommend that student test data not receive more weight than the principal’s evaluation in importance.

e. The impact of measurement error, while relatively large when measuring individual student growth, decreases dramatically when that growth is aggregated to large groups. When the groups under consideration are several hundred, such as school level aggregations, measurement error has a much more negligible impact. Consequently, we can support using value‐added metrics as one factor among others in identifying under‐performing schools.
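Point (e) is standard sampling statistics: if each student’s measured growth carries independent measurement error, the error in a group’s average growth shrinks roughly with the square root of the group size. Here is a minimal simulation sketch of that effect (the 10-point error spread, the trial count, and the group sizes are arbitrary illustrations, not figures from the memo):

```python
import random
import statistics

def mean_measurement_error(group_size, trials=2000, noise_sd=10.0, seed=0):
    """Simulate measuring a group's average growth when each student's
    score carries independent random error (SD = noise_sd points).
    Returns the spread (SD) of the group-mean error across many trials."""
    rng = random.Random(seed)
    group_mean_errors = []
    for _ in range(trials):
        # True growth is taken as 0, so each draw is pure measurement error.
        errors = [rng.gauss(0, noise_sd) for _ in range(group_size)]
        group_mean_errors.append(statistics.mean(errors))
    return statistics.stdev(group_mean_errors)

# Error for a single student vs. a classroom vs. a school-sized group:
for n in (1, 25, 400):
    print(n, round(mean_measurement_error(n), 2))
```

For a single student the averaged error is as large as the individual error; for a class of 25 it drops to roughly a fifth of that, and for a school-sized group of 400 it is nearly negligible, which is exactly the distinction the memo draws between judging individual teachers and identifying under-performing schools.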

Subsequently, the Charleston, S.C., school board shelved the proposal to base 60 percent of its teachers’ evaluations on their student MAP® test scores.

How is this relevant to us in Seattle?

Charleston is the city where our recently sacked Superintendent Maria Goodloe-Johnson worked as superintendent before she came here. The MAP® test product was introduced into Charleston’s schools during her tenure there.

The MAP® test was also introduced to Seattle’s schools by Goodloe-Johnson beginning in the fall of 2009 (in a no-bid contract, while Goodloe-Johnson was on NWEA’s board of directors), and is now administered to most Seattle Public School children three times a year.

Here in Seattle, the MAP® test is currently being used to evaluate teachers.

The test is not designed for this purpose and should not be used in that manner. Even NWEA says so.

So why are the Seattle School Board and superintendent allowing the MAP® test to be misused this way?

This is one of the issues that Seattle’s new Interim Superintendent Susan Enfield and the board need to address and correct.

— Sue p.

An additional note:

In a meeting that we had last year with Jessica DeBarros and several other parents, Brad Bernatek, the person in charge of implementing the MAP test within our school district and a Broad resident, stated directly to us, after we told him our concerns about how the MAP test would be used, that the test was not designed to evaluate a teacher’s performance.

Did Brad not make this clear to his boss, the superintendent, at that time? Or had Dr. Goodloe-Johnson chosen to ignore that detail when negotiating with the teachers’ union last year?

Inquiring minds want to know.



  1. I believe that the testing companies are saying this because they are protecting themselves from future litigation. If/when teachers begin to fight dismissal because of test scores, the testing companies will argue: “We aren’t responsible for how our tests are used; we stated numerous times they should not be used for evaluations.”

  2. And as I wrote in the post “The Superintendent’s Send-off”:

    One speaker, a Seattle Public Schools teacher, brought up the fact that there is a $4.6M (!) proposal to be considered by the board to pay for IT upgrades so that the MAP can be given to all students. Another cost that slipped the minds of Brad Bernatek and the superintendent when they were buying the MAP test at an initial cost of a little over $1M. Now, between buying the rights to the test for the next several years, the implementation of the test, the initial cost of the test, and now paying to support the test by buying more computers and IT support, we will be well over $10M in cost…and for what? If we hired more teachers for the same amount of money, that would reduce class sizes and give more students an opportunity to work directly with their teachers. Will this happen instead of throwing good money after bad? The board members basically said that this was a new day. Will they finally vote in a way that makes sense? We’ll see.


  3. A teacher that I know is concerned because about 1/4 of her class is already above grade level, at the 97th percentile. These kids showed no growth because they were already above grade level, yet her name is on a list that the Ed directors get that says x # of kids in her class showed no growth, with no explanatory information.
    When will people understand that growth is not a straight line on a graph, but has dips, plateaus, and sudden movement? Any developmental/cognitive researcher could tell you that. Why aren’t they speaking out?
    Now that she is gone, Dr. Enfield can hopefully change this.
    I also heard through the grapevine that the MAP crap was rammed through the tech department without any review or training for them. The entire process was hijacked.

  4. Interesting piece from Charleston, thank you. Evaluating teachers is such a hot topic these days–I have two points that I’d like folks to consider:

    1. Student voice
    2. Parent/guardian contributions

    Something that helps my teaching TREMENDOUSLY is to ask my students what works for them, how they prefer to have information ‘delivered.’ I want to know what they understood about a lesson and what left them puzzled. When I hear this type of feedback I can tailor my instruction to meet their needs. This is a win-win situation: my students grasp the concept, my practice expands and is refined, causing both of us to grow and learn. I teach to learn, and I am rarely disappointed.

    Keeping this in mind, I would love for my students to be asked how I am doing as a teacher and for their responses to be ‘officially noted’ in some formal evaluation. They’re the ones to ask! I value their opinions above all others.

    Also, if there were some ‘space’ provided to have teachers talk with parents (and I’ll say ‘parents’ here, referring to all primary caregivers– I’ve worked with a lot of grandparents raising grandkids lately) BEFORE the traditionally scheduled parent-teacher conferences in late November it would be so very helpful.

    It would enrich the academic lives of students in Sept., Oct. and early Nov. so much… The more we know, the better we connect. The better we connect, the better we teach. Teaching is largely about relationships–between student, material, teacher, home, school, peers.

    Parents know their kids better than anyone else and can contribute to a teacher’s effectiveness significantly. The more I know about my students, their backgrounds, skills, needs, preferences, habits, etc., the better I am able to build upon their existing knowledge. This helps set the stage for individualized education, and it is beneficial to both student and instructor.

    By providing that time early in the school year for parents and teachers to meet (the first week of school = conferences?), administration can help a teacher adjust/refine her/his practice in a much more effective manner than a test score would enable. I would welcome and learn from caregivers’ comments and be open to having them contribute to my performance evaluations.

    Isn’t that what evaluations are for? Improving what needs help, acknowledging what is working well? So that educators can both seek assistance and share their skills?! It seems that evaluations have been dehumanized and are being used in a fear-based manner more and more frequently.

    Let’s change that.
