Behind the State “Report Cards” on Schools
Who Reads School Report Cards?
While writing my latest blog post this morning, I had the good fortune of receiving an email from my good friend, who owns his own corporate recruiting and financial advising business, with a link to a story from The Columbus Business Journal ranking Central Ohio schools based on the latest school district report cards. He asked me what I thought, and at that moment this post was born.
Before writing, like any good researcher, I did some digging into my own files on the topic and a few websites. First, I looked at bizjournals.com to see whether cities besides Columbus posted stories on school report cards (for those who don't know, the Business Journals are localized, with content specific to each region). It turns out that a good many cities with a Business Journal bureau had a story on how good or bad their school districts were based on the report card: Houston, Pittsburgh, Tampa, New York, Milwaukee, and more. What this means is that a good many business people are learning about good and bad schools from school report cards.
Next, I looked at how many newspapers published stories about school rankings based on report cards. While there were too many to count, suffice it to say that every state appears to have numerous newspapers in multiple cities publishing such stories. What this means is that a good many people, regardless of vocation, are learning about good and bad schools from school report cards.
What’s on a School Report Card and Who Issues Them?
School report cards are issued by state departments of education in all 50 states. While each state's report card varies in its particulars (see the full list for each state here), the large majority of items center on standardized test scores. Some common elements include:
Achievement scores on standardized tests in writing, reading, and math
Measurement of the gap in test scores between white and non-white populations
ACT/SAT standardized test scores (often called “college readiness scores”)
Growth/academic progress/value added scores (how much better did kids score on tests this year compared to last year?)
While there are variations, these are the most common measures of a successful school according to state report cards. In this article, I would like to briefly unpack each to give a clearer idea of what these report cards are actually telling us. To do that, I am going to use Ohio's School Report Card, which has the common elements found in each state. These elements include: 1) Achievement, 2) Gap Closing, 3) K-3 Literacy, 4) Progress, 5) Graduation Rate, and 6) Prepared for Success.
Achievement
We’ve covered this in past posts (do go back and read if you want more detailed analysis), but the achievement category measures how well kids take standardized tests. The state breaks the numbers down into who scored advanced, proficient, basic, and so on, assigns points to each level, adds them up, and divides by a total possible score of 120 to produce a percentage. This is called the performance index (for you visual learners, an example from an Ohio district can be found here). The state also looks at the percentage of students who “passed” the test, which is counterintuitive given that a quantitative measure is being used for qualitative purposes, but more on that another time. That number is called the “indicators met percentage” and includes indicators of giftedness (all of our students are gifted, by the way).
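If you like seeing the arithmetic spelled out, here is a minimal sketch of the performance-index calculation described above. The weights per achievement level are illustrative assumptions for this example, not Ohio's official values; only the "divide points by a 120-point maximum" step comes from the description above.

```python
# Illustrative weights per achievement level (higher levels earn more points).
# These specific numbers are assumptions for the sketch, not official values.
WEIGHTS = {
    "advanced": 1.2,
    "accelerated": 1.1,
    "proficient": 1.0,
    "basic": 0.6,
    "limited": 0.3,
}

MAX_POINTS = 120  # total possible points, per the description above

def performance_index(percent_at_level):
    """Weighted points earned from the percent of students at each level,
    expressed as a percentage of the 120-point maximum."""
    points = sum(WEIGHTS[level] * pct for level, pct in percent_at_level.items())
    return points / MAX_POINTS * 100

# Hypothetical district: percentages of students at each level (sums to 100)
district = {"advanced": 20, "accelerated": 25, "proficient": 35,
            "basic": 15, "limited": 5}
print(round(performance_index(district), 1))  # prints 80.8
```

Notice what the weighting does: a district where every student scored "proficient" lands at exactly 100 points out of 120, so the headline percentage rewards pushing students into the top bands, not just passing.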
Gap Closing
This category would require an advanced degree in mathematics to begin to understand, and given that I used to teach English, I'll do my best. In short, it looks at the gap in standardized test scores and graduation rates between white and non-white populations. To quantify the achievement gap, an incredibly confusing equation was developed. Stay with me: for each demographic group, the state takes the improvement from last year's score to this year's score, divides it by the gap between last year's score and the state's goal for that group, and multiplies by 100. From that, a number is born that is intended to reveal how well schools are doing in closing the racial achievement gap. Confused? You're not alone.
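To make the steps above concrete, here is a sketch of that arithmetic: a group's improvement this year expressed as a percentage of the gap between last year's score and the state's goal. The specific numbers are hypothetical, and this is my reading of the calculation as described, not the state's official specification.

```python
def percent_of_gap_closed(last_year, this_year, state_goal):
    """Improvement divided by the remaining gap, times 100."""
    improvement = this_year - last_year   # how much the group's score rose
    gap = state_goal - last_year          # how far the group was from the goal
    return improvement / gap * 100

# Hypothetical group: scored 60% last year, 66% this year, against a 75% goal.
# The group closed 6 points of a 15-point gap.
print(round(percent_of_gap_closed(60, 66, 75), 1))  # prints 40.0
```

Even in this tidy form, note the quirk: a group that starts close to the goal can post a huge "percent closed" from a tiny score bump, while a group far from the goal can improve substantially and still look like it is failing.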
K-3 Literacy
With my head still spinning from gap closing, I won't go into as much detail on K-3 literacy. Simply put, this indicator looks at how well 5- to 9-year-olds take standardized reading tests. A number is calculated from how much better they scored at each level and, voilà, we are all supposed to see how well reading is being taught. I won't go into the limitations of standardized reading tests, their biases, the reality of excessive test anxiety in young children, the inappropriateness of extensive testing time for little ones, or the absurdity of expecting everyone to develop the same way at the same time. I will point out, however, that children who do not meet proficiency on this indicator are held back in school, which flies in the face of 50 years of research on retention's negative impact on a child's psyche, social and emotional development, self-esteem, and happiness. But I digress.
Progress
In short, this indicator tells us how well students, as a whole and in subgroups, did on standardized tests this year versus last year. Again, we are relying on standardized test measures in reading, writing, and math to tell us how well we are teaching our kids. I'll leave it at that.
Graduation Rate
This one is as simple as it sounds and, believe it or not, does not rely on a standardized test. Amazing! It looks at how many kids graduate with a diploma in four or five years. It's hard to argue this isn't important.
Prepared for Success
This indicator is supposed to reveal how well kids are prepared to go to college or enter the workforce. Unfortunately, we are back to relying mostly on standardized tests, plus a convoluted equation I won't go into, to produce a number meant to quantify who is ready for college or work. The indicators include ACT/SAT scores, ACT/SAT “participation” (who took the test), AP and IB standardized test scores, the number of honors diplomas, and “industry-recognized credentials.” It is worth noting that researchers long ago showed ACT/SAT scores to be weak predictors of future college success, but again, I digress.
So What Do School Report Cards Tell Us?
If it hasn’t become obvious already, taken as a whole, school report cards and most school ranking systems tell us which schools and districts are good or bad based on how well kids take tests. Put more succinctly, they tell us who is good at test taking. If that is how we are going to judge our schools, we are seriously missing the mark. Test-taking ability on low-level processing and basic skills is hardly what comprises a great education. We need to look at who is being inspired, who is engaged, who is learning to ask good questions, who is making meaning, who is thinking critically, who is being creative, who loves learning, who wants to go to school, who is learning kindness and empathy, who is learning how to collaborate, who is learning how to be a member of a pluralistic, democratic society, who is being empowered, and so much more. A school report card tells us literally none of that.

There are many ways we can gauge these ideas without giving in to our modern obsession with quantifying everything, and I will go into that in a later post (“Visions for the Future” – has a ring to it?). For the time being, it is my sincere wish that the next time you come across a school report card, or someone you know talks about the “school rankings” (which are based on report cards or indicators like those on report cards – yes, I’m looking at you, U.S. News and World Report), you pause and realize that these report cards are not indicators of good or bad schools. They are indicators of good or bad test takers. We can and will do better, but it starts with this awareness.