The best schools?

The Boston Globe published a peculiar article the other day on “The top 15 high schools in Massachusetts.” It wasn’t exactly an article, being a slide show of 16 images, each with a bit of associated text. And it was peculiar because of their criteria for “top.” I’m not so much objecting to the fact that they rated Weston as seventh rather than first, where we really belong. It’s more that I object to their arbitrary criteria for ranking schools. At least they do tell us what those criteria are and why they were selected:

1) Massachusetts Comprehensive Assessment System (MCAS) Mathematics Growth Score; 2) MCAS English Language Arts Growth Score; 3) School Climate (which includes graduation rates, dropout rates, and the intentions of attending a 2 or 4 year college); 4) College Readiness (which includes SAT Writing scores and the percentage of students scoring 3 or higher on Advanced Placement tests); 5) School Resources (as measured by expenditure per student); and 6) Diversity (calculation is explained below). The most recent available data is used for all calculations.

All variables were scaled (if necessary) and scored based on their deviations from the mean of the variable. This measure allows for some notion of distance between scores while removing some problems associated with the natural magnitude of the variables. Taking a weighted average of these deviations from the means (using category weights inputted by the user) allows for the calculation of a score, which is used to rank the schools.
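As best I can tell, “scored based on their deviations from the mean” just means ordinary standardization (z-scores) followed by a weighted average. Here is a rough sketch of what I take that procedure to be; the schools, numbers, and weights are all invented, not the Globe’s actual data:

```python
from statistics import mean, pstdev

# Hypothetical raw values for three schools on three of the criteria.
schools = {
    "School A": {"math_growth": 55, "ela_growth": 60, "per_pupil": 14000},
    "School B": {"math_growth": 48, "ela_growth": 52, "per_pupil": 17500},
    "School C": {"math_growth": 62, "ela_growth": 58, "per_pupil": 12800},
}
# Made-up category weights ("inputted by the user" in their description).
weights = {"math_growth": 0.4, "ela_growth": 0.4, "per_pupil": 0.2}

def z_scores(variable):
    """Scale one variable to deviations from its mean, in standard deviations."""
    values = [schools[s][variable] for s in schools]
    mu, sigma = mean(values), pstdev(values)
    return {s: (schools[s][variable] - mu) / sigma for s in schools}

scaled = {v: z_scores(v) for v in weights}

# Weighted average of the standardized scores determines the ranking.
totals = {s: sum(weights[v] * scaled[v][s] for v in weights) for s in schools}
for school, score in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{school}: {score:+.2f}")
```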

That sounds fair enough. (Read the rest of the linked explanation for the justifications for those criteria.) But there are at least three major flaws. First, by emphasizing growth in Math and English MCAS scores, they are inadvertently penalizing schools that have done well all along. If you’re at or near the top, there’s no room for growth! It’s not like IQ scores, which have no upper limit.

Second, they have a really rigid notion of diversity:

The 4 racial groups used to measure diversity are white, black, Latino/Hispanic, and other. A notion of “perfect diversity” is introduced where each group comprises 25% of the school’s population. The Euclidean distance in this space is then calculated for each school.
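If I’m reading that right, the calculation is just the distance from the point (0.25, 0.25, 0.25, 0.25) in four dimensions, with a smaller distance presumably counting as “more diverse.” A sketch with a made-up school:

```python
from math import sqrt

# Hypothetical racial composition (fractions of enrollment) for one school.
composition = {"white": 0.78, "black": 0.05, "latino": 0.07, "other": 0.10}
perfect = 0.25  # their notion of "perfect diversity": 25% in each of four groups

# Euclidean distance from the "perfectly diverse" point (0.25, 0.25, 0.25, 0.25).
distance = sqrt(sum((p - perfect) ** 2 for p in composition.values()))
print(f"distance from 'perfect diversity': {distance:.3f}")
```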

It’s not that I know the “right” way to measure diversity…but surely this isn’t it.

Finally, using “percentage of students scoring 3 or higher on Advanced Placement tests” is meaningless unless the denominators are comparable. Some schools can (and do) raise their average scores by limiting the number of students who are allowed to take AP courses. Also, many schools let students in AP courses decide whether or not to take the actual exam; those who are not doing well usually decide not to bother, which naturally raises the school’s success rate significantly. At Weston, students in AP courses are required to take the exam, even seniors who have mentally checked out because they’re safely admitted to college by that point. The denominator should not be the students who choose to take the tests; it should be all students in the high school. Then the percentage might be meaningful, at least somewhat. A toy comparison makes the arithmetic concrete (see the sketch below).
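Here is that toy comparison; every number is invented, and the school names are hypothetical:

```python
# Two imaginary schools, each with a class of 200 students.
schools = {
    # (students who sat for AP exams, students scoring 3 or higher)
    "Selective AP School": (40, 32),   # only lets its strongest students take AP
    "Open AP School":      (150, 90),  # everyone in an AP course sits for the exam
}
class_size = 200

for name, (took, passed) in schools.items():
    print(f"{name}: {passed/took:.0%} of test-takers scored 3+, "
          f"but only {passed/class_size:.0%} of all students did")
```

By the Globe’s measure the selective school looks far better (80% vs. 60%), even though a much smaller share of its students actually earned a 3 or higher (16% vs. 45%).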

Categories: Math, Teaching & Learning, Weston