Thursday, April 28, 2011

A Plethora of Rankings in Lake Nogebow

I've lived in six states and the District of Columbia, and for some reason it took me years to notice the following pattern. Everywhere I lived there was, at some point, a debate in the public sphere that went something like this: "I don't know how we can expect people to want to move to ____ when everyone knows we rank [45th, 46th, ..., 51st] in Ranking X in quality of education." The thing is, I heard the same thing from people who lived in other states that were supposedly superior to my state. One curious thing was that it was never a single, comprehensive criterion; it varied from state to state: per-pupil spending on education, teacher salaries, spending on education as a percentage of personal income, class size, graduation rates, SAT scores, and so on.

So, a while ago I decided to try a little investigation of this phenomenon on the internet. I went to the search engines and typed in, in alphabetical order, "State A ranks 45th in education," "State A ranks 46th in education," and so on. It was eye-opening. Primarily in regional and local newspaper reports, editorials, and letters to the editor, I realized that the United States as a collective is a giant educational Lake Nogebow -- a reverse Lake Wobegon where almost everyone is below average in education according to some ranking. I have a small map on my wall. Send me a comment if you want me to tell you how bad your underachieving state really is. (I think I found that somewhere around 40 of the states ranked 41st through 51st in some category; unfortunately, I didn't keep track of all of the categories by state.)
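For anyone who wants to repeat the exercise, here is a minimal sketch of the query list I cycled through. The only inputs are the state names and the range of rankings; the exact phrasing of each query is my own guess at how these claims are usually worded, and in practice you would want to try a few variants.

```python
# A minimal sketch of the search phrases, assuming the claim is always
# worded "<State> ranks <Nth> in education" (other phrasings exist).

STATES = [
    "Alabama", "Alaska", "Arizona",  # ... the remaining states plus D.C.
]

def ordinal(n):
    """Turn 41 into '41st', 42 into '42nd', 44 into '44th', and so on."""
    suffix = {1: "st", 2: "nd", 3: "rd"}.get(n % 10, "th")
    if n % 100 in (11, 12, 13):
        suffix = "th"
    return f"{n}{suffix}"

# One quoted search phrase per state per rank, 41st through 51st.
queries = [
    f'"{state} ranks {ordinal(rank)} in education"'
    for state in STATES
    for rank in range(41, 52)
]

for q in queries:
    print(q)
```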

The political economy of this rankings game is obvious. Create enough rankings and almost every state will fail in some category X, much to the delight of whatever lobbying group wants more taxpayer funding for X. The job of the education economist is to make some sense of which of these rankings actually matter. The class-size debate is ongoing, and most of what I have seen does not support the idea that lowering class size is a cost-effective way of improving student performance, at least not in the higher grades. Some of the rankings are undoubtedly in conflict: higher drop-out rates could correlate with higher SAT scores, since the weakest students never stick around long enough to sit for the exam. It is well known that some of these rankings are completely perverse. Probably the best example is reporting average SAT scores without taking into account what proportion of high school students actually sit for the SAT. For example, in a state in which the in-state schools rely on the ACT, the SAT may be taken primarily by students who already know that they are qualified and/or have the resources to go to a private school out of state. That is an incredibly self-selected sample compared to a state that encourages every student to take the SAT.
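To see how much the self-selection can matter, here is a toy simulation. Every number in it is invented purely for illustration, not taken from any real state: both "states" draw students from exactly the same score distribution, but in one of them only the strongest students bother to take the SAT.

```python
import random

# Toy illustration of SAT self-selection; all numbers are made up.
# Both "states" draw students from the same underlying score distribution.
random.seed(0)

def simulate_state(n_students, participation_cutoff):
    """Reported average SAT when only students whose latent score
    exceeds the cutoff actually sit for the exam."""
    scores = [random.gauss(1000, 200) for _ in range(n_students)]
    takers = [s for s in scores if s > participation_cutoff]
    return sum(takers) / len(takers), len(takers) / n_students

# State A: in-state schools use the ACT, so only strong, out-of-state-bound
# students take the SAT (here, only those above roughly 1250).
avg_a, rate_a = simulate_state(100_000, participation_cutoff=1250)

# State B: nearly every student is encouraged to take the SAT.
avg_b, rate_b = simulate_state(100_000, participation_cutoff=400)

print(f"State A: average SAT {avg_a:.0f} with {rate_a:.0%} participation")
print(f"State B: average SAT {avg_b:.0f} with {rate_b:.0%} participation")
# Identical students, yet State A "ranks" far higher on average SAT.
```

The point of the sketch is only that a raw average of SAT scores rewards low participation, which is exactly backwards as a measure of how well a state educates its students.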

