Now that another season is in the books we can get back to the interminable "my conference is better than your conference" arguments, pointless as they may be. There is a nearly infinite supply of comparative metrics typically used in such arguments, most of which are subject to (mis)interpretation.
The problem is that conferences don't play the games. Any metric that aggregates team ratings into conference ratings is going to be more or less affected by extremes on either end of the "good-bad" spectrum as measured against any particular criterion. In this column we will apply a rating to all of the Football Bowl Subdivision teams and then use a simple counting stat to find out where the teams in each conference fall.
The ISOV's WRRV value is very slightly better than that of the Massey-BCS rating. At this writing it is the 10th best of 84 published computer ratings. I'll describe the WRRV in more detail next time, but basically it is a function of the margin of victory in an "upset," the difference between the ranks involved in a violation, and the rank of the losing team.
Because of the way it is calculated, the ISOV is approximately normally distributed: 65 percent of its values fall within one standard deviation of the mean.
|Grade distribution by ISOV|
Here we've just used "grades" to indicate distance from the mean (μ) in standard deviations (σ). If a team's rating is greater than or equal to μ − ½σ but less than μ + ½σ we assign it a "C"; the remaining grades are assigned in ½σ increments above or below that band.
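The half-sigma grading rule is easy to mechanize. Here is a minimal sketch in Python; the specific letter labels and the clipping at "F" and "A+" are my assumptions, and only the "C" band and the half-sigma increments come from the rule above.

```python
import math
from statistics import mean, stdev

# Letter labels in half-sigma steps around "C"; the labels themselves are
# my assumption. Per the rule: "C" spans [mu - 0.5*sigma, mu + 0.5*sigma),
# and each further grade sits another half sigma out, clipped at the ends.
LABELS = ["F", "D-", "D", "D+", "C-", "C", "C+", "B-", "B", "B+", "A-", "A", "A+"]
CENTER = LABELS.index("C")

def grade_from_z(z):
    """Map a standardized rating z = (rating - mu) / sigma to a letter grade."""
    if -0.5 <= z < 0.5:
        offset = 0
    elif z >= 0.5:
        offset = math.floor((z - 0.5) / 0.5) + 1
    else:
        offset = -(math.floor((-z - 0.5) / 0.5) + 1)
    return LABELS[max(0, min(len(LABELS) - 1, CENTER + offset))]

def assign_grades(ratings):
    """Grade every team's rating against the field's mean and std deviation."""
    mu, sigma = mean(ratings), stdev(ratings)
    return [grade_from_z((r - mu) / sigma) for r in ratings]
```

For example, `assign_grades([10, 20, 30])` standardizes to z = −1, 0, 1 and returns the grades "D+", "C", "B-".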
We could just reproduce the chart above for each conference to count the number of conference teams that received each grade, but since conferences have different numbers of teams it is more informative to tabulate the percentage of teams in each conference that received each grade.
A fascinating aspect of predictive ratings is that even when they're right they can be wrong. For instance, the ISOV correctly predicted Mississippi State's win over Auburn early in the year, but it now rates Auburn significantly better than Mississippi State, so that game counts as a "retrodictive violation" for the ISOV even though the prediction was correct. Likewise, ISOV ratings for teams in games it predicted incorrectly have since been adjusted to the point where those games are no longer retrodictive violations.
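The notion of a retrodictive violation can be made concrete: a played game is a violation whenever the final ratings rank the loser above the winner, regardless of what was predicted beforehand. A minimal sketch (the data layout and the rating numbers below are mine, invented for illustration, not the ISOV's internals):

```python
def retrodictive_violations(games, rating):
    """Count played games where the final rating ranks the loser above the winner.

    games  : list of (winner, loser) name pairs, one per played game
    rating : dict mapping team name -> final rating (higher is better)
    """
    return sum(1 for winner, loser in games if rating[loser] > rating[winner])

# Hypothetical final ratings, made up for illustration only.
ratings = {"Auburn": 85.0, "Mississippi State": 78.0}
games = [("Mississippi State", "Auburn")]  # MSU won the actual game

# One violation: the game was predicted correctly at the time, yet the
# final ratings now rank the loser (Auburn) above the winner.
print(retrodictive_violations(games, ratings))  # -> 1
```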
I'll address the problem of measuring rankings later.
To account for the "fuzziness" inherent to predictive systems we perform the same analysis using a retrodictive system, and the best one (by the WRRV criterion) for which we have rating values is the Massey-BCS. It also distributes rating values in an approximately normal manner, with 64.2 percent of observed values falling within one standard deviation from the mean:
|Grade distribution by Massey-BCS|
Again we count the percentage of teams in each conference that receive a given grade:
In general I don't think there's a valid way to sum up team performance into conference "performance", but if you must order conferences by which had the best teams this year, this is as good a measure as any. You could give the SEC and Pac 10 an "A", the Big 12, ACC, Big East and Big Ten a "B", the Mountain West and WAC a "C" and every other conference an "incomplete" if you're as easy a grader as I am.
Dirk Chatelain of the Omaha World-Herald wrote "College Football: Big 12 scoring was tops in nation," pointing out that

Big 12 teams averaged 33.4 points per game in 2007, a mark believed to be the highest conference average in Division I-A history, according to an NCAA official.

Five of the nation's top nine offenses came from the Big 12 — Nebraska ranked ninth. Oklahoma was 19th in the country, but couldn't even crack the Big 12's top half.

about which one smart-aleck SEC fan (is that redundant?) remarked on a message board:

That makes perfect sense. When your opponents don't play defense, you're going to score more.

So the question is, "which is it: great offenses or mediocre defenses?"
When the question came up in 2005 (regarding Western offenses vs. Southeastern ones, the ACC more than the SEC that year) I came up with a way to normalize scoring stats. Basically you use the available game data to define a model that estimates what each team's average score might have been had it played every other team in Division I-A.
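The column doesn't spell the model out, but one common way to do this sort of schedule adjustment is an additive offense/defense model fit by least squares: each game score is modeled as the scoring team's offense value minus the opponent's defense value, and the fitted parameters then let you project every team against every other team. The sketch below is that generic approach, with made-up team names and scores; it is not necessarily the method actually used here.

```python
# A generic schedule-adjustment sketch (my own construction, not necessarily
# the column's actual model): each game score is modeled as
# offense(team) - defense(opponent), fit by alternating least squares.

def fit_offense_defense(games, n_iter=200):
    """games: list of (team, opponent, points_scored) tuples."""
    teams = {t for g in games for t in (g[0], g[1])}
    off = {t: 0.0 for t in teams}
    dfn = {t: 0.0 for t in teams}
    for _ in range(n_iter):
        # Holding defenses fixed, the least-squares offense for a team is
        # the mean of (points scored + opponent's defense value).
        for t in teams:
            rows = [(opp, pts) for tm, opp, pts in games if tm == t]
            off[t] = sum(pts + dfn[opp] for opp, pts in rows) / len(rows)
        # Symmetrically, refit each defense holding offenses fixed.
        for t in teams:
            rows = [(tm, pts) for tm, opp, pts in games if opp == t]
            dfn[t] = sum(off[tm] - pts for tm, pts in rows) / len(rows)
    return off, dfn

def normalized_offense(off, dfn):
    """Project what each team would average against every *other* team."""
    return {
        t: off[t] - sum(d for u, d in dfn.items() if u != t) / (len(dfn) - 1)
        for t in off
    }
```

The projected averages are invariant to the usual additive-shift ambiguity in offense/defense models (adding a constant to every offense and every defense changes nothing), which is what makes them comparable across schedules.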
The answer this year turns out to be the same as then - it's some of both. Even if every team had played the same defenses, the Big 12 in aggregate had better offenses than any other conference.
The "weighted median" combines how much better or worse the best (and worst) teams in the conference are than the middle of the whole field. A large negative number means the conference is "top-heavy"; a large positive number means it is "bottom-heavy."
There are still six Big 12 teams in the top 20, but there are also six SEC teams, and while the Big 12 overall sorts into 1st, it also has two teams with offenses worse than the worst offenses in the Pac 10 and SEC. A more enlightening view of the same data emphasizes that it is team performance that matters more. In this table we list the teams by rank and conference, with conference champions indicated by a †:
|Rank|Record|Team|
|80|5-7|Middle Tenn St|
|81|4-8|San Diego St|
|93|5-7|North Carolina St|
|108|4-9|New Mexico St|
|114|5-7|San Jose St|
Among the BCS autobid conferences, only the Big East and Big Ten were won by the teams with the best scoring offenses. Even those might have been coincidences, as we see from the scoring defense data.
The SEC is caricatured (not characterized) by the cliché-spewing talking heads as being the toughest conference from a defensive standpoint. That is only true to the extent that the worst defense in the SEC was still better than over half the defenses in the Bowl Subdivision. The best defenses weren't in the SEC; the best offenses were!
In fact, five BCS autobid conferences plus the Mountain West had exactly three teams in the top 20. The ACC only had two, which somewhat belies the notion that the offenses looked so bad because of the good defenses they faced.
Again, the list by team is telling. In the two BCS conferences won by the best scoring offense, the champions were also the best scoring defense. It was only in the Big 12 where neither the best offense nor the best defense won, and in that case the champion was second in both categories.
And it must be conceded that the smart-aleck had a point: in scoring defense adjusted for strength of schedule, not only did all other BCS conferences rank higher than the Big 12, so did the Mountain West. (Aside: It is not a coincidence that the MW had the best bowl season of any multi-bowl conference, at 4-1. Their games were mostly defensive mismatches - their teams played D, their opponents mostly didn't.)
|Rank|Record|Team|
|68|5-7|North Carolina St|
|74|5-7|Middle Tenn St|
|84|4-8|San Diego St|
|96|5-7|San Jose St|
|118|4-9|New Mexico St|
In all of the above analysis we've been aggregating team performance by conference. In the left sidebar we have the first comparison by conference metrics, namely an ordering defined by the rank distributions of each conference's teams in scoring offense and scoring defense.
I didn't apply a formal tiebreaker, but ordered tied conferences by the normalized scoring-defense metric, since from what we saw above it appears that defense played the larger role in determining conference champions.
We're still not going to say "the SEC is better than the Big 12", or "the Pac 10 is better than the ACC." But whether by plain dumb luck or clever scheduling or, maybe, having better teams, we can safely say that the SEC and Pac 10 had better years than any of the other collections of teams called "conferences."
And there's lots of team-oriented data in these results that can be used to explain or analyze the crazy 2007 season.
But that can wait for the offseason, since this is too long already.