One of the interesting observations about the 2005 season is that games involving "western" teams (usually defined as the Mountain West, Pac 10, and WAC teams) tend to have high scores compared to games involving SEC (or other non-western) teams. Conference fans, being more partisan than political party members, each want to put a different spin on the results.
Pac 10 fans snort indignantly "we do too play defense, it's just that our offenses are so good they spoil the defensive stats. The only reason your defenses look good is that they don't play any decent offenses." To which SEC fans reply "oh, yeah? Our offenses wouldn't look so bad if they played the same defenses yours do!" and before you know it we're into another "my Daddy is smarter than your Mother is better looking" argument.
Of course, both arguments are correct to one degree or another, and facts should never get in the way of a good football argument, but there are ways to analyze the data using an objective measurement. The basic idea is to make an adjustment to the statistics based upon strength of schedule. The tricky part is figuring out which "SOS" is most appropriate.
But to use "SOS" to normalize a statistic, you need a value, not an ordinal ranking, and it must be related to the statistic that is being normalized. While it is certainly true that the ability to score points and prevent points from being scored contributes to winning percentage, you can't derive scoring offense or scoring defense directly from winning percentage.
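To make that point concrete, here is a minimal sketch (in Python, with entirely made-up numbers and a hypothetical `opp_defense_factor`, not the ISOV calculation itself) of why the adjustment needs a numeric SOS value on the same scale as the statistic rather than a rank:

```python
# A minimal sketch of the general idea (NOT the actual ISOV math): adjust a
# team's scoring average by a numeric SOS value tied to the same statistic.
# All names and numbers here are hypothetical placeholders.

# Raw points per game for one team, and a numeric "defensive SOS" value:
# how many points its opponents allowed, expressed relative to the
# national average (1.0 = average schedule).
raw_ppg = 31.4
opp_defense_factor = 0.92   # opponents allowed 8% fewer points than average

# The adjustment only works because the SOS is a *value* on the same scale
# as the statistic; an ordinal rank ("3rd-toughest schedule") cannot be
# multiplied or divided into points per game.
adjusted_ppg = raw_ppg / opp_defense_factor
print(f"adjusted scoring offense: {adjusted_ppg:.1f} ppg")
```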
The Iterated Strength of Victory algorithm combines winning percentage with the "strength" of the wins and "closeness" of the losses to relate all teams through all possible paths of common opponents. Since it explicitly depends upon points-scored and points-allowed, we can use it to find an SOS that relates to Scoring Offense and Scoring Defense.
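Since the exact ISOV formula isn't reproduced here, the following sketch only illustrates the general shape of such an iteration: start every team at a neutral value, credit each game by a bounded function of the score, weight that credit by the opponent's current rating, and repeat until the values settle. The teams, scores, and the `ps / (ps + pa)` credit function are placeholders, not the real algorithm.

```python
# A generic sketch of an iterated strength-of-victory style rating.
# This is NOT the actual ISOV formula; it only shows how repeatedly folding
# in opponents' ratings relates every team to every other team through
# chains of common opponents.

# games: (team_a, team_b, points_a, points_b) -- toy data, entirely made up
games = [
    ("USC", "Oregon", 45, 13),
    ("Oregon", "Fresno St", 37, 34),
    ("USC", "Fresno St", 50, 42),
    ("LSU", "Auburn", 20, 17),
    ("Auburn", "Fresno St", 14, 31),
]

teams = {t for g in games for t in g[:2]}
rating = {t: 0.5 for t in teams}          # start everyone at a neutral value

def game_credit(ps, pa):
    """Credit for one game: a smooth, bounded function of the score,
    so blowout wins and close losses both count for something."""
    return ps / (ps + pa)

for _ in range(50):                        # iterate until ratings settle
    new_rating = {}
    for t in teams:
        credit, weight = 0.0, 0.0
        for a, b, sa, sb in games:
            if t == a:
                opp, ps, pa = b, sa, sb
            elif t == b:
                opp, ps, pa = a, sb, sa
            else:
                continue
            # weight each game's credit by the opponent's current rating,
            # which is how the paths of common opponents enter the picture
            credit += game_credit(ps, pa) * rating[opp]
            weight += rating[opp]
        new_rating[t] = credit / weight if weight else 0.5
    rating = new_rating

for t, r in sorted(rating.items(), key=lambda kv: -kv[1]):
    print(f"{t:10s} {r:.3f}")
```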
The ISOV can be thought of as F(PS,PA) over all games for all teams, where PS and PA are integrals of a rational function involving game scores. Then ∂F/∂PS is the SOS function for points scored, and ∂F/∂PA that for points allowed. We can't find those explicitly, but internally the ISOV approximates F′(team) for all teams, and from this we can approximate ∂PS/∂(PS+PA) and ∂PA/∂(PS+PA) for every team.

Note that the "SOS" that is required is not just the average of opponents' ISOV values. That is a measure of which teams were on the schedule, and what we need is a measure of how representative those teams are of the field as a whole. This is the "internal" SOS that is used to derive the ISOV values, and while it will be approximated by the average of opponents' ISOV, it is different enough to matter.
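As an illustration of the derivative idea (again with made-up data and a toy rating function standing in for F, not the ISOV's actual internals), the sketch below nudges one team's points scored or allowed by a small amount, re-runs the rating, and treats the finite differences as rough stand-ins for ∂F/∂PS and ∂F/∂PA. It also prints the naive average of opponents' ratings, the quantity the "internal" SOS is being distinguished from.

```python
# Numerical sketch of the derivative-based SOS idea. The rate() function is
# a stand-in for F, not the real ISOV, and all game data is hypothetical.

def rate(games, iters=50):
    """Toy iterated rating: credit per game = PS/(PS+PA), weighted by the
    opponent's current rating."""
    teams = {t for g in games for t in g[:2]}
    r = {t: 0.5 for t in teams}
    for _ in range(iters):
        nxt = {}
        for t in teams:
            credit = weight = 0.0
            for a, b, sa, sb in games:
                if t not in (a, b):
                    continue
                opp, ps, pa = (b, sa, sb) if t == a else (a, sb, sa)
                credit += ps / (ps + pa) * r[opp]
                weight += r[opp]
            nxt[t] = credit / weight if weight else 0.5
        r = nxt
    return r

games = [("USC", "Oregon", 45, 13), ("Oregon", "Fresno St", 37, 34),
         ("USC", "Fresno St", 50, 42), ("LSU", "Auburn", 20, 17),
         ("Auburn", "Fresno St", 14, 31)]

def perturb(games, team, d_ps=0.0, d_pa=0.0):
    """Return a copy of the schedule with `team`'s points scored/allowed
    nudged by d_ps / d_pa in every game it played."""
    out = []
    for a, b, sa, sb in games:
        if team == a:
            out.append((a, b, sa + d_ps, sb + d_pa))
        elif team == b:
            out.append((a, b, sa + d_pa, sb + d_ps))
        else:
            out.append((a, b, sa, sb))
    return out

team, eps = "Oregon", 0.01
base = rate(games)[team]
dF_dPS = (rate(perturb(games, team, d_ps=eps))[team] - base) / eps
dF_dPA = (rate(perturb(games, team, d_pa=eps))[team] - base) / eps
print(f"{team}: rating {base:.3f}, dF/dPS ~ {dF_dPS:+.4f}, dF/dPA ~ {dF_dPA:+.4f}")

# The naive average of opponents' ratings only says who was on the schedule;
# the sensitivities above say how much those games actually moved the rating.
ratings = rate(games)
opps = [b if a == team else a for a, b, _, _ in games if team in (a, b)]
print(f"avg opponent rating: {sum(ratings[o] for o in opps) / len(opps):.3f}")
```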
While the bars and sports talk radio are consumed with "they don't play defense out west" and "those guys out east don't have any offenses that would test a defense," we can use the ISOV SOS values to find out to what degree each argument is correct. (Note: in the tables below, "WM" is what I call a "weighted median", which is a measure of how top- or bottom-heavy a conference is. As with all ordinal ranks, lower numbers are better.)
[Tables: conference scoring statistics, raw and ISOV SOS-adjusted, with weighted median (WM) ranks]
The team details are listed under the Scoring Stats link on the main page.