I've been reporting an interest factor for games for years, but I've changed the formula enough times that I should've given it version numbers.
Last year I wrote:
The formula I chose is loosely based upon Potemkin's magnitude of ranking violation metric. It is:

( N + 1 - R1 ) × ( N + 1 - R2 ) / ( | R1 - R2 | + max( R1, R2 ) )

N is the number of ranked teams, and R1, R2 are the ordinal ranks of the opponents in each game. We take any game against an unranked team as being of zero interest and assign unranked teams a rank of N + 1.

That formula does a good job of distinguishing the "more interesting" from "less interesting" games based upon the computer rankings of the opponents, but its values are difficult to interpret because they vary so widely. You'd really have to form the ratio of two games' "interest factor" to get a relative rating. It would be a lot easier if you could just look at the difference between the numbers.
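For concreteness, here's a minimal Python sketch of that formula as I read it; the function name and the N = 120 team count are mine for illustration, not part of the original write-up.

```python
# Sketch of last year's interest factor, per the definitions above:
# N ranked teams, with unranked opponents assigned rank N + 1
# so their games come out to zero interest.

def old_interest(r1, r2, n):
    """Last year's interest factor for a game between ranks r1 and r2."""
    numerator = (n + 1 - r1) * (n + 1 - r2)
    denominator = abs(r1 - r2) + max(r1, r2)
    return numerator / denominator

N = 120  # illustrative team count, not from the post
print(old_interest(1, 2, N))    # 4760.0 -- a top-two matchup
print(old_interest(40, 60, N))  # ~61.8  -- a mid-pack matchup
```

Those two outputs differ by a factor of nearly eighty, which is exactly the "ratios, not differences" problem.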
I'm a little embarrassed not to have noticed sooner that the obvious way to turn ratios into differences is to just report the common logarithm:
Log10 [ ( N + 1 - R1 ) × ( N + 1 - R2 ) / ( 2 × max( R1, R2 ) - min( R1, R2 ) ) + 1 ]

The denominators are the same: | R1 - R2 | is max( R1, R2 ) - min( R1, R2 ), so | R1 - R2 | + max( R1, R2 ) = 2 × max( R1, R2 ) - min( R1, R2 ). The +1 inside the logarithm just keeps what was zero, zero.
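In Python the revised version looks something like this (again with my own naming, and N = 120 only for illustration):

```python
from math import log10

def interest(r1, r2, n):
    """Log-scaled interest factor; unranked teams carry rank n + 1."""
    numerator = (n + 1 - r1) * (n + 1 - r2)
    # Same denominator as last year: |r1 - r2| + max(r1, r2)
    denominator = 2 * max(r1, r2) - min(r1, r2)
    return log10(numerator / denominator + 1)

N = 120  # illustrative team count
print(interest(1, 2, N))       # ~3.68 -- a top-two matchup
print(interest(40, 60, N))     # ~1.80 -- a mid-pack matchup
print(interest(10, N + 1, N))  # 0.0   -- unranked opponent stays at zero
```

Now the difference between two games' values is meaningful on its own, which was the whole point of the change.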
Compare last year's graph, with the #1 curve "off the scale," to this year's:

[Chart: this year's interest-factor curves]
The maximum "interest" value for a game as a function of a team's rank is just a plot of the peaks of those curves calculated for every rank:
[Chart: maximum interest value by team rank]
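That curve can be reproduced by brute force: for each rank, take the maximum of the interest factor over every possible opponent. A sketch, with helper names of my own choosing:

```python
from math import log10

def interest(r1, r2, n):
    numerator = (n + 1 - r1) * (n + 1 - r2)
    return log10(numerator / (2 * max(r1, r2) - min(r1, r2)) + 1)

def peak_interest(r, n):
    """Most interesting game a team of rank r could possibly play."""
    return max(interest(r, opp, n) for opp in range(1, n + 1) if opp != r)

N = 120  # illustrative team count
for r in (1, 25, 60, 120):
    print(r, round(peak_interest(r, N), 2))
```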
Now, it's hardly worth calculating the average interest factor for each team's schedule (but you know I did anyway), because that pretty much tracks the team's rank. But there's a way to slice the interest-factor surface that sheds some light on an old topic.
It is often posited that teams from the power conferences do not need to play (can get away without playing) strong OOC opponents because their conference schedule is so tough. We can test that hypothesis by applying the interest factor to only conference games.
[Chart: interest factor for conference games, by conference]
I ordered the conferences by average interest factor minus the standard deviation, to account for conferences with dominant teams much better than the conference average or really weak teams much worse than it. That turns out to be almost the same sequence as sorting by the average alone, just with a narrower range of values. The exception is the B1G, which drops below the Pac-12 and Big East when sorted this way.
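The sort itself is simple enough to sketch. The conference names and numbers below are placeholders just to show the mechanics, not the 2011 values:

```python
from statistics import mean, stdev

# Placeholder interest factors for each conference's conference games --
# illustrative numbers only, not actual 2011 results.
conference_games = {
    "SEC":    [2.1, 1.8, 2.6, 1.4],
    "Big 12": [1.9, 2.2, 1.1, 1.6],
    "B1G":    [2.4, 0.9, 1.7, 1.2],
}

def sort_key(conf):
    values = conference_games[conf]
    # Subtracting the standard deviation discounts conferences whose
    # average hides a wide spread between their best and worst teams.
    return mean(values) - stdev(values)

for conf in sorted(conference_games, key=sort_key, reverse=True):
    print(f"{conf}: {sort_key(conf):.2f}")
```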
All of these are, of course, still based upon 2011 team ranks. I'll revisit the subject when 2012 results-based computer rankings become useful. Meanwhile, here are the two-star and above games for the first weekend.
[Table: two-star and above games, first weekend]
Stop me if you've heard this before, but something needs to be done about FBS scheduling.