NCAA Division 1A Football

Copyright 2004, Paul Kislanko

Often people who are not familiar with the nature and limitations of statistical methods tend to expect too much of the rating system. Ratings provide merely a comparison of performances, no more and no less. The measurement of the performance of an individual is always made relative to the performance of his competitors and both the performance of the player and of his opponents are subject to much the same random fluctuations. The measurement of the rating of an individual might well be compared with the measurement of the position of a cork bobbing up and down on the surface of agitated water with a yard stick tied to a rope and which is swaying in the wind.
Arpad Elo in Chess Life, 1962
In theory, there is no difference between theory and practice; In practice, there is.
Chuck Reid
Fixing the BCS
Although changing no single component will do the job, changing the formula can address the systemic issues.
Fixing the BCS Not (Part Two)
The BCS's use of the computers eliminates their most attractive attribute (consistency) and has other flaws.
Fixing the BCS Not (Part One)
The human polls are the biggest problem with the current formula. There's a way to address the biggest part of their problem with respect to the BCS.
The 2004 BCS
Not as bad as some claim, but still covered in maggots. The NCAA should not condone a system that encourages gambling and discourages sportsmanship.
My Daddy Can Beat Up Your Daddy!
A look at the different ways fans mistakenly compare teams by comparing the conferences they play in.
The BCS blew it again
For the 2004 season, the BCS radically adjusted its formula in an attempt to make the AP poll winner the same as the BCS winner. Whether this was a good idea or not is subject to debate, but the way they went about it only shows that they know as little about ranking systems as they do about football and public opinion.
Basic SOS November 2, 2004
A Strength of Schedule definition that makes sense for formula-based systems.
College Football Rankings comparison
An invaluable resource compiled by Kenneth Massey. A different view of the same data is Summary of 98 Computer Rankings
Ratings

The aWP is based only upon wins and losses. It basically sorts out all the chains of head-to-head-to-head wins to assign a value to the "quality" of a win based upon the quality of opponents' wins. PA-PWP is nearly the same thing, except that the quality of a win is based upon the opponents' wins' quality, which in turn is based upon their opponents' ability to score and prevent scores. The ISOV is a combination of wins and Strength of Victory, which is determined by comparing every team-pair with all possible team pairs. It is the one used in the College Football Rankings Comparison, and the one used to provide the predictions below.

For more information, see About the Ratings.
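The "chains of head-to-head-to-head wins" idea can be illustrated with a small iterative sketch. This is an assumption about the general technique, not Kislanko's actual aWP formula: each team's rating mixes its raw winning percentage with the current ratings of the teams it beat, so a win over a strong opponent counts for more.

```python
# Minimal sketch of an iterative "quality of wins" rating in the spirit
# of chain-based systems like aWP. Illustrative only -- NOT the actual
# aWP formula from this site.
def win_quality(games, n_iter=20):
    """games: list of (winner, loser) pairs -> {team: rating}."""
    teams = sorted({t for g in games for t in g})
    beat = {t: [] for t in teams}       # opponents each team defeated
    played = {t: 0 for t in teams}
    for w, l in games:
        beat[w].append(l)
        played[w] += 1
        played[l] += 1
    wp = {t: len(beat[t]) / played[t] for t in teams}   # raw winning pct
    r = dict(wp)
    for _ in range(n_iter):
        # half raw winning percentage, half the current quality of the
        # teams beaten, averaged over games played
        r = {t: 0.5 * wp[t] + 0.5 * sum(r[o] for o in beat[t]) / played[t]
             for t in teams}
    return r

# Toy round-robin: A beat B and C, B beat only the winless C
ratings = win_quality([("A", "B"), ("B", "C"), ("A", "C")])
```

In the toy example the chain resolves as A > B > C: B's one win is over a winless team, so it is worth less than either of A's wins.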

Week 7: Composite, adjusted Winning Percentage, Performance Against PWP, Iterative Strength of Victory
Week 8: Composite, aWP, PA-PWP, ISOV
Week 9: Composite, aWP, PA-PWP, ISOV
Week 10: Composite, aWP, PA-PWP, ISOV
Week 11: Composite, aWP, PA-PWP, ISOV
Week 12: Composite, aWP, PA-PWP, ISOV
Week 13: Composite, aWP, PA-PWP, ISOV, PA-ISOV
Week 14: Composite, aWP, PA-PWP, ISOV, PA-ISOV


Actual versus Predicted MOV
Week    Win-Hit  Win-Miss  Marg-H  Marg-M
5          38       11       32      17
6          34       15       24      25
7          39       10       35      14
8          44        8       35      17
9          30       15       24      21
10         30       22       25      27
11         36       17       32      21
12         37        8       29      16
13         13       13       11      15
14          7        4        6       5
Total     308      123      253     178
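As a quick sanity check, the weekly columns can be totaled and converted to an overall predictive hit rate in a few lines. The data below is transcribed from the table; the variable names are mine.

```python
# Verify the totals row of the table above and compute the overall
# winner-prediction hit rate. Data transcribed from the weekly rows.
win_hit  = [38, 34, 39, 44, 30, 30, 36, 37, 13, 7]
win_miss = [11, 15, 10, 8, 15, 22, 17, 8, 13, 4]

assert sum(win_hit) == 308 and sum(win_miss) == 123

hit_rate = sum(win_hit) / (sum(win_hit) + sum(win_miss))
print(f"{hit_rate:.1%}")   # -> 71.5%
```

So the predictive sign test picked the winner in 308 of 431 games, about 71.5 percent, compared with 80.7 percent for the retrodictive analysis mentioned below the table.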

Where Predicted MOV (X) and Actual MOV (Y) are both positive or both negative, the method correctly predicted the winner. When Pred is negative but Act positive, it predicted a win by the home team but the visitor won. When Pred is positive and Act negative, the visitor was predicted to win but lost. Points between the line Y=X and the X-axis represent games where the winner was picked correctly, but the predicted margin of victory exceeded the actual margin.

For the retrodictive analysis, home and visiting teams are reversed. The sign of the expected MOV matched the game results 80.7 percent of the time (503-120).
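The sign test described above reduces to one comparison: the prediction "hits" when predicted and actual MOV share a sign. A minimal sketch, using the article's convention that positive values favor one side and negative the other (the margins below are hypothetical):

```python
# Sketch of the sign-agreement test described in the text: a prediction
# counts as correct when predicted and actual margins share a sign.
def sign_hit(pred_mov, act_mov):
    """True when the method picked the right winner."""
    return pred_mov * act_mov > 0

# Hypothetical (predicted, actual) margin pairs, not real game data
games = [(7, 3), (-10, -4), (3, -6), (-2, 5)]
hits = sum(sign_hit(p, a) for p, a in games)
print(f"{hits}/{len(games)} correct")   # -> 2/4 correct
```

Applied to the full retrodictive sample, this test is what yields the 503-120 record (80.7 percent) quoted above.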