2017 Computer Rankings

August 6, 2017

Dr. Massey has begun publishing the College Football Ranking Composite listings for 2017, so I am now updating my Ranking Analysis pages. This is the main purpose of the site: I began it to study how various rating systems are affected by scheduling and how they correlate with one another.

Dr. Massey's table is ordered vertically by a Consensus Average:

The "average" or "consensus" ranking for each team is determined using a least squares fit based on paired comparisons between teams for each of the listed ranking systems. If a team is ranked by all systems, the consensus is equal to the arithmetic average ranking. When a team is not ranked by a particular system, its consensus will be lowered accordingly.
I report this Consensus Average as a meta-ranking, but I do not use it to construct my own "consensus" ranks.
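Massey's exact formulation isn't reproduced here, but the following is a minimal sketch of one way such a least-squares fit over paired comparisons can be set up. The function name, the ballot representation, and the row that pins down the additive constant are illustrative choices, not Massey's specification; the sketch does reproduce the quoted property that a team ranked by all systems gets its arithmetic average rank.

```python
import numpy as np

def consensus_ranks(ballots, n_teams):
    """Least-squares consensus from (possibly partial) rank ballots.

    ballots: list of dicts mapping team index -> rank (1 = best); a
    team absent from a dict is unranked by that system.  For every
    pair (i, j) ranked by the same system we add the equation
    c[i] - c[j] = rank[i] - rank[j] and solve in the least-squares
    sense.  When every system ranks every team this reduces to the
    arithmetic average rank, as in the quoted description.
    """
    rows, rhs = [], []
    for ballot in ballots:
        ranked = sorted(ballot)
        for x in range(len(ranked)):
            for y in range(x + 1, len(ranked)):
                i, j = ranked[x], ranked[y]
                row = np.zeros(n_teams)
                row[i], row[j] = 1.0, -1.0
                rows.append(row)
                rhs.append(ballot[i] - ballot[j])
    # The pairwise equations determine c only up to an additive
    # constant, so anchor the scores to sum to 1 + 2 + ... + n,
    # just as ranks do.
    rows.append(np.ones(n_teams))
    rhs.append(n_teams * (n_teams + 1) / 2)
    c, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return c  # lower score = better consensus position

# Three systems, four teams; the last system leaves team 0 unranked.
# (The quoted penalty for being unranked is not modeled here.)
ballots = [{0: 1, 1: 2, 2: 3, 3: 4},
           {0: 2, 1: 1, 2: 3, 3: 4},
           {1: 1, 2: 2, 3: 3}]
print(consensus_ranks(ballots, 4))
```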

I aggregate the computer rankings three ways, based upon three different ranked-ballot voting methods. The details of each are provided on the Ranking Analysis page, and a small sketch of all three follows the definitions below.

Borda
This would be identical to the Massey Consensus ranking except that I exclude human polls and any computer rating that does not rank all teams.
Bucklin
Assigns the best (numerically lowest) rank such that a strict majority of the computer ratings rank the team at least that highly. When there is an odd number of ratings this is identical to the median rank.
Condorcet
Assigns ranks based on the number of pairwise wins each team has. Team A has a pairwise win over team B if more ballots rank A ahead of B than rank B ahead of A.
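Here is a concrete sketch of all three methods. Each "ballot" is assumed to be a complete ranking mapping every team to a rank, consistent with excluding ratings that do not rank all teams; the function names and ballot layout are illustrative, not the site's actual code.

```python
from itertools import combinations

def borda(ballots):
    """Borda: order teams by total (equivalently, average) rank."""
    teams = list(ballots[0])
    return sorted(teams, key=lambda t: sum(b[t] for b in ballots))

def bucklin_rank(ballots, team):
    """Best rank at which a strict majority of ballots rank the team
    at least that highly; the median rank when len(ballots) is odd."""
    majority = len(ballots) // 2 + 1
    return sorted(b[team] for b in ballots)[majority - 1]

def condorcet(ballots):
    """Order teams by pairwise wins: A beats B when more ballots
    rank A ahead of B than rank B ahead of A."""
    teams = list(ballots[0])
    wins = {t: 0 for t in teams}
    for a, b in combinations(teams, 2):
        a_ahead = sum(ballot[a] < ballot[b] for ballot in ballots)
        b_ahead = len(ballots) - a_ahead  # complete rankings: no ties
        if a_ahead > b_ahead:
            wins[a] += 1
        elif b_ahead > a_ahead:
            wins[b] += 1
    return sorted(teams, key=lambda t: -wins[t])

ballots = [{"A": 1, "B": 2, "C": 3},
           {"A": 2, "B": 1, "C": 3},
           {"A": 1, "B": 3, "C": 2}]
print(borda(ballots), bucklin_rank(ballots, "B"), condorcet(ballots))
```

With only three toy ballots the methods happen to agree; across dozens of real ratings they need not.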
I use the Bucklin rank ("Majority Consensus") for all of the reports in the reference section that explicitly list a team's rank. It is also the last tiebreaker for the division 1A conference standings display.

A fourth voting method illustrated is approval. It does not use ranked ballots at all, just the number of ratings that place the team in the top four, without regard to whether the rank is first, second, third, or fourth.
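Under the same illustrative ballot layout as above, the approval count is a one-line tally:

```python
def approval_top4(ballots, team):
    """Count the ratings that place the team anywhere in their top
    four, regardless of whether the rank is 1, 2, 3, or 4."""
    return sum(ballot[team] <= 4 for ballot in ballots)
```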

There are also three measures of the consensus rank distribution by conference affiliation, which gauge the strength and depth of each conference's teams. Other than the techniques used to compare conferences of different sizes, their only significance is their role in settling (or starting) barroom arguments.

Correlations

I report two different rank correlation coefficients that show how much each rating (and the Massey Consensus rank) differs from the Majority Consensus. The correlations are Kendall's tau and Goodman and Kruskal's gamma. Both are built from the same counts: for each of the 8,385 team-pairs (130 teams yield 130 × 129 / 2 = 8,385 distinct pairs), whether the two ratings agree or disagree about the relative order of the two teams; the coefficients differ mainly in how they treat tied pairs.
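Here is a sketch of both coefficients from the same pair counts. The tie convention (tied pairs count toward neither total, and tau is the tau-a form) is an illustrative choice, not necessarily the one used in the reports.

```python
from itertools import combinations

def tau_and_gamma(rank_a, rank_b):
    """Kendall's tau and Goodman-Kruskal's gamma for two rankings,
    each a dict mapping team -> rank.  Concordant pairs are ordered
    the same way by both rankings; discordant pairs are ordered
    oppositely.  Pairs tied in either ranking count toward neither."""
    conc = disc = 0
    for s, t in combinations(rank_a, 2):
        da = rank_a[s] - rank_a[t]
        db = rank_b[s] - rank_b[t]
        if da * db > 0:
            conc += 1
        elif da * db < 0:
            disc += 1
    n = len(rank_a)
    total = n * (n - 1) // 2               # 8,385 when n = 130
    tau = (conc - disc) / total            # tau-a: ties dilute agreement
    gamma = (conc - disc) / (conc + disc)  # gamma: tied pairs ignored
    return tau, gamma
```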

These are convenient because they are based upon simple counting stats, and the contribution of each team to the difference can be reported. Of more interest than the correlation to Majority Consensus is a pairwise table that shows the "distance" between any two of the computer rankings. The table highlights which other ranking or meta-ranking each specific rating is most and least like.
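One natural way to define that distance, sketched here as an illustration rather than the site's actual measure, is the count of discordant team-pairs (the Kendall tau distance):

```python
from itertools import combinations

def discordant_pairs(rank_a, rank_b):
    """Team-pairs the two rankings order oppositely: 0 for identical
    rankings, 8,385 for exact reversals of 130 teams."""
    return sum((rank_a[s] - rank_a[t]) * (rank_b[s] - rank_b[t]) < 0
               for s, t in combinations(rank_a, 2))

def distance_table(ratings):
    """ratings maps a system name -> {team: rank}.  Returns every
    ordered pair's distance; each system's most and least similar
    neighbors are then a min/max over its row."""
    return {(a, b): discordant_pairs(ratings[a], ratings[b])
            for a in ratings for b in ratings if a != b}
```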

© Copyright 2017, Paul Kislanko
Football Home