About the Ratings

This site is mostly about analyzing ratings, but I calculate two of my own that are meant as much to be compared with other ratings as to rank teams.

Boyd Nation's ISR
As currently implemented for college baseball, Boyd's version includes a home field advantage and a margin of victory factor. For college football, I use the original ISR plus the home field adjustment but without the margin of victory adjustment.
My ISOV
This uses the same algorithm as the ISR, except that instead of a fixed game value based upon win or loss, each game has a value based upon the scores.

The expectation was that the ISR would be most "like" retrodictive ratings, and the ISOV more "like" predictive ratings, so these could be used to model other ratings with similar properties. That turned out to be true for the most part. 

Each ratings report has five values based upon the rating, with an accompanying ordinal ranking.

Rating (ISR or ISOV)
This is the basic rating, which is derived by the recurrence relation

R_n+1(team) = F( R_n(team), { R_n(opp) for opponents in team's wins }, { R_n(opp) for opponents in team's losses } )

The difference between the ratings lies in the definition of F(). The ISR counts every visitor win the same as every other visitor win, every home win the same as every other home win, and every neutral-site win the same. Road wins do count more than home wins, and home losses do hurt more than road losses. The ISOV uses a different value for each game, based upon how the game score compares to the average strength of victory, adjusted for home/road wins/losses just as in the ISR.
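The recurrence above can be sketched as follows. The document does not specify F(), so the per-game bonus of 25 points, the starting value of 100, and the data shapes here are all illustrative assumptions, not the site's actual formula:

```python
GAME_VALUE = 25.0  # assumed per-game bonus; the real F() is not given here

def iterate(ratings, games):
    """One step of R_{n+1}(team) = F(R_n(team), opponents' ratings in wins/losses).

    `games` is a list of (winner, loser) tuples; `ratings` maps team -> R_n.
    Each game credits the winner with the loser's rating plus a bonus, and
    the loser with the winner's rating minus the same bonus; a team's new
    rating is the average of its per-game credits.
    """
    totals = {t: 0.0 for t in ratings}
    counts = {t: 0 for t in ratings}
    for winner, loser in games:
        totals[winner] += ratings[loser] + GAME_VALUE
        totals[loser] += ratings[winner] - GAME_VALUE
        counts[winner] += 1
        counts[loser] += 1
    return {t: totals[t] / counts[t] if counts[t] else ratings[t]
            for t in ratings}
```

Starting every team at the same value and iterating lets the schedule, rather than the starting point, determine the final ratings.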
SOS
This is just the average rating of all of the team's opponents. Sometimes people (and sometimes computers) value this metric more than is warranted. In general, any variety of SOS should only be used to compare teams that are equal by a first-order comparison (winning percentage, rating value, etc.).
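As a minimal sketch of the SOS definition, assuming a `schedule` that maps each team to a list of (opponent, won) pairs (the data shapes are illustrative):

```python
def sos(team, schedule, ratings):
    """SOS: the average rating of all of the team's opponents."""
    opponents = [opp for opp, _won in schedule[team]]
    return sum(ratings[o] for o in opponents) / len(opponents)
```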
aSOS
This is a somewhat misnamed meta-rating that can be formed for any rating. The adjusted Strength Of Schedule is the sum of the opponents' ratings for all of the team's wins, divided by the number of games the team has played. In other words, for the team's losses the opponent's rating is treated as 0.

For undefeated teams, aSOS has the same value as SOS. In any case, aSOS can be used as a measure of win quality.
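A sketch of the aSOS computation, using the same illustrative (opponent, won) schedule shape as above: only wins contribute the opponent's rating, but the divisor is all games played, so for an undefeated team aSOS equals SOS.

```python
def asos(team, schedule, ratings):
    """aSOS: sum of opponents' ratings over the team's wins,
    divided by the total number of games played (losses count 0)."""
    games = schedule[team]
    return sum(ratings[opp] for opp, won in games if won) / len(games)
```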

dSOS
The derived SOS is just the average of all of the team's opponents' aSOS values.

Conceptually, this is an "SOS" that is based only on the quality of opponents' wins. dSOS is probably a better measure of a team's SOS than SOS itself is.
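A sketch of dSOS under the same assumed data shapes (the `asos` helper repeats the aSOS definition so the example is self-contained):

```python
def asos(team, schedule, ratings):
    # aSOS: opponents' ratings summed over wins, divided by games played
    games = schedule[team]
    return sum(ratings[opp] for opp, won in games if won) / len(games)

def dsos(team, schedule, ratings):
    """dSOS: the average of the team's opponents' aSOS values."""
    opponents = [opp for opp, _won in schedule[team]]
    return sum(asos(o, schedule, ratings) for o in opponents) / len(opponents)
```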

PA()
The PAsos is calculated like the aSOS, except that the opponents' dSOS values are used in place of their ratings. This value is zero if the team's only wins are against winless teams. Higher values indicate good wins against teams that themselves have good wins.
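Putting the chain together, PAsos can be sketched as the same "credit for wins only" average applied to opponents' dSOS values (data shapes and the `win_avg` helper name are illustrative assumptions):

```python
def pa_sos(team, schedule, ratings):
    """PAsos: aSOS-style credit, but using opponents' dSOS values
    instead of their ratings. Zero if the team's only wins came
    against winless teams."""
    def win_avg(t, values):
        # sum of values[opp] over t's wins, divided by games played
        games = schedule[t]
        return sum(values[opp] for opp, won in games if won) / len(games)

    asos_vals = {t: win_avg(t, ratings) for t in schedule}
    dsos_vals = {t: sum(asos_vals[o] for o, _ in schedule[t]) / len(schedule[t])
                 for t in schedule}
    return win_avg(team, dsos_vals)
```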

The aSOS and PA() columns could be used as meta-ratings, but their primary purpose is as analytical tools. In particular, the rank difference between aSOS and the rating gives an indication of how much the rating is influenced by losses to good teams. The difference between the aSOS and PA() ranks shows the effect of "good wins."


The iteration is considered to have converged, with

R(team) = R_m(team)

when the maximum difference between R_m(team) and R_m-1(team) over all teams is ≤ 10^-3 and the sum of all such differences is ≤ 10^-1.
Calculations are made using 20 decimal digits.
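The stopping rule can be sketched as below. `step` stands in for one application of the recurrence; the site's actual calculation uses 20-digit precision (which Python could approximate with the `decimal` module), while this sketch uses ordinary floats:

```python
def converge(ratings, step, max_tol=1e-3, sum_tol=1e-1):
    """Iterate `step` until the largest per-team change is <= max_tol
    and the sum of all per-team changes is <= sum_tol."""
    while True:
        new = step(ratings)
        diffs = [abs(new[t] - ratings[t]) for t in ratings]
        ratings = new
        if max(diffs) <= max_tol and sum(diffs) <= sum_tol:
            return ratings
```

For example, a toy contraction with fixed point 50 converges to within the tolerances in a handful of iterations.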
© Copyright 2014, Paul Kislanko