Each of these rating systems is a variation on the basic formula

rating = g + h

where g and h are functions that correspond to winning percentage and strength of schedule, respectively. If the only inputs to g and h are wins and losses, the description is exact. When other factors (location, margin of victory, etc.) are inputs to g, then g is the analog of winning percentage, and h is the equivalent of SOS with respect to g.
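As a toy illustration of the basic form, the sketch below computes g as a team's winning percentage and h as the average of its opponents' winning percentages. The team names, records, and schedule are made up for demonstration and are not from any published system.

```python
# Illustrative sketch of rating = g + h, where g is winning percentage
# and h is strength of schedule (average opponents' winning percentage).
# All data here is hypothetical.

def winning_pct(record):
    wins, losses = record
    return wins / (wins + losses)

def rating(team, records, schedule):
    g = winning_pct(records[team])  # g: the team's own winning percentage
    opponents = schedule[team]
    # h: average winning percentage of the team's opponents
    h = sum(winning_pct(records[o]) for o in opponents) / len(opponents)
    return g + h

records = {"A": (3, 1), "B": (2, 2), "C": (1, 3)}
schedule = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B"]}
print(rating("A", records, schedule))
```

Here team A's g is 0.75 and its h is the average of 0.500 and 0.250, so the two components combine additively exactly as the formula describes.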
If every team played every other team on a neutral site (perhaps more than once), there would be no need for any rating system: winning percentage and head-to-head results would be sufficient to rank teams. When this is not possible, but it is nevertheless necessary to rank teams that haven't played each other (for instance, to assign seeds in a tournament), some algorithm must be used. A summary of many different football rankings in 2004, not including the ones published here, can be found elsewhere.
The assumption behind every rating system is that there is a function ƒ(team) that assigns a value to every team, leading to an unambiguous ordinal ranking in which every team ranked higher will beat a team ranked lower more than half the time. Every rating uses some function f that approximates ƒ using a subset of the variables involved in the hypothetical function ƒ (at least one of the hypothetical function's variables must be random, else there would be no reason to play the games).
The adjusted Winning Percentage (aWP) depends only upon wins and losses. It assigns each team a value, based upon its record and its opponents' records, equivalent to the worth of a win over that team. The SOS component is a combination of opponents' winning percentage, opponents' opponents' winning percentage, and opponents' opponents' opponents' winning percentage. Only opponents' wins contribute to a team's rating. The aWP has the same characteristics as the Colley Matrix method.
The aWP does not become useful until all teams have opponents' opponents that are distinct from their opponents. It doesn't become very accurate until about 37 percent of the teams have opponents or opponents' opponents equivalent to half the field (in 2004, that won't happen until some teams have played eight games).
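The connectivity condition above can be checked directly on a schedule graph: for a given team, count how much of the field lies within two hops (its opponents plus its opponents' opponents). The sketch below does this for a hypothetical five-team schedule; the function name and data are illustrative only.

```python
# Illustrative check of schedule connectivity: the fraction of the field
# that is a team's opponent or opponents' opponent. Schedule is made up.

def two_hop_coverage(team, schedule, field_size):
    opponents = set(schedule[team])
    # Opponents' opponents, excluding the team itself
    opps_opps = {oo for o in opponents for oo in schedule[o]} - {team}
    return len(opponents | opps_opps) / field_size

schedule = {
    "A": ["B", "C"], "B": ["A", "D"],
    "C": ["A", "D"], "D": ["B", "C", "E"],
    "E": ["D"],
}
print(two_hop_coverage("A", schedule, len(schedule)))
```

In this toy schedule, team A's opponents and opponents' opponents cover three of the five teams, so a two-hop rating like the aWP still sees only part of the field.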
The Performance Against Pythagorean Winning Percentage (PAPWP) uses the PWP in place of winning percentage, and the Generalized Performance Against algorithm to derive a strength of schedule with respect to PWP.
The Pythagorean Winning Percentage is a measure of a team's ability to score (and prevent opponents from scoring). It is defined by
PWP = PF^2 / (PF^2 + PA^2)
The PWP was first applied to baseball, and is generally interpreted through comparison to winning percentage. A PWP much higher than winning percentage indicates an "underperforming" team, and one much lower a team that has won a number of close games (a "lucky" team). In football there aren't enough games to justify that interpretation, but the formula allows a summary of a team's ability to score and prevent scores to be taken into account. While the aWP uses only wins and losses, the PAPWP uses opponents' points scored and points allowed to determine the quality of a win. (Note: this is not the same as using "Margin of Victory," since individual game scores are not a factor; only a team's total points scored and allowed are used, and the PWP that matters is the opponent's value, over which the team in question has no control.)
The "PA" part superimposes head-to-head results. A team only gets credit for the results of the opponents' opponents that its opponents defeated, and only inherits benefits from opponents that it defeated. This rating is further distanced from "MOV" because a team's opponents' values are used, not the PWP of the team itself. PA is a combination of the team's winning percentage, opponents' winning percentage, and opponents' opponents' ratings. SOS is the average opponents' rating, aSOS is winning percentage combined with opponents' ratings, and dSOS is the combination of opponents' winning percentage and opponents' opponents' ratings.
The Iterative Strength of Victory is an application of Boyd Nation's Iterative Strength Rating algorithm to Joshua Padgett's Strength of Victory metric. For each game a value between zero and one is assigned by the formula:
SOV = (PF - PA) / (PF + PA)
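The per-game value is straightforward to compute; the sketch below uses a single game's score, with a hypothetical function name. From the winner's perspective the value falls between zero and one, as the text describes.

```python
# Per-game Strength of Victory: SOV = (PF - PA) / (PF + PA),
# using one game's points for (PF) and points against (PA).

def sov(pf, pa):
    """Strength-of-victory value for one game (0 to 1 for the winner)."""
    return (pf - pa) / (pf + pa)

print(sov(28, 14))  # a two-to-one margin
```

A blowout (e.g. 42-0) approaches 1, a tie is exactly 0, and the same game viewed from the loser's side is simply the negated value.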
The algorithm defines a "winning percentage" (labeled on the report as "SOV") with values above or below 100 (as opposed to .500), and then adjusts these values based upon comparisons to every other team.
The iterative part of the algorithm actually determines an average SOS and adjusts all winning percentages until essentially every pair of teams has been compared. The "SOS" component can't be determined until all the SOV values are, and once that is done the report lists it as OSOV (for Opponents' SOV). The function h that describes SOS is explicitly a function of g, so we have

rating = g + h(g)

which explains why an iterative approach is required.
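Since h depends on the very ratings being computed, the ratings are a fixed point that can be reached by repeated substitution. The sketch below is a deliberately simplified toy update, not Boyd Nation's actual ISR algorithm: each team starts from its own performance value (g) and is repeatedly adjusted by how its opponents' current ratings compare to average. All names and data are illustrative.

```python
# Toy fixed-point iteration for rating = g + h(g): each team's
# adjustment depends on its opponents' current ratings, so the
# update is repeated until the values settle. Not the real ISR.

def iterate_ratings(base, schedule, rounds=100):
    ratings = dict(base)  # start from each team's own performance (g)
    mean_base = sum(base.values()) / len(base)
    for _ in range(rounds):
        new = {}
        for team, opponents in schedule.items():
            # h depends on the current ratings of the opponents
            sos = sum(ratings[o] for o in opponents) / len(opponents)
            new[team] = base[team] + 0.5 * (sos - mean_base)
        ratings = new
    return ratings

base = {"A": 0.75, "B": 0.50, "C": 0.25}
schedule = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B"]}
ratings = iterate_ratings(base, schedule)
print(ratings)
```

Because each pass mixes in opponents' ratings, information propagates across the whole schedule graph after enough rounds, which is the sense in which "all pairs of teams have been compared."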
The ISOV uses only margin of victory, and unlike the aWP and PAPWP it does take game location into account. Like the others, it becomes more accurate as more games are played, and it is the system used to make the game predictions included on this site.
The composite of the three rankings is just the inverse of the harmonic mean of a team's three ordinal rankings. This gives more weight to better positions: for example, rankings of 2, 3, and 2 and rankings of 1, 2, and 4 have the same arithmetic mean, but the team with the 1 as its highest ranking scores the higher composite. The same calculation is used to combine the three different SOS rankings.
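The example above can be verified with a few lines of code. The inverse of the harmonic mean of n ranks is the average of their reciprocals, so a single first-place rank contributes heavily; the function name below is illustrative.

```python
# Composite score as the inverse of the harmonic mean of ordinal ranks:
# 1 / harmonic_mean(ranks) = average of the reciprocals of the ranks.
# A low (good) rank like 1 pulls the composite up sharply.

def composite(ranks):
    return sum(1 / r for r in ranks) / len(ranks)

print(composite([2, 3, 2]))  # same arithmetic mean as [1, 2, 4]...
print(composite([1, 2, 4]))  # ...but the first-place rank wins out
```

Both rank sets average 7/3 arithmetically, yet the set containing the 1 produces the larger composite, exactly as the text describes.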