Actually, it's not the polls themselves; it's the way the votes are counted.
There's a reason the Borda count method (equivalent to awarding 25 points for first, 24 for second, and so on) is not used in any real election.
Here's a good example. I took the 46 computer rankings from the Football Ranking Comparison Page and counted the "votes" using Borda. I used 118 points for first, 117 for second, and so on, just so you can tell from the "others receiving votes" category what the ranks were (a team with 99 points can only be one vote for #20, not six votes for #25). To simulate the human polls I truncated the "ballots" at #25.
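The counting scheme above is easy to sketch. This is a minimal illustration with made-up ballots, not the actual 46 computer rankings; the team names and the `borda` function are my own placeholders, and real ballots would be truncated at #25 just as described.

```python
from collections import defaultdict

# Hypothetical ballots: each is a list of teams, best first,
# truncated at 25 in the real polls (shorter here for illustration).
ballots = [
    ["LSU", "USC", "Oklahoma"],
    ["USC", "LSU", "Oklahoma"],
    ["USC", "Oklahoma", "LSU"],
]

def borda(ballots, top=118):
    """Borda count: `top` points for 1st place, top-1 for 2nd, and so on."""
    scores = defaultdict(int)
    for ballot in ballots:
        for position, team in enumerate(ballot):
            scores[team] += top - position
    # Highest total first.
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(borda(ballots))
```

With 118 as the top value, every single vote is worth at least 94 points, so any "others receiving votes" total under 188 can only be one ballot, and its rank can be read off directly.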
Right at the top there's an "error" - LSU is #2 even though only three of the 46 ballots had them ranked that highly. USC is third even though 27 of the 46 ballots had them no worse than second, and 36 had them ranked higher than LSU.
Maj is the best rank at which 24 or more of the 46 ballots agreed the team should be at least that high; Cnt is how many did.
The problem is that, like average rank, Borda can be unduly influenced by outliers. A much better method would be to order the teams by majority rank (equivalent to the median rank if every voter ranks the team). When there's a tie (many teams can have the same median rank), use the count of ballots at that rank or better. If there's still a tie, use Borda, but only to order teams with the same majority rank.
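The ordering just described can be sketched as a sort key: majority rank first, then the count at that rank or better, then Borda as the last tiebreak. This is a small illustration with five hypothetical ballots (so a majority is three), not the real 46-ballot data; `bucklin_order` is my own name for it.

```python
# Hypothetical 5-voter example; a majority is 3 ballots.
ballots = [
    ["A", "B", "C"],
    ["A", "C", "B"],
    ["B", "A", "C"],
    ["C", "A", "B"],
    ["B", "C", "A"],
]

def bucklin_order(ballots, top=118):
    majority = len(ballots) // 2 + 1
    teams = {t for b in ballots for t in b}

    def key(team):
        ranks = sorted(b.index(team) + 1 for b in ballots if team in b)
        if len(ranks) < majority:
            # Left off too many ballots to ever reach a majority.
            maj, cnt = float("inf"), 0
        else:
            # Majority rank: the best rank at which a majority of ballots
            # have the team at least that high (the majority-th best rank).
            maj = ranks[majority - 1]
            cnt = sum(1 for r in ranks if r <= maj)
        borda = sum(top - b.index(team) for b in ballots if team in b)
        # Order by majority rank, break ties by count, then by Borda.
        return (maj, -cnt, -borda)

    return sorted(teams, key=key)

print(bucklin_order(ballots))
```

In this toy example A, B, and C all have a majority rank of 2, but A has four ballots at #2 or better against three for the others, and B edges C on the Borda tiebreak; an outlier ballot can't move a team past one the majority prefers.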
Compare the ranking to the right with the results using this technique (which is known as the Bucklin method). Here I've only included teams that were listed on a majority of ballots, since the remainder would rank the same either way.
Imagine the outcry if, at the end of the season, the polls came out this way. Oh wait, don't bother - it's already happened. That's why the Associated Press no longer participates and Texas got to play in a Rose Bowl. But imagine if the same thing had happened between #2 and #3 instead of #4 and #5.
The BCS changed its formula after there was an outcry: USC finished third in the BCS standings but #1 in the Associated Press poll after LSU beat Oklahoma in the championship game. The new formula made it just about impossible for there to be a disagreement between the AP and BCS.
The should-have-been-expected result actually happened in 2004: a massive email campaign caused enough AP voters to switch their votes so that Texas got an automatic bid over California, the only team to beat the AP's final #1 that year. That could only have happened because Borda is so easily manipulated.
The ensuing uproar caused the AP to withdraw from the process, and the response by the BCS was not to correct the real problem but simply to form a replacement poll. In 2005 there was no controversy, but not because the BCS champion was the same as the AP #1; it was just that there was only one undefeated team after the bowls and everybody had it #1.
Using a flawed voting system was never an issue as long as the polls were only used as originally intended - pick a #1 (Borda is for "single-winner elections", not a "top 25") and provide some entertainment for fans on a non-gameday. Once they became the determining factor for who would get A Whole Pile of Money, it was inevitable that the weaknesses in the system would eventually be exploited.
The formula should be fixed - really fixed, not just bandaged. An appropriate formula would:
It's ironic that the AP voters helped create such a big problem they had to leave the process. The computers were de-emphasized mainly because "we don't know how they work." Fans went along with that, but 2004 made clear that both the media and typical fans know so little about how the polls work that they trust them.
I don't know when, but some day the situation in this real-data example will happen with the BCS and we'll have the "wrong" participants in a BCS bowl again.
Much has been made of the "lack of transparency" in the polls, and usually people only talk about making the ballots public as a solution. That certainly is not a desirable solution - we want the voters watching football games, not dealing with email from angry fans. Nor is it necessary.
In order to understand why a poll came out as it did, all we need is a count of the number of votes for each rank. Publishing just the number of votes for #1 was enough when the only question was "who do they think is #1?", but in the BCS it matters who is #2 (or 12, or 16). So a report like the following should be sufficient.
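Such a report is cheap to produce from the ballots without publishing any individual ballot. A minimal sketch, again with made-up ballots and my own `rank_report` name for the function:

```python
from collections import Counter

# Hypothetical ballots (best team first).
ballots = [
    ["USC", "LSU", "Oklahoma"],
    ["LSU", "USC", "Oklahoma"],
    ["USC", "Oklahoma", "LSU"],
]

def rank_report(ballots):
    """For each team, count how many ballots placed it at each rank."""
    report = {}
    for ballot in ballots:
        for rank, team in enumerate(ballot, start=1):
            report.setdefault(team, Counter())[rank] += 1
    return report

for team, counts in rank_report(ballots).items():
    row = "  ".join(f"#{r}: {counts[r]}" for r in sorted(counts))
    print(f"{team:10s} {row}")
```

From a table like this a reader can verify the majority rank and count for every team, which answers "why is this team #2?" without exposing any voter's full ballot.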