But the truth is more complicated than that. During each review period, the BCS conferences have far outperformed the Coalition conferences, even including last year, when the Mountain West had a banner season. Obviously, even if the evaluation formula is unbiased, the present arrangement of college football leaves an uneven playing field between the haves and have-nots.
We have decided to play watchdog on all of this, with the help of our friend Ben Prather, who once again produced the goods on the AQ question. Last year, he released a simulation of the BCS evaluation process through 2008. From now on, the Guru will host the ongoing evaluation for 2009 and beyond.
Here's Ben's explanation for the keys to the 2009 BCS AQ data.
By Ben Prather (Fanblogs.com)
The BCS currently uses three criteria to determine future automatic qualification: the number of top 25 teams, the average computer ranking of all conference members, and the highest-ranked team in the BCS standings. The results of these criteria from 2008 through 2011 will determine eligibility for 2012 and 2013. Changes in membership are applied after a team has played a year in its new league; Western Kentucky, for example, will count for the Sun Belt next year, after completing this year in the league.
The BCS uses a four-year window, so an interesting question is: where do the conferences stand going into 2009? And since none of these criteria include bowl results, what happens if those are added as a fourth criterion? The criteria are not specified to a precision allowing exact duplication, so an estimate must be made.
The numerical format is intended to match that of the BCS standings used to determine annual BCS qualifications. 1.0000 represents an ideal performance and 0.0000 represents a performance not warranting consideration. 0.5000 represents the borderline case, typically corresponding to #14 in the BCS standings. 0.7500 represents an elite performance typically corresponding to #6 in the BCS standings.
Top 25 index
The top 25 index shall be the number of top 25 teams each year divided by 5.
The maximum number of teams any conference has ended the season with is 5, setting this value to 1.0000. This establishes an expected four year average of 2.5 teams for a BCS conference and a four year average of 3.75 teams for an elite conference. A conference with 6 teams one year could exceed 1.0000, rewarding excellence.
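The annual calculation is simple enough to sketch in a few lines of Python (the function name and the four-season counts below are illustrative, not from the BCS):

```python
def top25_index(top25_counts):
    """Annual index = (number of top 25 teams) / 5; return the multi-year average."""
    annual = [n / 5 for n in top25_counts]
    return sum(annual) / len(annual)

# e.g. a conference with 3, 2, 4 and 1 ranked teams over four seasons
# averages (0.6 + 0.4 + 0.8 + 0.2) / 4 = 0.5000, the borderline case.
```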
Computer average of all teams in each conference index
For each team, remove the highest and lowest of its BCS computer rankings, then award points for the remaining rankings: 0 points for last place through 119 for first place. Divide by the total points possible to get each team's score.
Average the team scores for each conference to get a raw score. The conference index is then (Raw Score - 0.5000) / 0.2000. Negative values are truncated at 0.0000.
The team scores emulate the methodology used for the BCS formula. The conference index is scaled to account for the effects of averaging.
1.0000 represents a conference whose AVERAGE member ranks in the top 35. 0.5000 represents an average ranking in the top 48 while 0.0000 represents an average ranking of 60.
Annual values over 1.0000 are allowed to reward excellence while low values are dropped to prevent conferences from being unduly hindered by their past. The four year average will return to the expected bounds.
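Since the BCS does not publish this calculation to exact precision, here is one plausible reading in Python, assuming 120 FBS teams, the six BCS computer rankings, and points awarded as 0 for last place through 119 for first:

```python
def team_score(computer_ranks, n_teams=120):
    """Drop the best and worst computer rank, award (n_teams - rank) points
    for each remaining rank, and divide by the maximum points possible."""
    ranks = sorted(computer_ranks)[1:-1]          # remove highest and lowest
    points = sum(n_teams - r for r in ranks)      # 119 pts for rank 1, 0 for rank 120
    return points / ((n_teams - 1) * len(ranks))  # scale to [0, 1]

def conference_index(team_scores):
    """Average the member scores, rescale per (Raw - 0.5000) / 0.2000,
    and truncate negative values at 0.0000."""
    raw = sum(team_scores) / len(team_scores)
    return max((raw - 0.5) / 0.2, 0.0)
```

A team ranked #1 by every computer scores 1.0000, and a conference whose raw average sits exactly at 0.5000 lands on the 0.0000 floor.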
Top ranked team from each conference index
The top team for each conference in the BCS standings is used unmodified.
Bowl record index
Each conference's annual bowl record is adjusted using the formula (PCT - .5000) * 2 + .5000.
Like the computer-ranking average, the bowl winning percentage needs to be rescaled to account for the effects of averaging (the central limit theorem narrows the spread). This is the only component allowed to take negative values.
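The adjustment is a one-liner; a quick sketch, with the sample records below chosen for illustration:

```python
def bowl_index(wins, losses):
    """Rescale bowl winning percentage, doubling its spread around .500."""
    pct = wins / (wins + losses)
    return (pct - 0.5) * 2 + 0.5

# A 4-2 bowl season (PCT about .667) indexes at about .833; a 3-3 season
# stays at .500; and a winless 0-4 season goes negative, to -0.5000.
```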
1.0000-0.7500: Premier conferences
0.7500-0.5000: Automatic Qualifying conferences
0.5000-0.0000: At Large conferences
The distinction between Premier and Automatic Qualifying conferences is to give the top conferences something to compete for without jeopardizing their BCS status.
The term At Large conferences would replace the current misnomers used to describe these conferences.
Simply applying these indices to individual teams does not properly reflect the value a team brings to a conference.
The top 25 index should be multiplied by the number of teams in the conference. The average membership is currently 10.5.
The computer average does not need to be adjusted.
The top team index needs to be tempered by the probability that the team is the top team in the conference. This can be accomplished by raising it to the power of the number of teams the conference expects to have ranked.
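As a sketch of those two per-team adjustments (the function names are illustrative, not part of any BCS formula, and the expected number of ranked teams is supplied by the reader):

```python
def adjusted_top25(top25_index, conference_size=10.5):
    """Scale the top 25 index by conference size, so each team's share of
    producing ranked teams is valued; 10.5 is the current average membership."""
    return top25_index * conference_size

def adjusted_top_team(top_team_index, expected_ranked):
    """Temper the top-team index by raising it to the power of the number
    of teams the conference expects to have ranked, reflecting the chance
    that any one team is actually the conference's top team."""
    return top_team_index ** expected_ranked
```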
Bowl results are not included at this time.