When it came to narrowing the College Football Playoff field to four teams, a popular refrain whenever we twisted ourselves into knots over chaos scenarios was, "it will all work itself out." A decade of evidence from the four-team format showed that by the end of conference championship weekend, picking the four best teams wasn't all that difficult. You could take issue with how the committee seeded teams between Nos. 5-12, but the impact of those rankings was relatively light, mostly shaping non-playoff bowl matchups.
Now, as the CFP field expands to 12 teams beginning in the 2024 season under a 5+7 model, those decisions will determine not only whether a team receives an at-large bid to compete for a national title but also whether a first-round participant hosts a CFP game on its home turf or is forced to hit the road. When the CFP tripled the size of the field, it exponentially increased the responsibility of the College Football Playoff Selection Committee.
Deciding on the best teams is difficult
The selection committee has remained firmly committed to its process, and that provides cover when controversy erupts -- like Florida State's exclusion as a 13-0 conference champion last season. The process of voting on each position in the rankings one by one is deliberately patient. The CFP provides committee members with loads of data and lets them compare teams side by side through round after round of voting to fill all 25 spots in the College Football Playoff Rankings.
I turn in a ballot for the CBS Sports 133 -- soon to be CBS Sports 134; hello, Kennesaw State! -- every week during the season, and deciding on the Nos. 20-35 slots can be the most painstaking part of the process. There are a lot of good teams with losses but limited comparable results, and it turns into a bit of a grab bag. The AP Top 25 feels that way as well in the 20s for much of the season. Voters vary wildly at the ends of their ballots until deep in the season, when the loss column becomes a handy self-sorter.
But when the selection committee is tasked with comparing teams, one of the many pieces of information provided on the team sheet is a team's record against opponents ranked in the most recent CFP Rankings -- that is, the previous week's top 25, compiled before the most recent weekend of results. There are two issues here, one greater than the other. First, the previous week's top 25 is a dated snapshot of strength in the sport. Second, and more importantly, 25 is an arbitrary number that doesn't reflect any real line of demarcation for strength in modern college football.
If teams with losses are going to be judged against each other, with playoff spots on the line, the committee needs to acknowledge that "top-25 wins" is a flawed statistic for comparison. The committee needs to expand its purview and, in doing so, eliminate the built-in biases of recency and "quality loss" syndrome. When the spots at the bottom of the committee's top 25 are frequently populated by teams that have lost to the contenders at the top, the appearance is that -- consciously or subconsciously -- the rankings are being reverse engineered to justify decisions made earlier in the process. Objectively, there is not much difference between the teams with "quality losses" and the five to 10 teams that didn't make the cut, other than having played -- and lost -- to a title contender.
Objective analysis across the entire FBS landscape, in the form of power ratings and efficiency ratings, tells us that the difference between the No. 20 and No. 40 teams in the country is around a touchdown on a neutral field. There's more separation between No. 1 and No. 10 than there is across that 20-team range, so picking the top four or five teams has always been the easier task. If the margins in that range of good-but-not-elite teams are so small, the committee needs an objective way to credit a win over the 30th-best team the same way it credits a win over the 23rd-best team.
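As a back-of-the-envelope illustration of that scale, here is a minimal Python sketch. The ratings are hypothetical points-above-average numbers in the spirit of SP+ (not taken from any real model), and the expected neutral-field margin is approximated as the simple difference between two teams' ratings.

```python
# Illustrative only: hypothetical SP+-style power ratings, expressed as points
# better than an average FBS team. These numbers are invented to show scale;
# they are not pulled from SP+ or any real model.
ratings = {
    "No. 1 team": 30.0,
    "No. 10 team": 18.0,
    "No. 20 team": 12.0,
    "No. 40 team": 5.0,
}

def expected_margin(rating_a: float, rating_b: float) -> float:
    """Expected neutral-field margin: simply the gap between power ratings."""
    return rating_a - rating_b

# The gap from No. 1 to No. 10 dwarfs the gap across the No. 20-40 range.
print(expected_margin(ratings["No. 1 team"], ratings["No. 10 team"]))   # 12.0
print(expected_margin(ratings["No. 20 team"], ratings["No. 40 team"]))  # 7.0
```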
Does college basketball lead us to the answer?
So, could something akin to college basketball's NET ratings be the answer? That sort of system can expand the definition of quality wins beyond the committee's own week-old top 25 and eliminate some of the subjective impact of decisions made at the fringe of the rankings. College football has been running away from computers since ditching the BCS system, but unless the committee is prepared to release a top 50, a top 75 or a ranking of all 134 FBS teams, it's time to bring the computers back.
We don't need the computers or a formula to be the final voice in the room, but college football's refusal to use objective data that is both opponent-adjusted and tempo-adjusted stands in stark contrast to how teams are selected and seeded for the NCAA Tournament. The basketball selection committee uses NET as a sorting tool while also putting predictive metrics like KenPom ratings and BPI on the team sheet alongside resume -- or results-based -- metrics like KPI and Strength of Record. A football version of any of those statistics would be more informative than total yards, scoring offense and the other traditional counting statistics that have populated team sheets in the past.
Models already exist that can do this for the committee; all it needs to do is transparently fold that information into the process. ESPN's Bill Connelly is the architect of SP+, a tempo- and opponent-adjusted efficiency rating for every college football team. Whenever the question has come up of why there isn't a "KenPom of college football," my response has been that there is -- it's SP+.
Connelly joined us on the Cover 3 Podcast this week to discuss the playoff's future along with his early 2024 college football ratings, and he explained that he was able to create a model that closely mirrored the selection committee's rankings. It weighed the arguments for both "best" (predictive metrics) and "most deserving" (resume metrics) to spit out a top 25 that, by his estimation, matches about 23 of the committee's 25 spots. And though we were only discussing the top 25, it is a model that can easily extend to cover all 134 FBS teams.
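To make the "best" vs. "most deserving" blend concrete, here is a toy Python sketch. The team names, scores and 50/50 weight are all invented for illustration; Connelly's actual model is more sophisticated than a fixed weighted average.

```python
# Toy illustration of blending "best" (predictive) and "most deserving" (resume)
# scores into one composite ranking. All names, scores (0-100 scales) and the
# 50/50 weight are invented; this is not Connelly's actual formula.
teams = {
    "Team A": {"predictive": 95.0, "resume": 85.0},  # elite on paper, lighter resume
    "Team B": {"predictive": 88.0, "resume": 96.0},  # great resume, good-not-great metrics
    "Team C": {"predictive": 92.0, "resume": 90.0},
}

W_PREDICTIVE = 0.5  # assumed weight; sliding it trades "best" against "most deserving"

def composite(scores: dict) -> float:
    """Blend two 0-100 scores with a fixed weight."""
    return W_PREDICTIVE * scores["predictive"] + (1 - W_PREDICTIVE) * scores["resume"]

ranked = sorted(teams, key=lambda name: composite(teams[name]), reverse=True)
print(ranked)  # ['Team B', 'Team C', 'Team A'] -- the order shifts with the weight
```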
Again, I do not think we should replace the committee with models, but it would be more informative for the public to have objective data as a sorting tool rather than relying on "top-25 wins" as a measure of differentiation. How a team has performed against the top 20, top 40 or top 60 of an objective rating that is up to date -- including the most recently completed games -- can give us a better sense of how those teams fighting for at-large bids stack up against each other. It would also lessen the distrust in the committee's process from those who believe that a good portion of the committee's ranking beyond the top spots is built to support the decisions made at the top.
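Here's what that could look like in practice -- a short Python sketch that tallies a team's record against the top 20, top 40 and top 60 of an up-to-date objective rating. The opponents, ranks and results below are hypothetical.

```python
# Sketch of swapping "top-25 wins" for a record vs. tiers (top 20/40/60) of an
# up-to-date objective rating. Opponents, ranks and results are hypothetical.
TIERS = (20, 40, 60)

# Current rank of each opponent in some objective rating (1 = best), refreshed
# weekly to include the most recently completed games.
rank_of = {"Opp1": 8, "Opp2": 27, "Opp3": 38, "Opp4": 55, "Opp5": 71}

# (opponent, won?) results for the team under evaluation
results = [("Opp1", False), ("Opp2", True), ("Opp3", True),
           ("Opp4", True), ("Opp5", True)]

def record_vs_tiers(results, rank_of):
    """Tally the win-loss record against opponents inside each rating tier."""
    record = {tier: [0, 0] for tier in TIERS}  # tier -> [wins, losses]
    for opp, won in results:
        for tier in TIERS:
            if rank_of[opp] <= tier:
                record[tier][0 if won else 1] += 1
    return record

for tier, (wins, losses) in record_vs_tiers(results, rank_of).items():
    print(f"vs. top {tier}: {wins}-{losses}")
# vs. top 20: 0-1 / vs. top 40: 2-1 / vs. top 60: 3-1
```

Under a strict top-25 lens, this hypothetical team shows as 0-1 against ranked opponents; the tiered view credits the wins over the Nos. 27 and 38 teams that the cutoff ignores entirely.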
Amid this age of College Football Playoff expansion, the evaluation process needs to expand as well. It doesn't have to be a carbon copy of college basketball's NET rating and quadrant system, but we already have the tools to introduce more objective analysis and set aside top-25 results as essentially the lone cutoff for strength in the sport.