Friday, August 22, 2014

Dead BCS, Living Better Playoff

In 2012, the yearnings of fans finally broke through the corrupt umbrella of the big conference commissioners: the FBS-level schools finally agreed to put an end to the travesty known as the BCS and adopt a playoff to determine a champion. So for the first time, biased and uninformed polls and corrupt computer formulas won't decide a paper champion in college football. This year, a selection committee picks the four teams that will get to play for a title. It's a step in the right direction... but it's not quite enough.

This is the new playoff. It's a start.
My concern remains that the committee will be biased towards the big conferences, even if a mid-major runs the table. This is why I continue to advocate for the system created by Dan Wetzel, Josh Peter, and Jeff Passan in Death to the BCS. Though realignment has left us with 10 conferences instead of 11, I still like the idea of a 16-team tournament in which every team genuinely has a chance to play for a title (win your conference, and you're automatically in). The odds of, say, the Sun Belt champion going on the road to Tuscaloosa or some other traditional football power and winning are very slim, but they at least have a shot, unlike under the old system (and even the current one).

So for the fourth year in a row (I guess technically third, since the first year had a belated test run with the system), I will be building a playoff using the Wetzel, Peter and Passan system. Sixteen teams enter: the ten conference champions all get in automatically, with the rest of the field being filled out by either independents who ran the table, or any teams that saw some stumbling blocks on their road but still played excellent football for three months. And as I've gotten more experience following regular season results to build a field, I've been able to advance my tools to help me build said field.

The teams that make the tournament are seeded by a selection committee (read: me, though contact me if you want to be a part of the committee to give me another set of eyes and another brain to help decide these things) in an evolving process. The first time was a very organic "Who had the best records, and of them, who had the best wins and/or the least egregious losses?" process. This has since led to my building new metrics to help me out, or borrowing them from Illinois' high school football system. I've made some tweaks to the system this year.



For the third year in a row, I will be using the metric I created called the "Non-Conference Schedule Strength", or NCSS for short. It's an imperfect metric, but it looks at every team's schedule to get a general idea of how "tough" a school set up its schedule. Each week, every team gets a score in a narrow range that gives credit for facing teams from a power conference or hitting the road, and penalizes teams for scheduling FCS squads. Scheduling those smaller schools does fund them every year, but it artificially inflates records and serves no real purpose on the football field beyond maybe giving your backups a chance to play. Here's how a team could be scored in any given week:
  • -1 point for playing an FCS team at home (since God forbid someone like Michigan or Florida actually travel to one of these schools)
  • 0 points for a bye week or playing an in-conference game (this will be the most common score for most teams on a weekly basis)
  • 1 point for playing an FBS team from a non-"Power conference" at home or at a neutral site
  • 2 points for playing an FBS team from a non-"Power conference" on the road, or an FBS team from a "Power conference" at home or at a neutral site
  • 3 points for playing an FBS team from a "Power conference" on the road
  • NOTE: "Power conferences" are the ACC, Big Ten, Big 12, SEC, and Pac 12.
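The scoring rules above boil down to a few lines of code. Here's a rough sketch; the function name, its parameters, and the sample schedule are my own illustration, not anything pulled from the actual spreadsheet:

```python
# Weekly NCSS scoring as described in the bullet list above.

POWER_CONFERENCES = {"ACC", "Big Ten", "Big 12", "SEC", "Pac 12"}

def ncss_week_score(team_conf, opp_conf, opp_division="FBS", site="home"):
    """Score one week of a team's schedule.

    opp_division: "FBS", "FCS", or "BYE"
    site: "home", "road", or "neutral"
    """
    if opp_division == "BYE" or (opp_division == "FBS" and team_conf == opp_conf):
        return 0   # bye week or in-conference game
    if opp_division == "FCS":
        return -1  # FCS opponents always come to you, so site doesn't matter
    power = opp_conf in POWER_CONFERENCES
    road = site == "road"
    if power and road:
        return 3   # power-conference foe on the road
    if power or road:
        return 2   # power foe at home/neutral, or a mid-major on the road
    return 1       # mid-major FBS foe at home or at a neutral site

# Four sample weeks for a hypothetical Big Ten team:
slate = [
    ("Big Ten", "MAC", "FBS", "home"),      #  1
    ("Big Ten", None, "FCS", "home"),       # -1
    ("Big Ten", "SEC", "FBS", "road"),      #  3
    ("Big Ten", "Big Ten", "FBS", "home"),  #  0 (conference play)
]
print(sum(ncss_week_score(*week) for week in slate))  # prints 3
```

Summing a team's weekly scores over the season gives its NCSS total.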
It has its flaws. For example, Duke would get more points for visiting Purdue than for hosting Notre Dame, when Notre Dame is probably the tougher foe. I'm just trying to get a general idea of whether schools are trying to prepare their teams for conference play, or whether they'd rather imbalance their schedules with extra home games against smaller schools. That shortcoming is what led me to add a different metric last year.

For its football playoffs, the Illinois High School Association (IHSA) looks at schools following their nine-week season. All teams with at least 6 wins make the playoffs, and 5-win teams fill out the rest of the 256-team field based on "playoff points", calculated by summing the win totals of the teams you beat. I think it's a great idea, since it gives a tangible ranking to how good the teams you beat were. I used it last year as one of my best tools, but I'm expanding on it this year. This ranking will remain, but will become "First Degree Playoff Points", or "PP1". From there, I will add "Second Degree Playoff Points" (abbreviated "PP2"), which takes the average playoff-point scores of the teams a given school beat. For uniformity, the average will be taken over the number of wins rather than games played, so a two-win team will have the average PP1 of its two victims, while a nine-win team will have the average PP1 of its nine. This way, losses don't show up as zeros that drag the average down.
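Both numbers fall out of a simple win-list lookup. Here's a minimal sketch; the `beaten` dictionary, team names, and function names are my own illustrative assumptions (each key maps a team to the list of opponents it defeated):

```python
# First- and second-degree playoff points, per the definitions above.

def pp1(team, beaten):
    """First-degree playoff points: sum of the win totals of the teams you beat."""
    return sum(len(beaten.get(opp, [])) for opp in beaten.get(team, []))

def pp2(team, beaten):
    """Second-degree playoff points: average PP1 of the teams you beat.

    The divisor is the team's win count, not games played, so losses
    never enter the average as zeros.
    """
    victims = beaten.get(team, [])
    if not victims:
        return 0.0
    return sum(pp1(opp, beaten) for opp in victims) / len(victims)

# Tiny four-team example:
beaten = {"A": ["B", "C"], "B": ["C", "D"], "C": ["D"], "D": []}
print(pp1("A", beaten))  # B has 2 wins, C has 1 -> 3
print(pp2("A", beaten))  # (PP1(B)=1 + PP1(C)=0) / 2 wins -> 0.5
```

In practice the win lists would come straight from each week's results, the same data the spreadsheet already tracks.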

I also wanted to use computer polls that weren't tainted by the BCS' naive notion of "sportsmanship", so I sought out computer formulas that rank teams using margin of victory. The late David Rothman came up with a system that was kicked out of the BCS because he refused to take margin of victory out of it. He made his formula open source, and a UCLA professor publishes rankings from it online. The other formula was created by Jeff Sagarin, who grudgingly did one of the rankings for the BCS. He took margin of victory out of that formula, but on his website he also publishes rankings that include it. After playing around with the numbers last year, I'm using the final "rating" he lists for each team, which blends a couple of different score-based rankings he maintains. These two computer rankings will aid in the final selection of at-large teams, as well as seeding.

My biggest change for this upcoming season, though, doesn't really involve how I build the playoffs, at least not directly. The last two seasons I've shared some results here on COAS, but the spreadsheets I've used to compile the numbers have been for my eyes only. This year, I'm making them public. Thanks to the magic of Google Docs, I will publish the spreadsheet I use to compile the numbers. Extra eyes might help, since I've made data-entry errors in the past, and it will also let you, my readers, see the numbers for yourselves and weigh in on whether I'm making the right choices in picking the at-large teams and seeding the field.

The season officially kicks off Wednesday, when Georgia State opens up against FCS team Abilene Christian. On Tuesday, I'll have my first run-through of NCSS rankings by conference, as I've done in years past. You'll be able to see individual team scores on the spreadsheet I'll link to in that post, so I will be accountable for any typos I make.

Will FSU repeat as champions? Will the SEC reascend the throne? Will we get a bizarro simulation where a team that got its butt kicked in the Rose Bowl wins the Death to the BCS Playoffs because the computer doesn't know any better? Or will we get something totally unexpected, where a dark horse runs the table and then gets the benefit of the simulator? Come along for the ride this year, and we'll find out!
