Nobody can really agree on an ideal method for ranking conferences. Most other computer rating sites simply average teams' ratings (which is pretty much what I do). ESPN, CBSSports, Bleacher Report, etc. use "power rankings" and human voting. D3football.com relies exclusively on human rankings (I personally think they do a pretty good job, other than consistently under-ranking some conferences like the IIAC and over-ranking others, such as the ASC; more on that later). I'm not going to muse on how to come up with a ranking; other people have already done that for me.
Here's what you need to know: I use each team's opponent-adjusted score differentials to determine their rating. For my conference ratings, I average all of a conference's teams' AdjO & AdjD, subtract the average AdjD from the average AdjO, and use that differential to determine the conference's rating. If two conferences played each other, with each team from Conference A playing each team from Conference B, the conference with a higher rating would be expected to have a winning record.
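The conference-rating arithmetic above can be sketched in a few lines. The `conference_rating` helper and the AdjO/AdjD numbers below are hypothetical illustrations of the described averaging, not my actual model or real data:

```python
def conference_rating(teams):
    """teams: list of (AdjO, AdjD) tuples in points per game.
    Conference rating = average AdjO minus average AdjD."""
    avg_adj_o = sum(o for o, d in teams) / len(teams)
    avg_adj_d = sum(d for o, d in teams) / len(teams)
    return avg_adj_o - avg_adj_d

# A hypothetical eight-team conference (made-up AdjO/AdjD values)
conf_a = [(32.0, 18.5), (28.4, 21.0), (25.1, 24.3), (22.8, 26.0),
          (20.5, 27.2), (19.9, 29.4), (17.3, 31.1), (15.0, 33.8)]
print(round(conference_rating(conf_a), 2))
```

A conference whose rating beats another's would be expected to post a winning record in the hypothetical round-robin described above.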
One of my larger goals in building my rating system was to compare long-term historical trends of greatness (or good-ness), not just for teams, but for conferences also. I already did a team-by-team comparison with my historical ratings interactive, and this post should serve as my analysis of conferences' historical ratings.
First, some context to my 1999 preseason ratings. I did not start every team with an average rating as I had initially planned to, because not every team started 1999 as equals (obviously). You can read the last paragraph of my Preseason Ratings page if you're interested in my methodology, but I feel like I did a fair (not perfect or ideal) job of developing accurate preseason ratings for each team.
The differences between my end-of-season ratings as-is and what they would have been had every team started with an average rating are pretty minuscule (roughly 0.1-0.2 ppg after a season or two, and only about 0.01-0.02 ppg after ten seasons), but the informed starting points satisfied my expectations for each team's and conference's relative ratings. I'm aware this is not a very scientific approach to rating teams, but it's more accurate than an "unbiased" approach (any ranking that claims to be unbiased doesn't understand bias), so why not use it?
Below are the results for each conference's end-of-year rating:
The WIAC is the obvious top conference, and since about 2004, it hasn't been close. Part of the reason the conference wasn't rated higher before then was its non-conference scheduling: many of the WIAC's top teams were playing (and beating) Division II teams out of conference. That means the only games my model could use to compare the conference to the rest of Division III were the non-conference games played by the bottom half of the conference. If my model were interconnected with other divisions, I suspect the WIAC would have rated even better in 1999 than it has lately.
The only other conferences to hold the top rating since 1999 have been the NWC and OAC, but both have seen moderate decline in recent years. For the NWC, the decline appears to stem from the falloff of its top-tier teams (excluding Linfield). In the early years of the D3Football.com era, the NWC had multiple teams vying for conference titles, and even crowned two different National Champions (Pacific Lutheran '99, Linfield '04). Since Linfield's '04 Stagg Bowl, though, no other NWC team has consistently fielded a playoff-caliber rating. By contrast, the OAC usually has very good #2 and #3 teams (and, obviously, Mount Union), but the bottom half of the conference just isn't as strong as it used to be, hurting the conference's overall rating.
Below I have averaged every conference's rating from 1999 to 2015, and sorted them in descending order:
Two things stick out to me when I look at this graph: the IIAC is higher than I would have thought, and the ASC is way lower. To get a good read on which other conferences vary from the general perception, I decided to compare my conference rankings to those from D3Football.com. The folks over at D3Football.com have been ranking conferences since 2002, so I'll limit my comparison to seasons since then, and to the conferences included in this nifty table (essentially every current conference, plus the UAA).
What I've done in this analysis is rank every relevant conference among all other relevant conferences, then take the average of those historical rankings. My derived average D3Football.com ranking will therefore vary slightly from what they have on their site, because I'm not including the gaps in rankings left by defunct conferences. Below, I've charted each conference's average historical ranking, as well as the absolute difference between my rankings and D3Football.com's.
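The re-ranking step above (closing the gaps left by conferences excluded from the comparison) can be sketched as follows. The conference names and per-season orderings are made up for illustration; `rerank` and `average_rankings` are hypothetical helpers, not my actual code:

```python
def rerank(season_order, relevant):
    """Re-rank only the relevant conferences, closing gaps left by
    excluded (e.g. defunct) conferences."""
    filtered = [c for c in season_order if c in relevant]
    return {conf: i + 1 for i, conf in enumerate(filtered)}

def average_rankings(seasons, relevant):
    """Average each relevant conference's rank across seasons."""
    totals = {c: 0 for c in relevant}
    for season_order in seasons:
        for conf, rank in rerank(season_order, relevant).items():
            totals[conf] += rank
    return {c: totals[c] / len(seasons) for c in relevant}

# Two hypothetical seasons; "OLD" stands in for a defunct conference,
# so its slot is skipped rather than counted as a gap.
seasons = [
    ["WIAC", "OAC", "OLD", "IIAC", "ASC"],
    ["WIAC", "OLD", "IIAC", "OAC", "ASC"],
]
relevant = {"WIAC", "OAC", "IIAC", "ASC"}
print(average_rankings(seasons, relevant))
```

In this toy example the WIAC averages 1.0, the OAC and IIAC each 2.5, and the ASC 4.0, even though the raw site rankings would have shown different numbers because of the defunct conference's slot.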
My suspicions about the IIAC and ASC seem to have been well-founded. For the most part, though, my rankings and the D3Football.com rankings are pretty similar: of the 28 conferences in this study, about half of the average rankings are within one spot, and three-quarters are within two. The conferences that vary the most are the ASC, UAA, SCIAC, IIAC, MIAA, NCAC, and USAC. The differences for these conferences probably come down to three factors: playoffs, geography, and, to a lesser extent, conference size (hello, UAA!).
When you read the D3Football.com conference rankings, it becomes apparent that they place a large amount of value on playoff results, and the conferences listed above bear that out. The ASC regularly has a Stagg Bowl-contending team in Mary Hardin-Baylor, and as such, their conference is "overrated" by D3Football.com relative to my model. Likewise, the NCAC's top two teams (Wabash & Wittenberg) are very capable of making their own playoff runs. In the IIAC, though, Wartburg has been essentially the only team that hasn't underperformed dramatically in the playoffs in recent years. And for the MIAA and SCIAC, as well as the IIAC, geographic constraints on early-round playoff matchups generally pit their champions against the champions of better conferences: for the IIAC, that means the WIAC, MIAC, and CCIW; for the MIAA and SCIAC, it seems to always be Mount Union and Linfield/Mary Hardin-Baylor.
As a subjective approach, this emphasis on playoffs is perfectly reasonable, but from a statistical analysis perspective, it doesn't make any sense. In a 32-team playoff format, a total of 31 games are played, and not all of those games will even be non-conference contests. In 2015, teams played a total of 297 non-conference games. In a sport where sample sizes are already relatively small, placing too much emphasis on a certain subset of games, even if they are "more meaningful," can skew results.
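A quick back-of-the-envelope calculation makes the sample-size point concrete. The game counts come from the paragraph above; the 3x weight is a purely hypothetical stand-in for "placing too much emphasis" on playoff results:

```python
playoff_games = 31    # a 32-team single-elimination bracket
nonconf_games = 297   # total non-conference games played in 2015

# Even if every playoff game were non-conference, playoff results are
# a small slice of the cross-conference evidence.
share = playoff_games / nonconf_games
print(f"Playoff games are at most {share:.1%} of non-conference games")

# Hypothetical: triple-weight the playoff subset and see how much of
# the total "signal" it now drives.
weight = 3.0
weighted = weight * playoff_games
weighted_share = weighted / (weighted + (nonconf_games - playoff_games))
print(f"With a {weight:g}x weight, that subset drives {weighted_share:.1%} of the signal")
```

Roughly a tenth of the games ends up driving about a quarter of the conclusion, which is the skew the paragraph above warns about.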