In the last few weeks, I've done my best (or my "most good enough") attempt to build a model to predict regional rankings, and from there, Pool B and C selections. I used data going back to 2005 in developing these ratings, with more recent seasons weighted more heavily than older ones. A couple of caveats about these ratings: they assume RACs and National Selection Committees behave predictably and consistently, and we know that isn't the case. The model also doesn't automatically bump teams over opponents they beat head-to-head, and it doesn't consider common opponents at all. Even so, it correctly retroactively projects over 75% of Pool C participants in the last 12 seasons.
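For illustration, here's roughly what that recency weighting could look like. The post doesn't specify the actual scheme, so this sketch assumes a simple exponential decay; the decay rate and function name are mine, not the model's:

```python
# Hypothetical recency weighting for the 2005-2016 seasons.
# The real model's weighting scheme isn't published; exponential
# decay is just one common, simple choice.
def season_weights(first=2005, last=2016, decay=0.85):
    """Weight per season: the newest season gets 1.0,
    and each step back in time multiplies by `decay`."""
    return {year: decay ** (last - year) for year in range(first, last + 1)}

weights = season_weights()
# The most recent season carries full weight; 2005 carries decay**11.
print(round(weights[2016], 2), round(weights[2005], 2))
```

Any monotonically decreasing weight function would serve the same purpose: letting recent committee behavior dominate without discarding older seasons entirely.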
Below are the variables used and their weights:
As you can see, the variables with the largest impact are a team's losses and their strength of schedule. I included each team's "Hansen SOS" and "Non-Conf SOS" because doing so vastly improves the model's predictive ability (by more than 50%) compared to using only the SOS as calculated by the NCAA. While the national and regional committees are told to use pretty much only the NCAA SOS in their decision-making, the committee members are human, they have biases, and they know that a 0.500 SOS for Mount Union isn't the same as a 0.500 SOS for St. Scholastica.
These values are used to give each team a "Region Rating," where lower values are better, and teams are then ranked within their region. The model then credits teams for wins and losses against regionally-ranked opponents (both adjustment values below are negative, so even a loss to a ranked opponent improves a team's rating, just far less than a win does). Because this model can only approximate the regional ranking of teams, it doesn't have a sharp cut-off for the definition of a "regionally-ranked opponent." Every team with three or fewer Division III losses is ranked, the reciprocal of their regional ranking is multiplied by the values below, and the result is added to the previous "Region Rating" to produce a new rating:
W v. RRO: -9.00
L v. RRO: -1.92
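A minimal sketch of that adjustment, using the two weights above (the function name and example numbers are mine, invented for illustration):

```python
# Region Rating adjustment for results against regionally-ranked
# opponents (RROs). Lower ratings are better, and both weights are
# negative, so a win over a region's #1 team subtracts the full 9.00.
W_VS_RRO = -9.00
L_VS_RRO = -1.92

def adjusted_region_rating(base_rating, rro_results):
    """rro_results: list of (won, opponent_regional_rank) tuples,
    covering only games against regionally-ranked opponents."""
    rating = base_rating
    for won, opp_rank in rro_results:
        weight = W_VS_RRO if won else L_VS_RRO
        # The reciprocal of the opponent's rank scales the credit:
        # beating #1 counts ten times as much as beating #10.
        rating += weight * (1.0 / opp_rank)
    return rating

# Hypothetical team: base rating of 20.0, with a win over a
# region's #2 team and a loss to a #5 team.
print(adjusted_region_rating(20.0, [(True, 2), (False, 5)]))
```

The reciprocal is what makes the cut-off "soft": a result against a team ranked 15th in its region still counts, just for very little.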
So in the breakdowns below, you now know what the heck "Region Rating" means.
One thing that would probably improve my model's predictions would be to include tie-breakers for a team's rank in their conference. The fact that Plymouth State lost to a bad team in conference, and is technically tied with Framingham State, is the cause of their poor Region Rating. Other than that, there aren't many question marks in the East Region Rankings. My model and I both think Wesley's resume is slightly better than Springfield's, but it's a relatively slim margin. It's also not completely out of the question that any of the three teams in the "Notable Exclusions" list could crack the Top 10 if they win in Week 11.
Illinois Wesleyan is one of the most-qualified Pool C candidates of the last several years if they beat Millikin. It's a joke they're behind DePauw. They have a win against a regionally-ranked opponent and against the WIAC runner-up (I'll get to that). DePauw ain't played nobody.
This region is also a good example to remind you that the people doing regional rankings aren't a computer (which is a good thing). If each committee applied the same amount of consideration to each metric consistently from year-to-year, region-to-region, and team-to-team, Wittenberg would absolutely be ranked ahead of Mount Union. Here are the official primary criteria for the committee per the NCAA:
Win percentage against DIII opponents
Results versus common opponents
Results versus regionally-ranked opponents
Strength of Schedule (2/3 opponents' win%, 1/3 opponents' opponents' win%)
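That SOS formula is straightforward enough to sketch (the example winning percentages below are hypothetical, not any real team's):

```python
def ncaa_sos(opp_win_pct, opp_opp_win_pct):
    """NCAA strength of schedule: 2/3 opponents' winning percentage
    plus 1/3 opponents' opponents' winning percentage."""
    return (2 / 3) * opp_win_pct + (1 / 3) * opp_opp_win_pct

# Hypothetical team: opponents have a combined 0.625 winning
# percentage, and their opponents a combined 0.550.
print(round(ncaa_sos(0.625, 0.550), 3))  # 0.6
```

Note that this formula is exactly why the raw number can mislead: it's built entirely from winning percentages, with no notion of how strong the leagues behind those percentages are.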
Witt and UMU have the same win percentage.
They didn't play each other.
They don't share common opponents.
Witt has a win against a regionally-ranked opponent, UMU has no results vs. RROs.
Witt has the greater strength of schedule (lol).
If you're counting, that's 2-0-3 (criteria won, lost, and tied) in favor of Wittenberg. To be clear, I don't think Wittenberg should be ranked ahead of Mount Union. I'm just pointing out that models like this have their flaws, and human intervention can be good if it's well-reasoned. Sometimes, "come on, it's Mount Union" is as good a reason as any.
I would also like to point out the different priorities between the East and North RACs. If the East RAC were ranking the North teams, Lakeland would probably be ranked ahead of Millikin, and maybe Wheaton, simply because Lakeland is first in their conference and Millikin is tied for third.
Just like I pointed out with Witt/UMU, Hardin-Simmons being second in the South instead of sixth is a good thing. They're easily the second-best team in the region, even if they don't have the second-best "resume" among 1-loss South Region teams (JHU has a W v. RRO and a better SOS). Because we know committees adjust the SOS metric according to perceived schedule strength, I have to assume the only reason Berry is ahead of W&J is that Hendrix cracked the Top 10 instead of the (probably more deserving) CMU. A CMU win over CWRU could vault W&J over the Vikings next week, not that it'll matter much. Centre being so high is also odd, and probably also solely due to Hendrix cracking the Top 10, which gives Centre a W v. RRO. Both F&M and Huntingdon have better resumes.
The Top 6 here make sense, or rather, I understand what the RAC was thinking when ranking them. The last four don't make any sense to me. I don't know who is lobbying against Concordia-Moorhead, but they're doing a good job. I don't see how UWL's superior SOS overcomes an extra loss and a worse result versus a common opponent (Moorhead beat UWW). UWL also sits third in the WIAC, and the selection committee has previously said that teams who can't finish better than third in their conference, especially when the second-place team isn't regionally ranked, are viewed differently from teams who finish second.
Let's talk about that second-place WIAC team for a second. Whitewater is 1-3 against regionally-ranked opponents, but by quality of opponent, their three losses are the "second-best" in the country, behind only Westminster's. They could easily be in the rankings after next week, and they absolutely should be in ahead of Lake Forest (as should Redlands, Chapman, Platteville, and Stout). Are you kidding me, West RAC? Lake Forest? They've played one team rated better than 0.500 in my model all year, and they got their asses kicked.
I honestly don't know why this model is so bullish on Redlands. My ratings think they're still easily the best team in the SCIAC, but that shouldn't be enough to overcome their additional conference loss and worse record vs. RROs when compared to Chapman.
I just don't get what the committee is thinking. Lake Forest, second or third in the MWC, a conference inferior to the SCIAC, is ahead of the SCIAC champ and the NWC runner-up?
Oh, look, LAKE FOREST'S COACH IS ON THE COMMITTEE.