
Week 7 2020 Strength of Schedule

Titanium Shadow

The table below lists the Strength of Schedule (SoS) and the Power rating from MaxPreps for each Delaware team participating in the 2020 football season.

It appears some of the teams have a higher rating mostly because they ran up scores on weak teams, not because they played a difficult schedule.

# | School | SoS | Power
1 | Salesianum (Wilmington) | 17.6 | 12.8
2 | Sussex Tech (Georgetown) | 14 | -11.6
3 | St. Georges Tech (Middletown) | 7 | 15.1
4 | Delcastle Technical (Wilmington) | 7 | -26.7
5 | Sussex Central (Georgetown) | 6.9 | 16
6 | Dover | 6.9 | 0.4
7 | Hodgson Vo-Tech (Newark) | 6.7 | 12.3
8 | Cape Henlopen (Lewes) | 5 | 1.5
9 | Mount Pleasant (Wilmington) | 4.2 | -7
10 | Smyrna | 3.8 | 28.2
11 | Appoquinimink (West Middletown) | 3.1 | 6.3
12 | Concord (Wilmington) | 0.4 | -2
13 | William Penn (New Castle) | -1.6 | 5
14 | Middletown | -1.7 | 28
15 | Milford | -2.6 | -4.4
16 | Polytech (Woodside) | -3.7 | -32.6
17 | Seaford | -4 | -32.2
18 | Caesar Rodney (Camden) | -4.6 | 10.3
19 | Lake Forest (Felton) | -4.9 | -11.2
20 | Wilmington Friends (Wilmington) | -5.5 | -15.2
21 | Caravel (Bear) | -7.2 | -7.8
22 | Wilmington Charter (Wilmington) | -8.4 | -40.9
23 | Glasgow (Newark) | -9.2 | -21
24 | St. Mark's (Wilmington) | -9.4 | 1.7
25 | Delmar | -11 | -1.5
26 | First State Military Academy (Clayton) | -11.7 | -42.6
27 | Brandywine (Wilmington) | -13.5 | -12.8
28 | Howard (Wilmington) | -14.3 | 27.3
29 | Red Lion Christian Academy (Bear) | -14.7 | -0.8
30 | Laurel | -14.7 | -2.8
31 | St. Elizabeth (Wilmington) | -16 | -26.3
32 | Woodbridge (Bridgeville) | -16.1 | 13.7
33 | Indian River (Dagsboro) | -16.4 | -17.7
34 | DuPont (Wilmington) | -17.4 | -39
35 | Newark | -18 | -4.5
36 | Archmere Academy (Claymont) | -18.9 | 7.2
37 | Conrad Science (Wilmington) | -18.9 | -17.2
38 | McKean (Wilmington) | -21.6 | -24.8
39 | Tower Hill (Wilmington) | -21.8 | -8.5
40 | Delaware Military Academy (Wilmington) | -24 | -4.2
41 | Christiana (Newark) | -26.9 | -32.3
42 | Dickinson (Wilmington) | -35.8 | -56.9
 
The MaxPreps rating system doesn't reward that. If you beat a team badly that you were projected to beat badly, there is no extra credit for it.

Here is basically how it works



RATINGS


The most important thing to understand about how our computer power ratings system works is that it is 100% objective. The size of the schools, their division, the league they're in, how good the school* or league is historically, their geographic location, how well liked the school is, how good the league is in other sports: none of these things are programmed into the system. None of them are there to bias it the way that they inevitably bias humans saddled with the daunting task of trying to figure out who belongs in post-season play. Rather, they are just a bunch of teams with a bunch of results. Cold, hard, and unfeeling, yes...but as accurate, objective and fair as is possible.

*For national football ratings only, how good the team appears to be entering the season (based on players graduated/back) is a factor in the ratings early in the year in order to improve early-season accuracy of the ratings, but this is eliminated as the season goes on. For more information on this, please click here


We will start by explaining how the ratings work when margin of victory is used as a factor. It is much easier to explain that way.

When margins are used, the difference in ratings between two teams is roughly a measure of how many points better one team is than the other. An 80 should beat a 60 by 20, etc.

Example:

Assume the following starting ratings. Don't worry about how they got to this point for now- that will be explained in a minute.

Team A's rating is 10.
Team B's rating is 0.
Team C's rating is -5.
Team D's rating is -8.
Team E's rating is -10.

The way our program works is as follows. It systematically sorts through all the results for the season (season-to-date results if we're dealing with an in-progress season). It takes each result and compares it to what "should" have happened given the ratings of the teams. It knows that if A played C, A should have handled them fairly easily. If A lost that game, or even squeaked by with a narrow victory, its rating is hurt, while C's is helped. The system keeps checking through all the results for every team. Sticking with team A, though, let's say they also played D and won by 15 (about what they should have done, so no real impact on either team's rating there), demolished team B by 22 (which definitely helps their rating), and beat E by 10 (not doing quite as well as could have been expected, so another "ding" against their rating). When all is said and done, it takes the aggregate of how much better or worse they did than expected in all their games, divides that by the number of games played, and adjusts their rating accordingly. For example, if they averaged performing two points worse than expected, their rating drops from a 10 to an 8.

(Please note: this is definitely over-simplified; it isn't quite this straightforwardly mathematical. Points aren't everything by any means: the win or the loss is always the most important thing, even when margins are used. There is a "diminishing returns" principle at play so as to not fully credit a team for blowing out a weak opponent. In addition to the cutoff point past which margins are not counted, there is a "win minimum" as well as a maximum: a number which no win is credited as being below...because, of course, a one-point win isn't just barely better than a one-point loss. Far from it.)

All teams are adjusted similarly, and then we start over from the beginning with the new ratings: A is now an 8 and expected to perform accordingly, etc. This is done repeatedly until there is no longer any movement in the ratings, and they settle in where they "should" be.
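To make that loop concrete, here is a minimal sketch of an iterative power-rating adjustment in the spirit of the description above. It is not MaxPreps' actual code: it ignores the margin cutoff, the win minimum/maximum, and the diminishing-returns rule, and it uses a simple damped update so the loop settles.

```python
# A minimal sketch of an iterative power-rating loop in the spirit of the
# description above. Illustrative only: no margin cutoff, no win
# minimum/maximum, no diminishing returns, and a damped update for stability.

def rate_teams(games, iterations=1000, tolerance=0.001):
    """games: list of (team_a, team_b, margin) where margin = a_score - b_score."""
    ratings = {t: 0.0 for g in games for t in (g[0], g[1])}  # everyone starts at 0

    for _ in range(iterations):
        surprises = {t: [] for t in ratings}
        for team_a, team_b, margin in games:
            expected = ratings[team_a] - ratings[team_b]
            surprise = margin - expected        # positive: team_a beat expectations
            surprises[team_a].append(surprise)
            surprises[team_b].append(-surprise)

        # Move each team by a damped average of how much it over/under-performed.
        biggest_move = 0.0
        for team, diffs in surprises.items():
            move = sum(diffs) / len(diffs) / 2
            ratings[team] += move
            biggest_move = max(biggest_move, abs(move))

        if biggest_move < tolerance:            # ratings have settled in
            break
    return ratings


if __name__ == "__main__":
    # Margins echo the write-up's example: A beats D by 15, B by 22, E by 10.
    season = [("A", "D", 15), ("A", "B", 22), ("A", "E", 10)]
    print(rate_teams(season))
```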
 
I've read all that, BiB, and I have also analyzed the actual output of the computer programs over the years. If a team has a much higher rating than their opponent, they are penalized for not at least meeting the ratings differential (not doing quite as well as could have been expected, another "ding" against their rating).

If you do a correlation of the Ratings to Wins, SoS, and Points difference per game from MaxPreps, the highest correlation, at 0.823, is Points difference per game, with Wins second at 0.797. Both of those are in the strong correlation range (SoS is only correlated at 0.447, outside the strong correlation range). So teams that run up the score in general, and especially run up the score against the weaker teams they play, have a higher rating than those who don't, regardless of their opponents' ratings.

"There is a "diminishing returns" principle at play so as to not fully credit a team for blowing out a weak opponent." All this means is that if a team does better than "expected" against a weaker team, for instance winning by 45 instead of the "expected" 40, that team is not rewarded in the ratings as much for those "extra" 5 points. But if a team "only wins" by 35 instead of the "expected" 40, they are penalized in the ratings for missing those "expected" 5 points.

There is some "expected" differential beyond which a team's rating will not really be affected by running up the score. From my observations, that "expected" differential seems to be between 42 and 49 points, but it could be as low as 35 or higher than 49.

Bottom line: playing weaker teams is not as big a "ding" against a team's rating as failing to "do as well as could have been expected" (i.e., not running up the score enough), up to the undefined cutoff of around 42-49 points.

Here are the correlations for all the Delaware teams:

MaxPreps Rating vs. # of Wins: correlation = 0.797
MaxPreps Rating vs. Strength of Schedule: correlation = 0.447
MaxPreps Rating vs. Points difference per game: correlation = 0.823
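For anyone who wants to reproduce this kind of check, here is a minimal sketch using the standard Pearson correlation. The team rows are placeholder values for illustration, not the full 42-team MaxPreps data set.

```python
# Sketch of the correlation check described above, using Pearson correlation.
# The team tuples are illustrative placeholders, not the real MaxPreps data.
from statistics import correlation  # Python 3.10+

# (rating, wins, sos, point_diff_per_game)
teams = [
    (28.2, 5, 3.8, 35.0),
    (13.7, 4, -16.1, 20.0),
    (-1.5, 4, -11.0, 8.0),
    (-32.2, 0, -4.0, -30.0),
]

ratings = [t[0] for t in teams]
wins = [t[1] for t in teams]
sos = [t[2] for t in teams]
ppg_diff = [t[3] for t in teams]

print("Rating vs wins:      ", round(correlation(ratings, wins), 3))
print("Rating vs SoS:       ", round(correlation(ratings, sos), 3))
print("Rating vs point diff:", round(correlation(ratings, ppg_diff), 3))
```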
 
It's still not entirely clear what the SoS numbers fully mean (probably related to the poor correlation to rankings). Archmere played three 1-loss teams in playoff contention (RLCA, St. Mark's, and DMA) and somehow has a weaker SoS than basically anyone else? LOL. Howard's also appears curiously weak (e.g. how is Glasgow's schedule clearly tougher? It's basically a trade of Appo for Caravel), but I guess that Howard didn't have to play Howard, unlike their unfortunate opponents. The same would apply to Smyrna, Woodbridge, etc. The better teams' SoS is inherently lower because their opponents have to play those teams but they don't play themselves.
 
SoS is an average of the ratings of all the teams you have played, so teams that run up their scores factor into it as well.

For instance, the SoS for Smyrna is (Dover 0.4 + Cape 1.5 + Central 16 + Sallies 12.8 + Sussex Tech -11.6) / 5 games ≈ 3.8
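Assuming SoS really is just the plain average of the opponents' power ratings (which matches the Smyrna number above), a tiny sketch of the calculation:

```python
# Tiny sketch of the SoS calculation described above: the average of the
# MaxPreps power ratings of the opponents a team has played. Ratings are the
# ones from the Week 7 table at the top of the thread.

smyrna_opponents = {
    "Dover": 0.4,
    "Cape Henlopen": 1.5,
    "Sussex Central": 16,
    "Salesianum": 12.8,
    "Sussex Tech": -11.6,
}

sos = sum(smyrna_opponents.values()) / len(smyrna_opponents)
print(f"Smyrna SoS: {sos:.1f}")   # 3.8
```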
 
SoS is an average of the ratings of all the teams you have played, so teams that run up their scores factor into it as well.

Right, but that makes my point: if you win against any team, you're making your SoS worse, and you're making your opponent's SoS better. So the expectation is that better teams should (all other things being equal, which of course they never exactly are) have a worse SoS. An undefeated team by definition could not have played another undefeated team.

Differently stated, consider a game between two teams that are rated identically, and with identical SoS before the game. After the game is done, it would appear to be an expected victory for the winning team (that part is inherent to the ratings), but it would also appear to yield a tougher schedule for the losing team because their SoS will go up, while the winning team's SoS will (relatively) go down (their schedule appears easier). Which is all fine, it just says that SoS can understate the difficulty for the better teams, since their act of winning relatively knocks their SoS down and increases the SoS of their opponents. It's inherent to the fact that SoS is adjusted after the results of a game.
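A toy illustration of that effect, using a simplified symmetric rating shift (the +4/-4 adjustment is an arbitrary placeholder, not MaxPreps' formula):

```python
# Toy illustration of the SoS effect described above: two identically rated
# teams play, the rating model shifts the winner up and the loser down by the
# same (made-up) amount, and SoS is recomputed as the average of opponents'
# ratings. The +4/-4 shift is an arbitrary placeholder, not MaxPreps' formula.

ratings = {"Winner": 10.0, "Loser": 10.0, "Other": 0.0}

# Before the game, both teams have only played "Other".
schedules = {"Winner": ["Other"], "Loser": ["Other"]}

def sos(team):
    opps = schedules[team]
    return sum(ratings[o] for o in opps) / len(opps)

print("Before:", sos("Winner"), sos("Loser"))   # 0.0  0.0

# They play each other; the winner outperforms the 0-point expectation.
schedules["Winner"].append("Loser")
schedules["Loser"].append("Winner")
ratings["Winner"] += 4.0   # placeholder adjustment
ratings["Loser"] -= 4.0

print("After: ", sos("Winner"), sos("Loser"))   # 3.0  7.0 -> loser's SoS is now higher
```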
 
Not singling out any teams here, but this is a good example of how running up the score against a weaker team impacts a team's rating more than simply winning the game does.

When Delmar played Seaford as their first game this year, they took out their starters midway through the second quarter and had 8th graders on defense when Seaford scored a touchdown late in the 4th quarter. The game was never in doubt and Delmar could have won by 7 touchdowns if they had kept their starters in, but the final score was only 27-6.

When Woodbridge played Seaford they kept their starters in through at least the third quarter (I stopped watching the game after the third quarter). They didn't do this to run up the score in my opinion, but to position themselves better for a tie breaker in the Henlopen South Championship. Woodbridge ended up winning 49 to 0.

If you look at Seaford, Delmar and Woodbridge's ratings on MaxPreps, the scores for the games they played are reflected in the current ratings.

Seaford -32.2
Delmar -1.5
Woodbridge 13.7

For an "expected" vs. actual difference of:
Delmar over Seaford by 30.7 vs. the actual 21 points (other games are pulling Delmar's rating up while this one is pulling it down).
Woodbridge over Seaford by 45.9 vs. the actual 49 points (Woodbridge is getting full credit for this game).

If Delmar had kept their starters in all game and beaten Seaford 49-0 like they could have, their MaxPreps rating would most likely be 7 or more points higher.
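A quick check of those expected-versus-actual numbers, taking the expected margin to be the simple difference in ratings (the real system layers caps and diminishing returns on top of this):

```python
# Quick check of the "expected vs. actual" margins quoted above. The expected
# margin is taken as the simple difference in MaxPreps ratings; the real system
# applies caps and diminishing returns on top of this.

ratings = {"Seaford": -32.2, "Delmar": -1.5, "Woodbridge": 13.7}

games = [
    ("Delmar", "Seaford", 27 - 6),      # actual margin 21
    ("Woodbridge", "Seaford", 49 - 0),  # actual margin 49
]

for winner, loser, actual in games:
    expected = ratings[winner] - ratings[loser]
    print(f"{winner} over {loser}: expected {expected:.1f}, actual {actual}, "
          f"surplus {actual - expected:+.1f}")
```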
 
Right, but that makes my point: if you win against any team, you're making your SoS worse, and you're making your opponent's SoS better. So the expectation is that better teams should (all other things being equal, which of course they never exactly are) have a worse SoS. An undefeated team by definition could not have played another undefeated team.

Differently stated, consider a game between two teams that are rated identically, and with identical SoS before the game. After the game is done, it would appear to be an expected victory for the winning team (that part is inherent to the ratings), but it would also appear to yield a tougher schedule for the losing team because their SoS will go up, while the winning team's SoS will (relatively) go down (their schedule appears easier). Which is all fine, it just says that SoS can understate the difficulty for the better teams, since their act of winning relatively knocks their SoS down and increases the SoS of their opponents. It's inherent to the fact that SoS is adjusted after the results of a game.
That's why, when I do ratings, I discount the games the team itself actually played. For instance, if a team plays 10 games and wins all of them, and each of their opponents ends up 9-1, I count their opponents' collective record as 90-0 (their only losses came against the team in question). If a team plays 10 games and loses all of them, and each of their opponents ends up 1-9, I count their opponents' collective record as 0-90.
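A small sketch of that "discount the games the team actually played" idea, with a made-up four-team schedule:

```python
# Sketch of computing an opponents' collective record while excluding every
# game that involves the team being evaluated. Game data is a made-up example.

# (winner, loser) for each game played
games = [
    ("A", "B"), ("A", "C"), ("B", "C"),
    ("B", "D"), ("C", "D"), ("A", "D"),
]

def opponents_record(team):
    opponents = {w if l == team else l for w, l in games if team in (w, l)}
    wins = losses = 0
    for w, l in games:
        if team in (w, l):          # skip games involving the team itself
            continue
        if w in opponents:
            wins += 1
        if l in opponents:
            losses += 1
    return wins, losses

print("A's opponents (excluding games vs. A):", opponents_record("A"))
```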
 
For those of you who don't know how the computer rankings work, it's actually fairly simple.

The computer gives each team a rating (or uses the previous week's ratings) and then checks each game the team has played by comparing both teams' ratings and adjusting the ratings of both teams until the difference between the teams' ratings matches the actual result as closely as possible. Some games will drive a team's rating up and others will drive it down. This comparison is done over and over (for many runs) for every game in the computer database (~7,200 games/week in a "normal" year) until the change in every team's rating between the last two runs is smaller than some pre-selected number.

For instance if I only have one game and the result was:
Team 1: 28 vs. Team 2: 12

Team 1's rating would be 28 and Team 2's would be 12.

If I had 2 games between Team 1 and Team 2:
Team 1: 28 vs. Team 2: 12
Team 1: 20 vs. Team 2: 18

Team 1's rating would be 24 and Team 2's would be 15, making each game the same 7-point difference between the actual and projected outcome.

Now imagine this being done 720,000 or more times in a "normal" year at the end of the season.
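A sketch of that two-game example: the best single rating difference is the average of the two margins, which reproduces the 24 and 15 figures above when the ratings are anchored to each team's average score.

```python
# Sketch of the two-game example above: find ratings for Team 1 and Team 2 so
# that the rating difference is as close as possible to both actual margins.
# With margins of 16 and 2, the best single difference is their average, 9,
# which gives the 24 vs. 15 split quoted above when anchored to average scores.

margins = [28 - 12, 20 - 18]              # 16 and 2
best_diff = sum(margins) / len(margins)   # least-squares answer: 9.0

team1 = (28 + 20) / 2                     # average of Team 1's scores = 24
team2 = team1 - best_diff                 # 15

for m in margins:
    print(f"actual margin {m}, projected {best_diff}, error {m - best_diff:+}")
print("Team 1:", team1, "Team 2:", team2)
```

Only the difference between the two ratings is determined by the margins; pinning each rating to the team's average score is just one convenient way to set the absolute level.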
 
As an aside, teams in states that have a running clock are getting penalized in the computer rankings relative to teams in states that do not have a running clock, as it's easier to run up the score and get the "expected" difference without a running clock.
 
As an aside, teams in states that have a running clock are getting penalized in the computer rankings relative to teams in states that do not have a running clock, as it's easier to run up the score and get the "expected" difference without a running clock.

Very few, if even 1%, of the teams in the country give enough of a damn about these ratings to worry about trying to meet or exceed the expected margins. Computers don't give out state championship trophies. Just saying, of course. It's fun to look at for discussion and whatnot, but it ends there. Unless you're a team trying to be nationally ranked in the top 100 or something, you don't care.
 
Very few, if even 1%, of the teams in the country give enough of a damn about these ratings to worry about trying to meet or exceed the expected margins. Computers don't give out state championship trophies. Just saying, of course. It's fun to look at for discussion and whatnot, but it ends there. Unless you're a team trying to be nationally ranked in the top 100 or something, you don't care.
Computer ratings matter to the kids, who use them for bragging rights.

Computer ratings matter to the states/organizations that use computer ratings for seedings (one example, but far from the only one:
https://www.masslive.com/highschool...raging-running-up-the-score-matt-vautour.html)

Computer ratings matter when making the argument that Delaware isn't "weak competition" to people out of state.

Plus, computer ratings are fun to talk about.

Oh, and as an interesting aside: teams in states that have a running clock are getting penalized in the computer rankings relative to teams in states that do not have a running clock, as it's easier to run up the score and get the "expected" difference without a running clock.
 
Running up the score for rankings? Move along, nothing to see here...

What, @youjustme??? You don't think teams are figuring out what their computer-expected margin of victory should be and trying to exceed it so they can gain a few decimal points in the computer rating system?

 
Coaches in Massachusetts are paying attention to their computer ratings:

As are coaches in Iowa:

And Colorado:

And in Florida:

And in Idaho:

North Carolina uses computer rankings for tie breakers in seeding:

And there are others.

As I said before, "Computer ratings matter to the states/organizations who use the computer ratings for seedings". So yes, @BackinBlack86, there are coaches who care about their computer rankings and pay attention to how to maximize them.
 
Coaches in Massachusetts are paying attention to their computer ratings:

As are coaches in Iowa:

And Colorado:

And in Florida:

And in Idaho:

North Carolina uses computer rankings for tie breakers in seeding:

And there are others.

As I said before, "Computer ratings matter to the states/organizations who use the computer ratings for seedings".

No one is running up the score to improve their rating. The amount it would improve is minuscule and not worth the complaining they would get from the other teams. Most of that is for classification, and the computer is only part of the seeding. You wouldn't even be able to figure out how much you needed to beat someone by to get a slight bump in your rating that would make any kind of difference. Well, maybe you would, lol. It's silly.
 
Computer ratings are entirely based on score differential @BackinBlack86. That is the only thing feeding into the algorithm other than wins and losses.

Teams that want to improve their rating only have two things they can do: win and run up their score differentials.

Although they can schedule other teams who do this as well to improve their Strength of Schedule.

And it is easy to figure out the "expected" differential: take your rating and subtract your opponent's rating. That is the "expected" differential. There is some limit around 35-49 points (based on my observations) where a team doesn't get any more benefit from beating a lower-ranked team by more than that, but a team is still penalized if it doesn't reach that 35-49 point differential.
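A sketch of that description, with a hard cap standing in for the diminishing-returns rule. The cap of 45 is a guess within the 35-49 range mentioned above; MaxPreps does not publish the real number or formula.

```python
# Sketch of the poster's description of "expected differential" with a hard
# cap on margin credit. The cap value of 45 is a guess within the 35-49 range
# mentioned above; MaxPreps does not publish the real number or formula.

MARGIN_CAP = 45  # assumed cutoff beyond which extra margin earns no credit

def margin_credit(my_rating, opp_rating, actual_margin):
    """Return how far a result was above/below expectation, with margins capped."""
    expected = min(my_rating - opp_rating, MARGIN_CAP)
    actual = min(actual_margin, MARGIN_CAP)
    return actual - expected   # positive helps the rating, negative hurts it

# Example: a team rated 20 beats a team rated -30 (expected diff 50, capped at 45)
print(margin_credit(20, -30, 49))  # 45 - 45 = 0  -> no extra credit for winning by 49
print(margin_credit(20, -30, 35))  # 35 - 45 = -10 -> penalized for "only" winning by 35
```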
 
Computer ratings are entirely based on score differential @BackinBlack86. That is the only thing feeding into the algorithm other than wins and losses.

Teams that want to improve their rating only have two things they can do: win and run up their score differentials.

Although they can schedule other teams who do this as well to improve their Strength of Schedule.

OK, fine. Tell me how much Delmar will have to beat Laurel by to get a ratings bump beyond what they would get simply by winning, and how much would that bump be? Knowing that margin of victory only matters in changing the rating if it's more or less than expected.
 
OK, fine. Tell me how much Delmar will have to beat Laurel by to get a ratings bump beyond what they would get simply by winning, and how much would that bump be? Knowing that margin of victory only matters in changing the rating if it's more or less than expected.
Delmar rating -1.6
Laurel rating -2.9.

So Delmar has to beat Laurel by more than 1.3 to get a ratings bump. I have no idea how much the bump would be, because I don't have a computer that is capable of running the millions of calculations with all the games played in the United States this year.

If Delaware used computer ratings for playoff seedings, all I would need to know is that the more I win by, the more my rating goes up. But again, there is some limit around 42-49 points where additional win margin won't increase a team's rating anymore.
 
Points aren't everything by any means: the win or the loss is always the most important thing, even when margins are used. There is a "diminishing returns" principle at play so as to not fully credit a team for blowing out a weak opponent. In addition to the cutoff point past which margins are not counted, there is a "win minimum" as well as a maximum: a number which no win is credited as being below.

The point I am trying to make is that no one is going to purposely run it up on a weaker team for the sake of a few rating points. It is no advantage to them, particularly considering the bad blood it creates.


Interestingly enough, the CalPreps projector has Laurel winning:

CALPREPS.COM matchup projection (neutral field): [2020] Laurel (DE) 19, [2020] Delmar (DE) 14
 
@BackinBlack86 I agree that coaches playing under systems that do not use computer ratings in any way to determine playoff seedings are not paying any attention to computer ratings. But coaches playing under a system that uses computer ratings to determine playoff seedings are paying keen attention to those ratings.

I linked many (but not all) examples of states (12% of total states, although there are more) that use computer rankings in one way or another to determine playoff seedings. Coaches working under a system that uses computer rankings to determine playoff seedings are incentivized to maximize their computer rankings for playoff purposes. It's only human nature to respond to the incentives you are given, which means that there are certainly more coaches than "no one" who are deliberately running up the score to increase their ratings.

Yes, wins count, but they have about the same impact as score differential. And again, there is some limit around 35-49 points (from my observations) where additional win margin won't functionally increase a team's rating anymore. That is the "diminishing returns" principle stated by MaxPreps, but MaxPreps won't say exactly what that number is (it's probably an asymptotic function that approaches a multiplier of 1.0 in the low to mid 40s).

If you want to understand how computer ratings work, there is an excellent paper (https://www.masseyratings.com/theory/massey97.pdf) written by Kenneth Massey of Masseyratings.com that explains the math behind them. Massey's college football ratings were used in the BCS rankings, so he knows what he is talking about.
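For anyone curious about the math, here is a minimal sketch of the least-squares idea from the linked Massey paper: each game contributes an equation "winner rating minus loser rating equals the margin", and the system is solved with the usual "ratings sum to zero" constraint. The four-team schedule and margins below are made up for illustration.

```python
# Minimal sketch of the least-squares approach described in the Massey paper
# linked above: each game says "winner rating - loser rating ~ margin", and we
# solve the resulting system with a "ratings sum to zero" constraint.
# Game data is a made-up four-team example, not real results.
import numpy as np

teams = ["A", "B", "C", "D"]
idx = {t: i for i, t in enumerate(teams)}

# (winner, loser, margin)
games = [("A", "B", 14), ("A", "C", 21), ("B", "C", 7), ("C", "D", 10), ("B", "D", 17)]

n = len(teams)
M = np.zeros((n, n))   # Massey matrix
p = np.zeros(n)        # cumulative point differentials
for w, l, margin in games:
    i, j = idx[w], idx[l]
    M[i, i] += 1
    M[j, j] += 1
    M[i, j] -= 1
    M[j, i] -= 1
    p[i] += margin
    p[j] -= margin

# M is singular (ratings are only defined up to a constant), so replace the
# last equation with "sum of ratings = 0" to pin the scale.
M[-1, :] = 1
p[-1] = 0

ratings = np.linalg.solve(M, p)
for t, r in sorted(zip(teams, ratings), key=lambda x: -x[1]):
    print(f"{t}: {r:+.1f}")
```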
 
It's all good, man. I am moving on from this; I've already spent more time than I care to on the subject.
 