Daily Tracking the NET: now 102

Bonnies do have a quad 1 win at VCU (and also beat VCU in upstate NY) and VCU just picked up a nice win vs #18 NET Dayton. VCU also has a win over Penn State. The good news for Rutgers is that they have plenty of quality-win opportunities ahead. Some of the mid-major teams with better NETs than RU right now have to practically win out the rest of the way to avoid slipping down in the NET.

Did you look at their pile of losses? Besides the sweep of VCU, their only other Q1 or Q2 win is that one-point win vs. Akron. That's what they have to offset a collection of home losses to Canisius, Fordham, UMass, Duquesne, and St. Joe's. We have one more loss than them, but our worst loss (PSU) is far less bad than the quad 4s and no worse than any of the above. Our wins are better too, yet they are 73 and we are 86.

VCU, by the way, isn’t exactly a signature win this year either. Wins over Dayton and PSU don’t exactly offset their two Q4 losses to Norfolk and GW.
 
Baseball is different because of the fact that you are using different pitchers all the time. That doesn’t really extend well to other sports.
You’re deflecting away from my main point without really refuting it. You can win plenty of close games in any sport, and no committee comes in and says that despite having enough wins against good competition (strong enough SOS) to qualify, we’re gonna leave you out of the playoffs because your metrics aren’t up to snuff.
 
You’re deflecting away from my main point without really refuting it. You can win plenty of close games in any sport, and no committee comes in and says that despite having enough wins against good competition (strong enough SOS) to qualify, we’re gonna leave you out of the playoffs because your metrics aren’t up to snuff.
How many other sports have 362 teams and little parity or overlapping schedules between them?

NCAA football has a third as many teams and has a committee for the postseason, too.

There's really no comparison to draw vs professional sports leagues.
 
You’re deflecting away from my main point without really refuting it. You can win plenty of close games in any sport, and no committee comes in and says that despite having enough wins against good competition (strong enough SOS) to qualify, we’re gonna leave you out of the playoffs because your metrics aren’t up to snuff.
I fully agree with you that they shouldn’t do that so I’m not sure what we’re disagreeing about to be honest.
 
How many other sports have 362 teams and little parity or overlapping schedules between them?

NCAA football has a third as many teams and has a committee for the postseason, too.

There's really no comparison to draw vs professional sports leagues.
College basketball also has 32 automatic qualifiers. I’m pretty sure some of the efficiency metrics of those teams are not as good as some bubble teams who are left out.

Strength of Schedule is a powerful equalizer. No need to include efficiency metrics.
 
College basketball also has 32 automatic qualifiers. I’m pretty sure some of the efficiency metrics of those teams are not as good as some bubble teams who are left out.

Strength of Schedule is a powerful equalizer. No need to include efficiency metrics.

In concept, I agree with you completely. They should’ve tried to fix the old system rather than invent a new one with a different collection of flaws.

To be fair, RPI (the old SOS-adjusted, results-only metric) had its own set of problems - many of which would be hurting us big time if that system were still in place. Our RPI would actually be worse than our NET. Pike would've been sorely punished for our non-conference scheduling, because in that system wins against teams with terrible records bring your metrics down (no matter how much you beat them by). I'll add that this part of RPI is the biggest reason the system had to go. No credible metric should ever "punish" a team relative to other teams following a dominant win. That's like saying Rutgers became a worse team just because they played LIU.
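To put that scheduling penalty in numbers, here's a minimal sketch using the basic RPI blend (25% your winning percentage, 50% opponents' winning percentage, 25% opponents' opponents'). The real formula also weights home/road results and excludes your games from OWP; all records below are invented for illustration.

```python
def rpi(wp, owp, oowp):
    # Basic RPI blend: 25% own WP, 50% opponents' WP, 25% opponents' opponents' WP.
    return 0.25 * wp + 0.50 * owp + 0.25 * oowp

# Hypothetical team: 8-2 against opponents averaging a .600 record.
before = rpi(8 / 10, 0.600, 0.500)

# Add a 40-point win over a 2-20 (.100) team: your WP ticks up, but the
# weak opponent drags OWP down, and OWP carries double weight.
after = rpi(9 / 11, (10 * 0.600 + 0.100) / 11, 0.500)

print(f"before: {before:.4f}  after: {after:.4f}")  # the dominant win LOWERS the rating
```

The dominant win still lowers the rating, because the weak opponent's record drags down OWP, which counts twice as much as your own record.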
 
How many other sports have 362 teams and little parity or overlapping schedules between them?

NCAA football has a third as many teams and has a committee for the postseason, too.

There's really no comparison to draw vs professional sports leagues.

There's no good solution, but I hate the use of efficiency metrics in the at-large picture. Sorting tool or otherwise, style points now play a role in field selection, and I find that very problematic. Winning is a binary metric and coaching decisions should reflect that and only that. Also, I see very little correlation between how good a team is and its ability to stay focused and keep pounding away with a 20+ point lead and the clock winding down. Or on the other side, Chol knocking down a few late 3s to improve our shooting percentage a bit means nothing either. At least with the old system, once the game started, coaches and players only had to focus on winning and not how they got there.
 
In concept, I agree with you completely. They should’ve tried to fix the old system rather than invent a new one with a different collection of flaws.

To be fair, RPI (the old SOS-adjusted, results-only metric) had its own set of problems - many of which would be hurting us big time if that system were still in place. Our RPI would actually be worse than our NET. Pike would've been sorely punished for our non-conference scheduling, because in that system wins against teams with terrible records bring your metrics down (no matter how much you beat them by). I'll add that this part of RPI is the biggest reason the system had to go. No credible metric should ever "punish" a team relative to other teams following a dominant win. That's like saying Rutgers became a worse team just because they played LIU.
College hockey adjusts their RPI to remove the negative impact of a win over a weak opponent.
 
College hockey adjusts their RPI to remove the negative impact of a win over a weak opponent.

I imagine that would help a lot. Unfortunately, there's really no ideal way to account for SOS, per Chop's point. The RPI formula would've given massively more credit to a McNeese win than to a Michigan win. It's fine for it to be a little more, I guess, but the divide would be comparable to the one credited between Georgetown and Marquette, when the disparity in opponent quality is nothing close to that. The middle portion of the formula got a lot of weight, and the attempted adjustment in the last portion of the formula often failed due to the blending effect.
 
There are better ways to do “RPI” (that is, computer rating systems that only use wins and losses). The idea of a computer rating system that uses only wins and losses is good, but RPI is an old, crude formula with no statistical basis whatsoever.
 
Good to see George Mason pulled ahead of us after pounding a terrible GW team. They’ve now beaten no one with a pulse all year.
 
There are better ways to do “RPI” (that is, computer rating systems that only use wins and losses). The idea of a computer rating system that uses only wins and losses is good, but RPI is an old, crude formula with no statistical basis whatsoever.

That's probably a bit too harsh. The system had a statistical basis (albeit a flawed one) - a strong RPI always meant you performed well against teams that did well against the schedules they played. That's a consistent statistical basis - it just has a lot of holes.

RPI would be a good system in concept if there was a mathematical way to strip out the blending effect.
 
That's probably a bit too harsh. The system had a statistical basis (albeit a flawed one) - a strong RPI always meant you performed well against teams that did well against the schedules they played. That's a consistent statistical basis - it just has a lot of holes.

RPI would be a good system in concept if there was a mathematical way to strip out the blending effect.
When I say “no statistical basis” I mean that it’s not actually calibrated to anything empirical. It’s not a model. It’s just a simple formula that has some general logic behind it and produced some results that some people thought looked reasonable. In 2024 we have the ability to do better.
 
There's no good solution, but I hate the use of efficiency metrics in the at-large picture. Sorting tool or otherwise, style points now play a role in field selection, and I find that very problematic. Winning is a binary metric and coaching decisions should reflect that and only that. Also, I see very little correlation between how good a team is and its ability to stay focused and keep pounding away with a 20+ point lead and the clock winding down. Or on the other side, Chol knocking down a few late 3s to improve our shooting percentage a bit means nothing either. At least with the old system, once the game started, coaches and players only had to focus on winning and not how they got there.
The fundamental and definitional problem stems from its reliance on the economic assumption that every point has equal value. Defensively this is false when you have a big lead compared to when you are in a tight game. Time taken off the clock may be more valuable than a point. Or scoring quickly may be more valuable than scoring after 30 seconds during a comeback.

And points that change the margin from 18 to 20 are typically less valuable than those from 8 to 10, which are less valuable than those from +1 to -1.

It measures an unweighted average that gives the illusion of efficiency.

@fluoxetine, I ran out of time last season when we were going to think about a better measure. But I could certainly envision something that weighted efficiency and accounted for time of game and ultimate result.
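Purely as a thought experiment on that "weighted efficiency" idea, here's one possible sketch: weight each possession by how close the game is, so garbage-time points count less than crunch-time ones. The exponential weight and its 10-point decay constant are arbitrary assumptions, and the possession data is made up.

```python
import math

def weighted_efficiency(possessions):
    """possessions: list of (points_scored, score_margin_during_possession)."""
    num = den = 0.0
    for points, margin in possessions:
        w = math.exp(-abs(margin) / 10)  # tight game -> weight ~1, blowout -> ~0
        num += w * points
        den += w
    return 100 * num / den  # points per 100 weighted possessions

close_game = [(2, 1), (0, -1), (3, 2), (2, 0)]
garbage_time = [(3, 22), (2, 25), (3, 24)]

# Late 3s with a 20+ point margin barely move the weighted number,
# while the same possessions would swing a raw per-possession average hard.
print(weighted_efficiency(close_game))
print(weighted_efficiency(close_game + garbage_time))
```

A real measure would need a principled weight function (and the "ultimate result" term mentioned above), but this shows the general shape of the idea.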
 
The fundamental and definitional problem stems from its reliance on the economic assumption that every point has equal value. Defensively this is false when you have a big lead compared to when you are in a tight game. Time taken off the clock may be more valuable than a point. Or scoring quickly may be more valuable than scoring after 30 seconds during a comeback.

And points that change the margin from 18 to 20 are typically less valuable than those from 8 to 10, which are less valuable than those from +1 to -1.

It measures an unweighted average that gives the illusion of efficiency.

@fluoxetine, I ran out of time last season when we were going to think about a better measure. But I could certainly envision something that weighted efficiency and accounted for time of game and ultimate result.
I think the key is to have a good formula for determining strength of schedule. One that has home, road, and neutral sites factored in.

And I would think you’d want to establish the initial SOS of the season only after every team has played its first 10 games (which would be primarily OOC games).

If you have that figured out, then you only need Wins x SOS to come up with a good rating system.
 
I think the key is to have a good formula for determining strength of schedule. One that has home, road, and neutral sites factored in.

And I would think you’d want to establish the initial SOS of the season only after every team has played its first 10 games (which would be primarily OOC games).

If you have that figured out, then you only need Wins x SOS to come up with a good rating system.
It isn't this simple. Probability of winning is not a linear function of the strength difference between two teams. What this means is that SOS in W/L terms can actually be dependent on the strength of your own team.

For example, imagine you are the #3 team and you have the choice between two schedules:

Schedule A
vs #1 team
vs #2 team
vs #361 team
vs #362 team

Schedule B
vs #180 team
vs #181 team
vs #182 team
vs #183 team

On average, these are the same strength. However, realistically your chances of winning against teams #1 and #2 are probably around 50% (slightly under, but close). Your chances of beating teams #s 180-183 are maybe 90%. And let's just say your chances of beating teams #361 and 362 are essentially 100%. So your expected number of wins against schedule A is ~3 (0.5 + 0.5 + 1 + 1) and against schedule B it is ~3.6 (0.9 + 0.9 + 0.9 + 0.9). So schedule A is significantly harder!

But wait, now look at the same choice from the perspective of the #360 team. They have ~50% against #361/#362, ~10% against #s 180-183, and ~0% against #1/#2. Now, they expect 1 win against schedule A (0 + 0 + 0.5 + 0.5) but only 0.4 wins against schedule B (0.1 + 0.1 + 0.1 + 0.1). Schedule B is significantly harder!

Linear SOS measures, such as kenpom's SOS or the silly rank-averaging SOSs that the NET puts out, only make sense when looking at linear things. So kenpom's SOS is perfect... for net efficiency margin. But applying it to wins and losses doesn't work properly.


NOTE: The numbers are stylized but they give the right idea. IRL the #3 team probably has >90% against average teams and the #360 team has <10%. But this illustrates the general problem.
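The arithmetic in the example above, as a runnable sketch (same stylized probabilities):

```python
def expected_wins(win_probs):
    return sum(win_probs)

# The #3 team's (stylized) chances against each schedule:
top3_vs_A = [0.5, 0.5, 1.0, 1.0]   # vs #1, #2, #361, #362
top3_vs_B = [0.9, 0.9, 0.9, 0.9]   # vs #180-#183

# The #360 team's chances against the same schedules:
bottom_vs_A = [0.0, 0.0, 0.5, 0.5]
bottom_vs_B = [0.1, 0.1, 0.1, 0.1]

# Schedule A is harder for the #3 team (~3.0 vs ~3.6 expected wins),
# while Schedule B is harder for the #360 team (~0.4 vs ~1.0):
print(expected_wins(top3_vs_A), expected_wins(top3_vs_B))
print(expected_wins(bottom_vs_A), expected_wins(bottom_vs_B))
```

The two teams literally rank the schedules in opposite order, which is why a single schedule-strength number can't serve both of them.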
 
Strength of record and wins above bubble are basically the same concept. They use their full predictive model (basketball power index / T-Rank) to measure the strength of teams. Then they compute the record that a benchmark team would be expected to have against that schedule. (SOR, I believe, uses a "top 25 team"; I don't know whether that means the average top-25 team, i.e. roughly the #12/13 team, or the #25 team - I assume the latter. WAB uses a "bubble" team, which I also don't know is explicitly defined, but in theory it's probably somewhere around #50.) Then your SOR / WAB is the difference between your W/L record against that schedule and the expected record of the benchmark team.
 
Note the top 25 thing is from their description of the SOR for football, not basketball. This is how they define it here: https://www.espn.com/college-football/fpi/_/view/resume ("SOR:Strength of Record rank. Reflects chance that an average Top 25 team would have team's record or better, given the schedule.) It is possible they use a different benchmark for basketball but the concept would be the same.
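Under those definitions, the WAB/SOR mechanics can be sketched as follows. The logistic win-probability curve and every rating below are assumptions standing in for the real BPI / T-Rank inputs, not the actual models.

```python
import math

def win_prob(rating_diff):
    # Assumed logistic curve on an 8-point scale: a 10-point edge -> ~78% win chance.
    return 1 / (1 + math.exp(-rating_diff / 8))

def wins_above_benchmark(actual_wins, opponent_ratings, benchmark_rating):
    # Expected wins for the benchmark team against this schedule,
    # compared to the actual wins posted against it.
    expected = sum(win_prob(benchmark_rating - r) for r in opponent_ratings)
    return actual_wins - expected

# Hypothetical opponent ratings (points above an average team):
schedule = [18, 12, 5, 0, -4, -10]
bubble_rating = 8  # assumed strength of a bubble-quality benchmark team

# A team that went 4-2 against this schedule beat the benchmark's expectation:
print(round(wins_above_benchmark(4, schedule, bubble_rating), 2))  # 0.35
```

A positive number means you out-performed what the benchmark team would be expected to do against your exact schedule; a negative number means you fell short of it.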
 
It isn't this simple. Probability of winning is not a linear function of the strength difference between two teams. What this means is that SOS in W/L terms can actually be dependent on the strength of your own team.

For example, imagine you are the #3 team and you have the choice between two schedules:

Schedule A
vs #1 team
vs #2 team
vs #361 team
vs #362 team

Schedule B
vs #180 team
vs #181 team
vs #182 team
vs #183 team

On average, these are the same strength. However, realistically your chances of winning against teams #1 and #2 are probably around 50% (slightly under, but close). Your chances of beating teams #s 180-183 are maybe 90%. And let's just say your chances of beating teams #361 and 362 are essentially 100%. So your expected number of wins against schedule A is ~3 (0.5 + 0.5 + 1 + 1) and against schedule B it is ~3.6 (0.9 + 0.9 + 0.9 + 0.9). So schedule A is significantly harder!

But wait, now look at the same choice from the perspective of the #360 team. They have ~50% against #361/#362, ~10% against #s 180-183, and ~0% against #1/#2. Now, they expect 1 win against schedule A (0 + 0 + 0.5 + 0.5) but only 0.4 wins against schedule B (0.1 + 0.1 + 0.1 + 0.1). Schedule B is significantly harder!

Linear SOS measures, such as kenpom's SOS or the silly rank-averaging SOSs that the NET puts out, only make sense when looking at linear things. So kenpom's SOS is perfect... for net efficiency margin. But applying it to wins and losses doesn't work properly.


NOTE: The numbers are stylized but they give the right idea. IRL the #3 team probably has >90% against average teams and the #360 team has <10%. But this illustrates the general problem.
Flux, we agree that SOS is not linear, which is why I said a good (new) SOS formula must be created before the SOS factor is applied.

For example, you could apply a Quad system to it, so in the example you gave of Schedules A and B, the schedules would have their strengths weighted by Quad, to account for having some very tough games rather than all average-toughness games.
 
Flux, we agree that SOS is not linear, which is why I said a good (new) SOS formula must be created before the SOS factor is applied.

For example, you could apply a Quad system to it, so in the example you gave of Schedules A and B, the schedules would have their strengths weighted by Quad, to account for having some very tough games rather than all average-toughness games.
The main point there is not just that it's non-linear though, it's that the SOS is dependent on the observer. The two teams do not agree on which schedule is more difficult. This means that it's impossible to define a single number that represents the difficulty of a given schedule.
 
Strength of record and wins above bubble are basically the same concept. They use their full predictive model (basketball power index / T-Rank) to measure the strength of teams. Then they compute the record that a benchmark team would be expected to have against that schedule. (SOR, I believe, uses a "top 25 team"; I don't know whether that means the average top-25 team, i.e. roughly the #12/13 team, or the #25 team - I assume the latter. WAB uses a "bubble" team, which I also don't know is explicitly defined, but in theory it's probably somewhere around #50.) Then your SOR / WAB is the difference between your W/L record against that schedule and the expected record of the benchmark team.
Thanks, I am now incorporating SOR more in my analysis as a guiding tool. I do see there are a couple of aberrations out there now, but in general it's a good tool.
 
Thanks, I am now incorporating SOR more in my analysis as a guiding tool. I do see there are a couple of aberrations out there now, but in general it's a good tool.
Yes, to me this is kind of what the committee seems to be trying to do manually. That is, they don't really look at YOUR NET, they use it more to measure the strength of who you played when weighing your resume.
 
Yes, to me this is kind of what the committee seems to be trying to do manually. That is, they don't really look at YOUR NET, they use it more to measure the strength of who you played when weighing your resume.
Yep, definitely - SOR seems more in line with their selections than the NET. I don't think they are necessarily looking at the SOR, though it's on the team sheet. It's more of a manual process to get there, but they seem to get pretty close. RU's SOR wasn't so hot last year.
 
The main point there is not just that it's non-linear though, it's that the SOS is dependent on the observer. The two teams do not agree on which schedule is more difficult. This means that it's impossible to define a single number that represents the difficulty of a given schedule.

Nah - I think the real problem is the contradictory nature of the blend effect at the opponents' opponents level relative to direct opponents' records.

To simplify - imagine Team A is your only opponent and they play 4 games, going 3-1. One game is against UConn. That's the loss. The other games are against terrible teams with an average record of 10-14. Team A's opponents combine for a 52-44 record overall. That's garbage data being used to SOS-adjust that 3-1 record in the loop. It shouldn't be an average. Each win and loss should somehow be "adjusted" individually in the equation with respect only to that single game's outcome. There would still be imperfection, but it would be a lot better, and it's still inherently more "fair" to use a pure results-based approach than efficiency. I'd be less bothered by a McNeese win or loss getting treated too much like a UConn win or loss in the equation if it was restricted to propping up the relative value of that one game only. Flawed, yes, but it still beats an efficiency-based system that doesn't distinguish garbage time, deliberate fouling, etc.
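The Team A example in numbers (records invented to roughly match the 52-44 total above):

```python
# Team A went 3-1; its one loss was to UConn, its three wins over bad teams.
team_a_opponents = {
    "UConn": (22, 2),   # the loss came against this team
    "Bad1":  (10, 14),  # the wins came against these
    "Bad2":  (10, 14),
    "Bad3":  (10, 14),
}

# RPI-style: blend all of Team A's opponents into one average record.
wins = sum(w for w, _ in team_a_opponents.values())
losses = sum(l for _, l in team_a_opponents.values())
blended = wins / (wins + losses)
print(f"blended record: {wins}-{losses} -> {blended:.3f}")  # 52-44 -> 0.542

# Per-game: the loss came against a .917 team and the wins against .417
# teams, a very different story than the single blended number tells.
for opp, (w, l) in team_a_opponents.items():
    print(opp, round(w / (w + l), 3))
```

The blended .542 makes Team A's 3-1 look like it came against a slightly above-average slate, when in reality the quality win (for them) and the bad-team wins are completely different signals.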
 
Nah - I think the real problem is the contradictory nature of the blend effect at the opponents' opponents level relative to direct opponents' records.
To be clear, I was responding to @BillyC80's proposed outline of a system, not RPI. What I was talking about is not "the problem" with RPI, which has much more fundamental problems than that.

To simplify - imagine Team A is your only opponent and they play 4 games, going 3-1. One game is against UConn. That's the loss. The other games are against terrible teams with an average record of 10-14. Team A's opponents combine for a 52-44 record overall. That's garbage data being used to SOS-adjust that 3-1 record in the loop. It shouldn't be an average. Each win and loss should somehow be "adjusted" individually in the equation with respect only to that single game's outcome.
Yes, we definitely agree on the bolded.

There would still be imperfection, but it would be a lot better, and it's still inherently more "fair" to use a pure results-based approach than efficiency. I'd be less bothered by a McNeese win or loss getting treated too much like a UConn win or loss in the equation if it was restricted to propping up the relative value of that one game only. Flawed, yes, but it still beats an efficiency-based system that doesn't distinguish garbage time, deliberate fouling, etc.
A properly iterative (win/loss only) system should be able to distinguish between UConn and McNeese. This is again where the simplicity of the RPI comes back to bite it. But it's not a problem inherent to W/L-only systems. You could devise an efficiency-based system that only adjusts one or two levels deep like the RPI, and it would have some of the same (or analogous, anyway) issues.
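As one illustration of what a "properly iterative" win/loss-only system could look like, here's a minimal Bradley-Terry fit, which keeps propagating strength through the whole graph of results rather than stopping at two levels the way RPI does. This is a generic sketch, not a claim about any specific system.

```python
def bradley_terry(games, teams, iters=200):
    """games: list of (winner, loser) pairs. Returns {team: strength}."""
    strength = {t: 1.0 for t in teams}
    wins = {t: sum(1 for w, _ in games if w == t) for t in teams}
    for _ in range(iters):
        new = {}
        for t in teams:
            # Standard minorization-maximization update: wins divided by
            # the sum over games of 1 / (own strength + opponent strength).
            denom = 0.0
            for w, l in games:
                if t in (w, l):
                    opp = l if t == w else w
                    denom += 1 / (strength[t] + strength[opp])
            new[t] = wins[t] / denom if denom else strength[t]
        # Normalize so the strengths don't drift between iterations.
        total = sum(new.values())
        strength = {t: s * len(teams) / total for t, s in new.items()}
    return strength

# Tiny example: A beats B twice, B beats C twice, A splits with C.
games = [("A", "B"), ("A", "B"), ("B", "C"), ("B", "C"), ("A", "C"), ("C", "A")]
ratings = bradley_terry(games, ["A", "B", "C"])
print(sorted(ratings, key=ratings.get, reverse=True))  # A rated highest
```

Because each opponent's strength is itself re-estimated from its own results every pass, a win over UConn and a win over a McNeese-type team end up worth very different amounts even though only W/L data goes in.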
 
I was scanning the NET and noticed that Stonehill is 2-23 with two Q4 home wins. I know we were terrible that day, but they must have played by far their best game of the season for that to have been as competitive as it was. Derick Simpson saved us from what might have been the single worst loss in the last 100 years for RU hoops. Imagine if we come all the way back from that to make the Tourney?!
 
I was scanning the NET and noticed that Stonehill is 2-23 with two Q4 home wins. I know we were terrible that day, but they must have played by far their best game of the season for that to have been as competitive as it was. Derick Simpson saved us from what might have been the single worst loss in the last 100 years for RU hoops. Imagine if we come all the way back from that to make the Tourney?!
I mean we came back from an actual horrendous loss like that (not quite as bad, but still) + a couple other pretty bad ones to make the tourney two years ago.
 
There are better ways to do “RPI” (that is, computer rating systems that only use wins and losses). The idea of a computer rating system that uses only wins and losses is good, but RPI is an old, crude formula with no statistical basis whatsoever.
Do it!

Thanks, I am now incorporating SOR more in my analysis as a guiding tool. I do see there are a couple of aberrations out there now, but in general it's a good tool.
What is Rutgers' SOR so far this year?
 
Does SOR incorporate offensive and defensive efficiency in any way? If not then I’m in favor of using it!
Yes and no. It incorporates those for the teams that you played; that's how it determines the difficulty of your schedule. But then, given that difficulty (which is computed from your opponents' efficiency metrics), the calculation of SOR is purely based on your wins and losses.
 
Yes and no. It incorporates those for the teams that you played; that's how it determines the difficulty of your schedule. But then, given that difficulty (which is computed from your opponents' efficiency metrics), the calculation of SOR is purely based on your wins and losses.
So, if your opponent efficiently lost all their previous games by 1 point, and then you play them and beat them, does it strengthen your SOR even though you beat a team with no wins?
 
So, if your opponent efficiently lost all their previous games by 1 point, and then you play them and beat them, does it strengthen your SOR even though you beat a team with no wins?
Yes.

EDIT: I mean there's probably some limit to how efficient you can be while losing every game but I still believe the answer to the point you're making is "yes". That team will be viewed the same (as an opponent) as a team that had more wins but the same general efficiency metrics.
 