ESPN FPI projects Rutgers Football's remaining 2024 schedule

Good to know; I can save myself the time of clicking on and reading it, if it downgraded our chances against VT and UCLA based on week 1.
 
Just the offenses: 10-7 at the half for a P5 team ranked in the 40s vs. a mid-tier FCS school around #200.
Of course there’s going to be a downgrade from every computer model.
All good though. FPI is just a data point. It’s not exceptionally accurate but it’s fun to see.
Also, like any good poll, it has the ability to fluctuate significantly early in the season.
 
Just the offenses: 10-7 at the half for a P5 team ranked in the 40s vs. a mid-tier FCS school around #200.
Of course there’s going to be a downgrade from every computer model.
All good though. FPI is just a data point. It’s not exceptionally accurate but it’s fun to see.
Also, like any good poll, it has the ability to fluctuate significantly early in the season.

You could say the same thing for OSU, no? Their offense-only score was 14-3 vs. Akron at halftime. It seems like the calculation values the mere classification as FBS, which is likely flawed.

I don't think FBS teams outside the top 100 (Akron, UConn, UTEP, Hawaii, FIU) would be better than middle of the pack in FCS conferences, just like Howard.
 
FYI: all 5 of those schools would be favored by 14-20 points vs. Howard if hosting the game.
Also, I highly doubt that ESPN, which has every stat imaginable at its disposal, has constructed an algorithm that treats all FCS teams the same.
 
FYI: all 5 of those schools would be favored by 14-20 points vs. Howard if hosting the game.
Also, I highly doubt that ESPN, which has every stat imaginable at its disposal, has constructed an algorithm that treats all FCS teams the same.

I don’t think this is accurate.

The bottom FBS teams rarely blow out FCS teams unless it's one like Wagner that plays like a HS team.

Taking Akron as an example: they only beat Morgan State by 3 last year. I get that head-to-head isn't a perfect indicator, but Howard beat Morgan State by 7 and finished ahead of them in the MEAC conference standings.

FIU beat an absolutely dreadful 2-win Maine team 14-12 last year.
 
I don’t think this is accurate.

The bottom FBS teams rarely blow out FCS teams unless it's one like Wagner that plays like a HS team.

Taking Akron as an example: they only beat Morgan State by 3 last year. I get that head-to-head isn't a perfect indicator, but Howard beat Morgan State by 7 and finished ahead of them in the MEAC conference standings.

FIU beat an absolutely dreadful 2-win Maine team 14-12 last year.
I would love for bettors to weigh in on this piece of the convo. I was basing the 14-to-20-point spread off Sagarin ratings, which are pretty accurate from what I've seen.
 
Going by this, 7-5. Still hits the Vegas over on wins.
Am I misreading it? It seems like 6-6 to me. It was 8-4 last week, with projected wins over Virginia Tech and Washington, which are now both projected losses. And it's barely projected to beat Wisconsin, which is clearly trending toward a loss. Not that any of it matters, really.
 
I think I should start a clickbait website and sell ads. It must be very profitable; fans eat this stuff up, apparently.

There seems to be no limit to what fans will read and take seriously during the week while waiting for the next game, regardless of quality...
 
I would love for bettors to weigh in on this piece of the convo. I was basing the 14-to-20-point spread off Sagarin ratings, which are pretty accurate from what I've seen.
I don't know how to pull up past spreads, but I wonder how much of a favorite Akron was last year at home vs. Howard. Also, I don't recall how good or bad either of those teams was supposed to be at that point.
 
Are people here suggesting that this model.. or any model.. looks at scores quarter by quarter or half by half and not just the final score? I'd be shocked if that were the case. I sense a strong human element in this "model".
 
Am I misreading it? It seems like 6-6 to me. It was 8-4 last week, with projected wins over Virginia Tech and Washington, which are now both projected losses. And it's barely projected to beat Wisconsin, which is clearly trending toward a loss. Not that any of it matters, really.
Yeah, looks like 6 more wins projected to get to 7, if I am looking at the right percentages.
 
Are people here suggesting that this model.. or any model.. looks at scores quarter by quarter or half by half and not just the final score? I'd be shocked if that were the case. I sense a strong human element in this "model".
Final score only? This isn’t 1983. I think it considers every single play in every single game.
How it weighs that and comes up with a final number is well beyond me, but no doubt there’s a computation that considers various data.
 
Final score only? This isn’t 1983. I think it considers every single play in every single game.
How it weighs that and comes up with a final number is well beyond me, but no doubt there’s a computation that considers various data.
Wow.. we are misinformed, are we? Where'd you get that crazy play-by-play notion? Maybe someone should do that.. using AI.. but that is NOT Sagarin. Really.. Sagarin is not worth your time.

METHOD:
Sagarin, like the developers of many other sports rating systems, does not divulge the exact methods behind his system. He offers two rating systems, each of which gives each team a certain number of points. One system, "Elo chess," is presumably based on the Elo rating system used internationally to rank chess players. This system uses only wins and losses with no reference to the victory margin. The other system, "Predictor," takes victory margin into account. For that system the difference in two teams' rating scores is meant to predict the margin of victory for the stronger team at a neutral venue. For both systems teams gain higher ratings within the Sagarin system by winning games against stronger opponents, factoring in such things as home-venue advantage. For the Predictor system, margin of victory (or defeat) factors in also, but a law of diminishing returns is applied. Therefore, a football team that wins a game by a margin of 7–6 is rewarded less than a team that defeats the same opponent under the same circumstances 21–7, but a team that wins a game by a margin of 35–0 receives similar ratings to a team that defeats the same opponent 70–0. This characteristic has the effect of recognizing "comfortable" victories, while limiting the reward for running up the score.
At the beginning of a season, when only a few games have been played, a Bayesian network weighted by starting rankings is used as long as there are whole groups of teams that have not played one another, but once the graph is well-connected, the weights are no longer needed. Sagarin claims that from that point, the rankings are unbiased.[8]
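For what it's worth, the diminishing-returns idea in the Predictor description above can be sketched in a few lines. This is purely illustrative: Sagarin doesn't publish his formula, so the curve shape and the 28-point cap here are my own made-up assumptions.

```python
import math

def margin_value(points_diff: int, cap: float = 28.0) -> float:
    """Hypothetical diminishing-returns curve for margin of victory.

    A 21-7 win is rewarded noticeably more than a 7-6 win, but a
    70-0 blowout is worth only slightly more than 35-0, which limits
    the reward for running up the score. NOT Sagarin's actual formula.
    """
    sign = 1 if points_diff >= 0 else -1
    return sign * cap * math.tanh(abs(points_diff) / cap)

# Compare a 1-point win, a 14-point win, and two blowouts:
for diff in (1, 14, 35, 70):
    print(diff, round(margin_value(diff), 1))
```

Any saturating curve (tanh, log, a hard cap) gives the same qualitative behavior the quoted description calls for.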
 
Some possibilities:

1. Maybe ESPN's college FPI computations take into account team injury status, which wasn't known prior to the game.

2. The QB position apparently has a big effect on FPI. An objective and critical view of AK's game against a weak opponent might have resulted in a slight negative impact. Yes, AK demonstrated more accuracy than GW. But he didn't set the CFB QB world afire with that game.

3. Giving up 4.2 yards per carry against a weak opponent might also have resulted in a slight negative impact. Our starting LBs are/were considered a strength for our D, and they are injured.
 
Are people here suggesting that this model.. or any model.. looks at scores quarter by quarter or half by half and not just the final score? I'd be shocked if that were the case. I sense a strong human element in this "model".
I don't think there's any human element in the model. There's a fair amount of information published about it. Do a web search.
 
I don't think there's any human element in the model. There's a fair amount of information published about it. Do a web search.
Didn't I just post what is in the model?

"At the beginning of a season, when only a few games have been played, a Bayesian network weighted by starting rankings is used.."

Let's see.. it is September.. EARLY September.. 1-2 games have been played. The STARTING RANKINGS have the human element.

Physician, heal thyself.
 
Didn't I just post what is in the model?

"At the beginning of a season, when only a few games have been played, a Bayesian network weighted by starting rankings is used.."

Let's see.. it is September.. EARLY September.. 1-2 games have been played. The STARTING RANKINGS have the human element.

Physician, heal thyself.
I did a web search and some reading, and none of it mentioned any human element. IIRC, the starting rankings are based on the same things as subsequent rankings, just using historical data since there's no current-season data yet.

So either what I read was incorrect or incomplete, or I missed something, or else the only human element is in the design and implementation of the algorithm.

Incidentally, I don't really care either way. It's all meaningless and academic. Playing the games is all that actually matters.
 
Didn't I just post what is in the model?

"At the beginning of a season, when only a few games have been played, a Bayesian network weighted by starting rankings is used.."

Let's see.. it is September.. EARLY September.. 1-2 games have been played. The STARTING RANKINGS have the human element.

Physician, heal thyself.
This is what is strange (and yes, you did post the model): if Rutgers did what was expected against their opponent and VT did worse than expected, then it still doesn't make sense.
 
I think I should start a clickbait website and sell ads. It must be very profitable; fans eat this stuff up, apparently.

There seems to be no limit to what fans will read and take seriously during the week while waiting for the next game, regardless of quality...
Sports is entertainment, and talking about, debating, and poo-pooing all these various aspects of sports is part of that entertainment.
 
This is what is strange (and yes, you did post the model): if Rutgers did what was expected against their opponent and VT did worse than expected, then it still doesn't make sense.
It's the starting rankings... the model makes its adjustments SLOWLY, so teams that start out high stay high. This is why the model itself basically says to ignore it until week 7 or 8. Which is why I don't understand why anyone pays attention to it until then... why? Because the media publishes it?

The human rankings are far more reactive to results, and even they have their biases toward believing that what they thought in the preseason was accurate.
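To illustrate the slow-adjustment point: one common way a preseason prior fades is to treat it as worth a fixed number of games of evidence, so its influence shrinks as real games pile up. The prior weight and the numbers here are invented for illustration; this is not ESPN's or Sagarin's actual computation.

```python
def blended_rating(prior: float, game_ratings: list, prior_weight: float = 6.0) -> float:
    """Blend a preseason prior with observed per-game ratings.

    The prior counts as `prior_weight` games' worth of evidence
    (a made-up number). With few games played, the prior dominates;
    with many games, the observed results take over.
    """
    n = len(game_ratings)
    observed = sum(game_ratings) / n if n else prior
    return (prior_weight * prior + n * observed) / (prior_weight + n)

# A team rated 20 in the preseason that plays like a 5:
after_one_game = blended_rating(20.0, [5.0])        # barely moves off 20
after_ten_games = blended_rating(20.0, [5.0] * 10)  # mostly reflects results
```

With the prior worth six games, one real game moves the rating only about two points; after ten games the results dominate, which matches the "ignore it until week 7 or 8" advice.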
 
It's the starting rankings... the model makes its adjustments SLOWLY, so teams that start out high stay high. This is why the model itself basically says to ignore it until week 7 or 8. Which is why I don't understand why anyone pays attention to it until then... why? Because the media publishes it?

The human rankings are far more reactive to results, and even they have their biases toward believing that what they thought in the preseason was accurate.
Yeah, I get it. I try not to pay attention, but it is kind of like a fun new toy that we started to follow weekly last year with the percentages.

And on the other hand, using the same thought process: I understand it would take VT dipping lower in the rankings early in the season if their preseason ranking was artificially high. But they were rated higher prior to their first game and then dropped dramatically. No matter how you cut it, there is zero reason for their chance of winning to improve. If it had stayed the same, I'd be OK with that, but increasing their odds after a bad game makes no sense anywhere.
 
I did a web search and some reading, and none of it mentioned any human element. IIRC, the starting rankings are based on the same things as subsequent rankings, just using historical data since there's no current-season data yet.

So either what I read was incorrect or incomplete, or I missed something, or else the only human element is in the design and implementation of the algorithm.

Incidentally, I don't really care either way. It's all meaningless and academic. Playing the games is all that actually matters.
It is all about how Sagarin determines the starting rankings.

"weighted by starting rankings"

How do you think they determine those starting rankings?

You think it is based on how teams finished last season? NO. You can see that it isn't: Michigan is 14.

So HOW do they do it?

HUMANS

And, as stated many times now.. even by Sagarin itself.. it means ZERO until there are enough games played. As the season goes on, the impact of those preseason, human-inspired rankings that hide behind names like "Bayesian network" becomes less and less until, they claim, it is completely gone from the numbers.

Why are people so determined to think that Sagarin, or any computational model, has any value whatsoever with so little comparative data?
 
Are people here suggesting that this model.. or any model.. looks at scores quarter by quarter or half by half and not just the final score? I'd be shocked if that were the case. I sense a strong human element in this "model".
Model is a very loose term here. This model would fail validation in any environment I have been in. It's not predictive at all; it constantly changes the forward-looking view. Back-testing is key to seeing how accurate a model is. I wonder what the accuracy looks like at year end vs. season opening.

This is more of a power index based on constantly updated results.
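On back-testing: one simple way to score a win-probability model against outcomes is a Brier score, the mean squared error between the pregame probability and the 0/1 result. The game probabilities and outcomes below are invented purely for illustration.

```python
def brier_score(probs, outcomes):
    """Mean squared error between predicted win probability and the
    0/1 outcome. Lower is better; always guessing 50% scores 0.25."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

# Hypothetical: the same three games scored with week-1 probabilities
# vs. probabilities re-run at season's end (outcomes: loss, win, win).
week1_score = brier_score([0.70, 0.55, 0.80], [0, 1, 1])
final_score = brier_score([0.40, 0.60, 0.85], [0, 1, 1])
```

Running this comparison over a full season of published FPI picks would answer the year-end vs. season-opening accuracy question directly.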
 
The original projection is the ESPN FPI, not Sagarin. @GoodOl'Rutgers

With that said, based on only a few minutes of research: while it takes into consideration multiple factors well beyond just the final score, it isn't factoring in the result of each play as I believed earlier.
 
These power rankings are only accurate after the season ends, and least accurate after week 1. That being said, the glaring setback for Rutgers was the injury list, so that must have been factored in somehow. RU had a less-than-stellar start on both offense and defense, but many teams start off slow.
 
Model is a very loose term here. This model would fail validation in any environment I have been in. It's not predictive at all; it constantly changes the forward-looking view. Back-testing is key to seeing how accurate a model is. I wonder what the accuracy looks like at year end vs. season opening.

This is more of a power index based on constantly updated results.
I see that now. My bad. I thought this was mostly about Sagarin for some dang reason.
 