Out of curiosity, this metric shows how much each player has contributed to our NET, implying that a higher number is better, yes?
Technically it's their contribution to our SRS (simple rating system), which should be relatively close to NET, but it's not exactly NET.
Given that our NET is at the very bottom among Major 4 conference schools, what does that mean?
It is calibrated such that the average team is zero, so a positive number means better than the average Division I team and a negative number means worse. It could be recentered relative to, for example, a "bubble team" (i.e. setting the #40 or #45 team to zero), or anything really, but currently zero is set as the average D1 team.
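For what it's worth, recentering is just a subtraction. Here's a minimal sketch of what that would look like; the team names and numbers below are made up for illustration, not real ratings:

```python
# Minimal sketch of recentering a set of SRS values, which are currently
# centered on the D1 average. Team names and values are made up.
def recenter(ratings, baseline_rank):
    """Shift ratings so the team at baseline_rank (1-indexed) sits at zero."""
    ordered = sorted(ratings.values(), reverse=True)
    baseline = ordered[baseline_rank - 1]
    return {team: value - baseline for team, value in ratings.items()}

team_srs = {"Team A": 28.0, "Team B": 12.0, "Team C": 0.0, "Team D": -9.0}
print(recenter(team_srs, baseline_rank=2))
# {'Team A': 16.0, 'Team B': 0.0, 'Team C': -12.0, 'Team D': -21.0}
```

With real data you'd pass baseline_rank=40 or 45 to put a bubble team at zero; everything above it goes positive and everything below goes negative.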
I'd assume that if this analysis were done for the worst NET team, there would still be a player at the top of the list... Would that mean that player was doing well, or not?
You would look at their actual number. A player with a positive number per 40 should be doing better than the D1 average, and a player with a negative number should be doing worse. Very roughly, you can multiply a player's per-40 number by 5 to get team-level thresholds: a team made of five copies of a player with rating X would be about that good. I don't have the ratings in front of me, but roughly it should be like this (rule of thumb sketched in code after the list):
A +5 or +6 would equate to the #1 team (i.e. a team of five of those players would be approx equal to Auburn)
Roughly a +3 would be a top 25 team
Roughly a +2.4 would be a bubble team
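In code, that rule of thumb would look something like the sketch below. The thresholds are the approximate ones quoted above, not exact figures, and the function names are just mine:

```python
# Rough sketch of the "multiply by 5" rule of thumb: a player's per-40
# contribution times five is the implied rating of a team of five clones.
# Thresholds are the approximate ones quoted above, not exact values.
def implied_team_rating(per_40):
    return per_40 * 5

def rough_tier(per_40):
    if per_40 >= 5.0:
        return "roughly #1-team level"
    if per_40 >= 3.0:
        return "roughly top-25 level"
    if per_40 >= 2.4:
        return "roughly bubble level"
    if per_40 >= 0.0:
        return "above the D1 average"
    return "below the D1 average"

print(implied_team_rating(2.4), rough_tier(2.4))  # 12.0 roughly bubble level
```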
Is this metric in any way useful for comparing players between teams of significantly different NET ranking?
Yes, in theory it should be, as described above, though in practice who knows.
I actually found where the necessary data is available on Torvik's site in a json file, so whenever I get time to write code to parse that and do the calcs, I can do a ranking of this for all the players across the country, and then we can see if it makes any sense or not.
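I haven't looked at the layout of that json yet, but the parsing/ranking step would be something like this sketch. The file name and field names here are placeholders I made up, not Torvik's actual schema:

```python
import json

# Sketch of the parsing/ranking step. "players.json" and the field names
# ("name", "team", "contribution_per_40") are placeholders -- the real
# Torvik file will have its own schema that this would need to match.
with open("players.json") as f:
    players = json.load(f)

# Rank every player in the country by contribution per 40 minutes.
ranked = sorted(players, key=lambda p: p["contribution_per_40"], reverse=True)

for rank, p in enumerate(ranked[:25], start=1):
    print(f'{rank:>3}. {p["name"]} ({p["team"]}): {p["contribution_per_40"]:+.2f}')
```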