I think if you're going to try to make a value-added stat as opposed to another rate stat, you have to figure out the value of a kill, a death, and a teamkill. WAR in baseball sort of works because SABR has sat down and figured out how many runs an offense has to score and how many runs a defense has to . . . not allow? . . . to win a game in a given run environment. SABR figures that, in a 5 run per game environment, a batter needs to produce an extra 10 runs to be worth an extra win. I don't think that's the case for the draft leagues since, unlike baseball, we know exactly how many "points" we need to score to win a single round. In the case of the current league, it's 6. I don't think we can easily, with any certainty, accurately assess preventing enemy kills, primarily because we don't track rounds in which a player played and did not die. Even with kills, we can't really do what baseball WAR does, because baseball WAR ultimately constructs a run expectancy for a plate appearance and assigns every possible outcome a run value. We can't really count opportunities, so we basically have to overvalue opportunities taken to compensate for undervaluing opportunities missed.
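Just to make the shape of the thing concrete, here's roughly what I mean by a linear-weights stat, in Python. Every weight in it is a number I made up on the spot; figuring out what they should actually be is the whole problem:

[code]
# Hypothetical linear weights -- these numbers are placeholders, not
# measured values.
KILL_VALUE = 1.0        # assumed value of a kill, in round "points"
DEATH_VALUE = -1.0      # assumed cost of a death
TEAMKILL_VALUE = -2.0   # assumed cost of a teamkill
POINTS_PER_ROUND = 6    # 6 points win a round in the current league

def value_in_rounds(kills: int, deaths: int, teamkills: int) -> float:
    """Weighted event total, converted to round-equivalents (the league
    analogue of baseball's runs-per-win divisor)."""
    points = (kills * KILL_VALUE
              + deaths * DEATH_VALUE
              + teamkills * TEAMKILL_VALUE)
    return points / POINTS_PER_ROUND
[/code]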
I would guess that to set the replacement level, you would have to decide how to set a stat line for the supposed replacement player, then compare the value of the player's kills and deaths to that replacement line. Again, assessing opportunity is the problem. The closest I can come up with is deaths. That is, set the replacement player's deaths equal to those of the player you're evaluating, multiply that by the replacement K/D to get the replacement kills, and get teamkills the same way. It's not ideal, mainly because it doesn't sufficiently reward players who don't die. It may also be that just guessing at rounds played would be sufficient, especially if nobody is interested outside of the top 15 or 20 players anyway, because the only way top players miss rounds is if they miss the match entirely or their internet cuts out.
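The death-matching idea, sketched out with the same made-up weights as above and an invented replacement K/D:

[code]
# Death-matched replacement sketch. REPL_KD and REPL_TK_PER_DEATH are
# invented numbers; a real version would have to estimate them somehow.
REPL_KD = 0.8             # assumed replacement-level K/D ratio
REPL_TK_PER_DEATH = 0.05  # assumed replacement teamkills per death
POINTS_PER_ROUND = 6

def value_above_replacement(kills: int, deaths: int, teamkills: int) -> float:
    # Pin the replacement player's deaths to the evaluated player's,
    # then derive his kills and teamkills from replacement-level rates.
    repl_kills = deaths * REPL_KD
    repl_teamkills = deaths * REPL_TK_PER_DEATH
    player_points = kills - deaths - 2 * teamkills   # same weights as above
    repl_points = repl_kills - deaths - 2 * repl_teamkills
    return (player_points - repl_points) / POINTS_PER_ROUND
[/code]

Note how the death terms cancel in the subtraction, which is exactly the weakness: a player who never dies gets no extra credit from the death column.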
I also have absolutely no idea how first bloods factor in, partly because they don't play nicely in the formulas I was fiddling with, and partly because, as far as I know, we don't actually know exactly how valuable a first blood is.
A couple of leagues ago, when I had log access, I was tempted to try to calculate situational win probability. In a 5v3, how often does the team with 5 alive win, and how often the team with 3? If we had win rates for all the permutations of a draft league match, we could calculate kill win probability added. It would just be a matter of giving credit for the change in win expectancy to whoever got the kill and subtracting the same from the player who died. Of course, it would be a substitute for WAA more than WAR, and I have no idea, without trying, whether it'd be easier to generate a replacement player for that sort of thing. You'd also get a nice statistical phenomenon where the league as a whole would have a negative WPA, because it's possible to cost your own team win expectancy without any player on the other team earning any.
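The bookkeeping itself would be simple once the table existed. Something like this, with invented win rates standing in for the measured ones:

[code]
# Win rates by (killer's team alive, victim's team alive). These values
# are invented for illustration; the real ones would come from the logs.
WIN_PROB = {
    (5, 5): 0.50, (5, 4): 0.65, (5, 3): 0.80,
    (4, 5): 0.35, (4, 4): 0.50, (3, 5): 0.20,
    # ...every other reachable state would need its own measured rate
}

def kill_wpa(killer_alive: int, victim_alive: int) -> float:
    """Change in the killer's team's win probability from one kill.
    The killer is credited this amount; the victim is debited it."""
    before = WIN_PROB[(killer_alive, victim_alive)]
    after = WIN_PROB[(killer_alive, victim_alive - 1)]
    return after - before

def teamkill_wpa(team_alive: int, enemy_alive: int) -> float:
    """A teamkill debits the shooter's team without crediting anyone on
    the other side -- which is why league-wide WPA can run negative."""
    before = WIN_PROB[(team_alive, enemy_alive)]
    after = WIN_PROB[(team_alive - 1, enemy_alive)]
    return after - before  # negative; charged to the shooter
[/code]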
I ended up not doing it because it would have required more setup work than I was prepared to do. Most of the counting could be automated pretty easily, but it would still require extensive log edits to get something to feed into whatever is doing the counting. The resulting stat would still have its problems as well. It would, for example, likely punish a player who died after holding off multiple enemies while his team came back from an early deficit, compared to a player who, in the same scenario, died instantly. The benefit is that it would reward players who got kills in close rounds and situations more than players who stat-padded at the ends of rounds.
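For what it's worth, the counting half might look something like this, assuming the logs had already been edited down to one clean event per line. The "KILL"/"ROUND_END" format here is invented, and producing it is exactly the setup work I wasn't prepared to do:

[code]
from collections import defaultdict

def state_win_rates(events, team_size=5):
    """Build the empirical win-rate table from a cleaned-up event stream.
    Assumed (invented) format: "KILL <killer_team> <victim_team>" and
    "ROUND_END <winning_team>", with teams numbered 0 and 1."""
    wins = defaultdict(int)   # times team 0 won from a given state
    seen = defaultdict(int)   # times that state occurred at all
    alive = [team_size, team_size]
    states_this_round = [tuple(alive)]
    for line in events:
        parts = line.split()
        if parts[0] == "KILL":
            alive[int(parts[2])] -= 1        # victim's team loses a player
            states_this_round.append(tuple(alive))
        elif parts[0] == "ROUND_END":
            winner = int(parts[1])
            for state in states_this_round:
                seen[state] += 1
                wins[state] += (winner == 0)
            alive = [team_size, team_size]   # reset for the next round
            states_this_round = [tuple(alive)]
    return {state: wins[state] / seen[state] for state in seen}
[/code]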