As has become something of a yearly tradition here at Variance Hammer, I’ve done some number crunching on the 2017 Las Vegas Open 40K Championships, to see what there is to see.
And this year, what there is to see is interesting. A king has been unseated, to be replaced by pirates, renegades and daemons.
The Data: The data used for this analysis is an unholy union of the tournament results found on Best Coast Pairings, last year’s LVO results (we’ll get to that in a minute), and the dump of ITC scores from this January I used for this post. The latter we assume is a reasonable stand-in for someone’s ITC score as they go into the LVO – not perfect, as there were some tournaments in very late January, but probably reasonable. I’ll get the dataset up on the Variance Hammer GitHub site soon, but that is in need of a major reorganization after the past few months.
Army Representation: The factions present at the LVO were a pretty broad swathe, with all the usual players there in force.
That’s a tiny picture, I know. By far the best represented were the Eldar, followed by Space Marines and Tau, with strong showings from Chaos Daemons, Chaos Space Marines, Cult Mechanicus, and the Dark Angels. This is pretty much the same as we saw last year. The singleton armies this year were Deathwatch and Militarum Tempestus. There are somewhat middling numbers of everything else, and it’s good to see less powerful codexes getting played, though there are clearly some tournament favorites.
Player/Tournament Scene Contributions: I’ve recently begun trying to more accurately estimate the role of a player in the performance of their army, rather than just the army itself. After all, there’s probably a wider difference between me playing Eldar and the top-ranked ITC Eldar player than there is between a top-ranked ITC Eldar player and a top-ranked ITC Chaos Daemons player.
If by probably I mean “certainly”.
I tried to get at this a little bit with my analysis of Warzone Atlanta, but was flummoxed a bit by two issues. First, because WZA had grown a lot, many of the players there hadn’t played in the previous Warzone Atlanta, which meant I couldn’t really use “How did you do at the last one?” as a measure. Second, a lot of the folks in that region don’t seem to heavily attend ITC events, which made their ITC score nigh useless.
The LVO has neither one of these problems. I’ll model them both when I look at army performance, but let’s take a look for a moment at how well your incoming ITC score correlates with your battle points at the event:
Nicely, but not perfectly. Like the “Events Attended and ITC Points” correlation, it’s a nice quadratic fit – basically, additional ITC points are worth increasingly more, and there are few really high-performing players with almost no ITC points. On the other hand, there are some very well-ranked ITC players who had a very bad weekend from a performance standpoint.
Now, let’s also control for army performance (discussed below) to parse out just the effect of player skill independent of the army they chose. First, a brief statistical aside. I decided to jointly model both player characteristics (placing at LVO 2016 and incoming ITC points) and army selection at the same time, but a fair number of people are missing an LVO 2016 placing, and a smaller number are missing ITC points. Conventionally, the solution to this is to throw out anyone with missing values (this is called “complete case analysis”). It’s bad for a number of reasons I won’t get into, so this year I’ve used something called multiple imputation to handle that missing data problem. This is why, if you just rerun the model using my data on another machine, you’re likely not going to get the same answer. /end statistical nerdery.
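For the curious, the imputation step can be sketched in a few lines. This is an illustrative toy – a crude hot-deck draw from the observed values, not the actual model or software I used, and the function names and the m=5 choice are mine:

```python
import random
import statistics

def multiple_impute(values, m=5, seed=0):
    """Build m completed copies of `values`, filling each missing entry
    (None) with a random draw from the observed values."""
    rng = random.Random(seed)
    observed = [v for v in values if v is not None]
    return [[v if v is not None else rng.choice(observed) for v in values]
            for _ in range(m)]

def pool_estimates(estimates):
    """Rubin's rules for a point estimate: average the m per-dataset
    estimates (here, just the mean of each completed dataset)."""
    return statistics.mean(estimates)

# e.g. incoming ITC points, with two players missing a score
itc = [120.0, None, 310.0, None, 450.0]
completed = multiple_impute(itc, m=5)
pooled = pool_estimates([statistics.mean(d) for d in completed])
```

Because each imputation is a random draw, rerunning with a different seed gives slightly different pooled estimates – which is exactly why rerunning the real model on another machine won’t reproduce my numbers.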
For the LVO, as they use non-integer battle points, I used a linear regression model, so rather than these modifiers being a multiplier of your score, this time they’re an addition to your score. For example, the “average” army is estimated to get a BP score of 103.95, and the modifier for the Dark Eldar is 1.68, which means a Dark Eldar army statistically can expect a BP score of (103.95+1.68) or 105.63. Positive scores are good, negative scores are bad, and if everyone had a score of zero, it would mean army selection/player skill didn’t matter.
Controlling for army selection, the relationship between placement and ITC points remains largely the same, with a linear term of -0.94 and a quadratic term of 0.03 for ITC points divided by ten. What does that mean? It means that someone with 100 ITC points is expected to score (-0.94*10) + (0.03*10^2) or 6.4 BP below the average score, while someone with 450 ITC points is expected to score (-0.94*45) + (0.03*45^2) or 18.45 BP above the average score. That’s a fairly significant portion of what determines performance.
I also looked at previous LVO performance, which showed a pretty predictable linear trend of -0.13, meaning for each place lower you finished in the 2016 40K Championships, you could be expected to earn 0.13 fewer BP. So the difference between someone who placed 10th and someone who placed 60th last year? The model predicts the 60th-ranked player will earn -0.13 * (60-10) or 6.5 fewer BP than their 10th-ranked opponent.
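Putting those coefficients into code makes the arithmetic easy to check. The function names here are mine for illustration; the coefficients are the ones quoted above:

```python
def itc_bp_delta(itc_points, linear=-0.94, quad=0.03):
    """Expected battle points relative to the average player, given
    incoming ITC points (the model uses ITC points divided by ten)."""
    x = itc_points / 10.0
    return linear * x + quad * x ** 2

def place_bp_gap(higher_place, lower_place, coef=-0.13):
    """Expected BP for the lower-placed LVO 2016 finisher relative to
    the higher-placed one: coef per place of difference."""
    return coef * (lower_place - higher_place)
```

As a sanity check, itc_bp_delta(100) comes out to roughly -6.4 and itc_bp_delta(450) to roughly 18.45, matching the worked examples above, while place_bp_gap(10, 60) gives -6.5.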
Army Performance: Now let’s consider army performance while controlling for player skill. The short version: Renegades won. Or well, kinda Renegades. A Renegades CAD along with Fateweaver, the Masque, and a Heralds Anarchic formation. We’re going to get to this later- “What does a faction at this point even mean?”
Importantly, in the Top 10 results, there were only three Eldar players overall, one each for Craftworld, Corsair and Dark Eldar, which makes me chuckle a little bit. And this is where things get interesting: This year, the Eldar were not particularly good.
That is not to say they’re bad. Oh no. But in terms of the way I’ve been modeling army performance recently – how much an army can expect to have its score modified over an “average” army based on army selection – they fared only middling well. Shall we look at them all? Below is each army’s modifier, along with a 95% confidence interval for those who care about these things:
- Chaos Daemons: 14.77 (0.56, 28.97)
- Cult Mechanicus: 0.32 (-13.32, 13.95)
- Eldar Corsairs: 25.70 (5.52, 45.88)
- Chaos Space Marines: 1.19 (-9.97, 12.36)
- Dark Angels: 14.47 (-0.38, 29.32)
- Dark Eldar: 1.68 (-16.86, 20.23)
- Deathwatch: 9.01 (-7.69, 25.72)
- Eldar: 4.68 (-5.90, 15.26)
- Grey Knights: 9.77 (-6.81, 26.35)
- Genestealer Cult: 1.73 (-18.19, 21.64)
- Harlequins: 11.93 (-3.96, 27.84)
- Imperial Guard: -4.81 (-17.72, 8.11)
- Imperial Knights: 0.54 (-11.99, 13.07)
- Khorne Daemonkin: 22.57 (4.89, 40.26)
- Militarum Tempestus: 24.79 (15.61, 33.97)
- Necrons: -6.59 (-18.16, 4.99)
- Tyranids: 1.89 (-10.14, 13.92)
- Assassins: -7.83 (-40.37, 24.70)
- Orks: -5.41 (-18.85, 10.73)
- Renegades: 19.17 (-16.02, 54.36)
- Renegade Knights: 2.89 (-13.26, 19.04)
- Skitarii: -4.06 (-8.16, 14.12)
- Space Marines: 2.98 (-8.16, 14.12)
- Sisters of Battle: -7.30 (-28.64, 14.03)
- Space Wolves: 15.55 (0.59, 30.51)
- Tau: 0.19 (-11.54, 11.93)
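One way to read that table is to ask which confidence intervals exclude zero – those are the armies the model is reasonably confident are genuinely better (or worse) than average. A quick sketch over a hand-transcribed subset of the table (the dict layout and function are mine; the numbers are from the list above):

```python
# (estimate, 95% CI low, 95% CI high), transcribed from the list above
modifiers = {
    "Eldar Corsairs": (25.70, 5.52, 45.88),
    "Militarum Tempestus": (24.79, 15.61, 33.97),
    "Khorne Daemonkin": (22.57, 4.89, 40.26),
    "Space Wolves": (15.55, 0.59, 30.51),
    "Chaos Daemons": (14.77, 0.56, 28.97),
    "Eldar": (4.68, -5.90, 15.26),
    "Tau": (0.19, -11.54, 11.93),
    "Necrons": (-6.59, -18.16, 4.99),
}

def ci_excludes_zero(mods):
    """Factions whose 95% CI lies entirely above or below zero,
    sorted from highest estimate to lowest."""
    hits = [name for name, (est, lo, hi) in mods.items() if lo > 0 or hi < 0]
    return sorted(hits, key=lambda n: mods[n][0], reverse=True)
```

Run over the full table, only five factions clear that bar – Eldar Corsairs, Militarum Tempestus, Khorne Daemonkin, Space Wolves, and Chaos Daemons. Notably, the tournament-winning Renegades do not: their tiny sample gives them an enormous interval.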
Some interesting things to note here:
Craftworld-primary Lists Weren’t Particularly Strong: While Craftworld lists were better than average, they weren’t by very much, and pretty firmly in the realm of many other books. This comes as a surprise compared to previous years, where Craftworld lists were indisputably the strongest lists there. There was evidence they were slipping when I looked at WZA’s results as well, and it’s pretty strong here. This was, in some ways, inevitable. The Craftworld Eldar have been strong for well over an entire edition now and highly overrepresented in the tournament scene, which creates a very strong selective pressure toward armies that can deal with the Eldar. And it appears they have arrived.
Interestingly, there seems to have been some flight toward more exotic Eldar lists, like Corsairs, among experienced players, which has served them well.
Chaos is Fine Now: Renegades, Chaos Daemons and Khorne Daemonkin all made strong showings, and Chaos Space Marine primary lists still dwell in the “middling fair” category. I’ll have a more detailed post on this (hopefully) soon, but the recent additions to the overall Chaos faction have done good things for them.
The Variability of the Tau: The Tau are the army parked most firmly in “middling-okay” territory in the LVO data, which is an interesting contrast to WZA where they were the strongest single faction there. To my mind, this comes down in its entirety to whether or not the Ta’unar Supremacy Suit is legal. That single decision has a huge impact on the performance of this faction, and says some very bad things about the balance of that particular unit.
Exotic Space Marines &gt; Gladius: Like the Craftworld Eldar, the Gladius has been a feature of the tournament circuit for a long time, and it seems that the meta has adapted to promote armies that can deal with it. Both the Space Wolves and Dark Angels, the cornerstones of more exotic “Deathstar” style builds, well outperformed their codex counterparts.
The Great Devourer Isn’t Delivering: Reading the Genestealer Cults codex, many commentators saw a lot of potential in their special rules. This doesn’t seem to be manifesting itself as tournament performance. It’s possible that they’re the type of list that’s “preying” on some of the generic point-and-click Craftworld lists, like Scatterbike Spam, but that this doesn’t carry them into the upper tiers. Or it’s possible that their awesome special rules aren’t enough to carry them through having somewhat middling stats overall. This remains to be seen.
The Performance of Prediction: As with WZA, I wanted to see how well these models actually predict the results of the tournament – could something like this be used for forecasting? Below is a plot of the actual results of the tournament compared to that predicted by the model:
This is a vast improvement over the Warzone Atlanta model (primarily due to being able to build in more player-based characteristics), but still far from perfect. The red line would be perfect performance – the predicted score perfectly matching the actual score for each player. This model is fairly good at predicting most results in the tournament. And while it doesn’t nail the final results, it does a decent job. The predicted winner is in the top four actual results, and the actual winner of the LVO is similarly highly ranked in the predicted model. Predicting the winner like this does somewhat create what one might call the “Nate Silver Problem” – just because your model says an outcome is less likely, its occurrence doesn’t mean the model was wrong, just like failing a 2+ armor save doesn’t mean it’s not better than a 4+.
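For those who want to put a number on “far from perfect”: the usual one-line summary of how tightly the points hug that red line is the root-mean-square error between predicted and actual battle points. A generic sketch of the metric, not the exact figure I report:

```python
import math

def rmse(predicted, actual):
    """Root-mean-square prediction error; 0 would mean every player
    sits exactly on the red y = x line."""
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual))
                     / len(predicted))
```

A perfectly calibrated model would score 0; comparing this year’s RMSE to WZA’s is one way to quantify how much the extra player-based characteristics bought us.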
There’s still room for improvement here, and I’m interested in working on a more match-based probabilistic model, but I’m decently happy with the results. There is, however, a looming problem…
What Does Faction Mean? 7th Edition has been one continuous exercise in undermining the concept of a single codex army – culminating in the new Gathering Storm books, which just toss it entirely. Many of the top armies in the tournament, including the winners, were pretty Pick-n-Mix. So is it meaningful to call a Lion’s Blade army with a Wolfstar a DA or SW army? Or an army with zombies and artillery from the Renegades list, a bunch of daemons, etc. – how truly is an “Assorted Chaos” army assigned to Renegades, Daemons, etc.? Yet something like “Chaos Daemons” is already imprecise – this could be a summoning-heavy list, or a disastrously poor Daemonette-spam army. Merging anything further, into something like “Chaos”, “Imperial”, “Eldar” etc., threatens to wash out any nuance at all.
To be honest, I’m not yet sure how to handle this. Going down to list specifics is too cumbersome for what is, in essence, a hobby project, and would still involve subjectivity in deciding where to draw the line between what makes one list different from another. For the moment, primary faction seems a workable enough compromise.
Enjoy what you read? Enjoyed that it was ad free? Both of those things are courtesy of our generous Patreon supporters. If you’d like more quantitatively driven thoughts on 40K and miniatures wargaming, and a hand in deciding what we cover, please consider joining them.
Awesome article as always, thanks!
As someone who is just starting work on his masters in stats, I appreciate all of the work that went into this. I had considered trying to calculate some of this information myself, but you just saved me hours of work.
One thing I would be interested in seeing is how player skill affects the performance of a particular army. It seems like the top armies (renegades, Daemons, etc) have larger variances in their performance. A lot of this probably has to do with unique combinations of allies, but would it be possible to see a breakdown of ITC points by faction?
Some of the variance in some of the top armies is, I suspect, a combination of there being relatively few of them and not being easy-to-play armies, so a Renegade or Daemon army in the hands of an average player (vs. an above-average player) might not be expected to perform reliably. Essentially the opposite of what I’ve often observed with the Tau, which often work as an army that’s assured to get you to the middle tables, but not much farther than that.
In terms of ITC points by faction, that’s somewhat harder to do retrospectively, but there’s a couple questions related to that I hope to be touching on in the coming year.
I wonder if there would be any value in looking at how well certain detachments and formations do. Like Tau as a primary did OK, but riptide wings did very well. Some analysis like that might be interesting.
If I can get ahold of the data for that, it would indeed be interesting
Well let’s not forget that Chaos is a bit of an RNG army. Warpstorm, instability, boons etc.
A lot of the better Chaos armies don’t have to deal with any of that- Renegades, Cabalstar, and Magnus lists are all largely free of the “random = fun” nonsense that plagues a lot of Chaos, which undoubtedly is a factor in their being more effective.
I fully embrace it. Rewarded with spawndom I usually am.
I really enjoyed this article.
One caveat to these statistics though is that armies are being counted as a given faction whether they’re running 100% in that faction or 34%. A guy running Eldar-DE-Corsair scatbike spam list that’s using its 34 percent Dark Eldar to compete under DE shouldn’t be tracked the same way as someone running pure DE.
And from what I hear Riptide Wing was *everywhere*. That faction is just bonkers off-the-charts powerful, so much that I’d say all these Eldar-Riptide and Imperials-Riptide lists need to be tracked differently.
I mean… From what I’m seeing there was a Taudar player competing for top Harlequin player and winning it.
I touch on this a little in the “What does faction even mean?” bit – 7th edition has made this particularly hard. So yes, some of this should be taken with a grain of salt, and why, for example, I referred to the Corsair/Harlequin/DE builds that did well as “More Exotic Builds” vs. Straight Craftworld armies rather than being like “Oh my god, why do all the DE players complain?”
Breaking it down further has a couple problems to it:
1. It would involve reading the lists of close to 400 players and classifying them. This is the big one – keeping up with VH given my job is already an uphill struggle.
2. There’s inevitable subjectivity in that. To use an example, is a Harlequin list that’s Harlequin primary with a crapton of Tau not a true Harlequin list? Sure. How about one with a Wraithknight, a Seer Star and a ton of Scatterbikes? A single Farseer and two minimum Scatterbikes? A single Farseer and standard bikes?
3. Even the LVO is a “small” dataset, and there’s a struggle to estimate much. Subdividing factions into much finer detail will likely overwhelm the available data.
I feel like GW is going for a little bit of a Magic: The Gathering-style gameplay with allies. Say Chaos would be black, Imperial would be white, Eldar blue. Go ahead and make a black-white-blue deck.
Green iz da best!
I was pondering that – essentially “Imperial”, “Chaos”, “Eldar” and then a collection of others.
Certainly it would be fair to divide armies amongst battle brother sets, leading (I think) to 8 distinct groups?
Imperial
Chaos
Eldar
Tyranid
Ork
Tau
Genestealer Cult
Necron
Could be missing some? Very interesting to see those results…
I took Harlies to the LVO with hopes of finishing as the top Harlequin player…. But yeah that achievement was taken by a Taudar player with hardly any Harlequins in his army at all. Come on. He clearly was just swooping in to try to take a less competitive trophy, going all-out to *not* play the faction he was competing under.
A Dark Eldar list finished ninth, but it was half Craftworld Eldar with a Skathach and serious Scatbike spam.
I feel ya though, it would be serious work to try to quantify this in the data. For now though I feel like it’s a serious caveat to the value of the data.
It would help tremendously if the ITC required lists to have 51%+ of points to be in the faction they’re competing under. All lists with no majority faction go into a “mixed faction” category. This would eliminate the worst offenders.
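The proposed rule is simple enough to state as code – this is a sketch of the suggestion, not anything the ITC actually uses, and the names are made up:

```python
def primary_faction(points_by_faction, threshold=0.51):
    """Classify a list by the faction holding at least `threshold` of
    its total points; otherwise it goes to 'Mixed Faction'."""
    total = sum(points_by_faction.values())
    for faction, points in points_by_faction.items():
        if points / total >= threshold:
            return faction
    return "Mixed Faction"
```

So an 1850-point list with a 1200-point Tau core still counts as Tau, while the 34% Dark Eldar scatbike list from the comment above lands in Mixed Faction.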
I think that’s a brilliant idea, having a 51% requirement or go into mixed faction.
Then people would just complain that 51% armies weren’t “really” representing the faction and that you needed 66%, then 66% would need to be 75%, etc.
Point taken, and you’re probably right. But I think most people would agree on 51% being a sensible baseline threshold.
I’m not strictly opposed to the idea, it just raises some issues- like, what do you call an army that doesn’t have 51% of _any_ single faction? Okay, sure, you can exclude it for Best Of awards… but then how do you list it in the tournament standings? With BCP becoming more relevant and the uploading of lists being a very common thing- as well as numbers breakdowns like Variance is doing here- I think there’s a significant push to be able to quantify and examine the lists that are making top tables, and altering things like that would make it more difficult to do so.
Not to say we shouldn’t consider the issue, but understand that there’s a lot more to it than “more pure = more better.”
What do you mean by “exotic” death star builds?
Building a Deathstar up from multiple codexes stacking USRs – in the context of the Space Marines, exploiting that the Dark Angels and Space Wolves (among others) don’t have Chapter Tactics, and so can be easily combined.
I see, I see. Thanks for the epic article.
Wow, what an awesome job. I really enjoyed all the statistical nerding out.
To your faction problem. I think the only solution is to start tracking formations or more precise points.
I’d love to see a “Tau” win list that showed your win percentage with 600 points of tau (or 1 riptide wing) being really high and reliable then dropping as you included more Tau.
The only other non-numerical measuring would be tracking detachments.
So you could say “Eldar CAD helps this much” or “Daemon CAD helps this much” and that would quantify what we all kind of feel: that Chaos Space Marines are now only doing better because it’s a Magnus CAD or Rehati War Sect.
However, like you said, this is a hobby and collating the data would be a mess. Unless there starts being a place to enter formations that make up your army in best coast pairings!
You mention genestealer cult under performing. Do they prefer a long game or a short game out of curiosity? I am just wondering if games ending because of time on turn 3 would be a problem for them (I know as a Death Guard player it is certainly a problem for me) because of the whole return to the shadows thing they do.
Also, some factions such as Eldar have nerfs to their codex in place with ITC rules that I am sure if removed would allow them to reach the top like last year. Now that everyone knows how to fight Eldar I am sure that makes it somewhat more difficult for them.
Your comment on the Taunar is spot on. I played at Warzone this last year and the unit had a huge impact. But the lists using it were not necessarily all Tau based. One had a Culexus Assassin and Fateweaver in it among other things, while another list was pure Tau, and all points in between. It did not win Warzone, but I believe 3 were in the top 8. Great article. Thanks for taking time to write it.
I finally looked up the rules for the supremacy suit after reading this article and holy shit that thing is insane. I just would not know what to do if I saw that thing… can it intercept as well?
Well you have a choice; either bow before Negan and get your head bashed in, or concede the game and enjoy the time off until the next round.
I’m pretty sure it can’t take any additional support systems so no Interceptor on him.
David, you forgot to mention you have to pass over half your army for your opponent to play the next round with haha
Are you willing and able to send the lists and the results to me Reecius? I would like to analyse the average number of wins associated with each datasheet used.
A very interesting stat that could be derived from this data set would be each unit’s plus/minus, kind of like they do for players in basketball. Basically, lists that take unit x (say riptide) win 60% of their games. Whereas, csm possessed would probably have 0%? That’s a lot of work, but would provide competitive tourney goers a good indication as to what units are most competitive/being abused.
It’d definitely be interesting to see this for at least a lot of the popular units/formations to see how good they _actually_ are, because I have a feeling a lot of people would be surprised at how poorly some datasheets are performing…
It would be interesting to see these results weighted in terms of geographic area and the number of larger ITC events in that area. Some areas have a disproportionate amount of Major events and thus players in those areas are likely to have higher ITC scores going into LVO, but may be weaker faction-specific players.