Modelling a Best of Five series
With the world championship looming, one of the big goals for western teams, if they are to show that they have narrowed the gap between themselves and the Korean and Chinese squads, is to win a best of five series at a world championship. It is undeniably a harder challenge: no western team has beaten a Korean team in a best of five series since 2012. In this article we will mathematically model a series in order to try to quantify this extra difficulty.
It is often speculated that the LCS format is to blame for western teams' lack of success in the knockout stage of the world championship. Common sense would dictate that only practising best of one matches in the regular season puts the teams at a disadvantage against teams that play best of two or best of three series on a weekly basis. It's a convenient explanation for the lack of western success. Unfortunately for the critics of the LCS format, even the simplest model of the best of five format shows that, given their per-game success rate against Chinese and Korean teams at last year's Worlds, western teams simply weren't strong enough in the first place.
We're going to begin with the most basic possible model of a best of five (from here on referred to as a Bo5). To do so, we will have to make some important assumptions, some of which you may not be comfortable with. Later on, we will revisit these assumptions to take a closer look at their validity, hopefully improving our model, but only if there is sufficient factual evidence to justify adding an extra layer of complexity.
Assumption one: In one game, we can represent a team's chance of victory by a probability p, where 0 < p < 1. The probability of defeat is then 1 - p. The probabilities of the two possible outcomes, victory and defeat, sum to one. For those familiar with probability, this should be fairly straightforward.
Assumption two: The probability of victory, p, remains the same for every game in the series. This helps us to create the simplest possible model of a Bo5; later we will revisit this assumption, and I will try to convince you that it is not as bad as it may look at first glance.
With these two assumptions in place, we can now create our model. Readers who are more familiar with probability will recognise that a model of success vs failure over five independent trials is a binomial distribution, so the chance of winning the Bo5 can be calculated from a relatively simple formula. However, to better illustrate the model and be able to build upon it later, I have created a spreadsheet that works through every possible scenario in the Bo5 and calculates a probability for each outcome. This model is found on sheet one. The chance of victory, p, can be found in the top left of the sheet; any value between 0 and 1 can be entered, and new values for the result of the Bo5 will be produced. On the right there is a table of p against P(Bo5 win), and these are plotted against an arbitrary x axis on the chart below. If you download the spreadsheet, you should be able to insert your own values of p and have the other values respond accordingly.
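For readers who would rather see the formula than the spreadsheet, the binomial model can be sketched in a few lines of Python. The function name is mine, not something from the spreadsheet; the key observation is that a first-to-three series with a constant p is equivalent to asking for at least three wins in five hypothetical games.

```python
from math import comb

def bo5_win_prob(p):
    """Probability of winning a best of five under a fixed per-game
    win probability p: the binomial probability of winning at least
    3 of 5 games, which is equivalent to taking the series
    first-to-three when p never changes."""
    return sum(comb(5, k) * p**k * (1 - p)**(5 - k) for k in range(3, 6))

print(round(bo5_win_prob(0.28), 2))  # prints 0.14
```

As a sanity check, p = 0.5 gives exactly a 0.5 chance of winning the series, as symmetry demands.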
At the last world championship, western teams won 7 of the 25 games played against Korean and Chinese teams: a win rate of 28%. This isn't a bad rough figure for a generic LCS team vs generic LCK/LPL matchup. Including MSI and IEM Katowice from this year raises the win rate to 32%, largely thanks to TSM's three victories over World Elite. A p = 0.28 gives a fairly woeful 0.14 chance of winning the Bo5, so it is not wholly surprising that the LCS teams have not fared well. For a reasonable chance of winning the Bo5, they need a better chance of winning the individual games. Fortunately for the LCS squads, a small increase in p produces a large difference in P(Bo5 win): from p = 0.3 to p = 0.4, the chance of winning the series roughly doubles, from 0.16 to 0.32. For the stronger LCS sides against some of the weaker LCK/LPL teams at this year's world championship, a 40% chance in an individual game does not sound unreasonable. A 0.32 probability in a Bo5 is a realistic chance of success, around the chance of rolling a 1 or 2 on a single roll of a die. The Bo5 format does a good job of separating closely matched teams without wholly eliminating the weaker team's chances.
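The sensitivity claim above is easy to verify with a quick sweep over p using the same binomial formula as the spreadsheet (the range of p values chosen here is my own illustration):

```python
from math import comb

def bo5_win_prob(p):
    # P(at least 3 wins in 5 games) under a fixed per-game win probability p
    return sum(comb(5, k) * p**k * (1 - p)**(5 - k) for k in range(3, 6))

for p in (0.28, 0.30, 0.35, 0.40, 0.45, 0.50):
    print(f"p = {p:.2f}  ->  P(Bo5 win) = {bo5_win_prob(p):.2f}")
```

Running this shows the series probability pulling away from p below 0.5 and closing back in towards it at 0.5, which is exactly the "amplifying" property that makes a Bo5 a better discriminator than a single game.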
You may have read the previous section with a certain air of skepticism, perhaps even outright disagreement. The idea that the probability of success does not change game by game may sound ludicrous. Here you should be careful not to fall for the gambler's fallacy, which occurs a lot in this discussion, even from the casters. When a team believed to be the equal of its opponent goes two games down in a series, casters and analysts will often predict it to bring a game back: the teams have looked equal so far and have played tight games, so surely the scoreline should come to reflect the closeness of the series. A related mistake: a team predicted before the series to take at least one game will be predicted to win the next game once it is two games down. Unfortunately, the real world does not work like this. The only thing that should adjust p on a game by game basis is the teams' relative strength at adapting their strategy over a series.
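The gambler's fallacy point can be made concrete with one line of arithmetic. Under the constant-p model, a team down 0-2 must win three games in a row, so even two perfectly even teams give the trailing side only a one-in-eight comeback:

```python
def comeback_prob(p):
    """Probability that a team down 0-2 wins the series under the
    constant-p model: it must win three consecutive games."""
    return p**3

print(comeback_prob(0.5))  # prints 0.125, even for perfectly even teams
```

The pre-series expectation ("they'll take at least a game") carries no weight once the first two results are in; only p matters.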
The ability of teams to adapt over a series is often quoted as a reason for their success, and it is the fuel for criticism of the best of one LCS format. It is therefore the first refinement we should try to add to our model. Unfortunately, whilst the ability to adapt may well affect the outcome of a series, quantifying the effect and placing it in the model is a great deal more difficult. There is no sample we can take that would let us put a figure on the strength of a team's adaptability; we simply have to make an educated guess. I have created three further models, on sheets two, three and four, showing three possible ways that a team's adaptability could affect its chances of success.
Sheet two shows the simplest model: after each game, p increases by a flat amount, which I have called the adaptability number. To add a further layer of depth, you can adjust the adaptability number for each game in the series, with no dependence on the results of the games. If you think that teams adapt more at the beginning of a series, you can account for this. The model on sheet three is very similar, except that rather than p increasing or decreasing by a flat amount, it is multiplied by an adaptability factor. For a team that adapts better than its opponents, this will be greater than one; for a team that adapts worse, it will be less than one.
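Both the additive adjustment (sheet two) and the multiplicative one (sheet three) can be sketched as a single walk through the series, where p is updated after every game by a user-supplied rule. The function and the example adjustment values below are my own illustrations, not figures from the spreadsheet:

```python
def bo5_win_prob_varying(p0, adjust):
    """First-to-three series win probability when the per-game win
    probability changes between games.

    p0     : win probability in game one
    adjust : function(p, game_index) -> p for the next game
    """
    def walk(p, wins, losses):
        if wins == 3:
            return 1.0
        if losses == 3:
            return 0.0
        game = wins + losses  # number of games already played
        p_next = min(max(adjust(p, game), 0.0), 1.0)  # clamp to [0, 1]
        return p * walk(p_next, wins + 1, losses) + (1 - p) * walk(p_next, wins, losses + 1)
    return walk(p0, 0, 0)

# Sheet-two style: a flat "adaptability number" added after each game
flat = lambda p, game: p + 0.03
# Sheet-three style: a multiplicative "adaptability factor"
factor = lambda p, game: p * 1.1

print(round(bo5_win_prob_varying(0.4, lambda p, g: p), 3))  # no adaptation: prints 0.317
```

Because `adjust` also receives the game index, a schedule that front-loads adaptation (bigger adjustments in games two and three) drops straight in.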
The final model (sheet four) applies a different adaptability factor depending on the result of the previous game. Often in a series you will see a team lose because of a key champion choice made by the opponents; banning that champion in the subsequent game changes the chance of success. This follows the logic that you learn more from a loss than from a victory. Whether that is true is up for debate.
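Sheet four's result-dependent adjustment fits the same series walk, with separate update rules for a win and a loss. The specific factors below are purely illustrative:

```python
def bo5_win_prob_result_dep(p0, after_win, after_loss):
    """First-to-three series win probability when p is adjusted
    differently after a win than after a loss (sheet-four style)."""
    def walk(p, wins, losses):
        if wins == 3:
            return 1.0
        if losses == 3:
            return 0.0
        p_w = min(max(after_win(p), 0.0), 1.0)   # next-game p if this game is won
        p_l = min(max(after_loss(p), 0.0), 1.0)  # next-game p if this game is lost
        return p * walk(p_w, wins + 1, losses) + (1 - p) * walk(p_l, wins, losses + 1)
    return walk(p0, 0, 0)

# Illustrative "learn more from a loss" rule: p jumps up after a loss,
# and drifts down slightly after a win as the opponent adapts too.
series_prob = bo5_win_prob_result_dep(0.4, lambda p: p * 0.98, lambda p: p * 1.15)
print(round(series_prob, 3))
```

With identity rules for both outcomes, this reduces to the constant-p binomial result, which is a useful check that the enumeration is right.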
Bear in mind when using these models that there are many, many factors that contribute to a team's success in League of Legends. Whilst the ability of a team to adapt over the course of a series is certainly important, it is unlikely to radically affect p.