Spring review of rolling data

The current International Break has offered me the first chance I’ve had since November to take fresh stock of my spreadsheet’s rolling data, and see what interesting trends have emerged in the meantime.

First off, a quick reminder: last season I ran parallel models to test which number of gameweeks gave my spreadsheet predictions the strongest correlation with actual scores, and I established that an 8-gameweek data range was optimal. Ever since then, my spreadsheet's predictions have been based on each team's last 8 home or away games (whichever is relevant) for the upcoming gameweek.

Every team is given a weighting for attack and defence strength, based on their last 8 home and last 8 away games. The resulting ratios represent how many xG teams are expected to score or concede against a defence or attack with a rating of 1.00 (average). They are updated after each gameweek, and by charting the fluctuations in these ratios we can observe the directions in which teams are trending.
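To make the ratio idea concrete, here is a minimal Python sketch of a rolling 8-game attack rating. It is only an illustration: it assumes a simple mean of the last 8 games' xG normalised by the league average, and every number in it is invented; my actual weighting scheme involves more than this.

```python
# Hypothetical sketch of the rolling 8-game strength ratios described above.
# The real spreadsheet's weighting scheme is not shown here; this assumes a
# simple mean of each team's last 8 home (or away) xG figures, normalised
# against the league average so that 1.00 represents an average side.

from collections import deque

WINDOW = 8  # gameweeks of data per rolling window

def attack_ratio(last_xg_for, league_avg_xg):
    """Return a team's attack strength relative to a 1.00 league average."""
    window = list(last_xg_for)[-WINDOW:]          # most recent 8 games only
    return (sum(window) / len(window)) / league_avg_xg

# Illustrative (invented) numbers: xG scored in a team's last 8 home games
home_xg = deque([1.9, 2.4, 1.1, 2.0, 1.7, 2.6, 1.3, 2.2], maxlen=WINDOW)
league_avg_home_xg = 1.5                          # assumed league-wide mean

ratio = attack_ratio(home_xg, league_avg_home_xg)
print(f"home attack strength: {ratio:.2f}")       # >1.00 means above average
```

The same calculation, fed with xG conceded instead of xG scored, would yield the defence ratio.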

The best-to-worst 8-game sequences are coloured on a scale from blue to red, making it easy to see where in the season each team's best and worst xG form fell leading into gameweeks 1 to 30. For example, the most noticeable trends in the home attack strength table below are the resurgences achieved by LEI and MUN, and the declines experienced by LIV and CHE.

Appendix I

Unsurprisingly, given their well documented fall from grace, the most dramatic decline seen by any team in any category is the home soil attacking form of LIV, which is now below average for the first time in a long time.

average xG expected to score vs an average away defence

Worryingly for those heavily invested in Bamford, Raphinha, Dallas et al., the rate of decline in home attacking form for LEE is actually even more pronounced, due to their rating peaking a couple of gameweeks later than LIV's. Their impressive-looking improvement in the first half of the season should be disregarded, as it likely only shows that my initial best-guess weighting for a promoted side underestimated them.

average xG expected to score vs an average away defence

The alarming drop off in both home and away attacking form for AVL since the injury absence of Grealish is captured by my rolling data, with the talisman’s last home appearance coinciding with Villa’s peak in GW23. Watkins owners ought to be anxious for the club captain to return sooner rather than later.

average xG expected to score vs an average away defence

Trending in the opposite direction, meanwhile, are LEI, MUN and WHU (since GW20, GW17 and GW22 respectively), who all went into the International Break at the height of their home attacking powers.

average xG expected to score vs an average away defence

Evidently, the defences of teams visiting the King Power Stadium, Old Trafford and the London Stadium can expect to be thoroughly tested by these hosts, who are currently ranked 1st, 2nd and 4th respectively for home attack strength.

The home form of LEI has been something of a revelation, considering they have tended in recent seasons to be much more effective on the road. I wondered if the Foxes have had a particularly favourable sequence of home fixtures lately, but this is seemingly not the case as their last 8 were against EVE, MUN, SOU, CHE, LEE, LIV, ARS, SHU.

Kudos to Ole Solskjaer too for seemingly turning around MUN's home form, which was really quite poor indeed in the first half of the season. WHU, on the other hand, have been relatively consistent throughout the season.

Turning our attention next to the home defence strength table, the three teams that have noticeably cut down the quality of goalscoring chances they concede to visiting teams are BHA, ARS and NEW.

Appendix II

BHA have been a very frustrating team to follow this season from an xG perspective. They have failed to score in games where they created chances equating to 3+ xG, and have conceded in games where they restricted opposing teams to very few scoring opportunities.

Even so, their transformation from one of the most porous teams last season to now ranking as the second best home defence in the league, sandwiched between CHE and MCI, is nothing short of astounding.

average xG expected to be conceded vs an average away attack

Not quite so impressive, but arguably more surprising: NEW currently rank as the 6th best home defence in the league, having shown steady improvement throughout the season (see chart below). That should perhaps temper expectations somewhat with regard to likely popular GW30 captaincy selections in the form of Kane, Son and Bale.

ARS have been a lot more erratic in their home defence form, but they have improved enough lately to occupy a season high rank of 7th best.

average xG expected to be conceded vs an average away attack

Thus far, we have only focused upon teams’ home form, so let us examine the performances of teams on their travels, starting with attacking strength.

Appendix III

The trends that stand out most in the away attack strength table above are the role reversals recorded by LIV, LEI and MUN. The upward trajectories achieved by ARS and WOL are significant, and also worthy of closer examination.

Arguably the only case that can be made for not selling Salah for those who still own him is the fact that LIV remain an attacking force to be reckoned with away from Anfield. In fact, as the chart below shows, their attacking process away from home has actually gradually improved throughout the season. Only the champions elect (MCI) carry more goal threat on current form.

average xG expected to score vs an average home defence

Earlier this season, it was well understood that LEI and MUN were counter-attacking teams better served by playing away rather than at home. Well, for whatever reason (managerial design maybe, or random variance), this pattern has well and truly flip-flopped. Compare the chart below with the one shown in the home attack strength section, and you will see the home and away weightings trending in opposite directions.

average xG expected to score vs an average home defence

The drops from the season-best weightings achieved by MUN and LEI are among the three steepest experienced by any team in this department, albeit they are still ranked 8th and 9th best on current form.

ARS, on the other hand, have been rapidly improving, and are now ranked a season-high third best in this category, which contributes to my spreadsheets predicting them to be the fourth highest scoring team over the next 6 gameweeks.

average xG expected to score vs an average home defence

WOL are currently predicted to be the sixth highest scorers over the next 6 gameweeks and, on the cusp of a favourable fixture swing, it is encouraging for would-be investors to see the recent turnaround in their underlying attacking stats. The fact they approach this appealing run in better away attacking form than at any point this season bodes well for owners of Neto.

average xG expected to score vs an average defence

For the purposes of the next team under the microscope I will show the same away attack strength table again, but this time with the teams colour shaded relative to all the other teams in the league.

Appendix VII

Having already referenced the improved home defence strength of NEW as a cause for concern for those planning to captain a Spurs player in GW30, it should be pointed out here that the case for doing so is weakened further by the lowly 16th place ranking TOT occupy in the away attack strength table above. Only SHU, NEW, CRY and BUR are reckoned to pose less goal threat on the road than Mourinho’s men, and it’s not as though the trendline below offers much in the way of encouragement either.

average xG expected to score vs an average home defence

And finally, here is the table for away defence strength, and the teams drawing my attention the most, for good reason and bad, are LIV, CHE, AVL and WOL.

Appendix IV

For those contemplating investing in WOL defenders ahead of their aforementioned favourable fixture run, the picture is nowhere near as promising as it is for their attackers. It appears as though the upturn in Wolves' attacking process has come at the expense of their defensive process.

Excluding the promoted sides, WOL's declines in home and away defence performance are the worst and second-worst in the league respectively. Consequently, having started the season rated as one of the best defences, they are now ranked only 12th best at home and 11th best away.

average xG expected to be conceded vs an average attack

Apologies to the legion of you who own Martinez (41.5%) and Targett (15.1%), but only one non-promoted side has suffered a worse decline in away defence strength weighting than WOL, and that is their West Midlands neighbours.

The ‘V’ shape below is indicative of striking improvements having been made in the first half of the season, followed by a no less dramatic reversion to expectation levels slightly lower than they were pre-season.

average xG expected to be conceded vs an average home attack

On current trends, the top 3 away defences in the league are MCI, LIV and CHE.

average xG expected to be conceded vs an average home attack

Whereas MCI have been consistently good throughout the season, the appointment of Tuchel has contributed towards eradicating the disparity that used to exist between the performances of the CHE defence at home versus those away from Stamford Bridge. They are the most improved away defence in the league according to this review of rolling data.

Surprisingly, given they were last season’s runaway champions, it is LIV who are the next most improved in this department, and memories are revived of the preseason hype for Alexander-Arnold and Robertson. Now might not be the worst time to be owning either of this formerly much-vaunted duo.

Changing the lens through which we viewed the away defence strength table at the start of this section to one whereby the teams are colour shaded relative to all others, it can be seen that the next best teams in this category are LEI, TOT, WHU, ARS, MUN and BHA.

Appendix VIII

Once again, I have found this to be a useful exercise to undertake, with some genuinely surprising findings. Admittedly, this retrospective analysis of team performance is not by itself predictive of future results. To that end, these tables and charts should be cross-referenced with my model's predictions for future gameweeks, which take into account the relative strength of the opposing teams to be faced. So keep an eye out for my poker table screenshots later in the week, when they will become my latest pinned tweet.

This season has been a long old slog, but we are entering the home stretch now, so best of luck to you all on the run-in. Wishing you all a green arrow fuelled late charge to the finishing line.

Coley (aka FPL P0ker PlAyer @barCOLEYna)

If Carlsberg did FPL spreadsheets….

They would probably be like mine.

Some spreadsheets simply track each team's fixtures in gameweek order. Some incorporate fixture difficulty ratings, with the aim of identifying favourable/unfavourable sequences to assist with managers' transfer decisions, although, as Richard Kenny @InfernoSix showed, FDR is a very blunt instrument when it comes to assessing fixture difficulty. And some spreadsheets even assign predicted FPL scores to players based on a wide variety of algorithms, including RMT. Mine do many of these things and more, but probably better.

Cheat Sheet

How I Got Here

I first came across the concept of season tickers during my first proper FPL preseason. I immediately liked the concept, but as my second season progressed I grew dissatisfied with what I perceived to be the arbitrary nature of team rankings, and distrustful of how often they'd actually be updated to reflect new realities. Having finished 17,394th in my second season, I believed there was plenty of scope for improvement, and so I set out to learn what I could do with my desktop Excel program, which I'd never had any use for previously.

I went into my third season determined to fly solo and rely solely on my own formulations. With no formal background in maths whatsoever, I intuited a method of calculating the offensive and defensive strengths of each team relative to all the others. As my Excel know-how grew I had several Eureka! moments that led to my second blog (December 2015) unveiling my first prototype of a season ticker spreadsheet, based on what I would later come to understand was a crude form of xG, though I'd never heard of that concept at the time. My spreadsheets were constantly being refined throughout that season, and I nonetheless achieved a new personal best finish of 12,599th.

Unfortunately for my average overall rank though, I cut corners the following season by substituting my homegrown variety of xG with the much less labour intensive, and more readily available, Shots On Target data.  This was considered by many FPL managers at that time to be the most important metric of all for predicting attacking returns.  See One Stat To Rule Them All for example.  My complacency and laziness proved disastrous, and I did well to finish inside the top one hundred and fifty thousand (148,327) having been still outside the top two million going into Gameweek 12.

When I first met David Wardale @DavvaWavva during the summer of 2017 to be interviewed for his excellent book about fantasy football – Wasting Your Wildcard – I was feeling very bullish about my fifth season prospects.  This wasn’t the same blind optimism most players of FPL experience before the first bunting of red flags or first quiver of red arrows.  No, my optimism was based entirely on my discovery of a reliable source of xG data.  I’d done tonnes of research into the many different models used to calculate xG, and had settled on a source I trusted to take my spreadsheets to the next level.

In the course of that research into xG models, I'd been very pleased to learn that my homegrown method was almost identical to that now used by many professional sports bettors. This was also when I learned of a concept called the Poisson distribution, which came to revolutionise my understanding of the most likely correct scores based on my spreadsheet calculations, but more on that fishy-sounding concept later.

Where I Am

When deciding where to ‘invest’ their money professional sports bettors use websites like FiveThirtyEight and ClubElo.  These were reckoned to be amongst the very best last season by alex b @fussbALEXperte who, apart from writing excellent articles about football and psychology, also measures the quality of football predictions.  After taking a close look at the methods and principles used by FiveThirtyEight and ClubElo I was pleased to see that they are essentially the same as mine.  Coincidentally, the latter was also credited by the aforementioned Richard Kenny as offering a more reliable barometer of teams’ attacking and defending strengths than FDR.

As far as I am aware though, these sites focus on upcoming matches only, which makes perfect sense given that most of the bettors they cater for have little interest in betting on the result of league matches several weeks away.  After all, there are a multitude of variables that can change teams’ future prospects in the meantime, e.g., injuries, morale, suspensions, transfers, etc.

Catering for FPL managers is a different ball game, however, as they must plan ahead to be successful. They only have two wildcards per season and one free transfer per week, so my spreadsheets are just as focused on long-term projections about teams' forthcoming fixture runs as they are on more immediate short-term predictions.

What I Do

After each gameweek I carefully enter the expected goals scored and conceded values reported by my preferred source, for all of the matches in the latest round of fixtures, into pre-prepared cells on my spreadsheets.  Consistently applied adjustments to these values are made in certain circumstances, e.g., red cards, unfairly disallowed goals, missed penalty kicks resulting in follow up shots, etc.

These are then systematically weighted to give every team individual values for strength in attack and defence, for both home and away.

The rationale for distinguishing between home and away form is simply that many teams approach away games differently to when playing at home, often changing their formations and personnel in the process.

The sophisticated part comes next, when my algorithms model what my subsequent spreadsheets will look like if, and admittedly it is a big if, my spreadsheets’ current predictions correlate precisely with actual outcomes.  Clearly, this is never going to happen, but I do find these dynamic xG projections to be more reliable than static ones that assume the status quo will remain relevant.  In effect, my sheets anticipate the cascading effects of predicted new data being added, and redundant data being subtracted.
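A toy version of that cascading projection, again assuming the same simplified mean-based ratio and entirely invented figures (the hypothetical opponent here is assumed to have a 0.90 defence rating), might look like this:

```python
# Hedged sketch of the 'dynamic projection' idea: assume the model's own
# predictions come true, feed them back into the rolling 8-game window,
# and watch the ratios cascade forward. The mean-based weighting and all
# numbers are invented for illustration.

WINDOW = 8

def project(window, opponent_defence, league_avg, steps):
    """Roll the attack window forward, one predicted gameweek at a time."""
    window = list(window)
    ratios = []
    for _ in range(steps):
        ratio = (sum(window[-WINDOW:]) / WINDOW) / league_avg
        ratios.append(round(ratio, 3))
        # predicted xG next fixture = own ratio x opponent defence x league avg
        predicted_xg = ratio * opponent_defence * league_avg
        window.append(predicted_xg)      # new predicted data added...
        window.pop(0)                    # ...redundant oldest data subtracted
    return ratios

recent_xg = [1.9, 2.4, 1.1, 2.0, 1.7, 2.6, 1.3, 2.2]
print(project(recent_xg, opponent_defence=0.9, league_avg=1.5, steps=3))
```

Each pass appends the predicted xG and drops the oldest game, so the projected ratios drift as the window rolls forward instead of assuming the status quo.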

xG GW25-30

The analogy I use is modern weather forecasts, and how they are based on computer simulations that evolve the state of the atmosphere forward in time using an understanding of physics and fluid dynamics.  They attempt to predict what the weather will be in the future, not what it is now.

Extrapolating from the values my spreadsheet assigns to every team allows me to do many things, including sorting teams by the predicted number of expected goals to be scored and/or conceded over any range of future gameweeks desired. It also enables me to produce weekly correct score predictions.
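For readers curious how expected goals can be translated into correct score probabilities, the standard approach (used by most public models; I am not claiming my spreadsheet does exactly this) is to treat each team's goals as an independent Poisson variable. A sketch with invented xG values:

```python
# Standard independent-Poisson sketch for turning two expected-goals
# figures into a grid of correct-score probabilities. The xG inputs
# below are invented for illustration.

from math import exp, factorial

def poisson(k, lam):
    """P(exactly k goals) for a team expected to score lam goals."""
    return lam ** k * exp(-lam) / factorial(k)

def score_matrix(home_xg, away_xg, max_goals=5):
    """P(home scores h, away scores a) for all scorelines up to max_goals."""
    return {(h, a): poisson(h, home_xg) * poisson(a, away_xg)
            for h in range(max_goals + 1) for a in range(max_goals + 1)}

probs = score_matrix(1.6, 1.1)        # assumed xG for each side
best = max(probs, key=probs.get)      # most likely single scoreline
print(best, round(probs[best], 4))
```

With these inputs the 1-1 scoreline comes out as the single most likely result, even though it carries well under a 15% probability, which is precisely why correct score predictions are so hard to land.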

An important distinction to make here is that what my spreadsheets predict is 'expected goals', not actual goals. As the match stats shown during Match Of The Day post-match interviews with club managers regularly demonstrate, the two often don't correspond with each other.

Before coming to the vexed question of what the point of generating theoretical goals is if they can differ markedly from goals scored in reality, I should add that over the course of a season there’s usually very little to separate the total number of goals scored in the Premier League from those expected by the xG model I use.  Last season, for instance, total expected goals for all teams only exceeded actual goals by 46, which works out as an average of under 0.1 of a goal per match per team.

Why xG? 

Have you ever watched a game of football and seen the team with the biggest chances to score goals lose?  Then there’s your answer.  Before I started playing FPL, unfortunate events like calamitous errors by individual players, poor refereeing decisions, and unlucky deflections, were all just grist to the opinion mill.  It was only when I started recording match results onto my spreadsheets, and saw the distorting effect such simplistic data had on my team rating equations, that I came to realise all goals ought not to be accorded equal significance.

After 7 matches last season, Crystal Palace had yet to score any points or goals.  By the standard measure of actual goals they were destined to drop down into the Championship.  Clearly, they were a team who couldn't score goals, and on that basis, wouldn't score the goals needed to avoid relegation.  The Expected Goals model, however, told a different story.  One in which they were cast as merely the unlucky victim of variance who had experienced the unfairest results up until that point.  And the villain of the piece was our old friend 'standard deviation'.

Even before I came across xG on my fantasy football travels, my homegrown variety had me start Leicester’s title-winning season with Vardy and Albrighton in my FPL squad (alas not Mahrez), at a time when they were very unfashionable picks.  My DIY xG spreadsheet had identified Leicester as a vastly underestimated team judging from performances during their ‘great escape’ the season before.  I remember well my increasing exasperation with radio and television pundits alike midway through that season as they all declared relegation for the Foxes a foregone conclusion.

These, and countless other examples besides, have led me to confidently conclude that xG gives a more accurate picture of the attacking and defending strengths and weaknesses of teams than do the Goals For and Goals Against columns in a league table.

Predictably enough though, there are still large swathes of the FPL community yet to embrace this revolutionary way of understanding football matches.  My Twitter feed is still littered with uninformed commentary and misguided sarcasm at the expense of the xG approach, from some of the most followed accounts.

To my mind though, these snipers and swipers are akin to 'flat earthers' denying the earth is round.  I expect Tony Bloom and Matthew Benham wouldn't have any sympathy for these conspiracy theorists either, given that they were able to buy football clubs (Brighton & Hove Albion, and FC Midtjylland & Brentford respectively) with the proceeds from their xG-based sports betting operations.

What’s The Plan? 

I’ve been refining and perfecting my spreadsheets for several years now, and they’ve helped me to sub-twenty thousand finishes in three out of the last four years, the last of which was a decent 4,733rd.

What I’ve enjoyed most about letting my xG spreadsheets govern my decisions is that they often promote going against the grain of template teams, against the flow of groupthink, and against the tide of crowd wisdom.  And yet, the maverick moves I’ve made have generally kept me ahead of the curve.

The time and effort I put into my spreadsheets has sometimes been hinted at in screenshots I’ve shared on social media posts, but the time has now come for me to make them available to other FPL managers.

The elusive nature of form in football, and the sensitivity of my algorithms, means that fluctuations in my player and team ratings are inevitable, but if my spreadsheets perform well, these should be gradual rather than volatile.

My spreadsheets will not eradicate difficult decisions regarding captaincy and transfers, however, and should be used alongside managers’ judgement and knowledge, not instead of them.  Getting the most from my sheets will depend greatly on synthesising them with managers’ instincts.  The onus will still be on managers to make adjustments and allowances for events like key injuries (and returns from injury), suspensions, transfers, etc., the significance of which cannot be immediately captured by my spreadsheets.

Reality check 

The most common scoreline in the Premier League last season was one-one, which happened just under twelve percent of the time (11.84%).  The next most common score was one-nil (11.58%).  And then there were just as many nil-nil draws as there were two-one wins (8.42%), meaning that in just under a third of all matches (31.84%) neither team scored more than a single goal.


Using a Poisson distribution applet, I was able to calculate that the highest probability a 1-1 scoreline can ever have is 13.53%.  That's longer than 6/1 in fractions, but shorter than 13/2.  Accepting any odds of 6/1 or less (11/2, 5/1, 9/2, etc.) for a 1-1 score draw, therefore, would mark you out as a 'mug punter'.
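That 13.53% ceiling is easy to verify without an applet: under Poisson scoring, P(1-1) is the product of two x·e^(-x) terms, and each of those peaks when the team's expected goals equal exactly 1.

```python
# Verifying the 13.53% ceiling: P(1-1) under independent Poisson scoring
# is (h * e^-h) * (a * e^-a), and x * e^-x is maximised at x = 1,
# so a 1-1 scoreline can never be more likely than e^-2.

from math import exp

def p_one_one(home_xg, away_xg):
    """Probability of a 1-1 draw given each side's expected goals."""
    return home_xg * exp(-home_xg) * away_xg * exp(-away_xg)

ceiling = p_one_one(1.0, 1.0)   # both teams expected to score exactly 1
print(f"{ceiling:.2%}")
```

Any other pair of expected-goals values for the two teams yields a lower 1-1 probability.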

The implications for successfully predicting scorelines are considerable.  Even if we found 10 matches on a football coupon that all carried the highest possible probability of a 1-1 scoreline, the chances of getting at least 3 of the 10 correct would never exceed 29.75%.

In other words, don't be calling my spreadsheets out if they only get one or two score forecasts correct each gameweek, because in reality, achieving 3 out of 10 with any more regularity than once every three gameweeks is against the odds.

As for correctly predicting all the results of the free-to-play SkySports Super 6, you will never have higher than a 0.00061% chance of winning the jackpot.  That's a one in 162,752 chance (162,751/1), in the best case scenario.  Little wonder then that I've never won it!
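That best-case figure follows directly from the 1-1 ceiling: six independent matches, each at the maximum possible scoreline probability of e^-2, multiply out to e^-12.

```python
# Checking the Super 6 best-case jackpot chance quoted above: six matches
# each at the maximum 1-1 probability of e^-2 gives e^-12 overall.

from math import exp

best_case_per_match = exp(-2)        # the 13.53% ceiling per scoreline
jackpot = best_case_per_match ** 6   # all six correct scores needed
print(f"{jackpot:.5%}")
```

That works out at odds of a little under 163,000/1 even in the best case.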

If you were in any doubt about the random nature of much of what happens on a football pitch, then these startling odds should help you better understand the enormity of the task faced by those trying to provide accurate predictions.

If you understand that predicting scorelines is difficult, then you will realise that forecasting who will be doing the scoring and assisting is an even more unpredictable business.  Like weather presenters assuming tomorrow's weather will correspond with yesterday's, much of what passes for FPL punditry too often presents evidence of what has happened in recent gameweeks as incontrovertible proof of what will happen in future gameweeks.  In my experience though, such simplistic thinking in FPL rarely works out how we expect it to.

Health warning 

Finally, I should warn you about the dangers of dependency.  Use of spreadsheets can become seriously addictive.  Never binge drink on them as they can really go to your head.  And they might give you dutch courage to make maverick moves that leave you with a really bad overall rank hangover.

Please drink responsibly. 


Coley aka FPL Poker Player @barCOLEYna





Challengers come and go, but my spreadsheets remain undefeated!

It’s five weeks since my last blog If Carlsberg Did FPL Spreadsheets…. in which I announced my pioneering spreadsheets, and invited interested FPL managers to test-fly them in the following 4 week period (GW5-8).  I have said all along my sheets would need the first 8 gameweeks to be fully up to speed, so now seems a good time to bring you all up to date with how things have gone.


Thanks to my test-pilots, anomalies were identified, teething problems fixed, and tips received that ensured smoother functionality of sortable table columns, as well as a number of other breakthroughs and improvements.

My initial plan to supply my test-pilots with player ratings, however, was abandoned after our first mission.  My belief that trying to predict individual player points is a fool's errand was only deepened by what unfolded in GW5.  I am firmly of the view that averaging out aggregate player xG/xA values, short-term or long, fails to recognise that player points tend to polarise between high and low scores.  So I returned to my original position that the best approach is to advertise teams with the best combined form and fixtures, and leave it to individual managers to determine the best-fit transfers for their teams thereafter.

That said, I have looked back at the player table I provided my test-pilots with prior to GW5 and, with the benefit of hindsight, it looks pretty damn good!

xFPL P DEF FWD

If we ignore Bryan and Davies, who only had one start between them, 6 of the 7 other defenders highlighted (Doherty, Laporte, Trippier, Monreal, Robertson and Alonso) were amongst the top two dozen highest scoring defenders (1st, 4th, 6th, 14th, 21st and 24th respectively) during this period.

Obviously, the standout pick here was Doherty who, remember, had averaged only 2 points per game at that point.  Bringing him in on my GW6 wildcard was one of my better moves, and coincided with my ranking improving from over 233K to under 53K in the space of 2 weeks.

Something else I did during the trial period was to compare and contrast my spreadsheet's predictions with those of a well-known predictive fantasy football algorithm, marketed as the 'world's most powerful'.  Small sample size notwithstanding, my algorithm outperformed the paywalled one in all but one respect.

My spreadsheet correctly forecast more team goals, more match scorelines, and the number of goals scored more accurately, with a significantly better correlation between predicted and actual goals as measured by Mean Absolute Error.  Odd then that it predicted fewer correct results.

The main selling point of my spreadsheets, however, is the way they are able to anticipate patterns and trends before they’ve even emerged by projecting forwards the long-term implications of short-term predictions.


One of the things I’ve enjoyed most about tweeting screenshots of my spreadsheets is the way that they are like a red rag to a bull for some members of our FPL community who cannot help but contest some of the more leftfield predictions that my sheets make.

Going against the grain

I should confess here that my somewhat contrarian nature is well suited to my sheets.  I like that they regularly highlight maverick moves, because that is where most differential value is to be found.  I do understand though that most FPL managers don’t have the requisite nerve for going against the crowd quite as often as I do.  Maybe that’s the poker player in me.

By the way, it does amuse me when my sheets are objected to for not being in line with received wisdom and/or bookies odds.  After all, what would be the point of going to all the trouble of generating predictions if they merely mirrored other widely available resources?

Mainly though, my enjoyment of the challenges to my spreadsheets stems from the fact that they usually backfire on the doubters and knockers.  For example, before GW5 my sheets were challenged for singling out CPL as the clean sheet banker of that gameweek.

AWB 62% CS - Crop

Admittedly, an entry error subsequently came to light that scaled down their clean sheet probability from 62% to 55%.  Even so, in bookmakers' terms that's about a 9/11 chance, so an odds-on favourite, whereas odds were available to back them at 7/4 against, which equates to a probability of around 36.4%.
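For anyone unfamiliar with converting between the two formats, fractional odds of a/b imply a probability of b/(a+b). A quick illustrative helper:

```python
# Converting fractional betting odds into implied probability, as used in
# the comparison above: a price of a/b implies a probability of b/(a+b).

from fractions import Fraction

def implied_probability(odds):
    """Implied probability of fractional odds, e.g. Fraction(7, 4) for 7/4."""
    return 1 / (odds + 1)

print(float(implied_probability(Fraction(7, 4))))   # 7/4 against
print(float(implied_probability(Fraction(9, 11))))  # 9/11 on
```

So the corrected 55% sits right on the 9/11 fair price, while the 7/4 on offer implied only around a 36% chance.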

MCI and WLV were the only other teams to keep a clean sheet in GW5, which could have been highly significant with so many managers having benched Palace defenders after their failure to deliver on an expected clean sheet the previous week.

AWB vs Ralls - Crop

Courtesy of my sheets though, I’d benched Wan-Bissaka the week before, and swerved his zero points return, and now my sheets had me moving in the opposite direction to the majority once again.

Unfortunately for those of us who purposely started Wan-Bissaka in GW5, our advantage was negated by the unexpected absence of Mendy that weekend.  This meant most owners were spared the indignity of having a player with 9 points sitting on their bench.


Before GW7 I was asked my opinion about transferring in Arnautovic before WHU faced MUN.  Whilst understanding the reluctance to take a hit to do so, I felt obliged to share my sheet's forecast that the Hammers would score twice.  In fact, they scored three times (stupid sheet) and, predictably enough, Arnie was amongst the goals, because if ever the FPL Gods have an opportunity to punish managers exercising patience and showing restraint, then sure enough they'll take it.

TOT 1 goal thoughts

More recently, I was asked (politely) by one of my test-pilots to account for my sheet's prediction that TOT would struggle to score more than once vs CDF, in a week when so many FPL managers were taking hits to bring in and captain Kane.  This enquiry came in response to the following excerpt from the GW8 preview I emailed to all of those in receipt of my spreadsheet tables.

leaked GW8 document - Crop 2

Kane shy

This was another big call by my spreadsheet, and one I heeded by transferring in Lacazette and leaving 3m in the bank, rather than bringing in the most transferred-in and most captained player of GW8, purely because my sheet rated the chances of goals higher for ARS than TOT.

The move would have been a master stroke, but for FPL's uncanny habit of bringing us down to earth.  The player I sold to bring in Lacazette (12 pts) was Wilson (14 pts).  D'oh!

MCI 44% CS - Crop

It was the final game of last weekend, however, that arguably represented the biggest test yet for my spreadsheets, as their forecast of a MCI clean sheet (projected before GW6) was met with scepticism and barely concealed ridicule in several quarters.

MCI CS Anfield record

MCI 44% CS - Crop 2

Like Peter denying knowledge of Jesus after the Last Supper, I was guilty of making excuses for my spreadsheet’s prediction.

In fact, by the time GW7 results were taken into account, my spreadsheet’s position on the probability of a clean sheet for the champions at Anfield had hardened from 44% to 56.5%.
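My sheet’s inner workings aren’t reproduced here, but a common way to turn expected goals into a clean sheet probability is to assume goals follow a Poisson distribution, in which case the chance of a shutout is simply exp(-opponent xG).  A minimal sketch of that approach (the xG inputs below are back-solved illustrations, not figures from my actual sheet):

```python
import math

def clean_sheet_probability(opponent_xg: float) -> float:
    """Chance the opponent scores zero, assuming goals are Poisson-distributed.

    For a Poisson mean of xG, P(k goals) = exp(-xG) * xG**k / k!,
    so the k = 0 (clean sheet) case is simply exp(-xG).
    """
    return math.exp(-opponent_xg)

# Illustrative inputs, back-solved from the percentages quoted above
# (not taken from the actual spreadsheet):
print(clean_sheet_probability(0.82))  # ≈ 0.440, like the pre-GW6 figure
print(clean_sheet_probability(0.57))  # ≈ 0.566, close to the post-GW7 56.5%
```

On this model, the jump from 44% to 56.5% corresponds to MCI’s opponents’ expected goals falling by roughly a quarter of a goal per game.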

MCI greater than 50% CS - Crop

Going into GW8 though, I began to repent my earlier disowning of my sheets, and posted a rationale for having faith in their prediction.

leaked GW8 document - Crop

Not only did MCI not concede, they rarely looked like doing so.  Furthermore, my sheet’s 0-1 forecast was denied only by a Mahrez penalty miss that is probably still in orbit around the Moon!

As I’ve said many times before, the thing with stats is that they are only true until they are not.  They tell us what has happened in the past, not what will happen in the future. From my perspective, the LIV vs MCI result illustrates this point perfectly.

In the interests of balance, I ought to acknowledge that one public challenger has yet to fall by the wayside.  Prior to GW6, the prominence given by my sheets to BOU’s defence for their run of 6 fixtures was publicly disparaged by one manager.

BOU 2nd best GW6-11

Although that manager had misinterpreted my Clean Sheet Probability table (teams with the same number of predicted shutouts were listed in alphabetical order, and CHE were actually rated as 2nd best), the shock 4-0 defeat suffered by the Cherries at Turf Moor did little to contradict such scepticism.

We are only halfway through the 6 week period in question, however, and the jury is still out as to whether the cynicism was fully justified.  It is worth noting that over those first 3 gameweeks only six teams (MCI, ARS, TOT, CHE, LIV and WLV) have kept more clean sheets than the one BOU gained last time out vs WAT, and I have said all along that my spreadsheets will not be firing on all cylinders until they have 8 gameweeks of data.

That success at Vicarage Road notwithstanding, Bournemouth’s defence ranking for the next rolling block of 6 fixtures (GW9-14) has fallen to 15th, so my sheets may not remain unbowed for much longer.


I have been back in touch with the 5 managers who missed out on my last intake of test-pilots, and am pleased to welcome them all to my squadron.  Having gained experience in distributing reports, reviews and spreadsheets to a large crew of people over the past month, I now feel able to expand my operation further, and repeat the exercise all over again for the next 4 gameweeks (GW9-GW12) leading up to the next international break.

If you are interested in trying out my spreadsheets, and willing to give constructive feedback, please RT and DM me to register your interest.

In the meantime, it would be very much appreciated if the naysayers could continue throwing down gauntlets, as my spreadsheet predictions seem to thrive on them!

And remember:  Flying isn’t dangerous.  Crashing is what’s dangerous!  To your overall ranking that is.


All FPL accounts and tweets in this blog – even those based on real people – are entirely fictional.  All FPL account tweets are recreated – poorly.

No feelings were intended to be harmed in the making of this blog.

3rd session chip counts are in – and Christmas has come early

It’s often said that “history is written by the victors”, but when I said in my last blog that I’ll keep a diary of my FPL progress, just in case I need it for end-of-season overall winner interviews, I was joking alright!?  Well, after running good in the third session and rising through the overall ranks to a personal best of 450th, I think it’s fair enough for me to say:  SHIT JUST GOT REAL!

fireworks tweet

I maintained the momentum I picked up in the second session, built my chip stack up to 699 points, and moved up 22,970 places in the overall rankings.  I find myself the current Christmas Island number one, and lead the way in the Beat the General, Fplwildcards, Macmillan, and FPL Happy Hour Cup leagues, as well as my own.  I’ve still not played any of my chips, and my realisable team value has risen by £0.5m to £101.1m.  That’s currently an inflation rate of just over 4.7% p.a.

What a difference a year makes.  This time last year I was languishing outside the top 2 million and still bottom of my 18 player money mini-league, which I’d run away with the season before.  The first eleven gameweeks saw my defenders and goalkeepers keep just 4 clean sheets between them, and my title defence resembled Chelsea’s in 2015-16!  From GW12 onwards though, I steadily climbed the rankings and completed my ‘great escape’ from ignominy with a top 150 thousand ranking, good enough for a face-saving fifth place finish in my mini-league.

This time around, however, I’ve already banked 26 clean sheets from my defence.  Indeed, this season has seen such a turnaround in my fortunes that it’s difficult not to draw parallels with Leicester City’s historic 2015-16 season!

LCFC 2015-16

In the absence of a Claudio Ranieri* then, what can I attribute my FPL transformation to?  Two words my friends, two words:  expected and goals, a.k.a. xG.  Whereas last year my spreadsheets were dependent on Shots On Target data, this year they rely on expected goals instead.  More about my spreadsheets later.

colourized aggressive gambles

I said in my last blog that I was going to be making more aggressive moves, and taking more gambles.  True to my word, after only taking one hit in the preceding seven gameweeks, I took hits in three out of the next four that followed.

[*For what it’s worth, I’ve long argued against the mainstream view that a) Ranieri deserved nearly all the credit for Leicester winning the league, and b) he’d earned the right to be relegated with them.]

Calculating the net profit or loss of transfers can be a tricky business.  I often see people declaring the success or failure of such moves based solely on points in the subsequent gameweek.  In reality though, the effects of transfers ripple across several gameweeks, and can completely rewrite our gameweek histories, as future captain/vice captain choices and benching decisions are all impacted upon by the players we buy and sell.

GW8-11 Transfer History Audit

As the screenshot above confirms, though, there was little wrong with my player recruitment during the last session.  48 points profit over 4 weeks equates to a healthy average of 12 points per gameweek from transfers and, but for Mo Salah’s penalty miss in GW10, I could easily have been looking at 67 points profit at a near 17 points weekly average.
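As a rough sketch of what such an audit involves (the function and most of the numbers below are illustrative, not my actual method): the net effect of a move is the points of the player bought minus the points of the player sold across the gameweeks that follow, less any hit taken.

```python
def transfer_profit(points_in, points_out, hit_cost=0):
    """Net points from one transfer over the gameweeks that follow it.

    points_in / points_out are per-gameweek scores for the player bought
    and the player sold; hit_cost is any points deduction taken to make
    the move (0 for a free transfer, 4 per extra transfer in FPL).
    """
    return sum(points_in) - sum(points_out) - hit_cost

# Hypothetical audit of a free transfer over four gameweeks.  Only the
# GW8 scores (Lacazette 12, Wilson 14) appear in this blog; the rest
# are made-up numbers for illustration.
print(transfer_profit(points_in=[12, 2, 6, 8], points_out=[14, 2, 1, 5]))  # 6
```

A full audit would also need to re-run captaincy and bench decisions with and without the move, which is where the “ripple” effect mentioned above makes things trickier than a simple subtraction.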

Royal FPL Flush

In tournament poker, your tournament life will often be put in jeopardy by opponents playing their ‘draws’ aggressively.  And you may find yourself ‘heads up’ with an opponent you re-raised pre-flop, and you may find yourself hitting ‘top pair, top kicker’ on the flop, and you may find yourself continuation-betting the flop, only for your opponent to then check-raise you all-in for the remainder of their stack.  And you may ask yourself, well how did I get here?

talking heads

Now this is a really uncomfortable position to be in because even if you put your opponent on a ‘big draw’, maybe a flush and straight draw combo, you may know you’re ahead and have the correct pot-odds to make the call, but you also know that a lot of the time they’re going to outdraw you, and your chip-stack will take a big hit if they do.

I had much the same feeling leading up to my big ‘showdown’ in GW11.  The pre-flop action saw a Harry Kane raise on Thursday night, and a re-raise on Friday night.  The stakes were high.  By the time the deadline dealer ‘called the clock’ on me, there were more than 600,000 new Kane owners, and the all-in bet I faced was 1.4 million armbands.  I ‘went into the tank’ and decided before my allotted time ran out to make the call with my top player, top kicker Sergio Aguero, whom I transferred in to captain my team.

Pair of Agueros

I already had one Kane ‘blocker’ in the form of Christian Eriksen, but knew that transferring in Heung Min Son would give me extra ‘outs’.  I also hoped he might be able to ‘counterfeit’ some of the hands with Kane in them if Tottenham scored big against Crystal Palace.

For the record, I also thought about bringing in Dele Alli too, but there were several good reasons for not doing so.  Firstly, he’d played every minute of Tottenham’s previous 3 matches in all competitions; secondly, there were rumours circulating beforehand that some players were going to be rested; and, thirdly, unlike Son, he had none of the ingredients you need for the perfect pre-wildcard recipe.  Namely, he did not fit the bill as a one-week-only differential punt.

masked hero tweet

Appropriately enough, given his full name is (nearly) an anagram of MINI UNSUNG HERO, my lucky number seven scored the only goal of the game, scooping me a 10 point pot in the process.  With all those captaining Kane left holding nothing but a ‘busted flush’, Aguero’s subsequent goal was the ‘rub down’ they didn’t need.

pair of Sons

I’d not forgotten Kane’s blank when I transferred him into my team in GW8 to captain him for the supposedly plum home fixture with Bournemouth, but there were many other reasons for going against the crowd in GW11.  Ultimately though, it boiled down to believing Aguero would outscore Kane, because my expected goals spreadsheet was predicting MCI would win 3-1 and that TOT would do so 2-0, and I reckoned Sergio’s share of 3 goals would be bigger than Harry’s of 2.

colourized cheat sheet plan

I said in my last blog that most of my decisions would be determined by my spreadsheets, or my ‘cheat sheets’ as I dubbed them, and here they were, helping me to finally break into the top 1,000.

GW11 xScorelines

My spreadsheet’s team goals predictions in Gameweek 11 showed a strong positive correlation with actual team goals of +0.63 (on a scale from -1.0, the minimum, to +1.0, the maximum).
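For anyone curious how a figure like that is arrived at, it’s a standard Pearson correlation coefficient between the two sets of goal tallies.  A minimal sketch, using made-up numbers rather than the real GW11 data:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Made-up predicted vs actual team goals for ten teams (NOT the real
# GW11 figures, which aren't reproduced here):
predicted = [3, 2, 1, 0, 2, 1, 1, 0, 2, 1]
actual    = [2, 1, 1, 0, 3, 0, 1, 1, 2, 1]
print(round(pearson(predicted, actual), 2))  # 0.69 for these illustrative numbers
```

A value of +1.0 would mean the predictions and results move in perfect lockstep, 0 would mean no linear relationship at all, so +0.63 over a full gameweek is genuinely strong for football scorelines.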


The actual scores were as follows:

GW11 Actual Scorelines

Thanks to my ‘cheat sheets’ then, I finally achieved a long-term goal of breaking into the top 1K rankings.  The harder task now will be to remain there.  There is a tendency sometimes after taking down a huge pot in a poker tournament to want to sit the next bunch of hands out, to take it easy for a while, and to think you have the luxury of simply waiting to be dealt good hands.  This is nearly always a big mistake.

Poker has taught me to avoid being complacent.  You cannot afford to relax until you’ve either won the tournament or been knocked out of it.  And as I’m not ready to be knocked out of the running yet, I’d better stay focused!

In poker, players are responsible for protecting their hole cards.  If they ‘flash’ their cards then people will look.  If they don’t protect their cards then it’s their own fault if people exploit that advantage.  I might need to start heeding that lesson much sooner than anticipated, and start playing my FPL cards much closer to my chest when it comes to declaring my captain picks, transfer plans, and such like.  But for now, and in the spirit of the way the game is played in the FPL community, I’ll ‘advertise’ below the scorelines my spreadsheets predict to be the most likely for Gameweek 12.

GW12 xScorelines

The next session is a mammoth one, with ten levels over the next 6 weeks, before the next break in early January.  I hope you’ll join me for the 5th session chip count then.  In the meantime, may the FPL flops be with you!

Coley a.k.a. FPL Poker Player @barCOLEYna



‘advertise’ refers to exposing cards in such a way as to deliberately convey an impression to opponents about the advertising player’s style of play

‘blocker’ is a card in your own hand that your opponent needs to complete their hand

‘busted flush’ is a potential flush that ultimately fails to materialise

‘call the clock’ is how you challenge a player for taking too long to act. Once challenged, a player has a set amount of time to make a decision. If the player fails to act in the allotted time, their hand is dead

‘counterfeit’ refers to situations in which a community card actually makes a player’s hand less strong even after technically improving that hand

‘flash’ refers to a card becoming briefly exposed by accident

‘outs’ are any unseen cards that, if drawn, will improve a player’s hand to one that is likely to win

‘rub down’ is a deliberate act of putting someone down

‘showdown’ is a situation when, if more than one player remains after the last betting round, remaining players expose and compare their hands to determine the winner

‘top pair, top kicker’ is when you pair the highest card on the flop with one of your hole cards, and your other card is the highest possible kicker, which in most instances is an Ace, e.g., raising preflop with Ace, 10, and the flop comes 10, 4, 2.

‘went into the tank’ is to take a lot of time to think about the decision on how to play a hand