What the Local Elections Actually Told Us

A closer look at the results of last week's local elections shows that the media narrative about a Lib Dem surge was wide of the mark.

The media coverage of the local election results has at times been deeply misleading, with voting outcomes not accurately described. The problem is two-fold.

First, the usual way of presenting the figures – in terms of seats, percentage share of the vote and swings – can give a very misleading impression. It has done so on this occasion, as the usual metrics have been distorted by the plummeting turnout compared with 2015 (when turnout was high because the local elections coincided with the General Election). No single summary statistic can be relied upon in all circumstances.

Second, it is nigh on impossible to get the actual voting figures. The country deserves a central publication point for election data so that anyone can make their own, neutral, assessment of the trends. The time has come for the UK to have more transparency about its election results – surely among the most fundamental of all statistics – as befits a developed democracy. Without this, voters are unlikely to get a fair sense of the results, given that much of the interpretation is done by politicians or columnists skilful enough to select factoids to suit their own story, or by others who are innumerate, wilfully or naively ignorant of the facts, or producing projections from black-box models.

What did the election results show?

Thanks to Wikipedia we have a summary: “The Conservatives lost control of 44 councils and more than 1,300 council seats. It was the worst Conservative local election performance since 1995, when the party lost more than 2,000 seats. Labour, despite topping national polls, lost 6 councils and more than 80 seats. Labour lost votes in heavily Brexit-voting areas, while making gains in some other places. Parties supporting remaining in the EU performed well. The Liberal Democrats made the most gains of any party, while the Greens also picked up seats with the largest percentage growth. There was also a significant increase in the number of independent and local party councillors, with their number of seats more than doubling.”

The main metric used in the analysis is the number of seats won or lost (and its aggregate, councils won or lost). At one level – and certainly in the council chambers – that is the important number.

But what does the number of seats tell us about trends in political opinion and the factors that were driving voter behaviour? In particular, does the jump in Lib Dem seats show a rising appeal of the Lib Dem message, or a desire to stop Brexit, as many have said?

The Guardian had a piece from Bridget Phillipson, the Labour MP, who said (my underlining): “That pattern held across England – both Labour and the Tories saw votes that have previously been cast for them move elsewhere, above all to the parties that want us to stay in the EU.”

Andrew Grice in the Independent took a similar line: “The big winners were the re-energised Lib Dems, whose clear pitch as the party of Remain and a Final Say referendum did not stop them making gains in Tory-held areas in the south … the Lib Dems scooped up Remainers in the south … results will raise Lib Dem hopes that their coalition nightmare may finally be coming to an end … The Lib Dems will be confident of building on their success at the European elections, and eclipsing Change UK at its first test. The new party’s view that the Lib Dems are going nowhere is now open to question; they are alive and kicking.”

The Observer had a piece from Robert Ford, professor of political science at the University of Manchester. It said: “The big winner was the Liberal Democrat party, which announced its return as a force in local government with hundreds of seat wins and some major local council takeovers. … The Lib Dem vote surged by a remarkable 15 points on average in such areas, delivering hundreds of seats and control of 10 councils, restoring the party to something like its pre-Coalition vitality in its traditional south of England heartlands such as Somerset, Winchester and Chelmsford.”

Andrew Rawnsley, Chief Political Commentator of the Observer, added: “The main beneficiary in these elections were the Lib Dems, up by more than 700 seats. They were due a revival after the savaging inflicted on their local government base during their time in office with the Tories. Voter memories of Lib-Dem participation in the coalition may be softening … The Lib Dems are returning to their historical role, which is to outperform at a council level and offer a home for voters discontented with the big two. In many parts of the country, the Lib-Dem surge was augmented by its appeal to anti-Brexit voters as a party unequivocally in favour of a fresh referendum.”

Sadly, very little of the repeated narrative of a Lib Dem vote surge seems to be true. This becomes clear from looking at the number of votes cast. I chose Chelmsford as a case study. (A case study is needed because, as I explain below, the votes data are not readily available to analyse.) Chelmsford was one of the ten Lib Dem council gains from the Conservatives featured on the BBC election summary page.

The “heavy Conservative losses in Essex” story got quite some attention. As the BBC story put it: “All 57 seats in Chelmsford were up for grabs in Thursday’s poll, and the Liberal Democrats gained 26, mostly at the expense of the Conservatives, who lost 31. Independent candidates won five seats. At the Chelmsford count, Ms Ford (the local Conservative MP) became emotional as she reflected on ‘a very disappointing night’.”

I dug out the numbers for Chelmsford. (It’s a tedious exercise getting the numbers: you have to find the relevant page of each council’s website and then, depending on the council, crawl over PDF or HTML records hidden away there. Each council is different and there is no central record.)

I had wanted to do the analysis for Bath, also a big favourite in the media, but it became clear – and here’s another problem – that the ward boundaries had changed between the 2015 and 2019 elections, so it was not possible for me to do the work. I should also say that I’d have loved to do the analysis for 2011 as well, to better understand the impact of the larger turnout in 2015, but without the data I can’t.

Looking at the Chelmsford data (see my Excel spreadsheet, Local election results 2019 Chelmsford), the story of these elections was not a Lib Dem surge in support. There was no surge in support. It was about turnout, and the failure of Conservative supporters to vote. The simple mean of turnout across the 24 wards fell from 69% in 2015 to 33% in 2019: instead of two in three voting, it was only one in three. Half of those who voted in 2015 couldn’t be bothered this time.

Within the aggregate fall in turnout, the Lib Dem vote held solid. Total Lib Dem votes across Chelmsford rose fractionally, from 38,000 to 39,000. It wasn’t that rise that got the Lib Dems the bulk of their new seats; rather, it was the Conservative vote falling from 97,000 to 40,000. In some wards the Lib Dem vote rose and in others it fell, but in three quarters of the wards the change per candidate was less than 200 votes, i.e. not much either way. Certainly not enough of a change, or a consistent enough one, to build a story on (without, for example, the 2011 data to compare with).
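The arithmetic is worth spelling out. A short sketch – using only the rounded Chelmsford totals quoted above, and, as a simplification, ignoring the other parties' votes entirely, so the shares are illustrative rather than the actual published shares – shows how a near-flat Lib Dem vote can still produce a large jump in vote share and a big conventional swing when the Conservative vote collapses:

```python
# Rounded Chelmsford totals quoted in the text (Conservative and Lib Dem only;
# other parties' votes are ignored as a simplification, so these shares are
# illustrative, not the official published figures).
con_2015, ld_2015 = 97_000, 38_000
con_2019, ld_2019 = 40_000, 39_000

def shares(con, ld):
    """Each party's share of the two-party vote, in percentage points."""
    total = con + ld
    return 100 * con / total, 100 * ld / total

con_share_15, ld_share_15 = shares(con_2015, ld_2015)
con_share_19, ld_share_19 = shares(con_2019, ld_2019)

# Conventional swing: the average of the Conservatives' percentage-point
# fall and the Lib Dems' percentage-point rise.
swing = ((con_share_15 - con_share_19) + (ld_share_19 - ld_share_15)) / 2

print(f"Lib Dem votes: {ld_2015:,} -> {ld_2019:,} (+{ld_2019 - ld_2015:,})")
print(f"Lib Dem share: {ld_share_15:.1f}% -> {ld_share_19:.1f}%")
print(f"Swing to Lib Dems: {swing:.1f} points")
```

On these figures the Lib Dem vote rises by only about 1,000, yet the share jumps from roughly 28% to 49% and the conventional swing exceeds 20 points – driven almost entirely by Conservative abstention, not by new Lib Dem voters.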

Such large changes in turnout are very rare (they occurred in these elections because the 2015 locals coincided with the General Election, which boosted voter numbers then), and they do funny things to the traditional metrics. Given that turnout tumbled, it is a shame that none of the pieces above wrote about its possible – indeed probable – impact. But, as ever, a failure to get to grips with the numbers can lead even experts astray.

To recap, the claims in the articles above from the experts included:

  • “the Lib Dems scooped up Remainers in the south”
  • “… both Labour and the Tories saw votes that have previously been cast for them move elsewhere, above all to the parties that want us to stay in the EU”
  • “The Lib Dem vote surged by a remarkable … restoring the party to something like its pre-Coalition vitality in its traditional south of England heartlands such as Somerset, Winchester and Chelmsford.”
  • “The Lib Dems are returning to their historical role, which is to outperform at a council level and offer a home for voters discontented with the big two …”

Sadly, they aren’t likely to be true.

It’s not clear, of course, why such experts have drawn these conclusions. It is possible that they did their best in the absence of decent, accessible data – though you’d expect experts to look hard for the numbers to support their lines. It is also possible that they naively relied too much on the common currency of the debate – seats, swings and percentage shares of the vote – as opposed to actual votes cast. Again, one might expect better. A less generous interpretation is that the arguably lazy use of headline numbers simply suited a pre-determined narrative. Too many people have always used statistics selectively, providing evidence for only one side of a contentious, many-sided topic. It’s another example of the well-known saying that some people use statistics as a drunk man uses lamp-posts: for support rather than for illumination. The habit has become even more pronounced in the Brexit era of the last three or four years.

Even the generally brilliant godfather of political analysis, Professor Sir John Curtice, summing up on BBC radio, seemed to miss the main point. Two examples: “In seats where the Liberal Democrats were second to the Conservatives, double-digit swings from the Tories to Lib Dems are commonplace” and “now, good news for Ed Davey is this does look to us like the best Liberal Democrat local performance since his party went into coalition with the Conservatives in 2010.”

Neither statement is untrue – there were indeed double-digit swings to the Lib Dems in some Chelmsford wards – but the almost negligible rise in the votes cast for them suggests that the “good news for Ed Davey” is not such good news.

The graphic below, from BritainElects, demonstrates how the usual metrics can be misleading. It shows a 19 percentage point increase in the Lib Dem vote share, and a 9 point fall in the Conservative share, compared with 2015. It’s easy to see how someone who doesn’t look at the underlying numbers – or doesn’t want to look at them – might get the wrong end of the stick about the extent of pro-EU voting. It’s a 14-point swing to the Lib Dems, after all!

The three wards below are examples of the extreme case: the Lib Dem vote actually fell, but the seats switched from Conservative to Lib Dem because the Conservative vote fell so very much more. Obviously winning a seat is great for the Lib Dems, but it is less impressive for the national voting-intentions picture (or as an indicator of Brexit leanings), and less likely to be sustained in future elections if it is achieved on the back of fewer votes than in 2015. Again, if the data were there to be seen, the electorate might be a bit wiser about the underlying trends.
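To see how a seat can change hands on a falling Lib Dem vote, consider a sketch with entirely hypothetical ward figures (they are illustrative only, not the actual Chelmsford ward returns): the Lib Dems lose votes, yet win the seat and register a double-digit swing, purely because the Conservative vote falls much faster.

```python
# Hypothetical ward figures (illustration only; not actual Chelmsford
# ward-level returns). The Lib Dem vote falls slightly, but the Conservative
# vote collapses, so the seat changes hands and the swing looks dramatic.
con_2015, ld_2015 = 2_000, 1_000
con_2019, ld_2019 = 600, 900

def share(votes, total):
    """A party's share of the (two-party) vote, in percentage points."""
    return 100 * votes / total

con_fall = share(con_2015, con_2015 + ld_2015) - share(con_2019, con_2019 + ld_2019)
ld_rise = share(ld_2019, con_2019 + ld_2019) - share(ld_2015, con_2015 + ld_2015)
swing_to_ld = (con_fall + ld_rise) / 2

print(f"Lib Dem votes: {ld_2015} -> {ld_2019} (down {ld_2015 - ld_2019})")
print(f"Lib Dems win the seat: {ld_2019 > con_2019}")
print(f"Swing to Lib Dems: {swing_to_ld:.1f} points")
```

On these made-up numbers the Lib Dems shed 100 votes yet take the seat on a swing of more than 25 points – exactly the pattern that makes seat counts and swings such poor guides to underlying support.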

The charts above show percentage share data, which is sometimes summarised in a swing figure – and swing was no better a single indicator of what was going on in this case. As a Parliamentary Briefing explains: “Electoral swing is … often used to analyse the performance of parties over time or in one election between different electoral areas. The basis of calculating swing is each party’s percentage share of the vote. The swing from Party A to Party B is conventionally defined as the average of the percentage point fall in Party A’s share of the vote and the percentage point rise in Party B’s.” Often swing gives a reasonable sense of what is happening, but in extreme cases like these local elections it too fails.
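The briefing's definition is simple enough to sketch. Applying it to the BritainElects share changes quoted earlier (a 19-point Lib Dem rise and a 9-point Conservative fall) reproduces the headline figure:

```python
def swing(fall_in_a_points, rise_in_b_points):
    """Conventional swing from Party A to Party B: the average of A's
    percentage-point fall and B's percentage-point rise."""
    return (fall_in_a_points + rise_in_b_points) / 2

# BritainElects changes in vote share vs 2015:
# Conservatives down 9 points, Lib Dems up 19 points.
print(swing(9, 19))  # -> 14.0, the 14-point swing in the graphic
```

A 14-point swing sounds dramatic, but because the calculation uses only shares, it is silent about whether the shares moved through votes gained or – as here – through one party's supporters staying at home.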

All this points to the need for election results to be collated in one place. There have been attempts to do so, such as a project by the ESRC. It might not be right for the government itself to publish the data, but there are any number of “independent” bodies funded by the taxpayer (i.e. the voter) that could or should do so. There is no reason why the Office for National Statistics could not collate the figures and publish them in a single place, in a reusable format. If it were unwilling, it could be mandated by the UK Statistics Authority and the figures classified as trustworthy National Statistics. UKSA could also use its influence to improve poor use of the data.

Alternatively, the Electoral Commission, which oversees elections, could provide the service. It already publishes a lot of election data, but that covers turnout, postal votes and other such metrics that enable it to carry out its job. There is no data on who won, or on the votes cast for each candidate or party, but it would be a modest job to add it.

The nation’s voters deserve better.

This article was republished from Simon Briscoe’s blog Britain in Numbers.

About the Author

Simon Briscoe is the director of The Data Analysis Bureau, a trustee of Full Fact and a council member of the Royal Statistical Society. He is a specialist advisor to the Public Administration and Constitutional Affairs Committee and blogs at Britain in Numbers.