CLAIM: The nationalist vote in Northern Ireland has not increased since 1998.
RATING: Accurate? Inaccurate? Neither? Maybe a little bit of everything?
- Most of the interesting things in life are affected by randomness.
- Some things can be sort-of accurate and sort-of not. Shades of grey are everywhere.
- Even straightforward facts have context – and sometimes that context tells you far more than a simple ‘true’ or ‘false’.
On May 18, DUP MLA Emma Little-Pengelly tweeted: “A gentle reminder, the nationalist vote has not increased since 1998.”
She received the following reply:
So, in the 1998 Assembly elections, the nationalist first-preference vote share was 39.6%. In this May’s elections, the share was 40.3% – so Ms Little-Pengelly was wrong?
Actually, not quite…
Statistical significance is the idea that an observed change in data under different circumstances (including with the passage of time) is unlikely to be a coincidence.
In other words, the change is not simply random chance – what is sometimes called “statistical noise”. Instead, the altered circumstances have very likely played a role in the change in the data.
For example, if you plant 1,000 hardy perennial wildflowers in a field one year and the next year only 983 grow, you would not say that your wildflower meadow faces inevitable decline.
That’s because it’s unlikely that such a small reduction in blooms is statistically significant. Your meadow might be doomed – you might have been freakishly lucky to get that many flowers in year two – but you wouldn’t conclude that from the available information.
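To make the wildflower intuition concrete, here is a short Python sketch simulating twenty years of a perfectly healthy meadow. The 98.5% survival rate, the fixed seed and the twenty-year window are all illustrative assumptions of ours, not figures from the example above:

```python
import random

random.seed(7)  # fixed seed so the sketch is reproducible

N_PLANTED = 1_000
TRUE_RATE = 0.985  # hypothetical, unchanging per-plant survival probability
YEARS = 20

# Simulate how many of the 1,000 plants bloom in each of 20 years,
# with no underlying decline at all -- every year has the same true rate.
blooms = [sum(random.random() < TRUE_RATE for _ in range(N_PLANTED))
          for _ in range(YEARS)]

print(f"Blooms per year ranged from {min(blooms)} to {max(blooms)}")
```

Even with nothing wrong with the meadow, the count wobbles from year to year – counts like 983 are entirely typical. That wobble is pure sampling noise.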
In 1998, the nationalist vote share was 39.6%. In 2022, it was 40.3%. This could stem from an upward trend in vote share over time. However, it absolutely could just be “noise”.
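One way to see how easily a 0.7-point gap can arise by chance: treat each election, purely for illustration, as if it were a poll of 1,000 voters drawn from a population whose true nationalist support never moves from 40%. Both of those numbers are our own assumptions (real electorates are vastly larger, and elections differ in turnout, candidates and boundaries), so this is a sketch of the idea of noise, not a test of the actual elections:

```python
import random

random.seed(42)  # fixed seed for reproducibility

TRUE_SHARE = 0.40     # hypothetical fixed underlying nationalist support
N_VOTERS = 1_000      # hypothetical poll-sized sample, NOT the real electorate
OBSERVED_GAP = 0.007  # 40.3% - 39.6%
TRIALS = 2_000

def sample_share() -> float:
    """One simulated vote share from N_VOTERS independent voters."""
    votes = sum(random.random() < TRUE_SHARE for _ in range(N_VOTERS))
    return votes / N_VOTERS

# Under a null hypothesis of no real change, how often does sampling
# noise alone produce a gap at least as large as 0.7 points?
hits = sum(abs(sample_share() - sample_share()) >= OBSERVED_GAP
           for _ in range(TRIALS))

print(f"A gap of 0.7 points or more appeared in {hits / TRIALS:.0%} of trials")
```

At this deliberately modest sample size, gaps of 0.7 points or more turn up in most simulated pairs of “elections” – exactly the kind of movement that could be dismissed as noise.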
A similar thing can be seen when considering the percentage of seats won by nationalist candidates in NI Assembly elections since 1998.
You could point out the fact that the green line goes down and up, up and down. Or you could say it’s basically flat and that all the variations are just noise, and not statistically significant.
If you want a more formal, more strictly accurate and more unwieldy explanation of statistical significance (featuring the null hypothesis, p-values, confidence intervals – all the hits), this primer from the National Center for Biotechnology Information (NCBI) in the USA is about as clear as you are going to get.
Back to our own example, and Twitter…
Not a Fact Check
We could have fact checked Emma Little-Pengelly’s claim, or the counterclaim (note that the people involved carried on their discussion and found common ground regarding statistical significance).
What would we have said? We could have deemed it inaccurate with consideration – emphasis on consideration – because 40.3% is bigger than 39.6%. We could have called it accurate with consideration (again, emphasis on consideration) because, although 40.3% is higher than 39.6%, this could simply be statistical noise.
Fact checks are very useful. The ratings provided within those checks are also very useful. However, it’s important to always read the article. Context can be just as important as the rating itself.
In this example, the context is actually far more illuminating than any rating. Hence, this is not a fact check.