Suzanne Lugthart

The 3 golden rules of longitudinal analysis

When you hear a record-breaking number that doesn’t feel right, it’s worth looking a bit deeper. If any of the golden rules of longitudinal analysis are broken - high quality data, a consistent sample, and a consistent metric - you may end up drawing the wrong conclusions.


While we await the Met Office telling a bemused nation that it has just experienced the hottest average June temperatures since records began, here is a quick reminder of some of the important factors to take into account when producing longitudinal trend analysis in any sector - be it weather or brands or customer satisfaction.



1. Track the right metric consistently

Do you have any idea what an “average temperature” feels like? No, me neither. Yet that is the currency of modern weather reporting. Like most averages, it hides a multitude of sins. One of those sins is that “average daily temperatures” are being driven up by rising night-time temperatures - something the Met Office themselves admit. Urban environments - all that tarmac, concrete, brick and humanity - hold heat and stay warm after dark.


“Average daily temperature” was nevertheless the metric the Met Office used to tell us May 2024 was the warmest since records began. Dig a little deeper, though, and you’ll see that May’s average daytime temperature actually ranked 8th overall since records began. And on highest daytime temperature, it ranked just 34th. Those metrics, it seems, were considered unsuitable for use in a climate emergency story.
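To see how the choice of metric changes the story, here is a minimal sketch with invented numbers. The daily mean is conventionally computed as (daytime max + night-time min) / 2, so it can set “records” even when daytime highs are completely flat, purely because the nights are warmer:

```python
# Illustrative only - the temperatures below are made up.
# Each day is a (daytime_max, nighttime_min) pair in degrees Celsius.
decade_1 = [(22.0, 10.0)] * 30   # warm days, cool nights
decade_2 = [(22.0, 14.0)] * 30   # identical daytime highs, warmer nights

def mean_daily_temp(days):
    """Average of the standard (max + min) / 2 daily mean."""
    return sum((hi + lo) / 2 for hi, lo in days) / len(days)

print(mean_daily_temp(decade_1))  # 16.0
print(mean_daily_temp(decade_2))  # 18.0 - a "record" mean, with no hotter days
```

Same days, different nights: the headline metric moves by two full degrees while the daytime figure a reader would actually feel hasn’t changed at all.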


2. Consistency of data set/sample

When you track properly, you need your data set to be consistent. And this is where there’s another problem with Met Office data. Temperature records technically began in 1815, yet only a tiny proportion of those weather stations have been measuring consistently over that period. In fact just 9 go back to 1900. Taking more recent history, just 68 of the 103 weather stations in operation 64 years ago have reported consistently ever since. So the “records” come from a constantly evolving group of weather stations, which makes historical comparisons, well, “challenging” is the politest word I can think of. A better measure of changing temperatures would be to focus only on the stations that have been measuring consistently.
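The fix is mechanical: before comparing years, restrict the data to stations present in every year under comparison. A sketch, with hypothetical station names and invented readings:

```python
# Hypothetical annual mean temperatures (degrees C) - invented for illustration.
readings = {
    1960: {"Armagh": 13.1, "Oxford": 13.8, "Durham": 12.9},
    2024: {"Armagh": 14.0, "Oxford": 14.6, "Heathrow": 16.2},  # Heathrow joined later
}

# Only stations that reported in every year are comparable.
consistent = set.intersection(*(set(year) for year in readings.values()))

def mean_over(stations, year):
    return sum(readings[year][s] for s in stations) / len(stations)

print(sorted(consistent))                        # ['Armagh', 'Oxford']
print(round(mean_over(consistent, 1960), 2))     # 13.45
print(round(mean_over(consistent, 2024), 2))     # 14.3
# The naive mean over *all* 2024 stations drags in the new airport site:
print(round(sum(readings[2024].values()) / 3, 2))  # 14.93
```

With a consistent sample the warming signal is still there, but smaller than the naive all-stations comparison suggests - the rest of the gap is just the network changing under you.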


3. Data quality

Then there’s data quality. The Met Office tend to rush this stuff out to grab headlines, which means quite a lot of the data is likely to be provisional and subject to change. But that’s far from the biggest problem.


A recent FOI request revealed that 78% of measuring stations are class 4 or class 5. These are poor quality locations with margins of error of +/- 2 degrees Celsius; indeed the 29% of stations in the lowest class, class 5, have margins of error of +/- 5 degrees Celsius. Remember that the next time the Met Office gives you temperatures to one or two decimal places. Heathrow and Northolt airports are examples of a class 4 and a class 5 station respectively. A better approach would be for the Met Office to publish recordings from only the better quality stations, where we can have greater confidence in their accuracy.
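The mismatch between instrument precision and reported precision is easy to make concrete. A sketch with invented record values, using the +/- 2 degree class 4 margin quoted above:

```python
# Invented values for illustration; the margin is the class 4 figure cited above.
old_record = 15.73   # previous "record" mean, reported to 2 decimal places
new_reading = 15.83  # new "record-breaking" reading
margin = 2.0         # class 4 site uncertainty, +/- degrees Celsius

difference = new_reading - old_record
print(round(difference, 2))   # 0.1
print(difference < margin)    # True - the "record" sits well inside the noise

# The uncertainty intervals around the two readings overlap almost entirely,
# so the data cannot distinguish the new reading from the old record at all.
overlap = (new_reading - margin) <= (old_record + margin)
print(overlap)                # True
```

A record claimed to two decimal places from an instrument with a two-degree margin of error is reporting the noise, not the signal.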


High quality data, drawn from steady and consistent sources and communicated through the most appropriate metric: that is the key to good trend analysis. Given you can build any narrative you like with data, anything less is just politics.

