Proposal: “Easter to be fixed to one date all the time”

Concerning news for all fans of seasonal adjustment around the world, particularly on the treatment of Easter!

http://www.independent.co.uk/news/uk/home-news/when-is-easter-date-to-be-fixed-archbishop-justin-welby-pope-a6814781.html

If this goes ahead at least there will be some work in updating all of the seasonal adjustment packages, estimating and applying seasonal pattern breaks, and making consistent long run time series!

“The Most Rev Justin Welby said that he hopes to make the change within the next five to ten years, in a move that will likely have huge knock-on effects for schools and other seasonally-dependent industries, according to reports.”

Although they have a history of taking their time!

“Mr Welby did warn however that churches have been attempting since the tenth century to fix the date of the festival, which at the moment is set with reference to the moon and the sun.”

Analysts look for the trend

What do analysts want?

Probably lots of things. More accurate estimates or more timely estimates are two things that come to mind. But when analysts and users use estimates for economic analysis, it is typical to look for changes in direction in the underlying activity of a series. This tells them a few things: Is there evidence of a turning point? Is the latest estimate just a one-off event and not typical of recent trends? Is the new data the start of something different? To help assess this, it is best to check the underlying direction (or trend) derived directly from the data.

A good example is the latest trade estimates from the UK, where the analysts are spot on (although in a slightly different context) when they say:

“They noted that the trade gap for the second quarter – a better guide to the underlying trend than one month’s figures – increased from £7.8bn to £11.2bn, enough to wipe 1% off economic growth.”

Rather than relying indirectly on the trade gap, trend estimates could even be derived from the actual trade deficit estimates.

Go for gold, go for the trend

Unfortunately the full article below is entrenched behind the paywall of www.thetimes.co.uk. But it is an insightful article by Sam Fleming and the selected extract below is useful to illustrate an important point.

“Wednesday’s numbers undoubtedly overstated the depth of the dive in GDP in the second quarter, in part because the Office for National Statistics struggled to measure the impact of bad weather and the Jubilee holiday. The 0.7 per cent drop is likely to be followed by a bounce of a similar magnitude in the third quarter, thanks to Olympics-related spending. Both quarters’ numbers should be treated as radio static. Britain’s growth is basically flat.”
Read it at: www.thetimes.co.uk: Caught in a Fiscal Ditch, 30 July 2012

This is all interesting stuff, and what is really being talked about here are one-off events. The terminology varies: these are typically referred to as irregular, one-off, special, extreme, or abnormal events. So pick your word of choice. In the end it doesn’t matter what we call them, because these types of events are out of our control and we can’t stop them happening. What is more interesting is the interpretation, and also the measurement.

Obviously these types of events can have either a positive or negative impact on important economic estimates such as GDP or retail sales. For example, extreme unseasonal weather could cause retail sales to plunge (or, conversely, sales of scarves and winter jackets to soar), and the timing of the Olympics will cause travel and spending patterns to change from what they would normally be. These events pose challenges both for users, such as economists, interpreting the outputs, and for the statisticians who apply statistical methods like seasonal adjustment. Boiled down to the bare bones, seasonal adjustment is just the estimation and removal of systematic calendar-related effects, so a curveball of something unexpected can cause difficulties in applying these methods. However, all is not lost. There are ways these events can be estimated and removed to help the interpretation of what we are really interested in.

When events like this occur, what we should be interested in is the underlying direction of the time series. While it is important to estimate the magnitude of these one-off events, it is often not possible to do so immediately. Often, before a thorough estimate of the impact of a one-off event can be made, more information is needed, such as additional survey data, which may take months or even years to arrive, or anecdotal information from an independent source that can verify the impact. But it is almost never the case that this type of information is available in time to use when deriving the seasonally adjusted estimates. Everyone wants the latest data as soon as possible!

An important point to note is that the seasonal adjustment program doesn’t care what the reason was for the one-off event. It doesn’t even care that it happened. Depending on the program, it can try to deal with the data in its own automatic way. For example, the commonly used X-12-ARIMA seasonal adjustment package includes an automatic algorithm to correct for data points that it thinks are abnormal. This helps improve the estimation of the seasonal factors while also generating robust outputs. Leaving that aside for the moment, if we did have additional information, such as the reason for the one-off event, an expected magnitude of impact, or some anecdotal evidence, we could use it to prior adjust the data ourselves before running the seasonal adjustment. This would be the best thing to do, as we have more control and it would help the program produce the best set of seasonally adjusted estimates possible. Experts do this all the time, particularly at National Statistics Institutes. However, when this is not possible, there is another way.

We can simply treat any one-off event as part of the irregular component. Remember that the collected data typically consists of three main components: a trend (the underlying direction), seasonality (due to calendar effects, weather, etc.), and an irregular component (volatility due to real-world variation, sampling, or other random effects).
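To make the decomposition concrete, here is a minimal additive sketch in Python using entirely made-up numbers (synthetic data, not the ONS series). The observed series is built as trend plus seasonal plus irregular, and seasonal adjustment removes only the seasonal part:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(48)  # four years of monthly data

trend = 100 + 0.5 * t                       # underlying direction
seasonal = 10 * np.sin(2 * np.pi * t / 12)  # repeating calendar pattern
irregular = rng.normal(0, 2, size=t.size)   # random, unpredictable noise

observed = trend + seasonal + irregular     # what we actually collect

# Seasonal adjustment removes the seasonal component only, so the
# adjusted series still contains trend + irregular.
seasonally_adjusted = observed - seasonal
```

A one-off event would simply show up as a large value in the irregular component, which is exactly why it survives into the seasonally adjusted series.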

This means that our seasonally adjusted estimates still contain the trend and the irregular component (this is not a problem; it is always the case). Following this approach, the trend can become our friend, as we can reduce the impact of one-off events and calculate a trend estimate in the following way:

1. Take the published seasonally adjusted estimates (which by definition will include all the one-off effects). These can be obtained from the ONS website.
2. Apply a Henderson filter, which reduces the impact of the irregular component, isolating the trend. In this case a 5-term Henderson filter is used, with an I/C (irregular-to-trend) ratio of 2.0. Other options could be used.
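For readers who want to try step 2 themselves, here is a minimal Python sketch using the standard symmetric weights of the 5-term Henderson filter. It only produces the interior points: the first and last two periods need asymmetric end weights, which are left out of this sketch.

```python
import numpy as np

# Standard symmetric weights of the 5-term Henderson filter (they sum to 1)
H5 = np.array([-0.073, 0.294, 0.558, 0.294, -0.073])

def henderson5(seasonally_adjusted):
    """Smooth a seasonally adjusted series to estimate the trend.

    Returns the interior points only; the two periods at each end
    would need asymmetric end weights, omitted here.
    """
    y = np.asarray(seasonally_adjusted, dtype=float)
    # Centred moving average: each output point is a weighted
    # combination of the point itself and its two neighbours each side
    return np.convolve(y, H5, mode="valid")
```

Henderson filters are designed to reproduce locally cubic trends exactly, so a smooth underlying direction passes through unchanged while the irregular component is damped.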

Doing this for the UK GDP estimates, up to quarter two 2012, gives the following picture.

[Chart: ONS trend estimate of GDP]

So what does this tell us?

While we now have a trend estimate, it doesn’t tell us the exact impact of the one-off event. We could derive an estimate of this by comparing the seasonally adjusted estimate (which is trend plus irregular) with the trend estimate to give us an estimate of the irregular. However, this does not tell us precisely the impact of the special or one-off event, because additional volatility may also be present in the irregular component.

If we go back to the original quote in the article… “Britain’s growth is basically flat”. Well, perhaps yes or perhaps not really. If we did have additional information on the magnitude of the abnormal events, this would result in a “flatter” trend as the recent time point may be adjusted upwards. Even ignoring the measurement issue for the abnormal event, we can still obtain a trend estimate which helps cut through the volatility of the seasonally adjusted estimates. And the best thing is that rather than use meaningless words we can actually quantify the movement in the trend. The table below best illustrates this. Rather than watching the seasonally adjusted estimates jump all over the place from positive to negative to positive to negative, the trend estimates show a clear change in direction in the recent two quarters.

                      Mar 2011  Jun 2011  Sep 2011  Dec 2011  Mar 2012  Jun 2012
Trend                     0.02      0.32      0.19     -0.06     -0.54     -0.40
Seasonally adjusted       0.46     -0.09      0.59     -0.36     -0.32     -0.70

So go for gold, go for the trend.

GDP revisions are always higher(?) and Secret Bank Agents

I don’t mind opinions on serious hardcore subjects like the economy and statistical output as long as they make use of factually correct information to support the arguments.

Check out the extract from the following article below with the relevant parts highlighted.

“What can we take away from this? The GDP figures will eventually be revised higher, as always, after they have ceased to be of relevance. […] The Bank’s own regional agents report growth in the economy in all the sectors they monitor, with the exception of construction.”
Read it all: http://www.economicsuk.com/blog/001711.html#more (published 22 July 2012)

Let’s focus on the facts.

Fact 1: Are the UK GDP estimates always revised higher? Answer: a definitive no. People like to think this, as it gives them an excuse when their “predictions” go horribly wrong, or when they don’t believe the data. As you can imagine, a quick internet search threw up lots of opinions on this very issue. One of the more recent was an opinion piece on the BBC, to which the ONS replied.

“Since quarter one of 2007 the average revision between the first and third estimate of GDP has been -0.02 percentage points, with 15 of the 20 quarters only revised between +/- 0.1 percentage points. So if anything, the GDP estimate is more likely to be revised down slightly.”
Read it all: http://www.bbc.co.uk/news/business-17854550 (published 26 April 2012)

If you look closely for this information, revision and bias analysis on the GDP estimates is even included in the regular GDP release, as a little-known dataset that keeps track of revisions to the official estimates. It is referred to as a revisions triangle; it shows the estimates at initial publication alongside the subsequently revised estimates, and can be found on the ONS website. This unbiased assessment clearly shows that GDP is not “always revised higher”. Sure, there are revisions, and depending on the new information, estimates can be revised either up or down.

Fact 2: Does the Bank of England use regional agents to gather information? Answer: Yes, this is true. This is probably a little-known fact outside of the Bank of England and the inner sanctum of economists, although it sounds like something out of The Matrix, with Bank agents running around the countryside in trench coats and black wrap-around sunglasses.

The Bank of England employs regional agents to gather information on its behalf (probably because it doesn’t trust anyone else). One can only guess how they collect their information, but in practice it probably means a regional agent having drinks or dinner with some local business owners and then feeding this selected information back to the Bank. The Bank then uses this as part of its information gathering for setting the interest rate at the regular Monetary Policy Committee meeting. You would think that for such an important meeting, where one of the main outcomes is the setting of interest rates, they would only use official independent estimates rather than potentially biased information from single data sources.

Lack of understanding clouding analysis?

The good thing about statistics (and the internet) is that everyone can have a view and an opinion when new official estimates are released for the first time. The bad thing about this is that everyone can have a view and opinion! So – following this rule – here is an opinion on the opinions.

I’m sure there are many examples floating around on the internet where individual analysis is flawed. To catch each single one and then critique them would be a full-time job. But I’ve managed to dig up one recent example where the commentator suggests that the seasonally adjusted estimates are misleading for the United States unemployment estimates. However, in making their point in this particular case, there is a fundamental flaw in their argument. Full link below and the key sentence is …

“Looking at 12-month changes means you can ignore seasonal adjustments, so these are unadjusted figures. In June, the number of jobs was up 1.8 million from a year earlier, or 1.6 percent. The annual changes were a little higher in the winter, but the year-to-year change was the best for any June since 2006.”

This displays a lack of understanding of the complexities and benefits of seasonal adjustment. In this case there are two fundamental issues:

1. Looking at 12-month changes in the original estimates (i.e. the non-seasonally adjusted estimates) is only sensible (or useful) when the data is non-seasonal. When there is seasonality involved, you must perform seasonal adjustment, for the following reasons:

a) The calendar changes over time, not only in the number of days in each month BUT also in the number of each type of day. For example, July 2012 had 4 standard weeks (Monday to Sunday) plus an extra Sunday, Monday and Tuesday. July 2011 had 4 standard weeks plus an extra Friday, Saturday and Sunday. So if you are looking at the difference between these two months, you need to take into account the different types of days each month has. This is one of the purposes of seasonal adjustment. In practice, this aspect matters most for series that display significant trading day activity (such as retail sales), so for unemployment series it may not have such a big impact.

b) Seasonality can evolve over time. It is not the case that seasonality is constant and identical every single year. Seasonal adjustment programs take account of this characteristic and can cope when seasonality evolves gradually. By looking at 12-month changes in the unadjusted data you will miss this subtle but important change. Even when seasonality evolves rapidly, seasonal adjustment processes can handle it, particularly when used by an expert who can set the appropriate parameters. If you use the seasonally adjusted estimates, any seasonal differences (no matter how small) are removed, so they will give you the best picture.

c) Moving holidays. What are these, you ask? They probably deserve a separate post all of their own. But for now: these are holidays that can move between different months in a systematic way. For example, the timing of Easter moves between March and April, the timing of Ramadan changes throughout the year, and the day of the week that Christmas falls on changes each year. You can’t take these effects into account when you just look at the unadjusted estimates. So while you may think you are comparing two March estimates in different years, the timing of Easter will have an impact. This is why it is important that seasonal adjustment methods are used to estimate and remove these types of effects, so everything can be placed on the same basis.

So if you spot any analysis or commentary that begins with “let’s use the unadjusted figures … and take the change over a year”, make a note that the analyst doesn’t know what they are talking about and save yourself some time by going to read something else.
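Incidentally, the day-composition claim in point (a) above is easy to check for yourself. A quick sketch with Python’s standard calendar module counts the weekday make-up of any month:

```python
import calendar
from collections import Counter

def weekday_counts(year, month):
    # Count how many Mondays, Tuesdays, ... the given month contains
    _, n_days = calendar.monthrange(year, month)
    return Counter(calendar.day_name[calendar.weekday(year, month, d)]
                   for d in range(1, n_days + 1))

# July 2012: five Sundays, Mondays and Tuesdays; four of each other day.
# July 2011: five Fridays, Saturdays and Sundays; four of each other day.
```

So although both months have 31 days, they contain different mixes of weekdays, which is exactly the trading day effect that seasonal adjustment corrects for.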

2. A 12-month change. What is that? It is the difference between this month’s data this year and the same month’s data last year. You think this is useful? This is what a lot of analysts, and even some economists, will tell you. But what they don’t tell you (or perhaps don’t know, or don’t want to acknowledge) is that this type of indicator is terrible for telling you anything about what is happening in the most recent time periods, or for identifying turning points. Sure, it tells you how July this year compares to July last year. But in reality it is a lagged indicator. What we should really be interested in is finding out, as soon as we can, whether the economy has changed direction. A 12-month change (or year-apart growth, or annual percentage difference) will rarely detect the timing of turning points quickly, as by definition it introduces a lag, so that any detection of turning points is delayed by a number of months.

You can read about the pitfalls and dangers of different indicators here: http://www.ausstats.abs.gov.au/ausstats/free.nsf/0/D829449095630207CA256D78001BB271/$File/13490_2003.pdf

User expectations and the real world

The United Kingdom Retail Sales estimates were released yesterday (19th July 2012) and the indicator that most people seem to watch and get excited about is the month-on-month percentage change in the seasonally adjusted estimates. This came in at +0.1% between May and June 2012 (for all retailing including fuel). See the official release here: http://www.ons.gov.uk/ons/rel/rsi/retail-sales/june-2012/stb-june-2012.html.

Now – some people get paid an awful lot of money to make educated guesses at what these estimates should be: the investment and central bankers, the economists, the hedge fund types, and the stock market traders.

To help write useful articles, journalists often survey a wide number of their economist friends and get their opinions before the data is released. This means they have nice quotes to use whether or not the estimates match expectations, ranging from “I’m an expert, therefore that is why I predicted this correctly” to “The real numbers can’t be correct, they just have to be wrong, because they don’t match what I think”. A Bloomberg article (link below) noted that the “expectation” for this recent +0.1% movement was actually +0.6%.

“The median forecast of 18 economists in a Bloomberg News survey was for a 0.6 percent increase. Excluding fuel, sales were up 0.3 percent. Food sales dropped 0.7 percent.”

Now, in my book, this is not even close! Perhaps this is a one-off…? Of course it isn’t! And I’d guess there are plenty of examples where the experts would be no better than a dartboard at guessing what has gone on. Sometimes you have to wonder what these so-called “experts” actually know about the data they are guessing about. And more importantly, why we actually listen to them. Until they can show how accurate their guesses are, and back them up with before-and-after facts and how they were derived, why should we care what they think? It just clouds the real picture of what actually happened. So who cares what they thought would happen when we’ve just found out what actually happened!

In the end we should let the data speak for itself. At least we all know that the official numbers are backed by large independent surveys that can tell us what is really going on.

Don’t read too much into one figure

While going through the archives, I came across this nice piece by Larry Elliott at the Guardian. It was in relation to a large movement in retail sales in the United Kingdom for July 2010.

As we know, any single value can be difficult to interpret by itself. This is particularly relevant when we may be trying to interpret something as complicated as the economy! In cases like this it is better to put any single large positive or negative movement into context, either by comparing it to recent periods (or perhaps even the same period last year), or by using additional relevant information to help confirm the movement.

Larry (and the ONS) have it spot on in this case that you shouldn’t read too much into a single figure.

“That said, this is probably one of the occasions when it is wise to heed the advice of the ONS [Office for National Statistics, UK] about not reading too much into one month’s figures.”