Chinese GDP calculation and seasonality

Always interesting to read how different countries calculate their economic outputs, particularly GDP. This recent news article covers some changes to the Chinese GDP calculations relating to seasonality.

The relevant bits are:

“Now, China is calculating GDP based on economic activity of each quarter to make the data “more accurate in measuring the seasonal economic activity and more sensitive in capturing information on short-term fluctuations”, the NBS said.

Previously, China’s quarterly GDP data, in terms of value and growth rates, was derived from cumulated figures rather than economic activity of that particular quarter, the bureau said.”

Always good to go back to the original source which seems to be at:

in the section:

“1.4.1 Preliminary Accounting

As China’s quarterly GDP accounting is cumulative before 2015, the GDP preliminary accounting of 1-4 quarters is annual GDP preliminary accounting. Since the third quarter of 2015, China’s quarterly GDP accounting is completed quarterly, which means calculating the GDP of four quarters respectively, and totaling them up to produce the annual GDP preliminary accounting results. Annual GDP preliminary accounting is accomplished before 20 January. “

Would be curious to see some time series analysis of the outputs, or how they may deal with any changes in seasonality at this switch over.

GDP and updates

The article below is worth a read. It is not often you see these types of nitty-gritty statistical issues reported in the mainstream media, but it highlights an important issue for compiling and comparing estimates.

“Two years ago Ghana’s statistical service announced it was revising its GDP estimates upwards by over 60%, suggesting that in the previous estimates about US$13bn worth’s of economic activity had been missed.”
Read it all: (published 20 November 2012)

While it seems in part to be a promotion for the author’s book (which is always the case, I guess), it also captures the issue of data quality both within and across different countries.

Compiling statistics such as GDP (in the National Accounts) is not a simple thing. In many major economies there are teams and teams of highly qualified people putting these estimates together. You don’t see them, but they are often hidden away in the back rooms of statistical agencies. These brave men and women of the statistical world – yes! don’t laugh! – are often classified as nerds and geeks because it can take an unhealthy obsession with the finer details and nuances of the source data to compile a high-quality estimate of GDP. With such a complex area, it is often the case that individuals will have 20 or 30 years of experience in specific aspects of what makes GDP tick. It is not an easy task. And it is often why they are locked away in the back rooms.

A quick aside – and lately it seems to be a thankless task as the debate about the usefulness of GDP versus well-being measures rumbles onwards. Maybe this will be a topic for another post…

In this example, the compilation of GDP estimates is a worthwhile exercise, but what it really shows is the importance of getting the fundamental basics of the estimates correct. It also shows how vital it is to have regular revisions to country estimates, particularly when new or updated historical data becomes available, such as the contribution of different sectors of the economy. There are many detailed international manuals and frameworks for GDP compilation which have been around for many decades and are updated on a reasonably regular basis. The issues described in the article above are captured in detail in these types of manuals.

The manuals cover a lot of detailed points, but if the basics are not in place (a fundamental understanding of the key concepts, and access to timely, good-quality source data), then in the end it doesn’t matter what the manuals (or consultants, or booksellers) actually say.

UK GDP and trend for quarter 3 2012

The Office for National Statistics released the first estimate of quarter 3 2012 GDP yesterday. They estimated a 1 percent increase between quarter 2 and quarter 3 2012.

We had previously posted about the use of a trend estimate based on the data up to quarter 2 2012. We have now used the new information and derived the latest trend estimate.

Trend estimates can be useful here because they help smooth out volatility in the series. The ONS has highlighted that the recent few quarters of GDP have been impacted by one-off events such as public holidays and the timing of the Olympic games in London. The use of a trend estimate can reduce the impact of these events and help provide a guide to what is really happening. The ONS estimated that the timing of the Olympics contributed about 0.2 percentage points to the 1 percent increase in GDP between Q2 and Q3. To take this useful information into account we have derived two estimates of the trend: one taking the ONS published seasonally adjusted estimate at face value, and the other using the ONS one-off information to reduce the published seasonally adjusted estimate. This allows us to assess any impact of this “known” one-off effect on the trend estimate. It also serves as a rough and ready sensitivity analysis of the trend.

So taking the data from here and calculating trend estimates we get the picture below.

ONS GDP quarter 3 2012 seasonally adjusted and trend

Using the trend estimates helps reduce the one-off impact in quarter 2 2012 (which is due to the timing of a holiday) and the one-off impact in quarter 3 2012 (the Olympic ticket effect). The differences between the two trend estimates for the quarter-on-quarter changes are shown in this table:

                               Mar 2011  Jun 2011  Sep 2011  Dec 2011  Mar 2012  Jun 2012  Sep 2012
Trend using published              0.08      0.40      0.19     -0.12     -0.50      0.05      0.37
Trend adjusted for tickets         0.08      0.40      0.19     -0.12     -0.48     -0.01      0.27
Seasonally adjusted published      0.49      0.07      0.52     -0.36     -0.30     -0.38      1.00
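
As a rough sketch of the sensitivity exercise behind the two trend rows, the ticket adjustment can be applied to the latest quarter’s growth rate before any trend filter is run. The growth figures below are the published Q1–Q3 2012 quarter-on-quarter movements quoted in this post; the index rebasing is illustrative.

```python
# Sketch: remove an assumed one-off effect from the latest quarter's growth
# before computing a trend. Figures are quarter-on-quarter percent changes.
published_growth = [-0.30, -0.38, 1.00]   # Q1, Q2, Q3 2012 (SA, published)
olympics_effect = 0.20                    # ONS estimate of the ticket effect, in pp

# Second variant of the series: latest quarter reduced by the one-off effect
adjusted_growth = published_growth[:-1] + [published_growth[-1] - olympics_effect]

def to_levels(growth, base=100.0):
    """Rebuild index levels from growth rates so a trend filter can be applied."""
    levels = [base]
    for g in growth:
        levels.append(levels[-1] * (1 + g / 100))
    return levels

published_levels = to_levels(published_growth)
adjusted_levels = to_levels(adjusted_growth)
```

Running the same trend filter over both `published_levels` and `adjusted_levels` gives the two trend rows in the table; the gap between them is the sensitivity to the assumed one-off.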

You can also compare the two vintages of trend estimates and assess the impact of a new data point by going back to the previous article here, which uses data up until quarter 2 2012. It is copied here for convenience. There is an impact on the June (quarter 2) 2012 estimate because, at the time, we did not know about the quarter 3 estimate. Now that there is new information, naturally there is a revision to both the seasonally adjusted and trend estimates.

                     Mar 2011  Jun 2011  Sep 2011  Dec 2011  Mar 2012  Jun 2012
Trend                    0.02      0.32      0.19     -0.06     -0.54     -0.40
Seasonally adjusted      0.46     -0.09      0.59     -0.36     -0.32     -0.70

Looking at the first table, even taking into account the 0.2 percentage point impact of the tickets, the trend shows an upward movement, although it is 0.3 percent quarter on quarter in underlying activity rather than 0.4 percent. These trend estimates are more stable than focusing on the 1 percent increase in the seasonally adjusted estimates, which we know will have been distorted by one-off impacts. It is likely that some other one-off factors are also at work here, but as always future data will tell more of the story.

Go for gold, go for the trend

Unfortunately the full article below sits behind a paywall. But it is an insightful piece by Sam Fleming and the selected extract below is useful to illustrate an important point.

“Wednesday’s numbers undoubtedly overstated the depth of the dive in GDP in the second quarter, in part because the Office for National Statistics struggled to measure the impact of bad weather and the Jubilee holiday. The 0.7 per cent drop is likely to be followed by a bounce of a similar magnitude in the third quarter, thanks to Olympics-related spending. Both quarters’ numbers should be treated as radio static. Britain’s growth is basically flat.”
Read it at: Caught in a Fiscal Ditch, 30 July 2012

This is all interesting stuff, and what is really being discussed here are one-off events. The terminology varies, but these are typically referred to as irregular, one-off, special, extreme, or abnormal events, so pick your word of choice. In the end it doesn’t matter what we call them, because these types of events are out of our control and we can’t stop them happening. What is more interesting is the interpretation, and also the measurement.

Obviously these types of events can have either a positive or negative impact on important economic estimates such as GDP or retail sales. For example, extreme unseasonal weather could cause retail sales to plunge (or, conversely, sales of scarves and winter jackets to soar), or the timing of the Olympics will cause travel and spending patterns to change from what they would normally be. These types of events pose challenges for users, such as economists, in interpreting the outputs, but also for the statisticians who apply statistical methods like seasonal adjustment. Boiled down to the bare bones, seasonal adjustment is just the estimation and removal of systematic calendar-related effects, and if we get a curve ball of something unexpected then this can cause difficulties in applying these methods. However, all is not lost. These events can be estimated and removed to help the interpretation of what we are really interested in.

When events like this occur, what we should be interested in is the underlying direction of the time series. While it is important to estimate the magnitude of these one-off events, it is often not possible to do so immediately. Often, before a thorough estimate of the impact of a one-off event can be made, more information is needed, such as additional survey data which may take months or even years to arrive, or anecdotal information from an independent source which can verify the impact. It is almost never the case that this type of information is available in time to use when deriving the seasonally adjusted estimates. Everyone wants the latest data as soon as possible!

An important point to note is that the seasonal adjustment program doesn’t care what the reason was for the one-off event; it doesn’t even care that it happened. Depending on the program, it can try to deal with the data in its own automatic way. For example, the commonly used X-12-ARIMA seasonal adjustment package includes an automatic algorithm to correct for data points that it thinks are abnormal. This helps improve the estimation of the seasonal factors while also generating robust outputs. Leaving that aside for the moment, if we did have additional information, such as the reason for the one-off event, an expected magnitude of impact, or some anecdotal information, we could use this to prior adjust the data ourselves before running the seasonal adjustment. This would be the best thing to do, as we have more control and it would help the seasonal adjustment program produce as good a set of seasonally adjusted estimates as possible. Experts do this all the time, particularly at National Statistics Institutes. However, when this is not possible, there is another way.
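
A minimal sketch of the prior-adjustment idea, with all figures invented for illustration. The seasonal adjustment step itself is left as a comment, because in practice it would be handed to a real package such as X-12-ARIMA:

```python
# Sketch: prior adjust a raw series for a known one-off before seasonal
# adjustment. The series and the size of the effect are invented.
raw = [98.0, 101.0, 99.0, 102.0, 97.5, 103.5, 99.5, 102.5]

# Suppose external information says observation 5 was boosted by a one-off
# event worth 2.0 index points (e.g. a special event in that quarter).
prior_adjustments = {5: -2.0}

corrected = [value + prior_adjustments.get(i, 0.0) for i, value in enumerate(raw)]

# The seasonal adjustment package is then run on 'corrected' rather than
# 'raw', so the one-off cannot distort the estimated seasonal factors. The
# known effect can be added back to the final seasonally adjusted series
# if users want it shown.
```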

We can simply treat any one-off event as part of the irregular component. Remember that the collected data typically consists of three main components: a trend (the underlying direction), seasonality (due to calendar effects, weather, etc.), and the irregular (volatility due to real-world variation, sampling, or other random effects).

This means that our seasonally adjusted estimates still contain the trend and the irregular component (this is not a problem; it is always the case). Following this approach, the trend can now become our friend, as we can reduce the impact of one-off events. We can calculate a trend estimate in the following way:

1. Take the published seasonally adjusted estimates (which by definition will include all the one-off effects). These can be obtained from the ONS website.
2. Apply a Henderson filter, which reduces the impact of the irregular component and isolates the trend. In this case a 5-term Henderson filter is used, with an I/C (noise-to-trend) ratio of 2.0. Other options could be used.
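
The steps above can be sketched in code. The central weights of the 5-term Henderson filter are the standard (-21, 84, 160, 84, -21)/286; a production implementation would also use asymmetric weights (for example Musgrave surrogates) near the ends of the series rather than leaving them empty, and the index levels below are illustrative, not ONS data.

```python
# Symmetric 5-term Henderson moving average (central weights only).
HENDERSON_5 = [-21 / 286, 84 / 286, 160 / 286, 84 / 286, -21 / 286]

def henderson_trend(series):
    """Return the 5-term Henderson trend; ends are None where the symmetric
    filter cannot be applied (real packages use asymmetric end weights)."""
    n = len(series)
    trend = [None] * n
    for t in range(2, n - 2):
        trend[t] = sum(w * series[t + k - 2] for k, w in enumerate(HENDERSON_5))
    return trend

# Illustrative seasonally adjusted index levels
sa = [100.0, 100.5, 100.4, 101.1, 100.8, 101.6, 101.2, 102.0]
trend = henderson_trend(sa)
```

Because the weights sum to one, a perfectly smooth series passes through the filter unchanged, while period-to-period noise is damped.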

Doing this for the UK GDP estimates, up to quarter two 2012, gives the following picture.

ONS trend estimate GDP

So what does this tell us?

While we now have a trend estimate, it doesn’t tell us the exact impact of the one-off event. We could derive an estimate of this by using the seasonally adjusted estimate (which is trend plus irregular) and the trend estimate to give us an estimate of the irregular. However, this does not tell us precisely the impact of the special or one-off event, because additional volatility may also sit within the irregular component.
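
That back-calculation is just a subtraction, sketched below with invented numbers: the gap between the seasonally adjusted series and the trend estimates the irregular, but it mixes the one-off with ordinary noise.

```python
# Sketch: estimate the irregular as (seasonally adjusted) minus (trend).
# Both series are invented for illustration.
sa = [100.8, 101.6, 101.2, 102.0]
trend = [100.9, 101.3, 101.5, 101.8]

irregular = [round(s - t, 2) for s, t in zip(sa, trend)]
# The one-off event is somewhere inside 'irregular', mixed with other noise,
# so this gives an indication of its size rather than a precise estimate.
```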

If we go back to the original quote in the article… “Britain’s growth is basically flat”. Well, perhaps yes, or perhaps not really. If we did have additional information on the magnitude of the abnormal events, this would result in a “flatter” trend, as the recent time point might be adjusted upwards. Even ignoring the measurement issue for the abnormal event, we can still obtain a trend estimate which helps cut through the volatility of the seasonally adjusted estimates. And the best thing is that rather than using vague words we can actually quantify the movement in the trend. The table below illustrates this. Rather than watching the seasonally adjusted estimates jump from positive to negative and back again, the trend estimates show a clear change in direction in the most recent two quarters.

                     Mar 2011  Jun 2011  Sep 2011  Dec 2011  Mar 2012  Jun 2012
Trend                    0.02      0.32      0.19     -0.06     -0.54     -0.40
Seasonally adjusted      0.46     -0.09      0.59     -0.36     -0.32     -0.70

So go for gold, go for the trend.

GDP revisions are always higher(?) and Secret Bank Agents

I don’t mind opinions on serious hardcore subjects like the economy and statistical output, as long as they make use of factually correct information to support their arguments.

Check out the extract from the article below, with the relevant parts highlighted.

“What can we take away from this? The GDP figures will eventually be revised higher, as always, after they have ceased to be of relevance. […] The Bank’s own regional agents report growth in the economy in all the sectors they monitor, with the exception of construction.”
Read it all: (published 22 July 2012)

Let’s focus on the facts.

Fact 1: Are the UK GDP estimates always revised higher? Answer: A definitive no. People like to think this because it gives them an excuse when their “predictions” go horribly wrong, or because they don’t believe the data. As you can imagine, a quick internet search threw up lots of opinions on this very issue. One of the more recent was an opinion piece on the BBC, to which the ONS replied.

“Since quarter one of 2007 the average revision between the first and third estimate of GDP has been -0.02 percentage points, with 15 of the 20 quarters only revised between +/- 0.1 percentage points. So if anything, the GDP estimate is more likely to be revised down slightly.”
Read it all: (published 26 April 2012)

If you look closely, revision and bias analysis on the GDP estimates is even included in the regular GDP release, as a little-known dataset that keeps track of revisions to the official estimates. It is referred to as a revisions triangle; it shows the estimates at initial publication alongside the subsequently revised estimates, and can be found on the ONS website. This unbiased assessment clearly shows that GDP is not “always revised higher”. Sure, there are revisions, and depending on the new information the estimates can be revised either up or down.
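
As a sketch of the kind of check the ONS quote describes, a revisions triangle can be reduced to an average revision between vintages. The triangle below is a toy with invented figures, not actual ONS data.

```python
# Toy revisions triangle: each row is a reference quarter, each column a
# successive published vintage of its growth estimate (all figures invented).
triangle = {
    "2011Q1": [0.5, 0.5, 0.4],
    "2011Q2": [0.1, 0.0, 0.1],
    "2011Q3": [0.6, 0.6, 0.5],
    "2011Q4": [-0.4, -0.3, -0.4],
}

# Revision between the first and third estimates for each quarter
revisions = [vintages[2] - vintages[0] for vintages in triangle.values()]
mean_revision = sum(revisions) / len(revisions)
# A mean close to zero, as the ONS reports for the real data, indicates no
# systematic tendency to revise in one direction.
```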

Fact 2: Does the Bank of England use regional agents to gather information? Answer: Yes, this is true. This is probably a little-known fact outside of the Bank of England and the inner sanctum of economists, although it sounds like something out of The Matrix, with Bank agents running around the countryside in trench coats and black wrap-around sunglasses.

The Bank of England employs regional agents to gather information on its behalf (probably because it doesn’t trust anyone else). One can only guess how they collect their information, but what this probably means in practice is the regional agent having drinks or dinner with some local business owners and then feeding this selected information back to the Bank. The Bank then uses this as part of its information gathering for setting the interest rate at the regular Monetary Policy Committee. You would think that for such an important meeting, where one of the main outcomes is the setting of interest rates, they would only use official independent estimates rather than potentially biased information from single data sources.