USA retail sales and trend estimates for August 2015

It has been a long time since we’ve looked at the USA Retail Sales estimates; we last did so way back in 2012, so it is worth a revisit.

The Census Bureau does not publish official trend estimates, but trend estimates can be derived by taking the published seasonally adjusted estimates and applying a set of Henderson filters (with a bit of code in R and ggplot2). Using the latest published data, up to and including August 2015, gives
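As an illustration of the idea (the original analysis used R and ggplot2; the sketch below is Python), here is how a seasonally adjusted series can be smoothed with the standard symmetric 13-term Henderson filter. The weights are the commonly published H13 values, rounded; the series fed in is made up.

```python
import numpy as np

# Standard symmetric 13-term Henderson filter weights (rounded values as
# commonly published for monthly series), normalised to sum exactly to 1.
H13 = np.array([-0.01935, -0.02786, 0.0, 0.06549, 0.14736, 0.21434,
                0.24006,
                0.21434, 0.14736, 0.06549, 0.0, -0.02786, -0.01935])
H13 = H13 / H13.sum()

def henderson_trend(seasonally_adjusted):
    """Smooth a seasonally adjusted series with the 13-term Henderson
    filter. Returns the trend for interior points only; in practice,
    asymmetric end weights would be used for the most recent months."""
    x = np.asarray(seasonally_adjusted, dtype=float)
    return np.convolve(x, H13, mode="valid")

# A Henderson filter reproduces polynomials up to a cubic exactly (here,
# a linear signal), while damping month-to-month noise.
t = np.arange(60.0)
trend = henderson_trend(t)
```

The "valid" convolution drops six points at each end; official producers instead apply asymmetric surrogate filters there, which is why the most recent trend values get revised as new data arrive.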

USA Retail Sales Seasonally adjusted and trend estimates

where the one month percentage change in the trend and seasonally adjusted estimates are

                     Dec 2014  Jan 2015  Feb 2015  Mar 2015  Apr 2015  May 2015  Jun 2015  Jul 2015  Aug 2015
Trend                   -0.24     -0.18     -0.01      0.26      0.49      0.58      0.53      0.45      0.36
Seasonally adjusted     -0.87     -0.77     -0.53      1.54      0.03      1.18     -0.04      0.71      0.19

So the underlying one month movement in the trend has been strong since March 2015, even though the seasonally adjusted one month movements have bounced around. Even a dip in the seasonally adjusted estimate for September 2015 shouldn’t change the fundamental view of the underlying strength in recent periods.

Over the length of the series, the median one month percentage change in the trend for USA retail sales is 0.4%, so recent activity is back in line with historical growth.
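As a quick arithmetic check, the median of the recent one month trend changes tabulated above can be computed directly (Python here, though the original analysis used R):

```python
import numpy as np

# One month percentage changes in the trend, Dec 2014 to Aug 2015
# (values taken from the table above).
trend_changes = [-0.24, -0.18, -0.01, 0.26, 0.49, 0.58, 0.53, 0.45, 0.36]

recent_median = float(np.median(trend_changes))  # -> 0.36
```

A recent median of 0.36% sits just under the long-run median of 0.4% quoted above, consistent with the "back in line with historical growth" reading.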

For background you can get the seasonally adjusted data here:

UK Bank Holidays and Retail Sales

Unofficial estimates of retail sales for August 2015 from the British Retail Consortium are discussed in this article from 8th September 2015:

This period raised particular issues relevant to seasonal adjustment, such as the treatment of the bank holiday which fell on 31st August. The article notes that:

“The bank holiday was on 31 August, but both the BRC and the Office for National Statistics judge that the month officially ended on 29 August. It means September’s figures will be boosted by back-to-school purchases. The bank holiday applied in England, Wales and Northern Ireland. Retailers report their sales on a weekly basis from Sunday to Saturday, which means that monthly figures do not necessarily cover the whole of a calendar month. Instead, a quarter will be made up of two four-week periods and a five week period. This is only particularly important when key shopping days such as bank holidays officially fall in different months from year to year, which makes comparisons difficult.”

The article doesn’t mention that this problem disappears if the seasonally adjusted estimates are used. Any comparison of the collected, non-seasonally adjusted data will be distorted by these types of events, but seasonal adjustment methods can estimate and remove the impacts of holidays, including those that move over time. So only use the seasonally adjusted estimates to get the real underlying picture of what is happening. Better still, use trend estimates, derived by smoothing the volatility out of the seasonally adjusted estimates. Either way, it is best to wait for the official seasonally adjusted estimates to see the real picture.
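The paragraph above doesn’t spell out the machinery, but a common way seasonal adjustment handles a moving holiday is a regression effect: build a regressor that flags the affected months and estimate its coefficient alongside the seasonal pattern. A minimal sketch of the idea follows; all data here are synthetic, and production adjustment would use a regARIMA-style model in software such as X-13ARIMA-SEATS rather than plain least squares.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 120                        # 10 years of synthetic monthly data
months = np.arange(n) % 12
years = np.arange(n) // 12

# A holiday that falls in August (month 7) in even years and September
# (month 8) in odd years, boosting sales by 2.5 units when it occurs.
holiday = np.where(years % 2 == 0, months == 7, months == 8).astype(float)
seasonal = np.sin(2 * np.pi * months / 12)
y = 100 + seasonal + 2.5 * holiday + rng.normal(0, 0.1, n)

# Design matrix: one dummy per calendar month plus the moving-holiday
# regressor; the holiday column is identified because the holiday does
# not fall in the same month every year.
X = np.column_stack([np.eye(12)[months], holiday])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
holiday_effect = coef[-1]      # estimated boost from the holiday

# Remove the estimated holiday effect before comparing months.
y_holiday_adjusted = y - holiday_effect * holiday
```

Because the holiday moves between months from year to year, its effect is separable from the fixed monthly pattern, which is exactly why adjusted estimates remain comparable when a bank holiday shifts.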

Choose the right indicator for analysis

I normally post articles that are useful to read. The author of the following article does themselves no favours with a general rant against official statistics outputs. I’ll quote two parts of the article that deserve to be highlighted:

The full article is available here:

The first quote is:

“… comprehensive compilation of data is useful for economists and analysts, it is broadly unhelpful for anyone who wants to get a simple understanding of the direction of retail.”

I’d argue that nothing is simple anymore when it comes to interpreting movements in economic data. Simple analysis leads to simple understanding, and with the complex nature of economic outputs and estimates you are going to miss the subtle underlying aspects of the economic picture. Why constrain your analysis to simple measures and one indicator? The best approach is to build a picture from a set of outputs and look at a range of indicators. Seasonally adjusted and trend estimates can give a useful picture when used in tandem, with the unadjusted estimates consulted as well if needed. Focusing on a single indicator is a recipe for disaster for interpretation and understanding, as each indicator has its own strengths and weaknesses.

The second quote from the article backs this up. The quote is:

“The more sensible measure is to look at the value of spend compared to the same period last year.”

Unfortunately this is a common misconception, and it results in flawed analysis; it shows a lack of understanding of time series issues in general. Firstly, depending on what data is being compared, year-on-year movements in non-seasonally adjusted estimates run the risk of seriously misleading analysis, because the composition of the calendar changes over time. July 2015 has a different day-of-week composition to July 2014: July 2015 has 4 full weeks plus an extra Wednesday, Thursday and Friday, while July 2014 has 4 full weeks plus an extra Tuesday, Wednesday and Thursday. So if there is a lot of extra activity on Fridays, the comparison will be distorted just by how the calendar changes over time. You may think you are comparing like-for-like when you look at year-on-year movements, but if the data is not adjusted for seasonal and trading day effects (such as the number of Mondays, Tuesdays, etc.), you will get a false understanding of the movements: changes that look important and significant but are just due to the calendar. So it is important to ensure that seasonally adjusted estimates are used for any year-on-year comparison.

Secondly, a year-on-year movement is lagged and tells you nothing about what is happening in the most recent periods. It only tells you what has happened relative to a year ago! How is that relevant when you’re ignoring the most recent set of information, e.g. the May, June and July outputs, in any analysis? The economy evolves over time, and a year ago can be a long time in the context of economic activity. Seasonal patterns change over time, and there can be shocks to the economy. To truly understand what is happening now, the most recent time periods need to be taken into account in comparison with the latest data. The best way to do that is to use a form of trend estimate over the most recent time periods.
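The day-composition point is easy to verify with the Python standard library (the helper name here is mine):

```python
import calendar
from collections import Counter

def weekday_counts(year, month):
    """Count how many times each weekday name occurs in a given month."""
    cal = calendar.Calendar()
    # itermonthdates yields padding days from adjacent months, so filter
    # down to dates that actually belong to the requested month.
    return Counter(d.strftime("%a")
                   for d in cal.itermonthdates(year, month)
                   if d.month == month)

jul_2015 = weekday_counts(2015, 7)  # five Wednesdays, Thursdays, Fridays
jul_2014 = weekday_counts(2014, 7)  # five Tuesdays, Wednesdays, Thursdays
```

A 31-day month always has three weekdays occurring five times, and which three they are shifts from year to year, which is precisely the trading day effect described above.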

A response article that touched on these issues raised by the original article was also published and is available here:

Useful guidance on communicating uncertainty

The Government Statistical Service in the United Kingdom have put out some useful guidance on communicating uncertainty. You can check it out here:

The most interesting part, on page 4, is where they say:

“You should provide sufficient and appropriate information to indicate:
…a longer term view of change (e.g. trend)”

Good to see the trend get an official mention: when packaged with a range of other indicators (original and seasonally adjusted estimates), it can give a complete understanding of the nature of the time series. Why settle for just the seasonally adjusted estimates when they still contain the noisy part of the time series?

Example of the use of trend for volatile series

Here is a good example of how the use of trend estimates can help when the estimates are volatile.


Example trend bovine

Seasonal patterns and October public sector net borrowing

I had previously looked at public sector net borrowing in some earlier posts. The most recent data, for October, was released on 21 November 2012. It again led to quite a bit of press coverage, for example:

“This meant government borrowing excluding the effects of banking bailouts came in at £8.6bn in October, compared with £5.9bn a year ago. City economists had expected a shortfall of £6bn.”

The Guardian also offers some analysis, but only presents the annual data, in a nice looking graph linking it all back to political parties. See this link.

We can do better than this by using the monthly data and combining it with some seasonal adjustment techniques to tell us what is really going on. Annual data won’t tell us the true story. The monthly data is available from this link (series id: J5II), and runs from January 1993 up to October 2012.

So let’s first look at what a reasonable expectation for the October 2012 estimate should have been. Ideally, we should use the data as it stood at the time of the September 2012 release, because there may have been historical revisions since; unfortunately the ONS website does not make it easy to extract vintages of previously published data. To make this as realistic as possible I have updated the data based on the published estimates in the previous statistical release. Taking the data published up to September 2012 and applying some simple forecasting methods would have given a forecast for October 2012 net borrowing of around £-6 billion, in line with the expert economists, who probably also just applied a simple forecast model. But this forecast came with a range between £-9.5 billion and £-2.5 billion. A wide range. So we shouldn’t be too surprised that the actual number was £-8.6 billion, as it is well within the expected range of our forecast.
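The post doesn’t say which simple forecasting methods were used, so purely as an illustration, here is one candidate: a seasonal-naive forecast (same month last year plus the average year-on-year change), with an approximate interval taken from the spread of historical year-on-year changes. The series below is made up; only the shape of the calculation matters.

```python
import numpy as np

def seasonal_naive_forecast(y, period=12, z=1.96):
    """Forecast the next value as the same month last year plus the mean
    year-on-year change, with a rough 95% interval based on the standard
    deviation of historical year-on-year changes.
    y: 1-D array of monthly observations, oldest first."""
    y = np.asarray(y, dtype=float)
    yoy = y[period:] - y[:-period]        # historical year-on-year changes
    point = y[-period] + yoy.mean()       # forecast for the next month
    half_width = z * yoy.std(ddof=1)
    return point, (point - half_width, point + half_width)

# Hypothetical monthly series: seasonal pattern, mild drift, noise.
rng = np.random.default_rng(1)
t = np.arange(120)
y = 5 * np.sin(2 * np.pi * t / 12) + 0.01 * t + rng.normal(0, 1, 120)

point, (lo, hi) = seasonal_naive_forecast(y)
```

Even this crude method produces a wide interval for a volatile series, which is the point made above: a single actual value landing well inside the interval is no surprise at all.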

Using the actual published data for October 2012, we can look at the seasonality of October over recent years. This gives an indication of whether the October 2012 estimate is different from previous Octobers. As everyone knows, it is not a good idea to simply compare the non-seasonally adjusted data across years, as that does not account for aspects such as changing seasonality over time and the calendar composition of the month; seasonal adjustment accounts for this. The plot of the seasonal component for Septembers and Octobers since 1993 is:

Net borrowing, SI chart up to October 2012

This shows a few things. While we didn’t look at the September 2012 estimate in detail, it highlights that it came in as we would have expected based on historical Septembers. For October 2012, it shows that net borrowing actually came in lower than we would expect, by about £2.5 billion, and broadly in line with what occurred in October 2009. So not a good October 2012 result.

This is better illustrated with the seasonally adjusted and trend data. It looks like this:

Net borrowing, up to October 2012

The black line is the original data (e.g. the £-8.6 billion for October 2012), along with the seasonally adjusted estimates in red and the trend in blue. Now the benefits of seasonal adjustment become clear: it strips out the regular seasonal pattern observed over the history of the series. Based on historical data, we would’ve expected the black line for October 2012 to be slightly higher (i.e. less negative, coming in around £-6 billion rather than £-8.6 billion). This is illustrated more clearly by the dip in the seasonally adjusted movement between September and October 2012. More interestingly, the underlying trend of net borrowing has levelled out since October 2011, with a change in the underlying trend of “only” £70 million over 12 months, i.e. essentially unchanged underlying net borrowing for over a year.

And finally, for the November 2012 estimate we will throw out a forecast for net borrowing (non-seasonally adjusted) of £+2 billion. Anything less than this and the underlying trend will be heading in the wrong direction. Let’s wait and watch the hype.