The impact of machine learning in demand forecasting
At RELEX we live and breathe forecasting. We have specialized in forecasting for supply chain planning and optimization for over a decade and have gradually expanded into using forecasts in other business domains such as space and assortment optimization. At present we run about 10 billion forecast calculations daily for our customers, around 100 billion a week.
Our approach to forecasting and forecast methodology has always been very practical – forecasts are needed to inform certain specific business decisions, and the more accurate the forecast, the better the decision. The time horizon and nature of these decisions determine what constitutes a good forecast; they often also determine what data is available for building the best possible one. For a quick recap of how our forecasting development has progressed, please see the ‘RELEX’s Forecasting Approaches’ white paper.
Recently there has been a growing number of white papers and articles extolling the benefits of using machine learning in demand forecasting. Naturally, we make use of machine learning at RELEX, and I’m fully aware of its strengths and benefits. However, the hype surrounding the technology is, from where I stand, both confusing and a distraction.
In fact, RELEX is in a unique position to get the most out of machine learning in retail demand forecasting, for three reasons:
- RELEX provides unmatched speed and performance in running machine learning calculations, even on exceptionally large datasets. RELEX operates on our proprietary in-memory database, designed from the ground up to compress retail sales and supply chain data. This enables in-memory processing of complex queries against large datasets with very fast throughput.
- RELEX alerts users to possible ‘overfitting’, which can lead to ‘forecast nervousness’, before machine-learning-based forecasts go live. A model is overfitted when the sample is too tight: the model gets calibrated to a very narrow set of circumstances, including the quirks and background noise in that sample. That leads to what we call ‘forecast nervousness’ or ‘forecast jitters’ – unstable, error-prone forecasts. (The answer is to ensure the model approximates the entire population, not just the current sample.) RELEX offers native support for calculating and comparing several versions of a forecast using different models, so a machine-learning-based forecast can be constantly compared with, for example, a time-series-based forecast. Running several forecasts in parallel provides points of comparison: forecasts can be checked against each other for accuracy, and analysts can be alerted whenever the difference exceeds a specified threshold so that extra checks can be made.
- RELEX allows users to select the forecast that performs best against specific business criteria – not only the one offering the best accuracy. The RELEX solution offers native support for simulating past and future replenishment and stock performance under different forecasting approaches, combining forecasts with supply chain dynamics and safety stock calculations. We can then decide, for example, to use a forecast attuned to ensuring the lowest possible out-of-stock levels rather than one that simply maximizes overall accuracy.
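The comparative check described above – flagging forecasts whose models may be overfitted by comparing them against a stable baseline – can be sketched in a few lines. This is only an illustration: the relative-difference measure and the 25% threshold are assumptions for the example, not a description of RELEX’s actual method.

```python
def flag_divergence(ml_forecast, ts_forecast, threshold=0.25):
    """Flag periods where the ML forecast deviates from the
    time-series baseline by more than `threshold` (relative)."""
    flags = []
    for ml, ts in zip(ml_forecast, ts_forecast):
        denom = abs(ts) if ts != 0 else 1.0  # guard against division by zero
        flags.append(abs(ml - ts) / denom > threshold)
    return flags

# A jittery ML forecast vs a stable time-series baseline:
ml = [100, 105, 240, 98, 30]
ts = [100, 102, 104, 101, 99]
print(flag_divergence(ml, ts))  # periods 3 and 5 exceed the threshold
```

In practice the flagged periods would be routed to an analyst as exceptions rather than printed, but the principle is the same: divergence beyond a set tolerance triggers an extra check before the forecast is used live.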
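To see why the most accurate forecast is not always the best one for a given business criterion, consider this deliberately simplified order-up-to replenishment simulation. The replenishment logic and the figures are invented for illustration; a real simulation would model lead times, batch sizes and safety stocks as well.

```python
def stockout_days(forecast, demand):
    """Toy order-up-to simulation: each day, demand is consumed and
    stock is then replenished up to the next day's forecast.
    Returns the number of days demand could not be fully met."""
    stock = forecast[0]
    stockouts = 0
    for f_next, d in zip(forecast[1:] + [0], demand):
        if d > stock:
            stockouts += 1
        stock = max(stock - d, 0)
        stock = max(stock, f_next)  # replenish up to the next forecast
    return stockouts

demand = [10, 30, 10, 30, 10]
flat = [18, 18, 18, 18, 18]   # accurate on average, misses the peaks
high = [30, 30, 30, 30, 30]   # biased high, "worse" accuracy
print(stockout_days(flat, demand))  # 2 stock-out days
print(stockout_days(high, demand))  # 0 stock-out days
```

The biased-high forecast loses on every accuracy metric yet wins on the out-of-stock criterion (at the cost of holding more inventory) – exactly the kind of trade-off that simulation makes visible before a forecast is chosen.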
So, as I mentioned above, I believe we are in a uniquely good position to benefit from machine learning forecasts. However, at RELEX we don’t tend to talk about machine learning. After all, the main focus should be on the quality of the forecasts and their fitness for their purpose – rather than how they were generated. The main area of use for demand forecasts is still managing the supply chain.
So what makes a good supply chain forecast?
We think it’s a combination of three things:
- Predictable degrees of error in the forecasts. Predictable error provides two immediate benefits:
1. The supply chain can be optimized on the back of that forecast – we can buffer stock with great precision and have very stable service levels.
2. Users quickly learn to trust the solution which makes implementation and roll-out much easier.
- The forecast supports exception-based working models. The solution can tell when analysts or planners should check the forecast – and give ample warning. This means that a small team of 3-5 skilled analysts can run even a retail operation with thousands of stores, managing tens of millions of data series. It is best to build processes that can be run by a small expert team rather than a larger, less expert one.
- The forecast(s) should support working on several time horizons. Our customers make a wide variety of supply chain decisions with our bottom-up (i.e. calculated at the SKU-store level) forecasts. These might begin with optimizing the stock build for Christmas six months before the start of the season and end with allocating a batch of newly received fresh produce from the DC using the freshest possible real-time information. In the former situation there is little or no information on the weather, competitor pricing, or even one’s own pricing and assortment, yet the material flows still need to be modelled; in the latter there is an abundance of data that can be used. It should be easy for analysts to interrogate and follow how forecast accuracy and dynamics improve with each new piece of information received.
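The first point above – that a predictable degree of error lets you buffer stock with great precision – follows from the standard textbook safety stock formula. The sketch below assumes roughly normally distributed daily forecast errors and an illustrative 98% service level; it is not a description of RELEX’s actual safety stock calculations.

```python
import math
from statistics import NormalDist

def safety_stock(error_std, lead_time_days, service_level=0.98):
    """Safety stock needed to hit `service_level` when the daily
    forecast error is roughly normal with std dev `error_std`."""
    z = NormalDist().inv_cdf(service_level)       # service-level factor
    return z * error_std * math.sqrt(lead_time_days)

# E.g. a predictable error of 12 units/day over a 4-day lead time
# needs about 49 units of safety stock for a 98% service level:
print(round(safety_stock(error_std=12, lead_time_days=4)))
```

The point is that the buffer scales with the error’s standard deviation: a forecast whose error is stable and predictable needs a small, calculable buffer, while a jittery forecast forces either large buffers or unstable service levels.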
The problem with machine-learning-based forecasting, especially if it’s ‘untamed’ (in other words, it hasn’t been subjected to the checks and balances of the comparative approach we recommend to ensure it’s a good, effective fit), is that it is easy to get wrong. The world is littered with failed applications of machine-learning-based supply chain forecasting. All too often the common factor in these failures is a willingness to invest years of time and millions of dollars to get it working – only to be forced to give up in the end. I have seen many of these projects, some of them attempted by our customers before we started working together. The most common problem has been overfitted models that produce unstable, jittery or nervous forecasts – characterized by the extremity of their values. One or two of these can very quickly damage user confidence, which makes implementation more challenging. If errors are not rooted out early, the decisions based on them can have disastrous financial consequences.
Being too focused on using machine learning in forecasting is like insisting on using a chainsaw to build a house, to the exclusion of all other tools. Building a house well requires a wide selection of good tools. And ultimately it should be all about the quality of the house, not your determination to build using nothing but a chainsaw.