I admit it, I’m struggling with part of the machine learning story when it comes to demand forecasting. There is circular reasoning going on.
Machine learning can be used to improve forecasts. The basic idea is that a demand forecast is made, a machine learning engine ingests data on how accurate that forecast was, and then the machine autonomously applies better math to improve the next forecast. This is explained in more depth in a previous article.
But as Scott Fenwick, the Director of Product Management at Manhattan Associates, points out, a good forecast is based on demand, not sales. In the consumer goods supply chain, that means the forecast needs to understand both the sales and the lost sales. Lost sales are the sales that could have been achieved if inventory had been in stock on the store shelf. Mr. Fenwick added, “the goal is to stock to true demand (or as close to it as we can get) so the retailer doesn’t miss out on those opportunities lost when the inventory isn’t maintained perfectly.”
Manhattan Associates has a solution for calculating lost sales. But here is the rub. Point-of-sale data can tell you what the store’s sales were, but the lost sales calculation is made based on a store forecast. The store forecast tells you what the sales should have been; if the forecasted sales don’t occur, store systems check whether the inventory was available to complete the sale. If the inventory was not available, it was a lost sale.
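The logic described above can be sketched in a few lines of code. To be clear, this is not Manhattan Associates’ actual algorithm; it is a minimal illustration of the idea, with hypothetical field names, assuming one record per SKU, store, and period:

```python
# A simplified sketch of the lost-sales logic described above -- not a
# vendor's actual algorithm, just an illustration with hypothetical fields.
from dataclasses import dataclass

@dataclass
class StoreRecord:
    sku: str
    forecast_units: float  # what the store forecast said should sell
    actual_units: float    # what point-of-sale data says actually sold
    in_stock: bool         # was inventory available on the shelf?

def lost_sales(record: StoreRecord) -> float:
    """Estimate lost sales for one SKU/store/period.

    If actual sales fell short of the forecast AND the item was out of
    stock, the shortfall is treated as lost demand. If inventory was
    available, the shortfall is blamed on the forecast, not a stock-out.
    """
    shortfall = record.forecast_units - record.actual_units
    if shortfall > 0 and not record.in_stock:
        return shortfall
    return 0.0

# Forecast said 50 units, only 30 sold, and the shelf went empty:
# the 20-unit shortfall is counted as lost demand.
print(lost_sales(StoreRecord("SKU-123", 50, 30, in_stock=False)))  # 20.0
```

Note how the forecast appears on both sides of the ledger here: it is the thing being evaluated, and it is also the yardstick used to measure the lost sales that feed back into it. That is the circularity the next paragraph describes.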
This is circular reasoning. A demand forecast is made, and then to check the accuracy of that forecast you must rely, in part, on that same forecast.
The problem is even worse than that: stores often have poor inventory accuracy, so they may not reliably know their true in-stock position in the first place.
And as Mr. Fenwick pointed out, the definition of a lost sale changes in an omni-channel environment. For example, if a customer goes to the shelf and the product they want to buy is not there, a store associate might offer to have it shipped from the retailer’s e-commerce warehouse. Manhattan Associates is doing some interesting product development around calculating lost sales in an omni-channel environment. However, these calculations still depend on forecasts.
Another problem is that the more granular the forecast – SKU at store level by week, for example – the higher the forecast error tends to be. Mr. Fenwick addressed this: “For sure, the greater degree of error in the store level forecast, the greater the impact on the lost sale calculation. However, even if we hit a 70 percent accuracy measure we’re still capturing 70 percent of the potential lost demand in the store due to stock outs. Which, from a forecasting perspective is a lot better than capturing zero lost demand. As the saying goes, ‘if you only forecast to sales, you’ll only ever stock to … what you sold.’”
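The arithmetic behind Mr. Fenwick’s point is worth spelling out. Using purely hypothetical numbers, even an imperfect store-level forecast recovers most of the lost demand that a sales-only approach misses entirely:

```python
# Illustrative arithmetic only (hypothetical numbers): an imperfect
# store-level forecast still recovers most of the lost demand, while
# forecasting to sales alone recovers none of it.
true_lost_demand = 100    # units that would have sold had they been in stock
forecast_accuracy = 0.70  # hypothetical 70% store-level forecast accuracy

captured_with_forecast = true_lost_demand * forecast_accuracy
captured_sales_only = 0.0  # a sales-only forecast never sees lost demand

print(captured_with_forecast)  # 70.0 units of lost demand recovered
print(captured_sales_only)     # 0.0 -- "stock to ... what you sold"
```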
Mr. Fenwick also pointed out that the value of the lost sales calculation goes beyond improving forecasts. Store managers don’t like empty shelves. “They tend to be vocal and adamant (in) situations where shelves are left empty or they’re having to turn customers away because they’re temporarily out of certain items. This impacts their ability to hit sales goals, (achieve) customer satisfaction,” and other metrics they are directly measured on.
In conclusion, I know that POS and other downstream data have greatly improved forecast accuracy. But I don’t know how much, or even whether, the lost sales calculation has led to forecast improvements. I’d love to talk to any consumer goods company or retailer that has used the lost sales calculation to improve their forecasts. My email is firstname.lastname@example.org.