There is a seemingly endless amount of information on the current analytics and big data revolution in business. I recently read a Wall Street Journal article on the tech-driven revolution in sports, written by Billy Beane, the Oakland A’s general manager best known as the character played by Brad Pitt in the movie Moneyball. I found the article to be an interesting perspective on the development of technology and analytics in sports, and it encouraged me to write this article outlining my perspective on the analytics revolution in supply chain management. In short, my view is that the analytics revolution is driven not by a change in the calculations themselves (an average is still an average, and a standard deviation is still a standard deviation), or even by analytics software, but by the evolution of supporting factors such as data capture, data availability, calculation speed, and storage capacity. It is these factors that have created the ability to analyze more attributes and behaviors – at a more granular level, greater frequency, and shorter latency – to arrive at more valuable and timely descriptive and predictive information.
Data Capture and Availability
The massive amount of data relevant to supply chain management is possibly the most important factor behind the growing prevalence and sophistication of analytics. Traditional point of sale (POS) data is increasingly being shared and leveraged among supply chain partners, and e-commerce transactions are naturally conducive to data capture and analysis. Meanwhile, online accounts and customer loyalty programs supplement sales data with consumer information and buying patterns, allowing companies to conduct product substitution analysis for demand shaping programs. In fulfillment, the use of radio frequency (RF) technology in warehouses is now commonplace, providing real-time inventory information. In transportation, the decreased cost of fleet telematics has increased its use and in turn provided more frequent location information that can be used for fleet scheduling, delivery time estimates, and improved visibility into inventory that is in transit between facilities.
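To make the product substitution idea concrete, here is a minimal sketch of one way it could work; the data shapes, SKU names, and the switch-counting heuristic are all my own assumptions for illustration, not a description of any particular company's method. The idea is simply that loyalty-card purchase histories let you see when a customer's purchases within a category switch from one SKU to another:

```python
# Minimal illustrative sketch (assumed data shapes, hypothetical SKU names):
# inferring candidate product substitutions from loyalty-card purchase
# histories. When a customer's purchases within a category switch from one
# SKU to another, count it as a possible substitution event.
from collections import Counter, defaultdict

def substitution_counts(purchases):
    """purchases: iterable of (customer_id, date, sku) records, assumed
    sorted by date within each customer and limited to one category."""
    history = defaultdict(list)
    for customer_id, date, sku in purchases:
        history[customer_id].append(sku)
    switches = Counter()
    for skus in history.values():
        for prev, curr in zip(skus, skus[1:]):
            if prev != curr:
                switches[(prev, curr)] += 1
    return switches

# Hypothetical loyalty data: two customers switch from "COLA-12" to "COLA-GEN".
data = [
    ("c1", "2024-01-01", "COLA-12"), ("c1", "2024-01-08", "COLA-GEN"),
    ("c2", "2024-01-02", "COLA-12"), ("c2", "2024-01-09", "COLA-GEN"),
    ("c3", "2024-01-03", "COLA-12"), ("c3", "2024-01-10", "COLA-12"),
]
print(substitution_counts(data).most_common(1))
```

A real demand shaping program would layer pricing, promotion, and stock-out context on top of counts like these, but even this simple view shows why loyalty data is so valuable: it ties individual buying patterns together over time.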
Calculation Speed and Storage Capacity
The ongoing exponential increase in computer hardware capabilities has greatly increased storage capacity and processing speeds while decreasing costs. This allows for large-scale increases in data storage and the ability to analyze massive amounts of granular data in place of data aggregates. The analysis of “disaggregated” data provides insight into distinctions previously hidden by aggregation. For example, sales by individual store location provide greater insight than sales at the DC level; daily or weekly sales data provide more accurate and timely inputs to forecasts than monthly sales data; and analysis by SKU provides greater insight than analysis by product category. In essence, more granular data can provide more valuable insights than aggregated data. I’m reminded of what my college statistics professor once said: “Statistics can provide insight, but they are in themselves a form of data destruction.” Greater granularity provides better analytical opportunities.
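The “data destruction” point can be shown with a toy example; the daily sales figures below are invented purely for illustration. Two volatile stores whose swings happen to offset each other produce a nearly flat DC-level aggregate, so the aggregate statistics hide exactly the store-level behavior a planner would need to see:

```python
# Toy example (invented numbers): two stores served by one DC over seven days.
# Each store's demand swings widely, but the swings offset each other, so the
# DC-level aggregate looks deceptively smooth.
from statistics import mean, stdev

store_a = [20, 5, 22, 4, 21, 6, 20]   # volatile demand
store_b = [5, 20, 4, 22, 6, 21, 5]    # volatile, roughly opposite pattern

dc_total = [a + b for a, b in zip(store_a, store_b)]  # [25, 25, 26, 26, 27, 27, 25]

print(f"Store A:      mean={mean(store_a):.1f}, stdev={stdev(store_a):.1f}")
print(f"Store B:      mean={mean(store_b):.1f}, stdev={stdev(store_b):.1f}")
print(f"DC aggregate: mean={mean(dc_total):.1f}, stdev={stdev(dc_total):.1f}")
```

The aggregate’s standard deviation is under 1 unit while each store’s is over 8. A forecast or safety-stock calculation built on the DC-level series would badly understate the variability each store actually faces.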
Today’s Supply Chain Analytics
As a result of these factors, warehouse analytics can provide visibility into current inventory status and improved worker productivity measurement; transportation routing can be updated dynamically to adjust to changing workloads or unanticipated traffic jams; supply chain planning and visibility applications now capture inventory at DCs, in-transit, and at store locations; and forecasting is now done in-memory, allowing for demand sensing inputs that facilitate forecasts for each SKU, at the store level, on a daily basis. These tools provide great potential for operational improvements across the supply chain. Importantly, like the Oakland A’s in Moneyball, analytics provides a means for creative, out-of-the-box thinkers to leverage data to execute their operations in new and effective ways.
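As one small illustration of what per-SKU, per-store, daily forecasting looks like mechanically, here is a sketch using simple exponential smoothing over a stream of daily sales records. The data shape, the smoothing weight, and the store/SKU identifiers are my own assumptions; commercial demand sensing tools use far richer models and inputs than this:

```python
# Minimal sketch (not any vendor's algorithm): a per-store, per-SKU daily
# forecast maintained with simple exponential smoothing over a stream of
# daily sales records.
from collections import defaultdict

ALPHA = 0.3  # smoothing weight for the most recent day (assumed value)

def update_forecasts(sales_stream):
    """sales_stream: iterable of (store_id, sku, units_sold) daily records,
    in chronological order."""
    forecast = {}  # keyed by (store_id, sku)
    for store_id, sku, units in sales_stream:
        key = (store_id, sku)
        if key not in forecast:
            forecast[key] = float(units)  # initialize with first observation
        else:
            forecast[key] = ALPHA * units + (1 - ALPHA) * forecast[key]
    return forecast

# Hypothetical records: store 17 sells SKU "A100" on three consecutive days.
stream = [(17, "A100", 10), (17, "A100", 12), (17, "A100", 8)]
print(update_forecasts(stream))
```

The point is less the arithmetic than the keying: holding one lightweight state value per (store, SKU) pair is what in-memory processing makes practical at the scale of thousands of stores and tens of thousands of SKUs, refreshed daily.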