Across a wide range of supply chain projects and processes, poor data quality is a recipe for failure. Today, I’m focusing on dimensional data. Do you have accurate weight and dimensional data for your products? For the cases, inner packs, and pallets those products are shipped in? While data quality is one of the least sexy topics I can imagine, poor dimensional data is also a costly problem, because it leads to:
- Increased transportation costs. Without accurate pallet dimensions and weights, chances are you’re underutilizing the available space in a trailer, resulting in unnecessary additional shipments.
- Increased parcel shipping costs. For FedEx and UPS, all expedited shipments, including next-day air and two-day air, are subject to dimensional weight pricing, regardless of the size of the parcel. Ground shipments are subject to dimensional weight pricing if the parcel is larger than three cubic feet. There are other arcane surcharges for packages with nonstandard dimensions (see “Navigating Dim Weight Charges” on the Quantronix web site for more details; Quantronix manufactures the CubiScan dimensioning equipment that automates weighing and dimensioning tasks). A quick sketch of the dimensional weight calculation follows this list.
- A lower payback for investments in WMS and associated logistics solutions. Slotting optimization, which has a very good ROI, is not fully feasible without accurate product dimensions. Similarly, when I talk to companies that implement advanced processes like flow-through distribution, I’m not surprised to learn that they first had to improve their dimensional data before they could move forward with the new process.
- Increased chargebacks. Some retailers use a dollars/square inch/time period metric to measure and maximize shelf-space profitability (a worked example of this metric follows the list). Without accurate dimensional data from suppliers, these category-management metrics are not valid, and receiving and put-away also become less effective. For these reasons, some retailers impose chargebacks on suppliers that cannot provide accurate dimensional data or deliver their pallets in the agreed-upon dimensions.
- Less robust order accuracy. In many warehouses, products moving on conveyor belts are automatically weighed to verify that the right product was picked, and that check is only as good as the expected weights behind it (see the verification sketch after this list). Order discrepancies are widely estimated to cost a company at least $50 each (the “all in” cost is really closer to $100 for many companies), so high-volume shippers can achieve a very strong ROI from even a sub-one-percent improvement in pick accuracy.
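To make the parcel point concrete, here is a minimal Python sketch of the dimensional weight calculation. The divisor of 139 and the round-up-to-the-next-pound convention are illustrative assumptions; carriers publish their own divisors and revise them over time, so check the current FedEx and UPS tariffs.

```python
import math

def billable_weight_lb(length_in: float, width_in: float, height_in: float,
                       actual_weight_lb: float, dim_divisor: float = 139.0) -> int:
    """Return the weight the carrier bills on: the greater of the actual
    weight and the dimensional weight (cubic inches / divisor). Both are
    rounded up to the next whole pound, a common carrier convention.
    The 139 divisor is illustrative; carriers set and revise their own."""
    dim_weight = math.ceil(length_in * width_in * height_in / dim_divisor)
    return max(math.ceil(actual_weight_lb), dim_weight)

# A light but bulky box: 24 x 20 x 12 in (about 3.3 cubic feet), 6 lb actual.
# 5,760 cubic inches / 139 = 41.4, so it is billed at 42 lb, not 6 lb.
print(billable_weight_lb(24, 20, 12, 6))  # 42
```

If the recorded dimensions are off by even an inch per side, the billable weight is wrong too, and so is every freight quote built on it.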
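The shelf-space profitability metric in the chargeback bullet is simple arithmetic, which is exactly why bad dimensional data corrupts it. A hypothetical sketch, assuming a weekly period and a sales-dollar numerator (retailers vary both, and may use margin instead of sales):

```python
def dollars_per_sq_in_per_week(weekly_sales_usd: float,
                               facing_width_in: float,
                               shelf_depth_in: float) -> float:
    """Shelf-space profitability: dollars divided by the shelf footprint
    the product occupies. The weekly period and sales-dollar numerator
    are illustrative assumptions."""
    footprint_sq_in = facing_width_in * shelf_depth_in
    return weekly_sales_usd / footprint_sq_in

# If the supplier's stated 6 in facing is really 7 in, the metric is
# overstated by roughly 17% -- one reason retailers charge back for bad data.
print(dollars_per_sq_in_per_week(180.0, 6.0, 18.0))  # ~1.67 (stated dims)
print(dollars_per_sq_in_per_week(180.0, 7.0, 18.0))  # ~1.43 (actual dims)
```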
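And here is a minimal sketch of the conveyor weight check described in the order-accuracy bullet. The function name and the 2% tolerance are hypothetical; real tolerances depend on scale precision and product weight variability, and the expected weight is only as accurate as the item-master data behind it.

```python
def verify_order_weight(expected_lb: float, measured_lb: float,
                        tolerance_pct: float = 2.0) -> bool:
    """Return True if the in-motion scale reading is within tolerance of
    the expected carton weight (summed from item-master unit weights).
    The 2% default is an illustrative assumption."""
    allowed_deviation = expected_lb * tolerance_pct / 100.0
    return abs(measured_lb - expected_lb) <= allowed_deviation

# Item master says the carton should weigh 12.4 lb; the scale reads 13.1 lb.
if not verify_order_weight(12.4, 13.1):
    print("Divert carton for audit: possible mis-pick")
```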
Correcting poor dimensional data is, in essence, a data quality (DQ) project, and ensuring this data is kept up to date and readily available in the applications that need it is a Master Data Management (MDM) project.
While many DQ and MDM projects are still funded based on a narrow scope, the vision has grown. Companies are increasingly recognizing that they need to improve their data governance processes to manage information over its lifecycle – from creation, to augmentation, to keeping it clean on an ongoing basis, to effective usage, and through retirement. Data Governance involves people (Who are the data stewards? Do we have a data governance board?), processes (What processes do we have in place to achieve operational excellence across the information lifecycle?), and technology (MDM and DQ tools).
Thus, one impetus for conducting an MDM project is not just the payback of the initial project, but an understanding that it is a beachhead that may help the company gradually improve data quality across the extended enterprise.
Total Quality Management (TQM) has been supplanted by Operational Excellence (OpX) methodologies like Six Sigma and Lean. Still, some of the thinking that permeated TQM as a methodology for continuous improvement is being applied to the idea that information has a lifecycle. TQM argues that there is a cost associated with poor quality and that projects can be justified by auditing the current level of waste and then establishing a cost for that waste. This perspective is very appropriate when you think about Information Lifecycle Management.
While a near-religious belief that all data has to be clean is unjustified from a cost-benefit perspective, companies do need to understand the cost of poor data quality. Once that cost is understood, they should attack the biggest ROI buckets one at a time. I’m convinced that when companies do this, they will find logistics dimensional data to be low-hanging fruit.