I’m currently conducting some research on Fleet Management, so I was delighted to find a free, downloadable best practice report on this topic. Until I read the first page. Then I knew it was junk.
ARC produces best practice reports, and I have written a few of them myself. Done right, these reports can provide a lot of value. But sometimes these reports (including, I hate to admit, some of our own) miss the mark.
Best practice research is hard to do. The last project I completed required more than two months of concentrated effort to get a sufficient number of respondents, and even after the data came in, I had to conduct twenty follow-up phone interviews to fully make sense of the results. As someone who has conducted best practice research, I have some insights on what to avoid if you are a consumer of these reports.
Typically, these reports start by defining performance metrics that will allow the researcher to sort respondents as best-in-class, industry average, or laggards. The selection of the metric, or combination of metrics, is key to the sorting process. Select the wrong metric, or a biased metric, and your results are worthless.
That’s what I saw in this fleet management report: the author started with biased metrics. To determine best-in-class companies, one of the metrics the author looked at was the reduction in demurrage costs over the last twelve months. Best-in-class companies, according to this report, were able to reduce demurrage costs by 2 percent. Laggards performed much worse. The author used other metrics as well to sort the respondents, but these, too, were focused on how much a company had improved in a certain area over a given time period.
Here is my problem with this approach: Let’s say my company’s demurrage costs are already very low, even though our customers are very demanding, because we manage this metric well, and there is not much room left for improvement. In fact, over the last twelve months, our performance has not changed at all. According to this report, my company is a laggard.
I consider these biased metrics, rather than just poorly chosen ones, because the companies that get labeled best-in-class are often the ones that were performing poorly (because they lacked technology and automation) until they recently implemented a technology solution to streamline and automate their processes. The metrics are biased in the sense that they suggest technology is the primary answer.
Now contrast this with a company that really is best-in-class. Perhaps they bought a software solution several years ago and achieved a big initial improvement in performance. But what really took them to the top of the heap is that they continued to make small incremental improvements to their processes, and continued to train and motivate their people, so that they performed a little bit better on this metric year after year. In other words, selecting a metric that focuses on recent improvements might lead you to conclude that best-in-class companies simply use better technology, when what actually sets them apart is a more balanced approach that addresses not just technology, but also processes and people!
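To make the distortion concrete, here is a minimal sketch with entirely made-up companies and numbers (SteadyCo, CatchUpCo, and AverageCo are hypothetical, not from the report): rank on twelve-month improvement and the steady top performer lands dead last; rank on the absolute cost level and it comes out on top.

```python
# Hypothetical data only: an illustration of how the metric choice
# distorts the ranking, not figures from any actual report.

companies = [
    # (name, demurrage cost 12 months ago, demurrage cost today)
    ("SteadyCo",  100_000, 100_000),  # already lean; nothing left to cut
    ("CatchUpCo", 900_000, 700_000),  # just automated; big one-year drop
    ("AverageCo", 400_000, 380_000),
]

def pct_improvement(then, now):
    """Percent reduction in demurrage cost over the last twelve months."""
    return (then - now) / then * 100

print("Ranked by twelve-month improvement (the report's approach):")
for name, then, now in sorted(companies,
                              key=lambda c: pct_improvement(c[1], c[2]),
                              reverse=True):
    print(f"  {name}: {pct_improvement(then, now):5.1f}% reduction")

print("\nRanked by absolute demurrage cost (closer to actual performance):")
for name, then, now in sorted(companies, key=lambda c: c[2]):
    print(f"  {name}: ${now:,} per year")
```

Same three companies, two opposite rankings; which one gets published as "best-in-class" depends entirely on which metric was picked up front.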
I have one final thought. Companies should not treat even a good best practice report as if it were the Ten Commandments. Different companies – even companies in the same industry – can have very different strategies, infrastructures, and customer requirements. Best practice reports need to be read in a nuanced way that accounts for these kinds of differences.
jhill835 says
Bang on, Steve! Unqualified benchmarks and a dollar will buy you USA Today.
claerhkm says
Great insight, Steve.
Lately I’ve found a big thing to watch out for with “Best Practice” reports is whether they are sponsored by a single technology vendor. In today’s economy, vendors are all working even harder to compete for business, and some are using biased metrics that favor their technology over others. I recently attended a webinar related to “Best Practices” and was hounded for weeks by the sales guy, even after I told him there was nothing new in their presentation.
Deborah says
I often return surveys unanswered, explaining that I cannot answer without skewing the results. Or I rewrite them for the researchers. Of course, I know the survey is written so they get the results they want. For example, they might ask if you are currently using a software package for your reverse logistics, or if you are planning to buy one in the near future. They do not offer a response of "Not applicable". They then report that only X companies currently use this software. It drives me nuts.