At some point during British colonial rule in India, the colonial government was concerned about the number of venomous cobras. No one likes to be bitten by a venomous snake, right? They decided to put a bounty on every dead cobra. The intended effect was that people would capture and kill the snakes, reducing the cobra population. However, something else happened. People started to breed cobras intentionally, so they could kill them and collect the bounty. When the colonists learned about this, they stopped the reward program. Now that the cobras no longer had any value, the breeders released them. Oops, the initial problem became even larger.
This mechanism is called the cobra effect, and there are many examples that show the same behavior. Remember Hacktoberfest? An initiative to get people involved with open source by offering a T-shirt for their contributions. The result? A vast number of useless contributions pushed to open-source projects, overloading the maintainers. Goodhart’s Law describes it as “When a measure becomes a target, it ceases to be a good measure”. Small side note: this is not the original phrasing of Goodhart’s Law, but a generalization by Marilyn Strathern.
Does that mean metrics have no use? Is there a way to mitigate the cobra effect? There is, and it is called pairing metrics. Former Intel CEO Andy Grove writes about it in his book High Output Management. The idea is simple: for each metric, you institute an opposing metric, aimed at preventing perverse incentives from the initial one. Pair a quantitative metric with a qualitative one. Combine a process metric with an outcome-driven metric. This might sound abstract, so let’s look at an example.
Let us say we measure the deployment frequency of an application. On its own, it does not say much: a high number can mean the same thing as a low one, because the metric has no context. The context comes from the strategy behind it. What are we changing that makes this worth monitoring? For example, I might want to run many experiments on my application to learn what boosts conversion. In that case, it would make sense to pair the deployment metric with a conversion metric. I might also want to balance it against the stability of the application. Then an additional pairing metric could be the deployment success rate.
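To make the pairing concrete, here is a minimal sketch of that check in Python. All names and thresholds are illustrative assumptions, not a prescribed implementation: the point is only that the primary metric (deployment frequency) is never judged in isolation, but together with its paired counter-metrics.

```python
def paired_metric_check(deploys_per_week: float,
                        deploy_success_rate: float,
                        conversion_rate: float) -> list[str]:
    """Flag situations where the primary metric looks healthy but a
    paired metric hints at a perverse incentive.

    Thresholds below are purely illustrative examples.
    """
    warnings = []
    # High deployment frequency paired with stability: are we shipping
    # fast at the cost of broken deployments?
    if deploys_per_week >= 10 and deploy_success_rate < 0.95:
        warnings.append("shipping fast, but deployments often fail")
    # High deployment frequency paired with outcome: are the many
    # experiments actually improving conversion?
    if deploys_per_week >= 10 and conversion_rate < 0.02:
        warnings.append("many deployments, but conversion stays low")
    return warnings
```

A team gaming the raw deployment count would trip one of these paired checks, which is exactly the behavior the pairing is meant to surface.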
The key is not just to pair metrics together, but to pair metrics that support your goals and strategy. The pairing explicitly monitors for unwanted behaviour triggered by the initial metric you are trying to move. Keith Rabois – former executive at PayPal and LinkedIn – for example measures fraud rate alongside customer service in his business, making sure the customer service team does not treat every customer as a potential fraudster.
What about your metrics? Have you paired metrics against them to prevent the cobra effect? Can you overcome Goodhart’s Law and make metrics work for you and your team?