Over the weekend I met up with a couple of partners from a leading consultancy practice and the conversation turned to data analytics - a new line of business (LoB) for their firm. The more senior of the two objected to defining this LoB as ‘data analytics’ - preferring instead ‘customer solutions supported by technology’. Although this distinction is important (meeting a need vs. selling a product), I left the conversation feeling they were (unconsciously?) selling a beautiful lie.
In the run-up to the 2008 crisis many major financial services firms were convinced they were managing their exposure to risk through their Value at Risk (VaR) models. The reality shock of September 2008 stood in stark contrast to the ‘artificial intelligence’ of those VaR models (in that the ‘intelligence’ they gave was false). This should have been a key lesson of the crisis - simplified models of complex systems may be ‘beautiful’ (providing a single understandable number) but they are a ‘lie’ - “the only valid model of a human system is the system itself” (Murray Gell-Mann). Sadly, the lie seems to be scaling today.
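To make the ‘single understandable number’ concrete: a minimal sketch of historical-simulation VaR (one common variant; the returns here are synthetic and purely illustrative). The seduction is obvious - a thousand messy observations collapse into one tidy figure - but the number is only as good as the history it is drawn from, which is precisely the lie.

```python
import numpy as np

# Synthetic daily portfolio returns - an illustrative stand-in for real data.
rng = np.random.default_rng(42)
returns = rng.normal(loc=0.0005, scale=0.01, size=1000)

def historical_var(returns, confidence=0.95):
    """One-day Value at Risk via historical simulation: the loss
    threshold exceeded only (1 - confidence) of the time, based
    solely on the observed return history. If the future departs
    from that history, the number says nothing useful."""
    return -np.percentile(returns, 100 * (1 - confidence))

var_95 = historical_var(returns)
print(f"95% one-day VaR: {var_95:.4f}")  # a single, comforting number
```

Note what the function cannot see: a regime shift (like September 2008) sits outside the sampled history, so the model reports calm right up until it doesn't.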
Data-driven analytics is very useful for telling you who did what, where and when, but it is dangerously silent on ‘why’ they did (or didn’t) do it - making it impossible to understand ‘how’ to react when things change, as they invariably do in a complex world. While my friends’ consultancy is developing some impressive capabilities in this area, I wondered how well they understood the context they are operating in and the three major risks their clients will face if they go too far down the rabbit hole of data-driven decision-making.
Three Major Risks of Data-Driven ‘Solutions’
When we outsource decision-making to machines we risk losing confidence in our human ability to make sense of, and respond to, complex challenges. In an effort to become more precise we overload on so many variables that only a machine can sort them. This vicious cycle then makes us dependent on the processing power of the computer (and the programming of the algorithms) to sift through the chaff to get to the wheat. But the fault line is that the future is not a continuation of the past: environments shift and when they do the models don’t have enough data to tell us what to do, while our abilities to decide for ourselves have degraded in the meantime - leaving us at the mercy of tools firing blanks.
So, by all means augment your decision-making with technology, but recognise that its utility is bounded in applicability (e.g. excellent for re-stocking shelves - terrible for explaining why something goes viral) and that the more complicated your tools become, the greater your dependence on them will be. Perhaps that’s a winning - and ‘beautiful’ - business model for a consultancy but, I’d argue, a losing one for their clients in the long run if the ‘lie’ is not recognised.
© Narrative Insights (2013-2018)
Part of the global Cognitive Edge & Cynefin Centre network
"It ain't what you don't know that gets you in trouble; it's what you think you know for sure that just ain't so"
(Attributed to Mark Twain)