Organisations replicate military structures. A general sits atop a hierarchy focused on strategy, supported by lieutenants responsible for implementation. Yet this is the military structure in peace time. In times of war it transforms into a fluid set of connected crews taking real-time decisions in a dynamic context without waiting for permission from HQ. So why do organisations copy a structure designed for training rather than one designed for action?
“The present ways of dividing labour have been historically based on a very different communications environment than the one we are living in at present. The earlier high cost of coordination and communication is the reason behind many of the organisational forms that are taken for granted and which we still experience. The digital world we live in today is totally different when it comes to the quality and costs associated with coordination, communication and contracting and allows us to experiment with totally new value creation architectures.”
‘Conway’s Law’ suggests that organisations are ‘constrained to design systems which are copies of the communication structures in these organisations.’ In other words, form (the org. chart) follows function (how people actually communicate with each other). The General and lieutenants sit atop a peacetime hierarchy to train how to work within constraints on communication in times of war. But why do CEOs and senior managers sit aloofly atop their hierarchies — even in times of ‘war’?
Does leadership choose not to speak to people on the frontline?

Fundamental changes in how we communicate represent a danger and an opportunity that the military is aware of: “Like war itself, our approach to war fighting must evolve. If we cease to refine, expand, and improve our profession, we risk being outdated, stagnant, and defeated”, announced General Dunford, Marine Corps Commandant, in 2015. Yet non-military organisations remain wedded to ‘peacetime military structures’ despite the fluidity around them being more like a time of war than peace. Why?
Organisations are designed for power, not effectiveness. People continue to be ‘promoted to their own personal level of incompetence’ — a great salesperson will be promoted to manager, but if they lack managerial competence they’ll go no further. However, they won’t re-join the ‘ranks’ either, so the organisation not only loses a great ‘soldier’ but gains a mediocre lieutenant — increasing performance pressure on the entire organisation. People accept this misalignment between their personal capabilities and their role because this is where the power (and rewards) are.
Crews offer an alternative to entrenched hierarchy

Crews differ from teams. They fluidly self-form (rather than being appointed by ‘the Centre’) to meet a defined need — then disband again once the objective is met. The ‘process’ of formation links members’ identities (aptitudes and attitudes) with the specific roles to perform — the leader may be the one with a particular capability that is needed rather than merely the most senior in the hierarchy. Sales teams have been doing this for years: dividing into ‘hunters’ — who land a client — and ‘farmers’ — who nurture the relationship thereafter.
Crews enhance organisation-wide agility. Simon Wardley introduced an important transitional role between these two extremes (which the ‘hunter/farmer’ dichotomy ignores) in his ‘Pioneers — Settlers — Town Planners’ framework that can help an organisation structure for battle rather than power:
These are silos: crucial to the rapid knowledge sharing and learning that innovative and agile organisations require. Yet ‘the Centre’ today remains fixated on trying to destroy silos (despite failing everywhere to do so) as they appear unaligned with the organisation (read: ‘the Centre’). Rather than destroying this most effective knowledge transfer system (SharePoint doesn’t compare!), ‘the Centre’ needs to start creating the connections between silos to scale the breakthroughs these tight-knit crews often make.
Creating a ‘Mechanism of Internal Theft’ (S.Wardley) — encouraging silos to ‘steal’ the work of earlier teams — augments a natural ‘pull through’ dynamic:
People’s identities are fluid — meaning crews are not static. For example, I may be a Pioneer on one challenge but more of a Settler on another. The freedom to move between crews is crucial for autonomy, mastery and purpose — the three levers of intrinsic motivation and engagement.
If the ‘Centre’ needs to justify its wage bill then it should concentrate on creating the conditions for crews to form (think of the ‘scrubbing up’ ritual that cognitively activates surgeons), interact and develop each other — rather than breaking them down into atomised parts just to make them easier to control.

The changing nature of how we communicate — where the entire world is inter-connected at essentially zero cost — is changing the structure of human systems. How quickly organisations ‘refine, expand and improve’ their structures will distinguish those who succeed from those who quickly become ‘outdated, stagnant and defeated’. Working with how people really are should become a priority. Disrupt yourselves before others disrupt you.
In an uncertain world human beings are the most unpredictable element. Good market researchers have long known that consumers don’t always say what they mean, or mean what they say. Context matters: in focus groups participants offer rationalised responses to project an image of themselves that matters at that moment - yet outside the artificial environment they’re people swept up in the swirls and eddies of real life, subject to overwhelming emotional and physical influences.
Developing reliable policies and plans for people-based systems therefore requires researching people in their natural habitat. For “humans are unpredictable, mushy bags of irrationality and emotion” (@travisgertz) and only a holistic approach captures the reality in all its messy glory and gore.
Human unpredictability emerges from a triad of factors - a greater understanding of which offers immense possibilities for breakthrough innovation in marketing, organisation design, policy making and development (D.Snowden):
“A person’s identity is like a pattern drawn on a tightly stretched parchment. Touch just one part of it, just one allegiance, and the whole person will react, the whole drum will sound.” (A.Maalouf)
Humans aren’t limited to one identity. We’re a “constantly fluctuating constellation of identities. We may for example chat about politics (as a citizen) with a colleague (as a professional) while driving (as a motorist) our child (as a parent) to a school meeting (as a community member)” and our identities are flexible, adapting to sudden needs (consider how the parent identity kicks in under threat to a child). Policies and plans predicated on simplistic views of human beings (homo economicus, customer segmentations, beneficiary communities) will leave you wide open to damaging shocks.
Humans have agency: they can choose, persuade others, and be persuaded in turn. Yet our intent doesn’t spring from us unprompted: outside forces and past experiences - our own, our peers’, and our ancestors’ (in the form of shared cultural myths and legends) - connect and shape us. These connections also evolve (as do we) through multiple daily interactions, making human systems - communities, organisations, networks - neither static nor easy to know.
“It isn’t what we don’t know that gets us into trouble; it’s what we thought we knew that just isn’t so” (M.Twain - allegedly)
Intelligence isn’t unique to humans - we’re even trying to extend it with AI. However, our understanding of the basis of human intelligence suggests we’ll pass through a long desert of Artificial Stupidity before we arrive at the promised land (or hell) of real Artificial Intelligence. We currently see our intelligence through the lens of the dominant metaphor of the age: “For more than half a century now, psychologists, linguists, neuroscientists and other experts on human behaviour have been asserting that the human brain works like a computer”.
However, we’ll surely discard this faulty metaphorical paradigm as we have the different metaphors that ‘explained’ human intelligence over the past 2,000 years: from the ‘hydraulic model of human intelligence’ in the 3rd century BC (when hydraulic engineering had just been invented) to the metaphor of the brain as a telegraph system, inspired by the advances in communications of the mid 19th century. (G.Zarkadakis).
Our innate desire to seek simple causal explanations backs us into a corner - we have reduced people in systems to little more than ants or data processors (D.Snowden). We place atomised individuals on a pedestal (in the dominant Anglo-American domain) but subjugate and even demonise the collective, failing to see the humanity in it. Human endeavour - civilisation, where life blossomed on every continent of this planet - has been the outcome of human interaction across silos and borders. If we are not to undo this we need to go beyond reductionist approaches to humans and human systems that provide only the illusion of control, and instead discover and exploit our evolutionary possibilities.
2 Bramble Bushes in a Thicket. Kurtz & Snowden (2005)
Overwhelmed by complexity, people look for single causes to explain why the unexplainable has occurred. For example, since the 2016 US election the Democratic Party has blamed Russia for the horrible defeat of its chosen candidate (despite the absence of verifiable evidence — and a myriad of other contributory reasons). Oversimplifications such as this differ from Ockham’s razor — a problem-solving heuristic that “among competing hypotheses, the one with the fewest assumptions should be selected” — which privileges evidence over assumptions and fewer assumptions over more.
Faced with complexity, the dark side of human intelligence pushes back: latching onto ideologies, conspiracy theories and other biases rather than expending the effort on thought (D.Snowden). Hoodwinked by our biases, people settle on a comforting explanation “which almost always is a great oversimplification” (G. Klein).
The problem is as acute in business as anywhere:
“…many executives, despite their good intentions, look in the wrong places for the insights that will deliver an edge. Too often they reach for books and articles that promise a reliable path to high performance. Over the past decade, some of the most popular business books have claimed to reveal the blueprint for lasting success, the way to go from good to great, or how to craft a fail-safe strategy or to make the competition irrelevant.” 
Executives buying the illusion of predictable outcomes in an unpredictable world are falling prey to simplistic causal explanations. Success in the business world “is the result of decisions made under conditions of uncertainty” and requires us to resist “the hyperbole and false promises found in so much management writing”. Business strategists, Rosenzweig suggests, “would do far better to improve their powers of critical thinking”.
“We don’t see the world as it is, we see it as we are” (Anaïs Nin)
Yet humans are selective in what they see and pay attention to. Adults have long developed a ‘schema’ (a worldview) which they add to by choosing how much importance to attach to the external stimuli we are all bombarded with. “As time goes on and experience builds up, we make greater investment in our systems of labels. So a conservative bias is built in. It gives us confidence.” (M.Douglas). We reach a point where we value comforting lies over novelty, however useful.
There are good evolutionary reasons for our biases. Subsistence living meant our primary aim was getting through each day. Finding an unexpectedly large food source would have been great, but in the absence of any storage capacity it was more important to find the necessary minimum not to starve each day. Therefore, we evolved to seek out the predictable and avoid failure — and 200,000 years of Homo sapiens evolution is hard-wired into our brains. Merely knowing about our biases doesn’t make us immune to them ‘any more than you can correct your vision by understanding how the lenses in your shortsighted eyes are flawed.’
Predictability in a more complex world is drowned out by tangled interactions that appear random, meaning predictions about what will happen next become ever less possible. However, in the business world we continue to see the search for predictable silver-bullet solutions (“Go Agile! Go Lean! Uber-ise!”) — whereas what we need is to embrace uncertainty as an ally to work with, not an enemy to defeat.
If we can accept that we are biased we can work with this. We can practise ‘constructive uncertainty’ (H.Ross): recognising that our brains do not seek ‘best-fit’ but ‘first-fit’ responses, we can open up the decision-making process to inputs from distributed (network) intelligence, especially in those moments a leader feels, for any reason, uncomfortable with the decision they are about to make. As Linus’s Law teaches us: “Given enough eyeballs, all bugs are shallow.”
The potential for gaining new strategic advantage by accepting uncertainty in our decision-making is huge: “learning trumps ‘knowing’, since we are learning from the cognitive scientists that a lot of what we ‘know’ isn’t so: it’s just biased decision-making acting like a short circuit, and blocking real learning [and action] from taking place.”
Unacceptably simple causal explanations for complex events are the pre-eminent bias we need to avoid today because they become hollow but powerful narratives that confused or scared people hook their fears onto. They perniciously feed prejudice by becoming part of the ‘system of labels’ people adopt to make sense of their world going forward. Governments have often been accused of de-humanising an enemy as a prelude to war, but the reality is more complex: nations sleepwalk into war when populations conclude it’s the only ‘obvious solution’ to address the fears their unacceptably simple causal explanations have led them to.
We’ve never needed political, business and other leaders to embrace uncertainty as much as now.
The world feels a more uncertain place. Twentieth century certainties have been crumbling under economic collapse, technology advancing faster than our cognitive abilities and outbreaks of intractable wars. Yet organisations — powerful collectives of people — continue to operate as though the world were a stable, predictable place: analysing, predicting and planning.
We’re designing ‘what to do in the event of fire’ posters while the building is burning.
Unsurprisingly, the topple rate of firms (the speed at which they lose their leadership positions) is increasing rapidly. “Public companies have a one in three chance of being delisted in the next five years … that’s six times the [topple rate] of companies 40 years ago.” [The Biology of Corporate Survival. Reeves et al (HBR) Jan-Feb 2016].
Firms today “die, on average, at a younger age than their employees … regardless of size, age, or sector”.
Our core weakness is that we lack an ecological metaphor to guide our action. The enduring mechanical metaphor continues to shape how we see our organisations, our world, even our own brains — as rational, information-processing devices. (Interestingly, the metaphor of the brain as a computer is a reflection of the times we live in — other eras had metaphors that made sense to them but that we ridicule now, just as we will ridicule the computer metaphor in future.)
Mechanical is not biological, silicon is not carbon, manufacturing is not services. The former are closed systems based on an input-transformation-output model; in the latter the system is transformed by interactions in real-time. Our prejudice for the former is so strong we super-impose it on the latter — it gives us comfort, a sense of predictability, but it’s an illusion and a source of much suffering.
We live and operate in open, inter-dependent systems — everything that happens can affect us; often in bewildering ways.
At the heart of this ecological view of the world is the human (as ‘being’, not ‘resource’), who is (alas) unpredictable. This makes human networks (of customers, employees, citizens etc.) highly uncertain — the relationship between cause and effect in their actions isn’t clear; it’s deeply entangled in multiple interactions and becomes clear only in hindsight (which doesn’t lead to foresight).
The best we can hope for is ‘sufficiency’ — knowing just enough to be able to act. This requires a shift from trying to anticipate unpredictable futures to triggering awareness of what’s really happening now if we are to adapt to the challenges we’re going to face over the next few years.
Over the weekend I met up with a couple of partners from a leading consultancy practice and the conversation turned to data-analytics - a new line of business (LoB) for their firm. The more senior of the two actually objected to the definition of this LoB as ‘data-analytics’ - preferring instead ‘customer solutions supported by technology’. Although this distinction is important (meeting a need vs. selling a product) I left the conversation feeling they were (unconsciously?) selling a beautiful lie.
In the run-up to the 2008 crisis many major financial services firms were convinced they were managing exposure to risks through their Value at Risk (VaR) models. The reality shock of September 2008 stood in stark contrast to the ‘artificial intelligence’ of those VaR models (in that the ‘intelligence’ they gave was false). This should have been a key lesson of the crisis - simplified models of complex systems may be ‘beautiful’ (providing a single understandable number) but they are a ‘lie’ - “the only valid model of a human system is the system itself” (Murray Gell-Mann). Sadly, the lie seems to be scaling today.
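To make the ‘beautiful number’ concrete, here is a minimal sketch of historical-simulation VaR, the simplest member of the family of techniques banks used (the function, the toy return series and the 9% crisis figure are all illustrative assumptions, not any firm’s actual model or data). The point is structural: the model can only see the history it is fed, so a calm window yields a reassuring number that a genuine regime shift blows straight through.

```python
# Illustrative historical-simulation Value at Risk (VaR).
# A sketch of the general technique only - not any bank's actual model.

def historical_var(returns, confidence=0.99):
    """1-day VaR: the loss not exceeded with the given confidence,
    estimated purely from the historical sample of daily returns."""
    losses = sorted(-r for r in returns)      # losses as positive numbers, ascending
    index = int(confidence * len(losses))     # empirical quantile position
    return losses[min(index, len(losses) - 1)]

# A hypothetical calm two-year window: daily moves of at most +/-2%.
calm_history = [0.01, -0.005, 0.002, -0.02, 0.015, -0.01, 0.005, -0.015] * 63

var_99 = historical_var(calm_history)
print(f"99% 1-day VaR from calm history: {var_99:.1%}")   # the single 'beautiful' number

# A September-2008-style day (an assumed 9% loss) blows straight through it:
crisis_loss = 0.09
print(f"Crisis-day loss is {crisis_loss / var_99:.1f}x the model's VaR")
```

Run on the calm window above, the model reports a VaR of 2% and the hypothetical crisis day is 4.5 times that - the number was ‘valid’ right up until the system it summarised changed.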
Data-driven analytics is very useful for telling us who did what, where and when, but is dangerously silent on ‘why’ they did (or didn’t) do it - making it impossible to understand ‘how’ to react when things change, as they invariably do in a more complex world. While my friends’ consultancy is developing some impressive capabilities in this area, I wondered how well they understood the context they are operating in and the three major risks their clients will face if they go too far down the rabbit hole of data-driven decision-making.
Three Major Risks of Data-Driven ‘Solutions’
When we outsource decision-making to machines we risk losing confidence in our human ability to make sense of and respond to complex challenges. In an effort to become more precise we overload on so many variables that only a machine can then sort them. This vicious cycle makes us dependent on the processing power of the computer (and the programming of the algorithms) to sift through the chaff to get to the wheat. But the fault line is that the future is not a continuation of the past: environments shift, and when they do the models don’t have enough data to tell us what to do, while our abilities to decide for ourselves have degraded in the meantime - leaving us at the mercy of tools firing blanks.
So, by all means augment your decision-making with technology, but recognise that its utility is bounded in applicability (e.g. excellent for re-stocking shelves - terrible for explaining why something goes viral) and that the more complicated your tools become the greater your dependence on them will be. Perhaps that’s a winning - and ‘beautiful’ - business model for a consultancy but, I’d argue, a losing one for their clients in the long run if the ‘lie’ is not recognised.