Most organisations are not struggling with whether to adopt AI. That question has been answered. The real challenge is integrating it well: knowing where AI will accelerate your work, where it will quietly erode the judgement your people have spent years developing, and where it simply does not belong yet. That distinction is not a technology question. It is a complexity question.
At Unflocked, we work with leaders navigating situations where the path forward is not obvious. AI adoption is one of those situations. Not because the technology is unclear, but because the systems it is being introduced into are varied, layered, and often more complex than the integration plan assumes. The organisations that integrate AI well are the ones that first understand the nature of the systems they are working with.
Your Organisation Is Not One Thing
This is the insight that changes everything: your organisation does not sit in a single state of complexity. Right now, your payroll system runs on fixed rules and repeatable processes. Your product strategy is emergent and unpredictable. Your compliance team navigates expert analysis and informed disagreement. And somewhere, a crisis is brewing that will throw part of the organisation into genuine turbulence.
These are simultaneous realities, all running at once in the same organisation. And each one requires a fundamentally different relationship between humans and machines. The Cynefin framework, developed by Dave Snowden, gives us a practical way to see these differences. It maps four domains of complexity, each with its own dynamics, its own constraints, and its own answer to the question: where does AI fit here?

Clear: Where Machines Should Lead
Sense, Categorise, Respond
Fixed constraints. Cause and effect are stable, repeatable, and widely agreed. This is the natural home of AI: automation, rules engines, workflow systems, data validation. Machines should dominate here. They bring speed at scale, zero variation, and tireless consistency. This is where AI delivers the clearest return on investment, and where most organisations should start.
The human role in this domain is not execution. It is governance. Someone must decide what gets automated, set the boundaries, and monitor for the moment when conditions shift and the rules no longer apply. The best AI implementations in the Clear domain are the ones where humans designed the constraints well and built in clear signals for when those constraints need revisiting.
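As a concrete illustration (the invoice scenario, threshold, and names here are invented for the example, not drawn from any client system), governance in the Clear domain can be as simple as pairing every automated rule with a counter that signals when reality has drifted past the rule's design:

```python
from dataclasses import dataclass

@dataclass
class AutoApprover:
    """A fixed rule plus a built-in signal for when its assumptions stop holding."""
    limit: float = 1000.0        # the boundary a human decided on
    total: int = 0
    out_of_envelope: int = 0     # how often reality fell outside the design

    def decide(self, amount: float) -> str:
        self.total += 1
        if amount <= self.limit:
            return "auto-approve"
        self.out_of_envelope += 1
        return "route to human"

    def constraints_need_revisiting(self) -> bool:
        # If a growing share of cases falls outside the rule,
        # conditions may have shifted and the boundary needs review.
        return self.total >= 20 and self.out_of_envelope / self.total > 0.3
```

The rule does the work; the humans own the boundary and the signal for revisiting it.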
Complicated: Where Machines Amplify Human Expertise
Sense, Analyse, Respond
Governing constraints. Cause and effect exist, but they require expertise to uncover. This is the realm of professional judgement, informed disagreement, and multiple "good practices" rather than a single best practice. AI adds genuine value here: pattern detection, anomaly alerts, processing volumes of data that no human team could review. The combination of machine analysis and human interpretation is more powerful than either alone.
But there is a subtle risk. When AI provides a recommendation, it can shift the burden of proof. In one well-documented case from the North Sea oil industry, geophysicists stopped trusting their own abductive insights because the institutional risk of disagreeing with the AI was too high. They could not be faulted for agreeing with an AI that said there was oil. But if the AI said no and they said yes, the personal risk was unacceptable. The AI hadn't replaced their expertise. It had made the personal cost of using it too high.
Organisations integrating AI in the Complicated domain need to actively protect the conditions under which human expertise can be exercised. That means designing systems where disagreeing with the AI is safe, expected, and valued.
Complex: Where Humans Are Essential
Probe, Sense, Respond
Enabling constraints. Cause and effect can only be understood in hindsight. This is the domain of emergence, experimentation, and narrative. You probe the system with safe-to-fail experiments, sense what happens, and respond. Most organisational change, most strategy work, and most of the situations that bring leaders to Unflocked live here.
AI is least reliable in this domain and most dangerous if misapplied. The Complex domain requires the ability to make meaning from ambiguous signals, to hold multiple interpretations simultaneously, and to act on incomplete information with genuine stakes. Adaptive judgement under uncertainty is built through lived experience and the consequences of getting it wrong.
This does not mean AI has no role in the Complex domain. It can surface weak signals, visualise patterns across large narrative datasets, and help teams see what they might otherwise miss. But the sense-making, the interpretation, and the decision about what to do next must remain with the humans who carry the context.
Chaotic: Where Humans Must Act First
Act, Sense, Respond
No effective constraints. The system has collapsed into turbulence. There is no time for analysis and no stable patterns to detect. The only response is to act, sense what happens, and stabilise. This is fundamentally human territory: authority, accountability, moral judgement, and the willingness to make a call when nothing yet makes sense.
AI may support monitoring and communication once the initial response is underway. But in the first moments of genuine crisis, the qualities that matter most (adaptability, contextual awareness, empathy, moral clarity) emerge from embodied human experience. They cannot be downloaded or deployed. They are built over time, through years of navigating difficulty and learning from consequences.
The First Question: What Kind of System Are You Looking At?
Before any AI integration decision, there is a prior question that most organisations skip: what is the nature of the system you are introducing AI into? Is it ordered and predictable? Does it require expert analysis? Is it emergent and unpredictable? Or is it in genuine crisis?
Most AI adoption failures happen not because the technology was wrong, but because it was applied to the wrong kind of problem. Organisations treat Complex challenges as if they were merely Complicated, throwing analytical tools at situations that require experimentation. Or they automate Complicated work as if it were Clear, removing the expert judgement that made the process reliable in the first place.
The framework does not tell you what to do with AI. It helps you see the nature of the system you are working with, so you can make that decision with clarity rather than assumption.
"We need to be able to discriminate between AI and human input and map the contexts in which they work stand-alone, hybrid, or not at all."
Try It: Map Your Own Systems
Think about your most recent integration, restructure, or major transition. Where did the friction actually live? Not where the project plan said it would, but where people felt it.
Take a handful of your own systems, such as the payroll, product strategy, and compliance examples above, and place each into the domain where it currently sits in your organisation. There are no right answers. The value is in the conversation this map creates, and in noticing where your assumptions about complexity may not match the reality on the ground.
As a reminder, the four domains:

Clear: fixed rules, repeatable processes, automation
Complicated: expert analysis, multiple good practices
Complex: emergence, experimentation, narrative
Chaotic: no effective constraints, act then sense
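If it helps to capture the result in a lightweight, shareable form, a sketch along these lines works; the placements below are illustrative assumptions, not prescriptions:

```python
from dataclasses import dataclass
from enum import Enum

class Domain(Enum):
    CLEAR = "fixed rules, repeatable processes, automation"
    COMPLICATED = "expert analysis, multiple good practices"
    COMPLEX = "emergence, experimentation, narrative"
    CHAOTIC = "no effective constraints, act then sense"

@dataclass
class Placement:
    system: str      # the organisational system being mapped
    domain: Domain   # where the team believes it sits today
    rationale: str   # the conversation is the point, so record the why

# Illustrative placements only; your map will differ.
org_map = [
    Placement("Payroll", Domain.CLEAR, "stable rules, unchanged for years"),
    Placement("Compliance review", Domain.COMPLICATED, "expert judgement, defensible disagreement"),
    Placement("Product strategy", Domain.COMPLEX, "outcomes only clear in hindsight"),
]

for p in org_map:
    print(f"{p.system:20} -> {p.domain.name:12} ({p.rationale})")
```

The rationale field matters most: the map is a prompt for conversation, not a classification exercise.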
What Good AI Integration Looks Like
The organisations that integrate AI well share a few characteristics. They are specific about what the technology actually does, naming the model, the version, the vendor, rather than talking about "the AI" as if it were a single, coherent capability. They design for failure, building systems where AI errors are visible, contained, and recoverable. They protect the human judgement that makes their organisation distinctive, rather than optimising it away in the name of efficiency.
And they invest in sense-making. Not as a one-off workshop, but as a continuous practice. Regular rituals where teams interpret, challenge, and override AI outputs together. Where disagreement with the machine is not a failure of adoption but a sign that human expertise is still functioning.
Be Specific, Not Abstract
Name the model, the version, the vendor. Describe the concrete situation, not the abstract goal. Specificity prevents the kind of magical thinking that leads to misapplication.
Design for Failure
Assume the AI will get things wrong. Build systems where failure is visible, contained, and recoverable. The best integrations are the ones where humans can see what the machine did and why.
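A minimal sketch of what visible, contained, and recoverable can mean in practice, assuming a hypothetical model function and confidence score (every name here is invented for the example, not a real vendor's API). Note that it also carries the specific model and version through to the log, in line with the point above:

```python
import logging
from dataclasses import dataclass
from typing import Callable, Optional

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai_integration")

@dataclass
class Decision:
    output: Optional[str]   # what the model produced, if anything
    model: str              # named specifically, with its version: not "the AI"
    version: str
    needs_human: bool       # True when the answer should not stand alone
    reason: str

def guarded_call(model_fn: Callable[[str], str], prompt: str,
                 model: str, version: str,
                 confidence_fn: Callable[[str], float],
                 threshold: float = 0.8) -> Decision:
    """Visible: every call and outcome is logged. Contained: failures
    return a value instead of propagating. Recoverable: low confidence
    routes to a human, with the output preserved for review."""
    try:
        output = model_fn(prompt)
    except Exception as exc:
        log.warning("model %s %s failed: %s", model, version, exc)
        return Decision(None, model, version, True, f"error: {exc}")
    confidence = confidence_fn(output)
    log.info("model %s %s -> %r (confidence %.2f)", model, version, output, confidence)
    if confidence < threshold:
        return Decision(output, model, version, True, "confidence below threshold")
    return Decision(output, model, version, False, "within agreed bounds")

# Hypothetical usage, with stub functions standing in for a real model:
decision = guarded_call(lambda p: "APPROVE", "invoice #123",
                        model="acme-classifier", version="2.3.1",
                        confidence_fn=lambda o: 0.65)
assert decision.needs_human  # below threshold, so a human reviews it
```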
Protect Human Judgement
Do not let AI adoption erode the expertise your organisation depends on. If your people stop exercising their judgement because the AI provides a safer default, you have not gained a tool. You have lost a capability.
Create Rituals of Sense-Making
Establish regular practices where humans interpret, challenge, and contextualise AI outputs together. Keep the human capacity for judgement exercised and sharp.
Map the Contexts
Know where AI works standalone, where it works alongside human expertise, and where it does not belong yet. Revisit this map regularly, because the domains shift.
Feed It Better
The quality of AI output depends on the quality of what it learns from. Most organisational AI is trained on official documentation, which systematically lacks the informal, contextual knowledge that actually makes the organisation function. Find ways to include the knowledge that curation removes.
The Opportunity
AI is not going away, and it should not. In the Clear domain, it is already transforming how organisations operate: faster, more consistent, more scalable. In the Complicated domain, the combination of machine analysis and human expertise is producing results that neither could achieve alone. These are genuine gains, and organisations that are not pursuing them are falling behind.
The opportunity is to integrate AI with a clear-eyed view of where it actually fits: which parts of your organisation are ready for automation, which need augmentation, which require human sense-making that no machine can replace, and which are in crisis, where only human judgement will do. That awareness is an organisational capability, not a technology one.
The first question is always the nature of the system you are working with: what kind of system are we introducing AI into, and have we designed the integration to match? Capability follows from that.
"Machines excel in the Clear domain. Humans are indispensable everywhere else."
From Our Practice
Praxis: Real-Time Decision Priming
The mapping exercise above is a starting point. Praxis is how we put it into practice. It is a structured, real-time method for helping leadership teams prime better decisions under complexity. Not a workshop. Not a framework presentation. A working session where the team builds shared awareness of the forces shaping their organisation and leaves with clarity on what to do next.
Learn more about Praxis

Further Reading
Snowden, D. "Algorithmic Induction." The Cynefin Co, 2024.
Snowden, D. "A New Animism." The Cynefin Co, 2025.
Snowden, D. "Lessons Learning." The Cynefin Co, April 2026.
Rosul, M. "A Cynefin Framework Lens: Where Machines Work Best and Why Humans Remain Indispensable." The Cynefin Co, 2026.
Ready to start?
If you are navigating AI integration and want to understand where it fits within the complexity of your organisation, let's have an honest conversation. No pitch. No pressure.
What happens next: Sherryl responds within 24 hours. If it's a fit, we'll schedule a conversation.