
Leadership in the Age of AI

Updated: Dec 30, 2025


Article #1 in the series


It's Not Just About the Technology. It's About the Behaviour.

The Shift That's Actually Happening


There's a shift happening inside organisations. AI technology plays a big part in it, but the shift is just as much about what leadership now needs to understand.


AI doesn't change who an organisation is. It changes the conditions people work in:

  • How they make decisions

  • What they trust

  • When they escalate

  • The shortcuts they take when pressure hits

And under those conditions, capability and responsibility matter more than ever.

This isn't only true for big enterprises. It's just as true for:


  • Small and medium businesses (SMBs)

  • Industry associations

  • Consultants

  • Individual practitioners

  • Teams using AI tools every day (e.g., Microsoft Copilot, ChatGPT, Gemini)

Most of these organisations will never need ISO certification. But the behaviours ISO expects are the same behaviours everyone needs when using AI.


The risk doesn't come from the technology. It comes from what people do with it.

A Note on Interpretation and Perspective


Before going further, I want to be clear that frameworks such as ISO/IEC 42001, the EU AI Act, the NIST AI RMF and the OECD AI Principles do not explicitly describe themselves as "behavioural governance" frameworks. My perspective reflects how these requirements function in practice: they all rely on human judgement, clarity, oversight and behaviour at the point of use.


This interpretation is shaped by my study of human-centred AI thinkers, including Professor Stuart Russell, who has spent decades researching how AI interacts with human judgement and behaviour. Their work consistently highlights that the most significant risks with AI emerge not only from the models themselves, but from how humans understand, interpret and act on their outputs.


This perspective is also informed by my own study in Applied Behaviour Analysis (ABA) and how behavioural environments shape decision making, oversight and reliability inside organisations. Behavioural governance simply gives us the language to describe this intersection in a practical way.

Why Behaviour Is Now More Important Than the AI Itself


AI changes the behavioural environments inside an organisation. 

Suddenly:


  • People trust AI suggestions without checking

  • They under-trust and avoid good recommendations

  • They rely on AI to make decisions outside their authority

  • They are afraid to speak up when something feels "off"


This is exactly why global frameworks like ISO/IEC 42001, the EU AI Act, the NIST AI RMF and the OECD AI Principles talk about:

  • Human oversight

  • Competence

  • Clarity

  • Communication

  • Accountability


While these appear as organisational or governance requirements, in practice they are behavioural requirements, because each one relies on how people interpret, verify, communicate and act when using AI systems. 

Because even the safest AI cannot compensate for unclear or inconsistent human behaviour.

What This Means for SMBs, Associations and their Members


Here's the misunderstanding I'm seeing:

"We're small, we don't need AI governance."

The truth?


AI governance is already happening quietly inside your organisation. It starts the moment someone opens Copilot, ChatGPT, Gemini, Canva AI or the AI inside your CRM.


You don't need certification to need:


  • Clarity

  • Safe habits

  • Consistent judgement

  • Oversight

  • Escalation pathways

  • Leadership responsibility

  • Behavioural expectations


AI risk doesn't scale with organisation size. It scales with behaviour, and behaviour exists everywhere.

So yes, this matters for SMBs. It matters for associations. It matters for every member who uses AI every day to work faster, write better or make decisions.

And this is why leadership matters more than ever.

Leadership in Action: What It Actually Looks Like


This is the part we aren't really talking about enough...yet.


  • Not the strategy decks

  • Not the frameworks

  • Not the buzzwords

The behaviour.

Here's what leadership looks like in practice when AI becomes part of everyday work.

 1. Leadership makes behaviours explicit, not implied.


Most teams don't need more values or messaging. They need clarity.


Real operational clarity sounds like:

  • "Always verify before you trust."

  • "If you're unsure, escalate."

  • "Check the output, not just the spelling."

  • "AI can draft, but you own the judgement."

  • "Pause when something doesn't feel right."


This removes ambiguity, one of the biggest drivers of unsafe decisions.


 2. Leadership creates the conditions for safe behaviour.


Behaviour doesn't change because we tell people to change. It changes when the environment makes the right behaviour the easiest behaviour.


Leaders do this by:


  • Reducing uncertainty

  • Reinforcing safe decisions (even when slower)

  • Simplifying escalation pathways

  • Giving permission to question AI outputs

  • Removing pressure that pushes people into shortcuts


Change management helps people through the transition. Behavioural leadership helps people through the reality.


 3. Leadership models the behaviour first.


People copy what leaders do, not what they say. If leaders:

  • Skip verification

  • Trust AI blindly

  • Cut corners

  • Reward speed over judgement


...teams will follow. Leadership is the most powerful reinforcement system inside an organisation, regardless of its size.


 4. Leadership defines "responsible use" in practical terms.

Teams shouldn't have to guess. Responsible use becomes real when leadership translates it into:


  • What's OK

  • What's not OK

  • When to stop

  • When to double-check

  • When to escalate

  • Who has decision rights

  • Where AI fits in the workflow


This removes behavioural guesswork.


 5. Leadership watches for behavioural drift, not just process compliance.


Behaviour is part of the risk system. AI failures rarely occur because of the model alone. Most failures emerge when people:


  • Take silent shortcuts

  • Become overconfident

  • Stop escalating

  • Stop checking

  • Assume the AI must be right

  • Use the tool in ways leadership never intended


Spotting these patterns early is now a leadership skill.


Where Change Management Fits and Where Behaviour Steps In


This isn't about replacing change management. It's about recognising its limits in an AI-driven environment.


Change management plays an essential role:


  • Helping people understand what's changing

  • Building awareness and readiness

  • Supporting adoption

  • Guiding transition periods


But AI introduces something different: a behavioural environment where judgement, oversight, clarity and escalation matter in every moment.


This is where behavioural science adds value. Not as a formal methodology organisations must adopt, but as a way to understand:


  • Why people take shortcuts

  • How pressure shapes decisions

  • Why behaviour drifts over time

  • What conditions support reliability

  • How ambiguity impacts judgement

  • What reinforcement patterns lead to unsafe choices


Change management helps people learn the change. Behavioural governance helps people live the change safely.


AI requires both. But it is behaviour that determines whether the technology is used responsibly.

How This Connects to Global Guidance | National Institute of Standards and Technology (NIST)


This behavioural emphasis isn't theoretical.


As an SMB, an industry association, an individual practitioner or simply a team member using off-the-shelf AI tools, you might be thinking:

"Does any of this Global Guidance really apply to me?"

The reality is yes, but not in the way people often assume. NIST isn't telling SMBs, associations or team members to become AI engineers. It's saying that anyone using AI needs to understand how humans behave around it, because many practical risks appear at the point of human judgement, not at the point of model design.


NIST's Generative AI Profile (NIST MAP 3.4) directly calls for organisations to:


  • Evaluate whether humans interpret AI outputs correctly

  • Build proficiency in understanding AI-generated content

  • Develop training that includes real-world behavioural scenarios

  • Separate human skill assessments from AI capability tests

  • Monitor human-AI interaction patterns over time

  • Test behaviour under ethically sensitive or high-pressure situations

  • Involve end-users in prototyping and scenario testing


These are not primarily technical controls. They are behavioural expectations.

NIST is effectively saying:

AI safety depends on how humans behave, not just how the system performs.

This reinforces the central point: Leadership must understand behaviour, not algorithms.

A Simple Visual: ISO/IEC 42001 & NIST MAP 3.4 Requirements vs Behavioural Requirements


The Bottom Line


AI failures rarely happen because of the model alone. They more commonly emerge because: 

  • Behaviour wasn't clear

  • Oversight wasn't practised

  • Shortcuts went unchecked

  • Responsibility wasn't defined


This is why the real shift in organisations right now is not about AI. It's about leadership and the behavioural environment leaders create.


In the next edition, I'll go deeper into behavioural capability: how organisations of any size can build the habits, clarity, judgement and oversight needed for safe and confident AI use.


Because responsible AI isn't about certification. It's about behaviour.

