Two Everyday Moments That Reveal the Real Competency Gap in AI Use
- Sandra Farquhar

- Dec 15, 2025
- 3 min read

Most AI failures start with human behaviour, not the technology.
It’s a global behavioural capability problem. AI systems come with technical vulnerabilities, but what’s becoming increasingly visible is a parallel challenge: the behavioural vulnerabilities that emerge in day-to-day use - how people interpret, trust and act on AI-generated content.
These vulnerabilities show up everywhere. Below are two real-world scenarios - one from Australia, one from Canada - demonstrating that this issue isn't confined to a particular industry or jurisdiction.
And these are just two publicly reported examples. There are many more, some widely covered, others happening quietly behind the scenes with no visibility at all.
1. The Real Estate Listing That Went Slightly Too Far (Australia)
A real estate agent uses an AI tool to “freshen up” a property listing. The AI produces a polished, fluent paragraph and quietly adds:
"close to excellent educational facilities"
Which isn’t true.
The agent assumes the AI “knows how listings should sound.”
The senior agent skims it.
The copy goes live.
This mirrors a real incident in Australia, reported by The Guardian, in which an LJ Hooker office published an AI-generated listing that referenced non-existent schools.
The technology performed as designed - writing persuasively.
The issue wasn’t malicious intent or a model error.
It was a behavioural shortcut triggered by output that looked right.
The failure was behavioural:
fluency was mistaken for accuracy.
The missing capability?
Knowing when AI-generated content requires verification rather than acceptance.
2. The Chatbot Case That Became a Global Benchmark (Canada)
A traveller asked Air Canada’s website chatbot whether bereavement refunds could be claimed retroactively.
The AI confidently replied yes, providing specific instructions.
Trusting that information, he purchased full-price fares.
When Air Canada later refused the refund, the airline argued that the chatbot’s statements weren’t binding because the chatbot wasn’t a human agent.
A Canadian tribunal rejected that argument.
It ruled that businesses are responsible for incorrect information generated by their AI systems.
The issue wasn’t a model malfunction. It was human behaviour:
- trusting confident output
- skipping verification
- assuming automation equals accuracy
Small decisions.
Large consequences.
What These Two Moments Reveal Across Two Countries, Two Industries and Two Teams
Despite happening in different jurisdictions and contexts, both cases expose the same behavioural capability gap:
- polished AI output reduces scrutiny
- verification decreases when content feels finished
- humans assume AI understands context
- oversight collapses quietly, not catastrophically
- small inaccuracies compound downstream
- behavioural habits, not technical skills, determine reliability
These aren’t unpredictable failures.
They’re predictable human responses when AI is adopted without building the behavioural competencies that must accompany it.
Training teaches people how to use AI.
Behavioural capability determines how well they use it.
And that capability gap is becoming one of the biggest risks in AI adoption — no matter the industry, business size or jurisdiction.
Most AI failures start with human behaviour, not the technology.
Next Month’s Full Edition
In my next full-length article, I’ll go deeper into what behavioural capability actually looks like in practice: how cues, habits, reinforcement and environment shape the way people use AI, and why training alone cannot shift behaviour at scale.
Rather than jumping to solutions, we need to understand the behavioural mechanics underneath everyday decisions around AI. These foundations will set the stage for examining the specific competencies businesses must eventually build - but we are not there yet.
For now, the most important idea is this:
Without strengthening behavioural capability, businesses will continue to experience behavioural drift, inconsistency and avoidable risk - no matter how much training, policy or technology they deploy.
References
American Bar Association: BC Tribunal Confirms Companies Remain Liable for Information Provided by AI Chatbot


