
The AI Love Affair


For the past couple of years, we’ve had a love affair with AI.

It started innocently enough. AI was helpful, efficient, and impressive. It made things faster. Easier. Cleaner. For a while, it felt like it could do no wrong.

Then things started to get a little…unhinged.

The lying, the hallucinations, the constant need to double-check everything, like a private investigator trying to work out whether the story actually held up.

Doubt crept in.

And when doubt enters any relationship, it usually means one of two things…either a serious intervention is required, or the relationship comes to an end.

To be clear, we’re not breaking up with AI, but we are at the point where pretending everything is fine is no longer responsible leadership.


Used well, AI can save time, reduce noise, and create genuine leverage inside a business. It can support decision-making, streamline operations, and free people up to do higher-value work. But used without leadership, accountability, and boundaries, AI introduces real risk.

Risk to:

  • sensitive data
  • client trust
  • decision-making clarity
  • team wellbeing
  • privacy and intellectual property

And that’s what worries us most.

The Questions Leaders Aren’t Asking

What we’re seeing across the market is rapid adoption without reflection. Businesses are excited about what AI can do, but they’re not pausing to ask what it should do or how it should do it.

Questions like:

  • What should AI be allowed to do here and what should remain human-led?
  • Where do judgement, context, and responsibility still matter?
  • Who is accountable when something goes wrong?
  • How are we protecting client data, IP, and our people?

These aren’t technical questions, they’re leadership questions.

What Responsible AI Use Looks Like in Practice

After publishing our AI Governance & Risk Mitigation Guide, a client came back with thoughtful follow-up questions. They’re the right questions — and ones more leaders should be asking.

1. How do you assess whether an AI tool is “safe”?

Before approving any AI tool, we run it through a simple but disciplined review. Not because we’re risk-averse, but because we’re accountable.

Here’s what we look at:

Privacy:

  • Does the tool clearly state what happens to the data you input?
  • Can you opt out of data being used to train the model?
  • Can data or accounts be deleted?

Security:

  • Is data encrypted?
  • Does it offer two-factor authentication?

Permissions:

  • Is it asking for the minimum access it needs or far more than makes sense?
  • Any tool requesting full email, file system, or admin access gets extra scrutiny.

Reputation:

  • Is the company identifiable and credible?
  • Are there real reviews from trusted sources?

Red flags:

  • “Free” tools with vague business models
  • Unclear or unreadable privacy policies
  • Requests for passwords or blanket system access

This isn’t about paranoia. It’s about protecting trust.

2. How do you handle client approval and consent?

This is where leadership shows up in small, everyday ways.

Here’s what we do:

  • cover AI use and recording permissions in our letters of engagement
  • ask explicit permission when recording calls, especially with new clients
  • explain how recordings or AI tools will be used
  • seek approval before using AI for specific scoped tasks (e.g. drafting a first pass of a policy)

A recent example: one of our team was asked to rewrite a policy document. Before starting, she asked the client for approval to use a paid version of GPT to create the initial draft.

That pause matters. Consent isn’t a legal box to tick, it’s how trust is built and maintained.

3. What about hallucinations and accuracy?

This is the big one. AI can produce fluent, confident outputs even when it’s wrong, and human fact-checking can feel like it defeats the point of using AI at all.

There’s no perfect solution yet, and anyone telling you otherwise isn’t being honest. But there are ways to dramatically reduce risk without turning into a full-time auditor (a short worked example follows this list)…

Force evidence-first outputs – Ask AI to provide exact quotes or data sources for every claim and to say “not found” if it can’t.

Separate extraction from interpretation – First extract facts only; then analyse using only those extracted facts.

Use structured outputs – Schemas, fixed fields, and “unknown allowed” rules reduce creative filling-in.

Allow and reward uncertainty – Explicitly tell the model it’s better to say “I don’t know” than to guess.

Spot-check strategically – Review high-impact items, outliers, low-confidence outputs, and random samples.

Run an adversarial review – Ask the model to critique its own output and flag unsupported claims.
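
To make a few of these concrete, here’s a minimal sketch combining evidence-first prompting, a structured “unknown allowed” output, rewarded uncertainty, and strategic spot-checking. It assumes the OpenAI Python SDK and an API key; the model name, prompt wording, and schema fields are illustrative assumptions, not a recommendation of any particular tool.

```python
# Minimal sketch only: assumes the OpenAI Python SDK ("pip install openai")
# and an OPENAI_API_KEY environment variable. Model name, schema fields,
# and prompt wording are illustrative assumptions.
import json
import random

from openai import OpenAI

client = OpenAI()

SOURCE_TEXT = "...the document the model must work from goes here..."

# Evidence-first, structured, uncertainty-friendly prompt.
PROMPT = f"""Extract facts from the source text below. Rules:
- For every claim, include the exact supporting quote from the source.
- If no supporting quote exists, write "not found". Never guess.
- Saying "not found" is better than inventing an answer.
- Respond as JSON: {{"claims": [{{"claim": "...", "quote": "... or not found",
  "confidence": "high or low"}}]}}

Source text:
{SOURCE_TEXT}"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": PROMPT}],
    response_format={"type": "json_object"},  # nudges the model toward valid JSON
)
claims = json.loads(response.choices[0].message.content).get("claims", [])

# Spot-check strategically: every low-confidence claim, plus a small random sample.
to_review = [c for c in claims if c.get("confidence") == "low"]
to_review += random.sample(claims, k=min(3, len(claims)))
for c in to_review:
    print(f"CHECK: {c.get('claim')}\n  evidence: {c.get('quote')}")
```

The extraction/interpretation split and the adversarial review follow the same pattern: a second call that analyses only the extracted claims, or one that critiques the output and flags anything without a supporting quote.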

Tools like NotebookLM can also help because they only work from sources you provide, though human oversight still matters.

Why Leadership Can’t Be Delegated Here

Governments are moving slowly and the tech giants are moving fast, which leaves business owners like us stuck in the middle, responsible for outcomes whether we’ve intentionally designed for them or not.

AI governance isn’t about compliance theatre or red tape. It’s about ownership. It’s about deciding up front how AI fits into your operating model, where it supports people, and where it stops.

AI Governance & Risk Mitigation – A Practical Guide for Business Leaders

If reading this has raised questions about how AI is showing up in your business or where risk, responsibility, or decision-making might be sitting, we’ve put together a practical resource to help.

AI Governance & Risk Mitigation – A Practical Guide for Business Leaders
was designed for founders who want to use AI well, without compromising trust, data, or people.

Inside the guide, we cover:

  • the real risks businesses are already facing with AI
  • why “using it carefully” isn’t enough
  • what responsible boundaries actually look like in practice
  • and what you can put in place now to protect your business and your team

It’s not about fear or slowing things down, it’s about leading with intention and making sure AI supports your business, rather than undermining it.

FAQs

Do I need a technical background to use the guide?

No. This guide is designed for business leaders, not engineers, so you don’t need a technical background to use it. The focus is on leadership decisions, boundaries, accountability, and operating practices, not coding, models, or tool setup.

Is this about avoiding AI?

No. The guide isn’t about avoiding AI; it’s about using it intentionally and as safely as possible. AI can be incredibly valuable when it’s embedded into clear systems and supported by human oversight. The risk comes from adoption without leadership, ownership, and accountability.

How long does it take to put these measures in place?

Much of what we’ve outlined in the guide can be implemented immediately, often within days rather than months. The recommendations are based on clear decisions and simple policies, not large technology projects. The goal is progress and protection, not perfection.

Will the guidance need to change over time?

Yes. This technology is moving rapidly and boundaries are constantly shifting, so agility and flexibility will be required.
