The Marmalade Marketing Blog

AI Joyriding: The Pros, Cons and Guardrails We Need

Written by Jo Perrotta | 26-Sep-2025 07:35:22

The AI revolution isn’t just coming, it’s here. Tools like ChatGPT and Canva’s Magic Studio have moved from novelty to necessity in record time. The temptation to experiment is huge. And in many cases, employees, freelancers and even leadership teams are diving in headfirst, often without oversight.

I call this phenomenon AI Joyriding: the freewheeling use of AI tools without guidance, governance or guardrails. Like driving without a licence, it feels fast and fun, but without rules the risks quickly outweigh the rewards.

So, let’s take a close look at the pros and cons of AI Joyriding, why it’s as much an HR issue as it is a business one and how organisations can put the right structures in place.

The Pros: Why AI Joyriding Happens

AI Joyriding doesn’t stem from recklessness; it stems from enthusiasm and ambition. There are very real benefits:

  • Speed and Efficiency: Employees can draft content, analyse data or generate imagery in seconds.
  • Accessibility: AI lowers barriers, giving small teams enterprise-level tools at little to no cost.
  • Innovation Mindset: Joyriders are naturally curious, often spotting new ways to work smarter.
  • Empowerment: When time-consuming admin is reduced, people can focus on higher-value tasks.
  • Competitive Spark: Early adopters give organisations a sense of progress and modernity.

In other words, AI Joyriders are often your most forward-thinking people. They’re eager to go above and beyond to explore what’s possible, and that energy should be celebrated and harnessed in equal measure.

The Cons: Where Joyriding Turns Risky

But enthusiasm without strategy comes at a cost. Without the rules of the road, AI Joyriding exposes businesses to significant risks:

  • Data Security and Privacy: Pasting confidential client or employee information into AI tools can breach GDPR and other regulations.
  • Brand Inconsistency: Unedited AI output can dilute brand voice, erode trust and feel robotic.
  • Bias and Ethics: Tools trained on biased data may reinforce stereotypes or discrimination.
  • Legal Liability: Using AI without understanding its terms can expose companies to compliance or IP risks.
  • False Confidence: Fast answers don’t equal correct ones; several online news sites have published AI-generated articles riddled with errors.

Unregulated adoption often creates pockets of chaos across a business: dozens of experiments with no shared learning and no awareness of the cumulative risk they create.

Why AI Joyriding is Also an HR Issue

Much like ransomware attacks are not only a business crisis but also a test of leadership, AI Joyriding isn’t just about technology, it’s about people.

When employees adopt AI tools without guidance, HR leaders face several hidden costs:

  • Stress and Anxiety: Employees may feel pressure to “keep up” with AI, fearing punishment if they fall behind.
  • Skills Gap: Teams may lack the training to use AI responsibly, leaving them vulnerable to errors or compliance breaches.
  • Cultural Fragmentation: Without shared policies, different teams experiment in silos, creating inconsistency in workflows and culture.
  • Trust Erosion: If employees see leadership turning a blind eye, it undermines confidence in company values and governance.

AI Joyriding is as much about workforce wellbeing and organisational resilience as it is about data security or brand protection. Like cybersecurity, the human dimension is often overlooked, but it’s where the biggest long-term damage can occur.

Guardrails: Turning Joyriders into Responsible Drivers

The solution isn’t banning AI. It’s about balancing experimentation with responsibility. Here’s how:

  1. Set an AI Use Policy
    Define what’s acceptable, which tools are approved and how data must be handled.
  2. Map AI Activity
    Audit who is using what tools, for which tasks, and whether sensitive data is involved.
  3. Review Before Release
    Make human oversight non-negotiable. No AI-generated work should go live unchecked.
  4. Appoint AI Champions
    Identify tech-curious employees, train them and position them as in-house guides.
  5. Invest in Training
    Build AI literacy across teams so people know both the potential and the pitfalls.

Marketing’s Unique Vulnerability

For marketers, AI Joyriding is especially high-stakes. Marketing touches every part of the brand, from voice and visuals to client trust and reputation.

Done well, AI can strengthen brand storytelling. Done poorly, it undermines authenticity in seconds. Just look at:

  • Monzo, which openly shares its AI approach and trains tools to reflect its human tone.
  • CNET, which faced backlash when unreviewed AI articles were published with serious errors.

The lesson? AI can support a brand, but it can’t define it.

The AI Sandwich Model

Think of safe AI adoption like a sandwich:

  • Top Layer: Innovation - the new tools and creative ideas.
  • Middle Layer: Governance - the policies, training and oversight that hold it together.
  • Bottom Layer: Joyriders - the curious users pushing boundaries.

Without the middle layer, everything falls apart.

Slow Down to Speed Up

AI Joyriding is happening in every organisation right now. The question is: do you know where, how and to what extent?

Rather than fear it, businesses need to manage it, turning curiosity into excellence. That means:

  • Encouraging exploration
  • Setting clear guardrails
  • Training people to use tools safely
  • And building a company-wide AI “risk register”

If we slow down now to put structure in place, we can speed up later, harnessing AI’s full potential without losing what makes business human, trusted and intentional.