Event: Beyond Pilot Purgatory: Getting AI Into Production

You've run the pilots. Some of them worked. But six months later, they're still stuck: waiting on security sign-off, with unclear ownership, and no way to measure whether they're working. The problem isn't the model. It's everything between "it works in a notebook" and "it's live, governed, and used."


In-person event | Limited to 45 senior data & AI leaders
21st April 2026 | 2.30 - 5.00pm
Databricks Office, Windmill Street, W1T 2JG

Beyond Pilot Purgatory: Getting AI Into Production brings together CDOs, Heads of Data, and AI leaders to move AI into production: building the operating models, deployment patterns, and prioritisation frameworks that separate high-velocity teams from those stuck in pilot purgatory.

You'll hear from practitioners who've shipped production AI at scale in regulated environments. What broke, what worked, and what they'd do differently. Then you’ll pressure-test your blockers with peers facing the same reality.

So why do most AI initiatives die between "it works" and "it ships"?
You know the pattern. The proof of concept delivers promising results. Everyone gets excited. Then it hits reality:

- Value, ownership and sponsorship are missing. ROI is unclear (or not believed), the build sits with the technical team, and no senior sponsor steps in to back it.
- “Good enough” isn’t defined. It works in the notebook, but nobody set production-grade evaluation criteria or what success looks like in the real world.
- Governance blocks delivery. Security, risk and compliance ask for answers and evidence that haven’t been documented, so progress stalls.
- The path to production is broken. MLOps handoffs turn Jupyter → production into a 6-month engineering project, and training data doesn’t match production reality.
- Nothing scales repeatably. There’s no reusability (so every use case starts from scratch and cost per use case stays high), no control of inference costs at scale, and no prioritisation framework to choose which use cases deserve production investment.

If these sound familiar, this session is for you. Designed for CDOs, Heads of Data/AI, and senior platform leaders accountable for delivering production AI, not just experiments.

Register now
Guest Speaker
Steve Janoo leads data, analytics and AI transformation at Diageo, shaping how one of the world’s largest consumer brands turns data into commercial impact at scale. With over 25 years of experience, he has delivered major transformation programmes across the full value chain, consistently driving growth, improving decision-making and accelerating the shift towards more consumer-centric business models. At Diageo, Steve sits at the intersection of business strategy and advanced technology, embedding AI into real-world operations across 200+ brands and 180 markets. He brings a clear, practical perspective on what it takes to move AI from experimentation into production and deliver measurable value at enterprise scale.
Steve Janoo
Global Data, Analytics & AI Leader, Diageo
Guest Speaker
Francis Hart is VP, Data, AI & Online Technology at SEGA Europe. He is responsible for ensuring the company leverages the latest technologies and best practices to deliver innovation and optimisation. He leads the strategy and implementation of data-driven and AI-powered solutions, as well as the development, architecture, and maintenance of SEGA Europe’s online systems, with a focus on turning AI from experimentation into production at pace.
Francis Hart
VP, Data, AI & Online Technology, SEGA Europe
Guest Speaker
Maria Zervou is Chief Data and AI Officer at Databricks. She works with organisations across industries to help them define and execute their data and AI strategies, focusing on how modern data platforms can enable analytics, machine learning, and AI at scale.
Maria Zervou
Chief Data and AI Officer, Databricks

We'll be focusing on how to: 

- Prioritise which pilots deserve production investment and which ones to kill early.

- Turn governance from a blocker into an enabler by building frameworks Risk and Security will trust.

- Design an operating model that makes AI repeatable, with clear ownership, handoffs, and decision rights between Data Science, Engineering, Product, and Risk.

- Build the technical foundations so use case production takes days instead of months, including feature stores, evaluation pipelines, monitoring, and cost controls.

- Define "production ready" before you build: evaluation criteria, SLAs, explainability requirements, rollback plans.

- Measure business value (not just model metrics) with ROI that stands up in the boardroom. 

At this event, you will: 

- Hear candid lessons from teams shipping AI in regulated industries: what broke and how they fixed it.
 
- See real production use cases: the challenges teams faced and the value they're creating now.

- Compare prioritisation frameworks with peers: what gets funded, what gets killed.

- Assess your blockers in comparison to peers: ownership gaps, stakeholder alignment, technical debt, cost management.

- Understand the deployment patterns that separate high-velocity teams from those stuck repeating the same mistakes.

- Leave with an action plan: clear decisions, investments, and conversations to have when you get back.

Register

By submitting, you consent to allow Manuka to store and process the personal information submitted above to provide you the content requested, in accordance with our Privacy Policy. You can update your preferences at any time by contacting us.

Event Catch Up

De-Risk Without De-Valuing: Turning Governance into an AI Growth Engine

On 20 January, we brought together a room of data, AI and business leaders at the Databricks London office to tackle a problem most organisations are living through right now:

Plenty of AI ideas. But not enough of them are high-impact, reusable, production-ready use cases, because the data foundations, governance and prioritisation aren’t pulling in the same direction.

If you couldn’t make it, catch up on the highlights and watch the full panel from our leading experts.

Catch Up Now