Building an AI Ecosystem

Strategy, Security & Scale — a practical guide from someone who built it, broke things, and figured out what actually works.

Paulo · Senior Revenue Operations Manager · FrankieOne


Setting the Scene

Who we are, how we sell, and why this matters

FrankieOne is an Australian identity verification and fraud prevention platform. We sell B2B — helping banks, fintechs, and enterprises verify customers and fight fraud at scale.
💰
~$200K
Average Contract Value
🌏
60/40
Australia / International
👥
5 Agents
3 Enterprise + 2 SME

Deal Segments

We use the MEDDPICC framework across three deal tiers

XL

Enterprise XL

ACV > $1.2M  ·  12–18 month sales cycle

30%
of deals
L

Enterprise L

$400K – $1.2M ACV  ·  ~12 month sales cycle

50%
of deals
M

Mid-Market

ACV < $400K  ·  Sub-6-month sales cycle

20%
of deals

What We Had

Our tech stack before the AI ecosystem

🛠️  The Tools
📊
HubSpot
CRM — source of truth for deals & contacts
📇
Lusha
Data enrichment — contact & company info
🔗
LinkedIn Navigator
Prospecting — finding & researching leads
🎙️
Gemini / Avoma
Conversation intelligence — call recording & notes
⚠️  What Was Missing
  • No unified deal intelligence — context scattered across tools
  • No AI-powered coaching — managers relied on manual 1:1s
  • No automated CRM updates — reps did it manually (or didn't)
  • No data-driven forecasting — gut feel, not deal scores
  • Reactive posture — always responding to problems, never ahead of them

The Challenges

Good tools, wrong workflow — here's what was breaking

The Problem

Every revenue team hits the same walls — here's what was slowing us down

🧹

Deal Hygiene

CRM data was always stale. Reps hated updating fields manually after every call. The friction between selling and admin meant the system of record was never the source of truth.

🔍

Lack of Visibility on Deal Details

Managers couldn't see what was actually happening inside a deal without scheduling a 1:1. Context lived in reps' heads and scattered notes — not in a single place leadership could access.

📉

Poor Forecasting Ability

Forecasts were gut feel dressed up in spreadsheets. Without structured deal data from multiple sources, every commit call was a guessing game.

🐢

Slow Coaching Pace

Coaching happened in scheduled 1:1s — if it happened at all. There was no scalable way to assess a deal, identify gaps, and guide the rep in the moment. Hard to coach when the data isn't there.

📏

Scalability

Processes that worked when managing a handful of deals started to break as the pipeline grew. Every new deal meant more manual effort, more context switching, more things falling through the cracks.

🪞

Reactive, Not Proactive

We were always responding to problems after they happened — a deal slipping, a forecast miss, a rep struggling. There was no way to see issues coming and get ahead of them.

"We didn't have a tool problem. We had a friction problem. The gap between what reps knew and what the CRM showed was where deals went to die."
🔑

The root cause: the CRM forces users to find the record → find the field → edit it → save. That's four friction points per update. It's not how humans naturally work. Voice is. That's why we moved to it — and everything else followed.

How We Solved Each One

Four core problems, one AI-powered platform — Indi, built in-house at FrankieOne

🧹

Voice Became the CRM Update

We made voice the conduit that updated CRM fields directly. Reps talk — the system listens, extracts, and writes. We eliminated the friction point entirely. No more manual data entry after calls.

Zero Friction
CRM updates
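The flow above can be sketched in a few lines. This is a hypothetical illustration, not FrankieOne's implementation: the real system would use speech-to-text plus an LLM for extraction and the HubSpot API for the write, whereas here a regex stands in for the extraction step and a plain dict stands in for the CRM record.

```python
import re

def extract_crm_updates(call_note: str) -> dict:
    """Turn a free-form spoken call note into structured field updates.

    The regexes are a stand-in for an LLM extraction step; the returned
    dict is a stand-in for a CRM property-update payload.
    """
    updates = {}
    # Deal amount, e.g. "$400k" or "$1,200,000"
    m = re.search(r"\$([\d,]+)\s*([km]?)", call_note, re.IGNORECASE)
    if m:
        value = float(m.group(1).replace(",", ""))
        value *= {"k": 1_000, "m": 1_000_000}.get(m.group(2).lower(), 1)
        updates["amount"] = int(value)
    # Next step, e.g. "next step is a security review"
    m = re.search(r"next step (?:is|:)?\s*([^.]+)", call_note, re.IGNORECASE)
    if m:
        updates["next_step"] = m.group(1).strip()
    return updates

note = "Spoke with the CFO, deal is around $400k. Next step is a security review."
print(extract_crm_updates(note))
```

The point of the design is that the rep only talks; every downstream step (extract, map to fields, write) is automated, which is where the four friction points disappear.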
🔍

Full Deal Visibility in Indi

Every deal detail — call summaries, qualification data, stakeholder maps, next steps — surfaced in one view. Managers see the real state of every deal without asking the rep. This increased visibility directly facilitates coaching — you can't coach what you can't see.

Real-Time
Deal intelligence
📉

AI Deal Scores Replace Gut Feel

Deal scores in Indi crunch data from multiple sources to tell us if we're more or less likely to win. The more data we feed it, the smarter it gets. Forecasting went from opinion-based to evidence-based.

Data-Driven
Forecasting
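A multi-source deal score of this kind reduces to a weighted blend of normalised signals. The signal names and weights below are purely illustrative assumptions, not Indi's actual model:

```python
# Illustrative weights over normalised (0-1) deal signals.
WEIGHTS = {
    "meddpicc_completeness": 0.4,   # share of MEDDPICC fields filled
    "stakeholder_engagement": 0.3,  # call/email activity, normalised
    "recency": 0.2,                 # how recently the deal last moved
    "champion_strength": 0.1,       # confidence in the internal champion
}

def deal_score(signals: dict) -> int:
    """Blend normalised signals into a 0-100 win-likelihood score."""
    score = sum(WEIGHTS[name] * signals.get(name, 0.0) for name in WEIGHTS)
    return round(score * 100)

print(deal_score({
    "meddpicc_completeness": 0.75,
    "stakeholder_engagement": 0.6,
    "recency": 0.9,
    "champion_strength": 0.5,
}))
```

"The more data we feed it, the smarter it gets" maps to adding signals and re-fitting the weights against won/lost outcomes, rather than hand-tuning them as shown here.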
🐢

Every Deal Assessment = A Coaching Moment

We fed Indi the MEDDPICC framework and turned every deal assessment into structured coaching. Not a scheduled 1:1 — an always-on, AI-guided coaching loop that meets the rep where they are.

500+
Uses in 5 weeks
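At its core, a MEDDPICC-driven assessment is a gap check: for every element of the framework the deal record leaves empty, surface a coaching question. The prompts below are example wording, not Indi's actual output:

```python
# One coaching question per MEDDPICC element (example wording).
MEDDPICC = {
    "metrics": "What quantified business outcome is the buyer expecting?",
    "economic_buyer": "Who signs the contract, and have we met them?",
    "decision_criteria": "What will the evaluation be judged on?",
    "decision_process": "What are the steps and dates to a signature?",
    "paper_process": "What does legal and procurement look like?",
    "identified_pain": "What breaks for them if they do nothing?",
    "champion": "Who is selling internally on our behalf?",
    "competition": "Who else is in the deal, and why would they win?",
}

def coaching_prompts(deal: dict) -> list[str]:
    """Return one coaching question per missing or empty MEDDPICC field."""
    return [q for field, q in MEDDPICC.items() if not deal.get(field)]

deal = {"metrics": "Cut onboarding fraud losses 30%", "champion": "Head of Risk"}
for prompt in coaching_prompts(deal):
    print(prompt)
```

This is what makes the loop "always-on": the check runs on every assessment, not on a 1:1 schedule.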
💡

The pattern: Every solution followed the same principle — remove friction, centralise context, and let AI do the heavy lifting so humans can focus on selling and coaching, not admin.

From Problems to Ecosystem

Solving these problems taught us how to build AI the right way

The New Baseline

Why every company needs an AI ecosystem — not just an AI tool

"We can now expect excellence in every part of the business. AI makes that possible — but only if every team is empowered to solve their own limitations, not wait in a queue for someone else to solve them."
🎯
Every Team
should be self-sufficient with AI
🔒
Zero Risk
added to data security
10x Faster
than traditional build cycles
"How do we give every team the power of AI without increasing the risk of a data breach?"

Five Pillars of an AI Ecosystem

A sequential approach — each pillar builds on the last

01

Security Visibility

Give SecOps line of sight from day one — not after an incident. Start an AI committee organically: CTO + a key operator, then grow it cross-functionally.

02

Find Your Champions

Identify 1–2 key users who are already enthusiastically using AI. Open the doors — give them system access and data access. Their early wins become the proof that convinces everyone else.

03

Start Building (Direct Connections)

Don't wait for the perfect infrastructure. Connect directly to individual systems — HubSpot, Xero, your data warehouse. Ship PoCs. Generate value immediately while learning what scale actually requires.

04

Centralise the Data Layer

As more people want to build, you realise: scale requires fewer API connections, fewer keys, more centralisation. Build a governed data lake where the end user just worries about the AI interface and features — not plumbing.

05

Granular Access & Governance

Apply traditional security and governance at the data layer — with all the gates, locks, and field-level security access controls that SecOps needs. The data is centralised and controlled; the users are free.

Security Has Line of Sight from Day One

Making security an enabler, not a blocker

🛡️  The Principle

Security teams should see everything AI-related happening across the business in real time — not discover it in a post-mortem.

This isn't about blocking innovation. It's about making security an enabler.

✅  What This Looks Like
  • Centralised logging of all AI tool usage and data access
  • Automated alerts when sensitive data enters AI workflows
  • Dashboard showing who is using what AI tools, with what data
  • Security team participates in AI tool selection — not just reviews after
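The first bullet — centralised logging of AI tool usage — can be as simple as a decorator wrapping every AI tool call. This is a minimal sketch under assumed names (`AUDIT_LOG`, `data_scope`); a real deployment would ship these events to a SIEM, not append them to a list:

```python
import functools
import time

# In-memory stand-in for a centralised audit sink.
AUDIT_LOG: list[dict] = []

def audited(tool: str, data_scope: str):
    """Wrap an AI tool call so every invocation lands in the audit log."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(user: str, *args, **kwargs):
            AUDIT_LOG.append({
                "ts": time.time(),
                "user": user,
                "tool": tool,
                "data_scope": data_scope,
            })
            return fn(user, *args, **kwargs)
        return wrapper
    return decorator

@audited(tool="deal_summariser", data_scope="crm:deals")
def summarise_deal(user: str, deal_id: str) -> str:
    # Stand-in for the actual AI call.
    return f"summary of {deal_id}"

summarise_deal("paulo", "deal-123")
print(AUDIT_LOG[-1]["tool"], AUDIT_LOG[-1]["data_scope"])
```

The dashboard and alerting bullets then become queries over this one event stream: who used which tool, touching which data scope, and when.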
💡

Would an AI Committee help? Yes — but don't overthink it. We did it organically. It started with just me and the CTO having regular conversations about what was being built and what data it touched. Over time, it grew naturally to include people from different departments — security, product, finance. The key is starting small and being consistent, not forming a committee and waiting for a charter.

Find Your Champions First

The single most important step most companies skip

🚀  The Play

Find 1–2 people in your org who are already enthusiastically using AI on their own. They're out there — probably quietly solving problems nobody asked them to solve.

Open the doors for them. Give them system access. Give them data access. Give them air cover. Their early wins become the case studies that convince the sceptics and the executives.

📋  What to Give Them
  • API access to core business systems (CRM, billing, analytics)
  • A sandboxed environment to experiment without breaking anything
  • Direct line to security — so they learn the guardrails, not avoid them
  • Permission to fail fast — PoCs that don't work are still valuable data
  • A stage to present their wins — visibility matters for momentum

These champions become your proof of concept for the entire AI ecosystem. When the CEO asks "does this actually work?" — you point at the deal calculator that replaced $70K in software, or the attribution dashboard that went from 0.1% to 33.3% accuracy. Real results from real people, not a vendor slide deck.

The Data Layer Evolution

You don't start with a data lake — you grow into needing one

Phase 1: Direct Connections

Connect AI tools directly to individual systems — HubSpot API, Xero API, Redshift queries. One connection per tool. Works brilliantly for early PoCs. Fast to set up, immediate value.

✅ Quick wins   ⚠️ Doesn't scale

Phase 2: Centralised Data Lake

As more people want to build, you realise: 10 tools each with their own API keys to 5 different systems is a security and maintenance nightmare. Centralise the data. Apply governance once.

✅ Scales   ✅ Secure   ✅ Governed

"The end user just needs to work with AI and worry about the interface, the features they want. The data is centralised and controlled by traditional security and governance — with all the gates and locks that can be applied at the granular, field-level security access level."
Marketing Team
✓ Can Access

Campaign data, attribution, web analytics

✗ Restricted

Revenue figures, customer PII, contract terms

Finance Team
✓ Can Access

Revenue, billing, forecasts, commission data

✗ Restricted

Product telemetry, engineering logs, HR data

Product Team
✓ Can Access

Usage analytics, feature adoption, NPS

✗ Restricted

Individual salaries, legal contracts, raw financials
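Field-level governance at the data layer, as in the team policies above, can be modelled as a policy map plus a filter applied to every record before it reaches an AI interface. The field names and policy shape here are illustrative:

```python
# Per-team allow-lists of field names (illustrative, mirroring the
# access matrix above).
ACCESS_POLICY: dict[str, set[str]] = {
    "marketing": {"campaign", "attribution", "web_analytics"},
    "finance": {"revenue", "billing", "forecast", "commission"},
    "product": {"usage", "feature_adoption", "nps"},
}

def filter_record(team: str, record: dict) -> dict:
    """Strip any field the requesting team is not cleared to see."""
    allowed = ACCESS_POLICY.get(team, set())
    return {k: v for k, v in record.items() if k in allowed}

record = {"revenue": 1_200_000, "campaign": "Q3-launch", "nps": 42}
print(filter_record("marketing", record))
print(filter_record("finance", record))
```

Because the filter lives in the data layer, every AI tool built on top inherits the same gates — "the users are free" precisely because they cannot reach what the policy withholds.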

Empower People Within Guardrails

The permission vs. forgiveness spectrum — and the right answer

🔒 Ask Permission First

Slow. Safe. Frustrating. Teams wait weeks for approvals. Innovation dies in committee.

🚀 Ask Forgiveness Later

Fast. Risky. One breach and trust is gone. Shadow AI spreads unchecked.

⚖️ The Right Answer: Structured Empowerment

Give teams personal API keys, sandboxed environments, and pre-approved toolsets. Let them experiment freely within boundaries that security has defined. The data layer handles the governance — so the users don't have to think about it. Fast and safe.

PoC → Production Pipeline

Ship fast, learn fast — but never skip the security check

1

Sandbox

Build PoC with direct API connections. Isolated environment, real business problem.

2

Validate

Security review + stakeholder sign-off on what data it touches and how.

3

Pilot

Run with real data, limited users, full monitoring. Catch issues early.

4

Scale

Migrate to centralised data layer. Roll out with documented guardrails.

💡

Key insight: If your PoC fails, the blast radius should be near zero. That's by design, not by luck. And when it succeeds, the path to production should be clear — not a second build from scratch.
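The four stages with the security gate can be modelled as a tiny state machine. The stage names come from the pipeline above; the gate logic and field names (`security_signoff`) are illustrative assumptions:

```python
STAGES = ["sandbox", "validate", "pilot", "scale"]

def advance(poc: dict) -> dict:
    """Move a PoC one stage forward, refusing to pass 'validate'
    without a security sign-off on what data it touches and how."""
    i = STAGES.index(poc["stage"])
    if STAGES[i] == "validate" and not poc.get("security_signoff"):
        raise PermissionError("security review required before pilot")
    if i + 1 < len(STAGES):
        poc["stage"] = STAGES[i + 1]
    return poc

poc = {"name": "deal-calculator", "stage": "sandbox"}
advance(poc)                    # sandbox -> validate
poc["security_signoff"] = True  # the checkpoint from stage 2
advance(poc)                    # validate -> pilot
print(poc["stage"])
```

The gate is the whole point: a PoC can be built freely in the sandbox, but it cannot reach real data without the validate checkpoint, which keeps the blast radius of a failure near zero by construction.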

How We Actually Did It

The real story — warts and all

Where We Started

From wild west to intentional ecosystem

The Wild West

Individual teams experimenting with AI independently. No central oversight, no data governance, no shared infrastructure. Marketing using one tool, Sales another, Finance building their own. Every tool had its own API keys to every system.

The First Wins

A single person in RevOps started building AI-powered tools — a deal calculator that replaced $70K/year CPQ software, commission reconciliation, marketing attribution that went from 0.1% to 33.3% accuracy. Delivered results that normally require dedicated teams. Leadership noticed.

The Realisation

These wins exposed a deeper truth: the potential is massive, but 15 different API connections to 5 different systems with different credentials scattered across tools is a ticking time bomb. One wrong data connection and sensitive customer data is in an AI prompt.

The Organic Committee

Started with just me and the CTO having honest conversations about what was being built and what data it touched. No formal charter, no monthly meetings with agendas — just regular, honest conversations. Over time it naturally grew to include people from security, product, and finance.

The Decision

Build it right. Centralise the data. Apply governance at the data layer. Let every team focus on their AI interface and features while security controls what they can see. Make AI a capability of the organisation, not a collection of experiments.

The Mistakes We Made

So you don't have to

⚠️ Built tools before infrastructure

We shipped AI-powered solutions that worked brilliantly — but each one was a standalone island with its own API connections, its own keys, its own data access patterns. No shared data layer, no consistent access controls. Scaling meant every new tool re-created the plumbing from scratch.

⚠️ Didn't involve security early enough

Security reviewed tools after they were built, not during. This created friction, rework, and delayed deployments that should have been straightforward. Bringing security in from the start would have been faster overall.

⚠️ Underestimated the "people" problem

The hardest challenges weren't technical. They were organisational — teams feeling threatened, unclear ownership, resistance to new workflows. The tech was the easy part.

⚠️ Too many individual API connections

Every tool connecting individually to HubSpot, Xero, Redshift with separate API keys. More keys = more risk surface. More connections = more things to maintain. We learned the hard way that centralisation isn't optional at scale — it's the only way.

The Human Challenge

This is where most AI initiatives actually fail

💻  Technical Teams Feel...
Threatened
"If anyone can build tools with AI, what's my value?"
Protective
"Non-technical people don't understand the risks of bad data practices"
Overwhelmed
"Now everyone expects instant solutions because 'AI can do it'"
📊  Non-Technical Teams Feel...
Blocked
"I can see the potential but I need 3 approvals and a sprint cycle to try anything"
Dismissed
"My ideas get deprioritised because tech owns the roadmap"
Impatient
"Competitors are shipping AI features weekly while we debate architecture"
🔑

The critical insight: Both sides are right. The technical team's concerns about data practices are valid. The non-technical team's frustration about speed is valid. The solution isn't to pick a side — it's to build infrastructure that makes both sides right at the same time.

Bridging the Gap

Both sides have legitimate concerns — the solution addresses all of them

⚙️

Redefine the tech team's role

They're not gatekeepers anymore — they're platform builders. Their job shifts from "build every tool" to "build the platform that lets others build safely." That's a higher-value role, not a diminished one.

🤝

Create clear swim lanes

Non-technical teams own their use cases and features. Technical teams own the infrastructure, security, and data governance. Neither blocks the other.

🏆

Celebrate early wins publicly

When a non-technical person builds something valuable with AI, celebrate it. When security catches a risk early, celebrate that too. Make both sides heroes.

👁️

Acknowledge the real concerns

Data leaks are real. Shadow AI is real. Skills displacement anxiety is real. Naming these fears openly and addressing them with structure is what builds trust.

🌱

Start with champions, not mandates

Find the enthusiastic early adopters. Give them access and air cover. Let their results do the convincing. Organic adoption beats top-down rollouts every time.

Your AI Ecosystem Readiness Checklist

Everything you need before scaling AI across the organisation

Does your security team have real-time visibility into AI tool usage?
Have you identified 1–2 AI champions who are already building?
Do you have an organic AI committee (even if it's just 2 people meeting regularly)?
Have your champions shipped at least one PoC with measurable results?
Do you have a plan to centralise your data layer as you scale?
Can SecOps grant granular, field-level access at the data layer?
Do non-technical teams have sandboxed environments to experiment?
Is there a clear PoC → production pathway with security checkpoints?
Have you addressed the human dynamics — fear, ownership, incentives?
Do you have an AI governance policy that's actually enforced?
💡

If you can't check most of these, you're not ready to scale AI — you're ready to start building. And that's exactly the right place to be. Find your champions, start your committee, ship your first PoC. The ecosystem grows from there.

The companies that win with AI
won't be the ones that moved fastest.

They'll be the ones that built the right foundation
and empowered everyone to build on it.

Paulo · Senior Revenue Operations Manager · FrankieOne