Strategy, Security & Scale — a practical guide from someone who built it, broke things, and figured out what actually works.
Who we are, how we sell, and why this matters
We use the MEDDPICC framework across three deal tiers
Tier 1: ACV > $1.2M · 12–18 month sales cycle
Tier 2: $400K – $1.2M ACV · ~12 month sales cycle
Tier 3: ACV < $400K · Under 6 month sales cycle
Our tech stack before the AI ecosystem
Good tools, wrong workflow — here's what was breaking
Every revenue team hits the same walls — here's what was slowing us down
CRM data was always stale. Reps hated updating fields manually after every call. The friction between selling and admin meant the system of record was never the source of truth.
Managers couldn't see what was actually happening inside a deal without scheduling a 1:1. Context lived in reps' heads and scattered notes — not in a single place leadership could access.
Forecasts were gut feel dressed up in spreadsheets. Without structured deal data from multiple sources, every commit call was a guessing game.
Coaching happened in scheduled 1:1s — if it happened at all. There was no scalable way to assess a deal, identify gaps, and guide the rep in the moment. Hard to coach when the data isn't there.
Processes that worked when managing a handful of deals started to break as the pipeline grew. Every new deal meant more manual effort, more context switching, more things falling through the cracks.
We were always responding to problems after they happened — a deal slipping, a forecast miss, a rep struggling. There was no way to see issues coming and get ahead of them.
The root cause: The CRM forces users to: find the record → find the field → edit it → save. That's four friction points per update. It's not how humans naturally work. Voice is. That's why we moved to it — and everything else followed.
Four core problems, one AI-powered platform — Indi, built in-house at FrankieOne
We made voice the conduit that updated CRM fields directly. Reps talk — the system listens, extracts, and writes. We eliminated the friction point entirely. No more manual data entry after calls.
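A rough sketch of that listen → extract → write loop. The regex extractor stands in for the real speech-to-text and LLM pipeline, and the field names and payload shape are illustrative (loosely HubSpot-style), not Indi's actual schema:

```python
import re

# Hypothetical stand-in for the extraction step: in production an LLM pulls
# structured fields out of the call; a regex keeps this sketch self-contained.
FIELD_PATTERNS = {
    "amount": re.compile(r"budget (?:is|of) \$?([\d,]+)"),
    "close_date": re.compile(r"sign by (\w+ \d{1,2})"),
}

def extract_fields(transcript: str) -> dict:
    """Listen → extract: turn free speech into CRM field updates."""
    updates = {}
    for field, pattern in FIELD_PATTERNS.items():
        match = pattern.search(transcript)
        if match:
            updates[field] = match.group(1)
    return updates

def to_crm_payload(deal_id: str, updates: dict) -> dict:
    """Extract → write: shape the updates for a CRM PATCH call."""
    return {"objectId": deal_id, "properties": updates}

transcript = "They said the budget is $450,000 and they want to sign by March 31."
payload = to_crm_payload("deal-123", extract_fields(transcript))
```

The point of the design is that the rep's side of the interaction is just talking; every step after that is machinery.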
Every deal detail — call summaries, qualification data, stakeholder maps, next steps — surfaced in one view. Managers see the real state of every deal without asking the rep. This increased visibility directly facilitates coaching — you can't coach what you can't see.
Deal scores in Indi crunch data from multiple sources to tell us if we're more or less likely to win. The more data we feed it, the smarter it gets. Forecasting went from opinion-based to evidence-based.
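As an illustration of multi-source scoring, a minimal weighted blend. Indi's actual model and weights are not public; the signals and numbers below are invented:

```python
# Invented weights for illustration only — not Indi's real model.
WEIGHTS = {
    "meddpicc_completeness": 0.4,  # fraction of MEDDPICC fields evidenced
    "stakeholder_coverage": 0.3,   # champion + economic buyer engaged?
    "recent_activity": 0.3,        # normalised call/email activity
}

def deal_score(signals: dict) -> float:
    """Blend 0–1 signals into a single 0–100 win-likelihood score."""
    return round(100 * sum(
        WEIGHTS[k] * min(max(signals.get(k, 0.0), 0.0), 1.0)
        for k in WEIGHTS
    ), 1)

score = deal_score({
    "meddpicc_completeness": 0.75,
    "stakeholder_coverage": 1.0,
    "recent_activity": 0.5,
})  # → 75.0
```

Missing signals default to zero, which matches the "the more data we feed it, the smarter it gets" dynamic: sparse deals score conservatively.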
We fed Indi the MEDDPICC framework and turned every deal assessment into structured coaching. Not a scheduled 1:1 — an always-on, AI-guided coaching loop that meets the rep where they are.
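A sketch of how a framework becomes structured coaching: walk the standard MEDDPICC elements and turn every unevidenced one into a prompt. The prompt wording is invented; only the element list is standard:

```python
# The eight standard MEDDPICC elements.
MEDDPICC = [
    "Metrics", "Economic Buyer", "Decision Criteria", "Decision Process",
    "Paper Process", "Identify Pain", "Champion", "Competition",
]

def coaching_gaps(assessment: dict) -> list[str]:
    """Return a coaching prompt for every element the deal hasn't evidenced."""
    return [
        f"No evidence for {field} on this deal - ask the rep what they know."
        for field in MEDDPICC
        if not assessment.get(field)
    ]

gaps = coaching_gaps({"Metrics": "30% churn reduction", "Champion": "VP Ops"})
```

Because the check runs off deal data rather than a calendar, the coaching loop fires whenever the data changes, not when a 1:1 happens to be booked.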
The pattern: Every solution followed the same principle — remove friction, centralise context, and let AI do the heavy lifting so humans can focus on selling and coaching, not admin.
Solving these problems taught us how to build AI the right way
Why every company needs an AI ecosystem — not just an AI tool
A sequential approach — each pillar builds on the last
Give SecOps line of sight from day one — not after an incident. Start an AI committee organically: CTO + a key operator, then grow it cross-functionally.
Identify 1–2 key users who are already enthusiastically using AI. Open the doors — give them system access and data access. Their early wins become the proof that convinces everyone else.
Don't wait for the perfect infrastructure. Connect directly to individual systems — HubSpot, Xero, your data warehouse. Ship PoCs. Generate value immediately while learning what scale actually requires.
As more people want to build, you realise: scale requires fewer API connections, fewer keys, more centralisation. Build a governed data lake where the end user just worries about the AI interface and features — not plumbing.
Apply traditional security and governance at the data layer — with all the gates, locks, and field-level security access controls that SecOps needs. The data is centralised and controlled; the users are free.
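A minimal sketch of field-level control at the data layer, assuming a role-to-fields policy table (role names and fields below are invented). Every tool reads through the gate; nothing else changes for the end user:

```python
# Illustrative policy: which fields each role may read from the data layer.
FIELD_POLICY = {
    "revops":    {"deal_id", "amount", "stage", "close_date"},
    "marketing": {"deal_id", "stage"},           # no revenue figures, no PII
    "finance":   {"deal_id", "amount", "close_date"},
}

def governed_read(role: str, rows: list[dict]) -> list[dict]:
    """Strip every field the role isn't cleared for; unknown roles get nothing."""
    allowed = FIELD_POLICY.get(role, set())
    return [{k: v for k, v in row.items() if k in allowed} for row in rows]

rows = [{
    "deal_id": "D1", "amount": 450_000, "stage": "commit",
    "close_date": "2025-03-31", "customer_pii": "jane@example.com",
}]
```

Governance lives in one table at one layer; adding a tenth AI tool adds zero new places for PII to leak.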
Making security an enabler, not a blocker
Security teams should see everything AI-related happening across the business in real-time — not discover it in a post-mortem.
This isn't about blocking innovation. It's about making security an enabler.
Would an AI Committee help? Yes — but don't overthink it. We did it organically. It started with just me and the CTO having regular conversations about what was being built and what data it touched. Over time, it grew naturally to include people from different departments — security, product, finance. The key is starting small and being consistent, not forming a committee and waiting for a charter.
The single most important step most companies skip
Find 1–2 people in your org who are already enthusiastically using AI on their own. They're out there — probably quietly solving problems nobody asked them to solve.
Open the doors for them. Give them system access. Give them data access. Give them air cover. Their early wins become the case studies that convince the sceptics and the executives.
These champions become your proof of concept for the entire AI ecosystem. When the CEO asks "does this actually work?" — you point at the deal calculator that replaced $70K in software, or the attribution dashboard that went from 0.1% to 33.3% accuracy. Real results from real people, not a vendor slide deck.
You don't start with a data lake — you grow into needing one
Connect AI tools directly to individual systems — HubSpot API, Xero API, Redshift queries. One connection per tool. Works brilliantly for early PoCs. Fast to set up, immediate value.
✅ Quick wins ⚠️ Doesn't scale
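A phase-one connection really is this small, which is why it ships fast. The sketch below builds (but does not send) a request against HubSpot's public CRM v3 deals endpoint; the token is a placeholder:

```python
from urllib.parse import urlencode

def hubspot_deals_request(token: str, properties: list[str]) -> tuple[str, dict]:
    """Construct a GET request for HubSpot CRM v3 deals with selected properties."""
    base = "https://api.hubapi.com/crm/v3/objects/deals"
    query = urlencode({"properties": ",".join(properties)})
    headers = {"Authorization": f"Bearer {token}"}  # private-app token
    return f"{base}?{query}", headers

url, headers = hubspot_deals_request("PLACEHOLDER_TOKEN", ["amount", "dealstage"])
```

One connection per tool is exactly this easy — and exactly why, ten tools later, you're holding ten of these tokens.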
As more people want to build, you realise: 10 tools each with their own API keys to 5 different systems is a security and maintenance nightmare. Centralise the data. Apply governance once.
✅ Scales ✅ Secure ✅ Governed
Campaign data, attribution, web analytics
Revenue figures, customer PII, contract terms
Revenue, billing, forecasts, commission data
Product telemetry, engineering logs, HR data
Usage analytics, feature adoption, NPS
Individual salaries, legal contracts, raw financials
The permission vs. forgiveness spectrum — and the right answer
Ask permission: Slow. Safe. Frustrating. Teams wait weeks for approvals. Innovation dies in committee.
Ask forgiveness: Fast. Risky. One breach and trust is gone. Shadow AI spreads unchecked.
The right answer, guardrailed freedom: Give teams personal API keys, sandboxed environments, and pre-approved toolsets. Let them experiment freely within boundaries that security has defined. The data layer handles the governance — so the users don't have to think about it. Fast and safe.
Ship fast, learn fast — but never skip the security check
Build: PoC with direct API connections. Isolated environment, real business problem.
Review: Security + stakeholder sign-off on what data it touches and how.
Pilot: Run with real data, limited users, full monitoring. Catch issues early.
Production: Migrate to the centralised data layer. Roll out with documented guardrails.
Key insight: If your PoC fails, the blast radius should be near zero. That's by design, not by luck. And when it succeeds, the path to production should be clear — not a second build from scratch.
The real story — warts and all
From wild west to intentional ecosystem
Individual teams experimenting with AI independently. No central oversight, no data governance, no shared infrastructure. Marketing using one tool, Sales another, Finance building their own. Every tool had its own API keys to every system.
A single person in RevOps started building AI-powered tools — a deal calculator that replaced $70K/year CPQ software, commission reconciliation, marketing attribution that went from 0.1% to 33.3% accuracy. Delivered results that normally require dedicated teams. Leadership noticed.
These wins exposed a deeper truth: the potential is massive, but 15 different API connections to 5 different systems with different credentials scattered across tools is a ticking time bomb. One wrong data connection and sensitive customer data is in an AI prompt.
Started with just me and the CTO having honest conversations about what was being built and what data it touched. No formal charter, no monthly meetings with agendas — just regular, honest conversations. Over time it naturally grew to include people from security, product, and finance.
Build it right. Centralise the data. Apply governance at the data layer. Let every team focus on their AI interface and features while security controls what they can see. Make AI a capability of the organisation, not a collection of experiments.
So you don't have to
We shipped AI-powered solutions that worked brilliantly — but each one was a standalone island with its own API connections, its own keys, its own data access patterns. No shared data layer, no consistent access controls. Scaling meant every new tool re-created the plumbing from scratch.
Security reviewed tools after they were built, not during. This created friction, rework, and delayed deployments that should have been straightforward. Bringing security in from the start would have been faster overall.
The hardest challenges weren't technical. They were organisational — teams feeling threatened, unclear ownership, resistance to new workflows. The tech was the easy part.
Every tool connecting individually to HubSpot, Xero, Redshift with separate API keys. More keys = more risk surface. More connections = more things to maintain. We learned the hard way that centralisation isn't optional at scale — it's the only way.
This is where most AI initiatives actually fail
The critical insight: Both sides are right. The technical team's concerns about data practices are valid. The non-technical team's frustration about speed is valid. The solution isn't to pick a side — it's to build infrastructure that makes both sides right at the same time.
Both sides have legitimate concerns — the solution addresses all of them
They're not gatekeepers anymore — they're platform builders. Their job shifts from "build every tool" to "build the platform that lets others build safely." That's a higher-value role, not a diminished one.
Non-technical teams own their use cases and features. Technical teams own the infrastructure, security, and data governance. Neither blocks the other.
When a non-technical person builds something valuable with AI, celebrate it. When security catches a risk early, celebrate that too. Make both sides heroes.
Data leaks are real. Shadow AI is real. Skills displacement anxiety is real. Naming these fears openly and addressing them with structure is what builds trust.
Find the enthusiastic early adopters. Give them access and air cover. Let their results do the convincing. Organic adoption beats top-down rollouts every time.
Everything you need before scaling AI across the organisation
If you can't check most of these, you're not ready to scale AI — you're ready to start building. And that's exactly the right place to be. Find your champions, start your committee, ship your first PoC. The ecosystem grows from there.