Every MGA and MGU leader is living inside the same contradiction right now.
On one screen: a carrier partner asking when your underwriting team will “start using AI.” A broker channel wanting faster turnarounds. A claims vendor promising automation. A board deck that somehow needs to say AI without sounding like it was written by a chatbot.
On the other screen: headlines like “95% of GenAI implementations fail”¹ that read like a warning label.
Taken together, it can feel like you’re being told to sprint across a freeway.
But look closer at what that “95%” story is actually measuring, and the picture changes, especially for MGAs and MGUs, where AI isn’t a sidecar. It has to plug into the parts of the business that already have sharp edges: rating, policy issuance, claims, compliance, bordereaux, billing, and the messy reality between them.
That’s where the real decision shows up. Not “AI or not AI.” Not even “which model.”
It’s whether you try to build mission-critical AI alone or treat it like a core systems change and bring in a partner built for the job.
The headline version is clean: most GenAI projects don’t “work.” The fine print is less dramatic and more useful.
In an MIT/BCG study², an effort only counts as successful if it produces measurable revenue or ROI within roughly six months. That definition excludes a lot of real-world wins: cycle-time reduction, fewer touches per submission, less rekeying, better back-office throughput, all of which take longer to show up as revenue.
Meanwhile, over 90% of workers at the organizations surveyed still reported using AI tools daily. The result is a strange split-screen: formal initiatives “fail,” while the people doing the work keep reaching for AI because it helps.
That gap is partly about measurement. But it’s also about execution.
And it’s where partner-led vs. DIY starts to matter.
If AI is framed as “connect a model to our data,” DIY looks tempting. Plenty of teams can spin up a prototype. Someone will paste a few underwriting guidelines into a prompt, run a handful of submissions through a workflow, and the demo will look magical.
Then production shows up.
A real AI workflow inside an MGA or MGU isn’t operating in a lab. It has to survive insurance as it actually exists.
Most MGA/MGU teams are built to run programs, not to stand up and maintain a production AI platform that stays secure, auditable, and integrated as the business changes. Teams can build prototypes; scaling them into robust systems is a different job.
Partner-led doesn’t mean outsourcing your brain. It means outsourcing the parts that are expensive to learn the hard way.
The difference is reps. A typical MGA might implement an AI-enabled workflow a handful of times. A specialist partner has done it dozens of times across lines, structures, and carrier expectations, and has already paid the tuition for the mistakes.
More specifically, partner-led AI tends to work when the partner brings four things:
Insurance-native building blocks
Submission intake, document parsing, rating flows, policy issuance, billing workflows, bordereaux processing, claims triage: these aren’t generic SaaS problems. They’re the gears of an insurance operation.
Implementation muscle memory
The edge isn’t just models. It’s knowing where implementations go sideways: data mappings, permissions, workflow ownership, exception handling, the awkward handoffs between teams.
Security and controls that are already “real”
Role-based access, segmentation by program or carrier, SOC 2-ready processes, human-in-the-loop review modes—these are table stakes in production insurance workflows, and costly to reinvent.
A platform that expects change
MGAs and MGUs evolve constantly: new programs, new capacity, new distribution deals. If every change means rebuilding the foundation, you’ll stop changing. A partner-built platform is designed to absorb that motion.
The same data behind the “95% fail” headline shows partner-led approaches are roughly twice as likely to drive measurable revenue as going it alone.
For most MGA and MGU leaders, the right answer isn’t a blanket “vendor everything” or “build everything.” It’s a boundary decision.
Partner (buy) the backbone—the workflows where failure is expensive, integration is heavy, and risk is high: submission ingestion, rating and rule-driven decisions, policy issuance and doc generation, bordereaux processing and reporting, claims intake/triage, and billing tied to money movement.
These are the systems that touch multiple stakeholders—carriers, brokers, insureds, vendors—and often sit under contractual expectations. Here, stability and governance beat bespoke cleverness.
Build on top for differentiation—once the rails exist, there’s plenty worth owning: niche underwriting checks, custom scoring approaches, program-specific workflow views, internal copilots and automations tuned to how your team actually works.
In other words: let the partner build the tracks. You decide where the train goes.
Picking a partner is step one. The rest is management.
The playbook is practical because it’s about sequencing. Get the sequencing right, and a partner doesn’t slow you down. The partnership compresses the time between “idea” and “working system.”
The “95% fail” headline isn’t a reason to avoid AI. It’s a reason to stop treating AI like a weekend project.
For MGAs and MGUs, AI sits in the middle of core operations—submissions, underwriting, policy, claims, reporting. If the work touches core systems, the evidence suggests a clear pattern: partner-led implementations are materially more likely to translate into revenue impact than DIY.
DIY still has a place—experiments, narrow internal tools, learning. But for the systems that run the business, partner-led AI is emerging as the more reliable way to move fast without breaking the parts that matter.
To see partner-led AI in action, book a demo with us here.
Sources:
1. Challapally, Aditya, et al. The GenAI Divide: State of AI in Business 2025. MIT NANDA, 2025.
2. Ransbotham, Sam, et al. “The Emerging Agentic Enterprise: How Leaders Must Navigate a New Age of AI.” MIT Sloan Management Review, 18 Nov. 2025, sloanreview.mit.edu/projects/the-emerging-agentic-enterprise-how-leaders-must-navigate-a-new-age-of-ai/.