Partner-Led AI vs. DIY AI: Which is 2x More Likely to Succeed
Every MGA and MGU leader is living inside the same contradiction right now.
On one screen: a carrier partner asking when your underwriting team will “start using AI.” A broker channel wanting faster turnarounds. A claims vendor promising automation. A board deck that somehow needs to say AI without sounding like it was written by a chatbot.
On the other screen: headlines like “95% of GenAI implementations fail”1 that read like a warning label.
Taken together, it can feel like you’re being told to sprint across a freeway.
But look closer at what that “95%” story is actually measuring, and the picture changes, especially for MGAs and MGUs, where AI isn’t a sidecar: it has to plug into the parts of the business that already have sharp edges: rating, policy issuance, claims, compliance, bordereaux, billing, and the messy reality between them.
That’s where the real decision shows up. Not “AI or not AI.” Not even “which model.”
It’s whether you try to build mission-critical AI alone, or treat it like a core systems change and bring in a partner built for the job.
What the “95% failure” headline really says
The headline version is clean: most GenAI projects don’t “work.” The fine print is less dramatic and more useful.
In the MIT study behind that headline1, an effort only counts as successful if it produces measurable revenue or ROI within roughly six months. That definition excludes a lot of real-world wins: cycle-time reduction, fewer touches per submission, less rekeying, higher back-office throughput. All of those take longer to show up as revenue.
Meanwhile, over 90% of workers at the organizations surveyed still reported using AI tools daily. So you get a strange split-screen reality: formal initiatives “fail,” while the people doing the work keep reaching for AI because it helps.
That gap is partly about measurement. But it’s also about execution.
And it’s where partner-led vs. DIY starts to matter.
Why DIY AI runs into a wall inside insurance
If AI is framed as “connect a model to our data,” DIY looks tempting. Plenty of teams can spin up a prototype. Someone will paste a few underwriting guidelines into a prompt, run a handful of submissions through a workflow, and the demo will look magical.
Then production shows up.
A real AI workflow inside an MGA or MGU isn’t operating in a lab. It has to survive insurance as it actually exists:
- The data is unruly. Submissions arrive as emails, PDFs, spreadsheets, loss runs, and “see attached” attachments. Bordereaux come from multiple capacity providers, each with its own formats and quirks. Legacy systems don’t always agree on what the “truth” is. (A sketch of what intake normalization involves follows this list.)
- The workflows aren’t generic. Authority and referrals. Producer hierarchies. Guideline exceptions. Approval paths that shift by program, by carrier, sometimes by a single clause in a binder.
- The governance is non-negotiable. You don’t get to “move fast and break things” when the thing you might break is an audit trail, a regulatory obligation, or a contractual SLA. Reproducibility and accountability aren’t philosophical; they’re operational.
- And the people have to adopt it. Underwriters, adjusters, and ops teams are busy. If a tool adds friction, they will route around it. If it feels unsafe, they’ll keep the spreadsheet open “just in case.” Change management is not a footnote; it’s the project.
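To make “the data is unruly” concrete, here is a minimal sketch of what intake normalization involves. Everything in it is hypothetical: the Submission record, the field names, and the normalize_broker_email mapping are assumptions for the example, and a real pipeline would also deal with attachments, loss runs, and OCR output. The shape of the problem is the constant: many source formats in, one canonical, auditable record out.

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical canonical record that downstream steps (rating, issuance,
# reporting) read from. Field names are illustrative, not a standard.
@dataclass
class Submission:
    insured_name: str
    line_of_business: str
    effective_date: str                     # ISO 8601 string
    total_insured_value: Optional[float] = None
    source_format: str = "unknown"          # provenance for the audit trail
    warnings: list = field(default_factory=list)

def normalize_broker_email(parsed: dict) -> Submission:
    """Map one (hypothetical) broker-email extraction into the canonical record."""
    sub = Submission(
        insured_name=parsed.get("applicant", "").strip(),
        line_of_business=parsed.get("lob", "").upper(),
        effective_date=parsed.get("eff_date", ""),
        source_format="broker_email",
    )
    # TIV arrives as "$1,250,000" one day and "1.25M" the next. Record a
    # warning instead of guessing silently, so a human reviews the exception.
    raw_tiv = parsed.get("tiv", "")
    try:
        sub.total_insured_value = float(raw_tiv.replace("$", "").replace(",", ""))
    except ValueError:
        sub.warnings.append(f"unparseable TIV: {raw_tiv!r}")
    return sub

if __name__ == "__main__":
    sub = normalize_broker_email(
        {"applicant": "Acme Roofing LLC", "lob": "gl",
         "eff_date": "2026-01-01", "tiv": "1.25M"}
    )
    print(sub.line_of_business, sub.warnings)
    # GL ["unparseable TIV: '1.25M'"]
```

The parsing itself is trivial; the point is that every source format needs its own mapping, and every ambiguous value needs an exception path a human can see. Multiply that by every broker, carrier, and bordereaux format, and “connect a model to our data” stops sounding like a weekend of work.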
Most MGA/MGU teams are built to run programs, not to stand up and maintain a production AI platform that stays secure, auditable, and integrated as the business changes. Teams can build prototypes; scaling them into robust systems is a different job.
What “partner-led” actually means—and why it wins more often
Partner-led doesn’t mean outsourcing your brain. It means outsourcing the parts that are expensive to learn the hard way.
The difference is reps. A typical MGA might implement an AI-enabled workflow a handful of times. A specialist partner has done it dozens of times across lines, structures, and carrier expectations—and has already paid the tuition for the mistakes.
More specifically, partner-led AI tends to work when the partner brings four things:
- Insurance-native building blocks. Submission intake, document parsing, rating flows, policy issuance, billing workflows, bordereaux processing, claims triage: these aren’t generic SaaS problems. They’re the gears of an insurance operation.
- Implementation muscle memory. The edge isn’t just models. It’s knowing where implementations go sideways: data mappings, permissions, workflow ownership, exception handling, the awkward handoffs between teams.
- Security and controls that are already “real.” Role-based access, segmentation by program or carrier, SOC 2-ready processes, human-in-the-loop review modes. These are table stakes in production insurance workflows, and costly to reinvent.
- A platform that expects change. MGAs and MGUs evolve constantly: new programs, new capacity, new distribution deals. If every change means rebuilding the foundation, you’ll stop changing. A partner-built platform is designed to absorb that motion.
The data behind the “95% fail” headline1 shows partner-led approaches are about twice as likely to drive revenue as internal builds.
The practical question: what to buy vs. what to build
For most MGA and MGU leaders, the right answer isn’t a blanket “vendor everything” or “build everything.” It’s a boundary decision.
Partner (buy) the backbone—the workflows where failure is expensive, integration is heavy, and risk is high: submission ingestion, rating and rule-driven decisions, policy issuance and doc generation, bordereaux processing and reporting, claims intake/triage, and billing tied to money movement.
These are the systems that touch multiple stakeholders—carriers, brokers, insureds, vendors—and often sit under contractual expectations. Here, stability and governance beat bespoke cleverness.
Build on top for differentiation—once the rails exist, there’s plenty worth owning: niche underwriting checks, custom scoring approaches, program-specific workflow views, internal copilots and automations tuned to how your team actually works.
In other words: let the partner build the tracks. You decide where the train goes.

How to make a partner-led project succeed (instead of becoming shelfware)
Picking a partner is step one. The rest is management.
The playbook is practical because it’s about sequencing:
- Start where friction is high and customer risk is low. Back-office workflows—invoice review, data entry, document checks, internal summarization—often create measurable time savings without putting your brand at the point of sale on day one.
- Define outcomes in operational language. Not “do AI.” Goals like: reduce manual touch time per submission by 40%; cut FNOL-to-first-contact time by 30%; eliminate rekeying of defense invoices.
- Run in stages: prototype → pilot → scale. Small group, single program, then expand after metrics and guardrails hold.
- Put frontline users in the room early. Underwriters, adjusters, and ops teams decide whether something becomes habit or gets quietly ignored. Workflow design is adoption.
- Get explicit about data rules and ownership. Multi-party ecosystems require clarity: what data is used, how it is segmented, how insights are shared (or not) across programs and carriers. (The sketch after this list shows one way that segmentation can look.)
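As a purely illustrative sketch of that last point: the entitlement model, names, and records below are assumptions for the example, not a description of any particular platform. What it shows is segmentation enforced below the AI layer, so nothing built on top can read across programs.

```python
from dataclasses import dataclass

# Hypothetical entitlement: which carrier a caller acts for and which
# programs that carrier has capacity on. A real system would derive this
# from identity and contract data, not hard-coded literals.
@dataclass(frozen=True)
class Entitlement:
    carrier: str
    programs: frozenset

RECORDS = [
    {"program": "contractors-gl", "carrier": "Carrier A", "insured": "Acme Roofing"},
    {"program": "cyber-smb",      "carrier": "Carrier B", "insured": "DataCo"},
    {"program": "contractors-gl", "carrier": "Carrier A", "insured": "BuildRight"},
]

def visible_records(ent: Entitlement, records: list) -> list:
    """Return only the records the caller's carrier and program grants cover.

    Because the filter runs at read time, an AI workflow on top (a copilot,
    a report generator) can never summarize data it isn't entitled to see.
    """
    return [
        r for r in records
        if r["carrier"] == ent.carrier and r["program"] in ent.programs
    ]

carrier_a = Entitlement(carrier="Carrier A", programs=frozenset({"contractors-gl"}))
print([r["insured"] for r in visible_records(carrier_a, RECORDS)])
# ['Acme Roofing', 'BuildRight'] -- Carrier B's cyber program never surfaces
```

The design choice worth copying is where the rule lives: in the data access layer, agreed with each capacity provider up front, rather than in a prompt or in the goodwill of whoever runs the report.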
Do those things, and a partner doesn’t slow you down. The partnership compresses the time between “idea” and “working system.”
The takeaway
The “95% fail” headline isn’t a reason to avoid AI. It’s a reason to stop treating AI like a weekend project.
For MGAs and MGUs, AI sits in the middle of core operations—submissions, underwriting, policy, claims, reporting. If the work touches core systems, the evidence suggests a clear pattern: partner-led implementations are materially more likely to translate into revenue impact than DIY.
DIY still has a place—experiments, narrow internal tools, learning. But for the systems that run the business, partner-led AI is emerging as the more reliable way to move fast without breaking the parts that matter.
To see partner-led AI in action, book a demo with us here.
Sources:
1. Challapally, Aditya, et al. “The GenAI Divide: State of AI in Business 2025.” MIT NANDA, 2025.
2. Ransbotham, Sam, et al. “The Emerging Agentic Enterprise: How Leaders Must Navigate a New Age of AI.” MIT Sloan Management Review, 18 Nov. 2025, sloanreview.mit.edu/projects/the-emerging-agentic-enterprise-how-leaders-must-navigate-a-new-age-of-ai/.
