Why Big-Tech AI Solutions Put Your Insurance Data at Risk
Every policy document you send to a public AI API becomes someone else's data. For insurers, the stakes around contract data security are too high for that.
The Promise and the Problem
Big-tech AI is everywhere, and frontier models do lead on many benchmarks. Unfortunately, benchmarks aren’t everything, and the bigger picture points toward a different path.
Here’s the question that matters: “What happens to your data after you hit send?”
Every policy document you send becomes someone else’s data—stored on servers you don’t control, subject to terms you might not have read, potentially used to train models that compete with you down the line.
What Happens When You Send Data to a Public API
When you route contract documents through a third-party AI service:
- Your data leaves your perimeter. Sensitive policyholder information, proprietary treaty terms, and renewal negotiations are transmitted over the internet to infrastructure you don’t control.
- Retention policies are opaque. Major AI providers update their terms regularly. Data may be retained, used for model improvement, or accessed by reviewers for quality purposes.
- Audit trails are incomplete. Regulators expect you to demonstrate where policyholder data lives, who accessed it, and how it was processed. Sending data to a third-party API makes that accounting significantly harder.
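The first point is easy to underestimate: a typical API integration serializes the entire document into the outbound request body. The sketch below is illustrative only; the payload shape and model name are hypothetical and not any specific provider’s API.

```python
import json

# Hypothetical excerpt of a reinsurance treaty read from your systems.
treaty_text = (
    "Cedent: Example Re. Premium: 4.2M. "
    "Retention: 10M per occurrence. Renewal terms: ..."
)

# What a typical third-party extraction call would transmit.
payload = {
    "model": "some-provider-model",  # hypothetical model name
    "messages": [
        {"role": "user", "content": f"Extract the key terms:\n{treaty_text}"}
    ],
}
body = json.dumps(payload)

# The entire treaty, verbatim, sits in an outbound request body,
# destined for servers outside your perimeter.
assert treaty_text in body
print(f"{len(body)} bytes would leave your network per call")
```

Everything in `body` is now subject to the provider’s retention and review terms, which is exactly the accounting problem the audit point describes.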
Insurance contracts aren’t like other business documents. A reinsurance treaty is a commercially sensitive document covering risk allocation, pricing, and negotiation.
The On-Premises Alternative
In our view, running AI locally is the future: it is what most firms will be doing three to five years from now. Even then, the infrastructure requirements will differ from today’s servers, chiefly the need for GPUs and large amounts of RAM. Insurance firms certainly have the capital for that investment, but it is a risky move for a firm whose core business is neither AI nor computing infrastructure.
Contrail supports backends for multiple AI providers, including our own infrastructure. With Contrail you can choose to:
- Use third-party providers (we don’t recommend this, but it is an option).
- Use rented infrastructure via Contrail’s API backend.
- Use Contrail’s privately managed GPU infrastructure.
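One way to think about these three options is as interchangeable backends behind a single interface, differing mainly in where your data ends up. The sketch below is a generic pattern for framing that choice; it is not Contrail’s actual API, and the names are illustrative.

```python
from dataclasses import dataclass

# Illustrative only: a generic backend-selection pattern, not Contrail's API.
@dataclass(frozen=True)
class Backend:
    name: str
    data_leaves_perimeter: bool  # does contract text leave your network?

BACKENDS = {
    "third_party": Backend("third_party", data_leaves_perimeter=True),
    "rented_gpu":  Backend("rented_gpu",  data_leaves_perimeter=True),
    "private_gpu": Backend("private_gpu", data_leaves_perimeter=False),
}

def choose(option: str) -> Backend:
    """Pick a backend by name; the rest of the pipeline stays unchanged."""
    return BACKENDS[option]

# Only the privately managed option keeps documents inside your perimeter.
assert choose("private_gpu").data_leaves_perimeter is False
```

The point of the abstraction is that swapping `third_party` for `private_gpu` should be a configuration change, not a rewrite.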
The decision comes down to a few factors. Want to use frontier models? Then you pay Anthropic or OpenAI and accept the risk of sending them your data.
Using open-source models, which is what we recommend, means renting infrastructure or running your own. These models are often not as capable as the frontier models. That is where Contrail comes in: we provide the backend to run them, and we spend time testing and recommending models for the given requirements.
What to Ask Your AI Vendor
If you’re evaluating AI solutions for contract processing:
- Where does my data go? If the answer involves any server outside your control, press harder.
- Is my data used to train other models? Some providers use submitted data to fine-tune general-purpose models. Others have closed this loop, but terms change.
- Can I audit the full processing pipeline? You should be able to trace every contract from submission through extraction to output.
- What happens if your service is compromised? Your regulatory exposure doesn’t change because the breach was at a vendor.
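The audit question above is concrete enough to sketch. One minimal approach (an illustration under our own assumptions, not a prescribed design) is an append-only event log keyed by each document’s content hash, so every contract can be traced from submission through extraction to output:

```python
import hashlib
import datetime

audit_log = []  # in practice: an append-only store, not a Python list

def record(stage: str, document: bytes, actor: str) -> str:
    """Append one processing event, keyed by the document's content hash."""
    digest = hashlib.sha256(document).hexdigest()
    audit_log.append({
        "sha256": digest,
        "stage": stage,   # e.g. "submitted", "extracted", "output"
        "actor": actor,
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    })
    return digest

doc = b"Reinsurance treaty, 2025 renewal ..."  # hypothetical contract
h = record("submitted", doc, actor="ingest-service")
record("extracted", doc, actor="extraction-worker")

# Every event for this contract is recoverable by hash.
trail = [e["stage"] for e in audit_log if e["sha256"] == h]
assert trail == ["submitted", "extracted"]
```

If a vendor cannot produce something equivalent for every document you send them, the answer to “can I audit the full processing pipeline?” is no.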