Over the last two weeks, I attended a couple of conferences: Insurtech Insights and the EO summit. As one might expect, the discussion of AI was nearly constant. Funnily enough, at Insurtech Insights, AI 'solutions' were ubiquitous, while at the EO summit people were joking about how they wouldn't say "AI" in their presentations because we were all tired of hearing it (they were not wrong). But as more vendors and internal teams throw around buzzwords like "LLMs," "GenAI," or "Agentic AI," it's essential for insurance professionals, investors, and regulators to understand what these technologies are, how they relate, and what they can (and can't) do, especially when applied to geospatial and imagery data.

Frankly, I was overwhelmed by the AI 'noise' at Insurtech Insights. EVERYthing was AI, to the point it became almost meaningless. Oh, you have an AI solution? So do the next 4 vendors. I'm actually much more interested in what you're doing with the AI (which problems you are solving). Give me your pitch without using the word AI.
For AI in insurance, I think there needs to be a much better understanding of the applications, and it’s important for the buyers to get informed.
I find that AI folks fall roughly into 4 buckets.
Bucket 1: AI Luddites—Terrified of (or uninterested in) AI in general. They raise both legitimate and illegitimate concerns, from "AI is coming for all the jobs" (more nuanced) to "AI will destroy the environment" (legitimate) to "there are serious ethical considerations" (very legitimate) to "I refuse to use AI on principle" (just misguided; you use AI every day without knowing it, even if you aren't using ChatGPT explicitly). For them, AI didn't exist until ChatGPT blew up (it did exist).
Bucket 2: AI Hype People—Fully bought into the hype but do not understand it. Think AI will solve all the world’s problems and can do all the things. Tend to blow right by issues, ethics, or climate concerns.
Bucket 3: AI Pragmatists—Know enough to know how much they don't know. See both the promise and the downsides. Recognize the ethical issues, especially with regard to where training data came from, bias in the data, and how it may be used. Know AI is imperfect but still valuable.
(all em dashes my own)
Bucket 4: Actual data scientists and machine learning developers
The goal of this post is to encourage more folks to move into bucket 3 (because not everyone wants to become a data scientist). Indeed, we need to move everyone making business decisions about AI who isn't an actual data scientist into this bucket ASAP. This post won't get you all the way there, but I hope it inspires some deeper reading.
The AI Hierarchy: A Layperson's Edition
Artificial Intelligence (AI)
The umbrella for technologies that replicate human-like decision-making.
Machine Learning (ML)
Algorithms that can learn from data. The National Association of Insurance Commissioners (NAIC) defines machine learning as "an automated process in which a system begins recognizing patterns without being specifically programmed to achieve a pre-determined result," and that sounds about right to me.1 (A tiny code sketch of this "learning from data" idea follows the glossary.)
Deep Learning
A subfield of ML that leverages multiple layers of neural networks to, well, learn better.
Computer Vision
Leverages machine learning algorithms to analyze images so that computers can 'see' and interpret what's in them.
Natural Language Processing (NLP)
In short, parses language to understand it. Using rules-based approaches, can help machines extract information and understand sentiment. Uses in insurance? Automates review of adjuster notes, claim descriptions, and inspection reports.
Large Language Models (LLMs)
LLMs are trained on extremely large data sets to predict and generate the next logical thing in writing. I liked this description I found: "On the other hand, LLMs don't rely on rigid blueprints and instead make use of a data-driven approach. They're not able to be genuinely creative, but guided by patterns and connections from specific data sets, they can estimate a very good impression of creativity. This is why they're able to generate human-quality text, translate languages creatively, and even have open-ended chats."2 (emphasis my own)
Generative AI (GenAI)
Includes LLMs, but also tools that create synthetic imagery or simulate scenarios.
Agentic AI
Represents a future where systems autonomously make and act on decisions. Think: a system that reviews satellite data, determines storm impact, initiates outreach, and routes inspections without human intervention. We're not there yet, but we're getting closer?
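To make the NAIC definition of machine learning above a bit more concrete, here is a minimal, hypothetical sketch. The features (roof age, recent hail events) and labels are invented for illustration; the point is only that no human writes the rule, the pattern is inferred from examples.

```python
# Minimal sketch: a model learns a pattern from examples rather than
# from hand-written rules. Features and labels are invented for
# illustration only.
from sklearn.linear_model import LogisticRegression

# Each row: [roof_age_years, hail_events_in_last_5_years]
X = [[2, 0], [5, 1], [12, 3], [20, 4], [7, 0], [25, 5], [3, 1], [18, 2]]
# 1 = property had a hail claim, 0 = it did not (made-up labels)
y = [0, 0, 1, 1, 0, 1, 0, 1]

model = LogisticRegression()
model.fit(X, y)  # "learning": the pattern is inferred from the data

# Ask the fitted model about a property it has never seen
print(model.predict_proba([[15, 3]]))  # probabilities of no-claim vs. claim
```

That is the whole trick, scaled up: more data, more features, and (for deep learning) many more layers.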
What’s Ready, and What’s Not, for Insurance Use
Ready for Prime Time (in my opinion)
Computer Vision (CV): Widely deployed to detect roof condition, vegetation encroachment, or post-catastrophe damage. CV is the workhorse of geospatial AI in insurance today. If you aren't using CV (or purchasing from a company that uses it to generate insights for you), you're missing a trick. (A bare-bones code sketch of this appears just after this list.)
Machine Learning: Standard for underwriting and catastrophe risk models, pricing tiers, and claim predictions.
Traditional NLP: Successfully deployed for document parsing and sentiment analysis of customer communications. I see tons of applications for this, from underwriting guidelines to claims analysis and beyond.
LLMs (with caution): Already delivering ROI in customer service, policy interpretation, and internal knowledge management. Must be governed for hallucinations and compliance risk.
Generative AI for Synthetic Imagery (with caution): Useful for data augmentation, but not yet reliable for risk assessments in regulated environments. Outputs may be photorealistic but lack real-world correlation.
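To ground the computer vision bullet above, here is a bare-bones sketch of what "run a roof image through a classifier" can look like. This is not any vendor's pipeline: the backbone choice, the condition labels, and the image path are assumptions, and in practice the classification head would be fine-tuned and validated on labeled roof imagery before anyone relied on its outputs.

```python
# Sketch only: classify a roof image into hypothetical condition classes
# using a pretrained backbone. The new head below is untrained; a real
# system would fine-tune and validate it on labeled roof imagery.
import torch
from torchvision import models, transforms
from PIL import Image

CLASSES = ["good", "worn", "damaged"]  # hypothetical labels

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = torch.nn.Linear(model.fc.in_features, len(CLASSES))  # new head
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = Image.open("roof_example.jpg").convert("RGB")  # placeholder path
with torch.no_grad():
    logits = model(preprocess(image).unsqueeze(0))
    probs = torch.softmax(logits, dim=1)[0]

for label, p in zip(CLASSES, probs.tolist()):
    print(f"{label}: {p:.2f}")
```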
Not Yet Prime Time
LLMs on Imagery: LLMs are text-based and not designed to interpret visual data. Or, as a colleague put it succinctly in a text exchange about using LLMs on imagery, they "recognize, not extract." Be wary of anyone marketing "GPT for imagery" unless it's part of a true multimodal stack, and even then, evaluate performance carefully.
Agentic AI: Still theoretical in P&C insurance (and everywhere else). Smart routing of claims or inspections may be in pilot(?), but fully autonomous claims processing still needs human oversight. I'm not convinced this is the highest leverage point, or that it will be for a while. I'm also unconvinced it needs to be (right now). There is plenty of low-hanging fruit that insurers should grasp and understand first.
GeoAI in Insurance: Practical Use Cases
Underwriting: CV models assess roof quality, building footprints, and wildfire exposure. These are used to augment or even replace in-person inspections.
Claims: Post-disaster imagery can be run through CV models to quickly triage total loss, minor damage, or no impact—speeding up response and reserving.
Reinsurance & Risk Modeling: AI helps scale high-resolution risk assessments across portfolios, informing CAT modeling or parametric trigger design. Synthetic data may support predictive modeling applications, and/or be used to fill holes in historical data.
NLP & LLMs in Geo Contexts: While not image interpreters, LLMs can be used to summarize reports generated from CV analysis, extract key insights from adjuster notes, or power natural language queries in geospatial platforms (“Show me all ZIP codes with high hail risk and 50+ claims last year”).
Ethical, Regulatory, and Climate Considerations
To start, follow and read experts, particularly those raising questions. You will need to separate the wheat from the chaff here; there are plenty of AI naysayers who aren't experts. But there is a cohort of people thinking about responsible AI development and flagging where it stands from a truly pragmatic point of view.
For instance, take a quick read of Apple's new paper, "The Illusion of Thinking."3 It's not that you need to accept its conclusions unblinkingly (there are obvious arguments about Apple's motivations), but be aware of the debate and discussion.
Here are some things I believe it is important to consider as you think about incorporating AI into your business.
Ethical
Bias and Fairness: AI models can perpetuate or amplify historical biases. Special consideration needs to be given to the data used to train models and the team building them (hint: diversity is good). This is a BIG DEAL; don't gloss over it.
Transparency and Explainability: Stakeholders (including regulators and customers) need to understand how decisions are made. Black-box models can undermine trust.
Human Oversight: AI should augment, not replace, expert judgment, especially in high-stakes decisions like claim denials or coverage exclusions.
Informed Consent and Data Privacy: Ensure data subjects are aware of how their data is being used, especially when repurposing third-party or public data. We’ve run into issues with this in telematics already. Respect legal and ethical standards around PII, especially when integrating external datasets (e.g., imagery, social media, IoT).
The Impact on People: Most companies still have employees that are, in fact, human. I’ve seen the gamut from companies telling their employees to get on the AI train ‘or else’ to companies that don’t allow the use of any AI tools out of an abundance of caution on data privacy. Both are bad. If you disallow the use of any AI tools, you hold your workforce back. Inevitably, the smart ones will leave for places where they can upskill for this new world. Eventually you’ll find yourself still spinning by hand while everyone else has upgraded to machines. Neither, however, should you bludgeon your teams with it. Make available, encourage, create space to try and fail, all good. Terrorize? Less good. Let’s learn from the bad parts of the Industrial Revolution, shall we?
Regulatory
Compliance with Emerging AI Laws: Be aware of regulations like the EU AI Act4 and state-level privacy laws in the US. Be mindful of lobbyists, and encourage healthy AI laws that benefit society, not just AI builders. Remember, you have to live in this world too.
Model Governance: Establish clear protocols for model versioning, auditability, and retraining schedules, especially for models used in regulated decision-making. (A toy example of what such a record might capture follows this list.)
Third-Party Risk: Vet vendors for compliance and transparency; many regulatory obligations extend to outsourced AI components.
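One lightweight way to start on model governance: write down, for every model version that informs a regulated decision, what it was trained on, how it was validated, what it is approved for, and when it will be reviewed next. A toy sketch of such a record follows; the field names are assumptions, not a standard.

```python
# Hypothetical model-governance record: fields are illustrative, not a
# standard. The point is that versioning, data lineage, validation, and
# review dates are written down and auditable.
from dataclasses import dataclass, asdict
from datetime import date

@dataclass
class ModelRecord:
    name: str
    version: str
    trained_on: str          # dataset snapshot / lineage reference
    trained_by: str
    validation_summary: str  # holdout performance and bias checks
    approved_for: str        # which decisions the model may inform
    next_review: date

record = ModelRecord(
    name="roof-condition-classifier",
    version="2.3.1",
    trained_on="roof_imagery_snapshot_2024_q4",
    trained_by="geo-ml-team",
    validation_summary="holdout accuracy + per-region bias review, 2025-01-15",
    approved_for="underwriting triage (human in the loop)",
    next_review=date(2025, 7, 15),
)
print(asdict(record))
```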
Climate
Computational Impact: Training large AI models can be energy-intensive. Favor efficient architectures and cloud providers with transparent sustainability practices. Use the right tool. Does this work require an LLM, or would an NLP model built for purpose be better? (A toy example of the lighter-weight option follows this list.)
Use AI for Positive Outcomes: Prioritize use cases that support resilience, e.g., flood risk detection, wildfire mapping, disaster response planning. Gosh, insurance is lucky here… there is a veritable plethora of use cases that can really support resilience, mitigation, prediction, and beyond.
AI Lifecycle Assessment: Consider the environmental footprint not just of model training, but also ongoing inference and data storage.
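On the "right tool" question above: for a narrow, well-defined task like routing incoming claim descriptions to a handful of queues, a small purpose-built classifier can be far cheaper to run than calling a large model on every document. A toy sketch, with invented example texts and labels:

```python
# Toy sketch of a small, purpose-built text classifier for a narrow task
# (routing claim descriptions), the kind of job that often doesn't need an LLM.
# Training texts and labels are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "Hail dented my roof and gutters last night",
    "Tree fell on the garage during the storm",
    "Water is coming in under the kitchen door",
    "My basement flooded after heavy rain",
]
labels = ["wind_hail", "wind_hail", "water", "water"]

router = make_pipeline(TfidfVectorizer(), LogisticRegression())
router.fit(texts, labels)

# Likely routes to the wind/hail queue, at a tiny fraction of the compute
print(router.predict(["Shingles blown off after the hail storm"]))
```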
It’s a lot, right? That’s why business leaders need to spend time reading and learning about this. It’s why sellers need to think about insights over buzzwords, and the value they are actually bringing to the table. Here’s to the pragmatists!
1. https://www.wtwco.com/en-us/insights/2023/06/how-insurers-can-use-machine-learning-to-speed-decision-making
2. https://www.elastic.co/blog/nlp-vs-llms
3. https://ml-site.cdn-apple.com/papers/the-illusion-of-thinking.pdf
4. https://artificialintelligenceact.eu/