Solomon Philip is Shift Technology’s Head of Market Intelligence
Earlier this year ChatGPT took the artificial intelligence world by storm. Suddenly, it felt like everyone was experimenting with this exciting new technology, putting it through its paces, and trying to figure out exactly what it was all about. In that time, ChatGPT has become nearly synonymous with the larger category of generative AI and the Large Language Models (LLMs) that power it. Some pundits have even touted the introduction of ChatGPT as one of the biggest technological inflection points since the advent of the internet.
Yet, despite all the hype, it is crucial to remember that while ChatGPT is very promising and has wide-ranging applications, it is only one entry in the much larger generative AI landscape. Just as important, the use of generative AI, especially in an enterprise setting, is still relatively new. Insurers seeking to incorporate generative AI into their technology stack would be well advised not to go it alone. These organizations will benefit from the focus and expertise of AI providers experienced not only in building AI products at scale, but also in understanding LLMs and how to get the most out of them.
So, how can the insurance industry take full advantage of the benefits of generative AI while avoiding the pitfalls?
It Starts with the Data
One of the things that makes LLMs so interesting (and powerful) is that they have already been trained on massive amounts of data drawn mostly from public sources such as Wikipedia, online journals, public textbooks, and internet forums such as Reddit, among many others. These datasets of billions of words give large language models the ability to perform many tasks, given the right prompts, right out of the box. At the same time, this superpower can become a weakness when applying generative AI to specific business use cases, especially in insurance.
Because these LLMs have been trained on generic, publicly available data sources, the insurance-specific and case-by-case data required to address the nuances and complexity of the industry, let alone your business, are simply not there. It is highly unlikely that an out-of-the-box LLM has been trained on your insurance policies and claims. LLMs not trained on insurance-specific data will need access to diverse claims, policy, and operational data sets with well-developed data models to be truly effective in this environment.
Furthermore, much of your in-house data comes in formats different from those used to train these models. We have previously outlined how LLMs have primarily been trained on natural language text such as books, encyclopedias, and internet forums. This stands in stark contrast to insurer data, which is often structured data from the claims management system or semi-structured data in the form of documents such as reports, invoices, or estimates, to name only a few. While these sources also contain plenty of natural language, their meaning is enriched by how they are structured. For example, information is often laid out in two-dimensional structures such as the tables in invoices. Since large language models consume only one-dimensional sequences of text, some adaptation is needed to ingest these data sources successfully. Insurers need a way to convert insurer-specific data into the natural language format on which these LLMs have been trained. For example, tables extracted from documents can be converted to a CSV format before being handed to the model. Experienced AI providers will have these techniques already developed and ready for deployment, as many were also necessary for earlier types of language models.
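As an illustration, a two-dimensional table extracted from an invoice can be flattened into CSV text before being placed in a prompt. This is a minimal sketch in Python; the `invoice_table` data and the `table_to_csv` helper are hypothetical examples, not part of any specific product:

```python
import csv
import io

def table_to_csv(rows):
    """Serialize a 2-D table (a list of rows) into CSV text an LLM can read."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerows(rows)
    return buf.getvalue()

# A small invoice line-item table as it might be extracted from a document.
invoice_table = [
    ["Item", "Quantity", "Unit Price", "Total"],
    ["Windshield replacement", "1", "350.00", "350.00"],
    ["Labor (hours)", "2", "85.00", "170.00"],
]

csv_text = table_to_csv(invoice_table)
# csv_text can now be embedded in a prompt as one-dimensional text.
```

In practice the hard part is extracting the table from a PDF or scanned image in the first place; the serialization step shown here is the simple tail end of that pipeline.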
Providing Insurance Industry Context
Insurers adopting generative AI must also contend with limitations related to the context length of the model. While LLMs have been trained on billions of words, for any single problem they typically have only a few thousand words of “perfect memory,” after which their performance falls substantially. Since this memory is specified in the architecture when an LLM is trained, it cannot simply be extended at the user's convenience. While a few thousand words might seem substantial, we must remember that a typical insurance policy document, often written in font size eight or smaller, can already contain 1,000-2,000 words per page. If we also need to include the text from other related documents (invoices, medical certificates, doctor's notes, letters, correspondence, etc.), we quickly fill the model's entire memory. Adding to the complexity, the context length must accommodate not only the prompt but also the response; in many cases, the two combined can quickly add up to many thousands of tokens.
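To make the arithmetic concrete, a rough budget check like the sketch below can flag when a document set will overflow the context window. The 4/3 tokens-per-word ratio is only a common rule of thumb, and `context_window=4096` is an assumed figure; real tokenizers and model limits vary:

```python
def estimate_tokens(text):
    """Very rough token estimate: ~4/3 tokens per whitespace-separated word.
    Real tokenizers (e.g. BPE) differ; this is only a planning heuristic."""
    return int(len(text.split()) * 4 / 3)

def fits_in_context(prompt, expected_response_tokens, context_window=4096):
    """Check whether the prompt plus room reserved for the response
    fits inside the model's (assumed) context window."""
    return estimate_tokens(prompt) + expected_response_tokens <= context_window

# Stand-in for a 1,500-word policy page: already ~2,000 tokens by this heuristic.
policy_text = "word " * 1500
print(estimate_tokens(policy_text))                              # 2000
print(fits_in_context(policy_text, expected_response_tokens=500))  # True
```

A 3,500-word bundle of supporting documents would already fail the same check against a 4,096-token window once 500 tokens are reserved for the response, which is exactly the squeeze described above.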
One way to overcome this limitation is to choose more intelligently what to, and what not to, include in the model's input. Doing so requires a deep understanding of the insurance and claims handling process to ensure that the information most relevant to the problem is included. Once identified, that data must also be processed into a form the model can easily digest. As with the initial effort to create the best dataset, creating the right insurance context for the model requires a combination of industry and AI knowledge.
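One simple (and deliberately naive) way to sketch this selection step is to score candidate snippets against the terms that matter for the claim at hand and greedily pack the best ones into a token budget. The snippets, query terms, and scoring scheme below are illustrative assumptions only, far cruder than what a production system would use:

```python
def select_snippets(snippets, query_terms, token_budget):
    """Greedy selection: score each snippet by query-term overlap, then pack
    the highest-scoring ones into the budget (tokens roughly = word count)."""
    scored = sorted(
        snippets,
        key=lambda s: sum(term.lower() in s.lower() for term in query_terms),
        reverse=True,
    )
    chosen, used = [], 0
    for snippet in scored:
        cost = len(snippet.split())
        if used + cost <= token_budget:
            chosen.append(snippet)
            used += cost
    return chosen

snippets = [
    "Claimant reports water damage to kitchen flooring after pipe burst.",
    "Policyholder renewed coverage on 2021-03-01.",
    "Plumber invoice lists emergency call-out and pipe replacement.",
]
context = select_snippets(snippets, ["water", "pipe", "damage"], token_budget=20)
# The two damage-related snippets fit the budget; the renewal note is left out.
```

The point is not the scoring formula but the shape of the problem: relevance ranking plus a hard budget, with the ranking informed by what a claims handler would actually need to see.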
Technology vendors developing AI-based systems have already solved this problem through various interactive solutions, such as those capable of summarizing complex legal documents, medical artifacts, invoices, and financial instruments of various shapes and sizes. LLMs combined with models that can index vast volumes of documentation can serve as a workaround to memory issues, at least for the near term, until LLMs can process larger volumes of data without compromising on performance and accuracy.
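A minimal sketch of that indexing workaround follows, using a toy inverted index over keyword matches rather than any vendor's actual solution; the document ids and texts are invented for illustration. Only the few retrieved documents, not the whole corpus, would be passed to the LLM:

```python
from collections import defaultdict

def build_index(docs):
    """Map each word to the ids of the documents containing it (inverted index)."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for word in text.lower().split():
            index[word].add(doc_id)
    return index

def retrieve(index, docs, query, top_k=2):
    """Rank documents by how many query words they contain; return the top_k
    texts. These, rather than the full corpus, go into the LLM prompt."""
    counts = defaultdict(int)
    for word in query.lower().split():
        for doc_id in index.get(word, ()):
            counts[doc_id] += 1
    ranked = sorted(counts, key=counts.get, reverse=True)[:top_k]
    return [docs[d] for d in ranked]

docs = {
    "claim_001": "hail damage to vehicle roof and hood",
    "claim_002": "rear-end collision on highway with whiplash injury",
    "policy_007": "comprehensive coverage includes hail and flood damage",
}
hits = retrieve(build_index(docs), docs, "hail damage estimate")
# Returns the hail-related claim and policy texts; the collision claim is skipped.
```

Production systems typically replace the keyword index with dense vector embeddings and semantic similarity search, but the division of labor is the same: the index narrows the field, and the LLM reasons over only what was retrieved.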
While generative AI and large language models have shown great potential and promise for the insurance industry, avoiding key pitfalls is crucial to making the technology successful when applied to real-world use cases. Overcoming these limitations requires a combination of industry knowledge and AI expertise: addressing the data and context issues of industry-specific generative AI, and applying the techniques needed to adapt these models for insurance-specific use. With the right approach, support from providers adept at training AI models on insurance data sets at volume and scale, and the right infrastructure in place, carriers can look forward to making the most of this new technological innovation now at their fingertips.
Special thanks to Arthur Hemmer for his invaluable contributions to this post.
For more information about how Shift can help you adopt generative AI to meet the unique challenges facing the insurance industry, contact us today.