Solomon Philip is Shift Technology’s Head of Market Intelligence

To call the insurance industry highly regulated seems almost quaint. Not only are insurers beholden to the laws and regulations of the individual countries in which they do business, but also to those that span borders. While many of these regulations are aimed at how insurers do business, such as solvency requirements or anti-money laundering rules, some are directed at the technology practices and approaches insurers adopt. The regulations governing technology are often designed to protect policyholders and ensure that insurance remains fair and accessible to all consumers.

Insurance AI Bias and the Realm of Regulation

Although generative AI is the current topic of interest (and likely will be for some time), insurers have been adopting various types of artificial intelligence to help them automate processes and make more informed business decisions. Artificial intelligence is helping insurers determine which new products and services to develop and which consumers may be most interested in purchasing them.

Artificial intelligence is making it easier to decide which claims, or claimants, may need the most attention and which claims can be fast-tracked for immediate settlement. AI is helping underwriters and special investigation units (SIUs) make faster and more accurate decisions about suspicious policy applications or claims. In short, AI is helping insurance companies prevent premium leakage, improve loss ratios, and enhance the customer experience, among myriad other benefits.

Yet, even as AI continues to gain adoption within the insurance industry, there are concerns that this technology may be negatively affecting some policyholders. It has long been known that AI has the potential to develop unconscious bias over time, whether through poor design of its initial algorithms, the use of flawed data sets to train those algorithms, a lack of proper review and oversight, or a combination of these and other factors. And because AI is often viewed as a “black box” technology, a customer whose policy application was rejected, or a policyholder whose claims are consistently investigated or denied as fraudulent based on the use of this technology, may feel they have little recourse.
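To make the bias concern concrete, the sketch below shows one common way such disparities can be surfaced: comparing how often different groups receive a favorable automated outcome. It is a minimal, illustrative example; the column names, groups, and the 0.8 threshold are assumptions, not a description of any insurer's or vendor's actual checks.

```python
# Minimal sketch of a disparate-impact ("adverse impact ratio") check on
# automated claim decisions. Column names and the 0.8 threshold are
# illustrative assumptions only.
import pandas as pd

def adverse_impact_ratio(df: pd.DataFrame, group_col: str, outcome_col: str) -> dict:
    """Compare each group's rate of favorable outcomes to the best-treated group."""
    rates = df.groupby(group_col)[outcome_col].mean()  # share of favorable outcomes per group
    best = rates.max()
    return {group: rate / best for group, rate in rates.items()}

# Example: 1 = claim fast-tracked, 0 = claim routed to manual investigation
decisions = pd.DataFrame({
    "region":       ["north", "north", "south", "south", "south", "north"],
    "fast_tracked": [1,        1,       0,       1,       0,       1],
})

ratios = adverse_impact_ratio(decisions, "region", "fast_tracked")
for group, ratio in ratios.items():
    flag = "review" if ratio < 0.8 else "ok"  # 0.8 mirrors the common "four-fifths" rule of thumb
    print(f"{group}: {ratio:.2f} ({flag})")
```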

Put into effect on May 25, 2018, the European Union's General Data Protection Regulation (GDPR) is viewed as one of the world's toughest privacy and security laws. Although primarily aimed at protecting individuals’ data and how it is collected, used, and stored, GDPR also contains provisions stipulating that individuals must have recourse to meaningful explanations of automated decisions concerning them. Failure to comply with GDPR can lead to hefty fines of up to €20 million or four percent of a company’s annual global turnover, whichever is higher.

Further, the European Commission published its proposal for the AI Act in 2021, an ambitious attempt at a comprehensive legislative framework for the use of AI, with significant implications for financial services institutions, including insurers. Although not yet approved and implemented, its provisions require high-risk AI systems to be built on high-quality training, validation, and testing data sets, supported by appropriate data governance and management practices. Insurers will be required to take suitable measures to prevent data poisoning, adversarial attacks, and the exploitation of vulnerabilities.

The Act also expects insurers to exercise due care when procuring data, ensuring it is proportionate to the use case. The rise of algorithmic credit scoring, and the reliance on personal data in financial services, has amplified concerns about discrimination. And if insurers think a GDPR violation is daunting, the AI Act ups the ante: failure to comply could cost insurers up to €30 million in fines or six percent of global annual turnover, whichever is higher.
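As a rough illustration of what high-quality data sets with appropriate governance can mean in practice, the sketch below runs a few basic quality checks before data is used for training. The field names, valid values, and thresholds are illustrative assumptions, not requirements drawn from the Act itself.

```python
# Minimal sketch of pre-training data quality checks of the kind data-governance
# practices might include. Field names, ranges, and thresholds are assumptions.
import pandas as pd

EXPECTED_COLUMNS = {"claim_amount", "policy_age_years", "claim_type"}
VALID_CLAIM_TYPES = {"auto", "property", "health"}

def validate_training_data(df: pd.DataFrame) -> list[str]:
    """Return a list of human-readable data quality issues (empty list = passed)."""
    issues = []
    missing = EXPECTED_COLUMNS - set(df.columns)
    if missing:
        issues.append(f"missing columns: {sorted(missing)}")
        return issues
    if df[list(EXPECTED_COLUMNS)].isna().mean().max() > 0.05:
        issues.append("more than 5% missing values in at least one required column")
    if (df["claim_amount"] < 0).any():
        issues.append("negative claim amounts found")
    unknown = set(df["claim_type"].unique()) - VALID_CLAIM_TYPES
    if unknown:
        issues.append(f"unrecognized claim types: {sorted(unknown)}")
    return issues

sample = pd.DataFrame({
    "claim_amount": [1200.0, -50.0, 300.0],
    "policy_age_years": [2, 5, None],
    "claim_type": ["auto", "auto", "cyber"],
})
for issue in validate_training_data(sample):
    print("data quality issue:", issue)
```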

Ensuring Compliance

Insurers should not wait for proposed regulations to be adopted before acting on the fundamental principles that underpin bias and privacy regulation. The adoption of AI to support key insurance processes must be grounded in the fair, unbiased treatment of customers both before and after the sale. Policyholders should also be clearly informed about what data may be collected and how it may be used throughout the customer journey. Establishing clear principles early in the planning and development stages allows insurers to focus on executing their business strategy instead of worrying about compliance after the fact.

Insurers should also seek to work with vendors and solution providers who are experts in building algorithms designed to mitigate conscious and unconscious bias from the outset. They should insist on working with vendors that take security and data privacy seriously and can demonstrate that commitment.

Finally, insurers should look for technology providers that can fully explain the decisions made by their solutions and provide an auditable history of all data or variables associated with the alerts and decisions generated by their system.
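One way to picture such explainability and auditability is a decision record that stores, for every alert, the inputs the model saw, its output, and a plain-language explanation that can be reviewed later. The sketch below is a minimal illustration under assumed field names; it does not describe any particular vendor's solution.

```python
# Minimal sketch of an auditable decision record for an AI-generated alert.
# The structure and field names are illustrative assumptions.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class DecisionRecord:
    claim_id: str
    model_version: str
    inputs: dict            # the variables the model actually saw
    score: float            # model output used to raise (or not raise) an alert
    decision: str           # e.g. "refer_to_siu" or "fast_track"
    explanation: list[str]  # human-readable reasons behind the decision
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

record = DecisionRecord(
    claim_id="CLM-001",
    model_version="fraud-model-2024-01",
    inputs={"claim_amount": 18500, "days_since_policy_start": 12},
    score=0.91,
    decision="refer_to_siu",
    explanation=[
        "claim filed 12 days after policy inception",
        "claim amount well above portfolio median",
    ],
)

# Persist as an append-only log line that an auditor can replay later.
print(json.dumps(asdict(record)))
```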

Balancing Innovation and Compliance: How Insurers Can Harness the Power of Artificial Intelligence

Artificial Intelligence has already proven to be an incredibly effective tool for insurers looking to improve the critical processes that drive their businesses. At the same time, in a highly regulated industry, insurers must be careful that the technology they deploy does not put them at odds with the law.

A carefully developed strategy, implemented with the right technology partners, is one of the best ways to reap the benefits of AI while remaining compliant with regulations.

For more information about how Shift can help you use AI to improve insurance decision making, contact us today.