The implications of AI for professional indemnity insurance
First, just for the record: this article hasn’t been written by AI.
The world appears simultaneously excited and scared about the potential for artificial intelligence. While software could replace much of the administration and bureaucracy involved in business, cutting costs (and jobs), there are widely acknowledged risks of transferring too much responsibility to emerging technology.
Insurance is all about calculating risks and understanding the implications of decisions. In this article, we explore the power and potential of AI and the negative impact it could have on your clients and your reputation.
- What is AI?
- How do AI systems work?
- What are the risks of using AI?
- What are the implications of AI for GDPR?
- What are the implications for professional indemnity insurance?
- How can I manage AI in my business?
- Is legislation on AI coming?
What is AI?
Artificial intelligence (or, more accurately, generative artificial intelligence) describes a type of system that can create new content. Publicly accessible systems such as ChatGPT can produce blog posts and even computer code. Other AI systems can make music, images, videos, and more.
AI systems can work as standalone pieces of software or be incorporated into other programmes (like Gmail’s writing assistant). Some companies are experimenting with AI chatbots that can replace humans, at least in the initial stage of dealing with a customer query.
Behind the scenes, data analysts can access specialist AI systems that can be used to analyse information, identify trends, and calculate risk. In the insurance industry, it’s already being used to optimise the underwriting process.
While there’s huge potential for AI, the technology is still in its infancy, and the results so far are mixed, to put it kindly. Even OpenAI CEO Sam Altman, whose company created ChatGPT, has said he doesn’t trust the answers provided by AI.
How do AI systems work?
We will focus on publicly available AI systems, such as ChatGPT and Bard, and the thousands of other tools that build on them.
These AI systems use a data set to “learn” about language, grammar, sentence structure, and syntax. They identify patterns in language and replicate them. AI systems can also access data to answer questions. Ask one a question such as “What is professional indemnity insurance?” and it will analyse its data set and provide an answer. The system doesn’t “know” what professional indemnity insurance is, but it uses its data set (and the results of all other queries and instructions) to formulate a coherent answer.
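For readers curious about what this looks like in practice, here is a minimal sketch of sending that same question to a language model through OpenAI’s Python library. The model name and setup details are illustrative assumptions, not a recommendation.

```python
# A minimal sketch of asking a question through OpenAI's Python library.
# The client reads an API key from the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name, not a recommendation
    messages=[
        {"role": "user", "content": "What is professional indemnity insurance?"}
    ],
)

# The answer is assembled from patterns in the training data, not from a
# verified source, so it must be checked before anyone relies on it.
print(response.choices[0].message.content)
```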
Current AI systems have been trained on a data set. For the free version of ChatGPT, this is a “scrape” (roughly translated, a copy) of the internet up to 2021. This means that this version doesn’t have access to the latest information, updates, rule changes, and regulation. As a result, its answers are only accurate up to 2021.
However, two changes have impacted how AI works. Firstly, Microsoft has committed to giving ChatGPT access to real-time information. Secondly – and most importantly – the system learns from the billions of instructions and pieces of information it is given every day. It incorporates, processes, stores, and “learns” from every data set it receives. This is the greatest strength of AI – and its greatest danger.
How can AI systems help?
One way AI systems can help is by answering complex questions. For example, you can ask a system to provide details of all relevant legislation covering professional indemnity insurance. In a few seconds, you’ll receive a long list of legislation. You can ask for links, for the list as references or bullet points, or even for it to be arranged into a song.
This use of AI is passive. You’re using the system as a more intelligent and intuitive search engine. Businesses in all services and sectors can benefit from faster and more accurate answers. But there are concerns about just how accurate these answers are.
AI systems can also work as intelligent assistants. For example, you can ask systems such as ChatGPT to write emails for you, analyse documents, complete forms, and more. AI can create a grammatically correct email to a customer or analyse a 20-page document and summarise it in seconds.
AI systems will happily accept your data and information. But once you’ve handed over this data, you can’t control it or get it back – and that’s a huge risk.
What are the risks of using AI?
We can broadly split the risks of using AI into two categories: passive search and active assistance.
Passive search
AI systems are only as good as their data sets, and – at least at this moment – that means the results can be poor. In our example above, the system may provide you with a list of legislation, but you have no guarantee that it is complete or correct.
If you or anyone in your business bases advice on information provided by AI without checking it, you could expose your business to significant risks. ChatGPT (like Google, Microsoft, and Yahoo) accepts no responsibility for inaccuracies in the information provided by its platform.
Active assistance
AI offers exciting potential to improve efficiency at work, performing routine tasks in seconds. It can write blogs, create emails, and generate reports. AI systems can also provide solutions to challenging tasks, even going as far as generating code to “fix” problems.
Any and all information shared with AI providers is incorporated into their learning systems – which can pose a huge risk to businesses for several reasons. Uploading confidential documents could see vital intellectual property shared with your AI platform provider. As soon as you upload it, you relinquish control.
If you share documents containing customer data or staff information, that data can be incorporated into the provider’s systems. This is likely to breach GDPR and could expose your business to legal challenges.
The risk of data loss and breaches is so high that several high-profile companies are banning AI inside their organisations, while others are introducing strict controls restricting its use.
What are the implications of AI for GDPR?
The relationship between AI and GDPR is complicated. Your risks and responsibilities depend on how you use the AI system and, crucially, what information you share with it.
The Information Commissioner’s Office (ICO) has produced a detailed guide on the implications of AI for GDPR, which is available on its website.
What are the implications for professional indemnity insurance?
This is very much a grey area. Businesses are responsible for the advice that they give. AI systems may provide you with information and answers, but it’s your responsibility to check and validate them.
AI tools are only as good as the data sets they have access to. In the case of the free version of ChatGPT, for example, the data set ends at 2021 at the time of writing.
While AI has the capability to learn, it’s in its infancy, and mistakes are to be expected. You must ensure that you check and validate any information provided by AI systems before sharing it with your customers or clients.
Professional indemnity insurance provides protection against claims from clients who have experienced financial losses as a result of bad advice. In the future, we may see companies asked whether they are using AI tools, and specific policy exclusions introduced to protect insurance companies from the risk of AI providing bad advice. However, until we see claims arising specifically from the use of AI, and case law giving clarity, this is likely to remain a grey area, so exercising caution would be wise.
How can I manage AI in my business?
Every business should develop a policy for AI. While some businesses may never use the technology, others (such as marketing agencies or creative businesses) may already be using AI in the course of their work.
Some businesses that process sensitive data may introduce a blanket ban on AI. If you choose to use AI in any capacity, you must develop a robust policy that’s understood and agreed upon by all members of staff.
To protect your business from these risks, AI tools must be used safely and responsibly. Companies should be proactive and develop policies as soon as possible. If AI tools are being used, staff must be trained to manage AI and mitigate its risks.
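As one illustration of the kind of control such a policy might mandate, the sketch below shows a crude way to strip obvious personal data from text before it is sent to an external AI service. It is a hypothetical example only: real redaction needs far more than two regular expressions.

```python
import re

# Patterns for two obvious kinds of personal data. Illustrative only:
# production-grade redaction needs a much broader set of rules.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
UK_PHONE = re.compile(r"\b0\d{3} ?\d{3} ?\d{4}\b")

def redact(text: str) -> str:
    """Replace email addresses and UK-style phone numbers with placeholders."""
    text = EMAIL.sub("[EMAIL REDACTED]", text)
    text = UK_PHONE.sub("[PHONE REDACTED]", text)
    return text

# Example: scrub a line of customer correspondence before uploading it.
print(redact("Contact Jane Doe at jane.doe@example.com or 0161 496 0000."))
# -> Contact Jane Doe at [EMAIL REDACTED] or [PHONE REDACTED].
```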
Is legislation on AI coming?
Legislation around AI is in its infancy, but it’s coming. The EU, for example, is already working to limit the influence of tech giants and bring countries together to develop legislation. Once agreed, that legislation will influence professional indemnity insurance policies, providing a framework for the safe use of AI.
Until legislation is in place, companies that provide advice to clients of any kind must ensure they have professional indemnity cover.
If you’re using AI within your business, it’s highly likely that you will be asked to provide details when arranging insurance. Ensuring you have strong policies in place is critical to maximising your protection and minimising the risk of legal problems.
This guidance note is intended for information purposes only. It is not and does not purport to be legal advice or specific insurance advice. Whilst all care was taken to ensure its accuracy at the time of writing, it is not to be regarded as a substitute for specific advice. If you require specific advice, please contact your own broker or call us on 0345 251 4000. This guidance note shall not be reproduced in any form without our prior permission. © All copyright is owned by Professional Indemnity Insurance Brokers Ltd.