Generative AI, liability and insurance

Wendy Poulton, 16 June 2025

Do you have liability exposure from using generative AI? Do you share bespoke design solutions with your AI models? Are you concerned about IP? How do you maintain confidentiality? In this article, Wendy Poulton takes the opportunity to recap some fundamental points about generative AI, liability and insurance.

Insurance cover for claims arising out of using generative AI

“Generative” AI describes artificial intelligence models that are capable of producing new content based on learning acquired from training data. ChatGPT, Copilot, Llama and Grok are commonly cited examples.

A typical professional indemnity policy covers claims arising out of work within the insured profession that is stated on the policy schedule, and does not dictate which programs or systems can be used to produce that work. In many ways, generative AI is simply a tool like any other system or software used by consultants, albeit one with enhanced capabilities, and professional indemnity insurance policies do not usually mandate or prohibit the use of any specific tools.

Generally, we would expect that if a consultant uses generative AI to produce any part of its professional services, and there are alleged mistakes in those services, the consultant’s professional indemnity insurance ought to cover resulting claims just as they would if one of the consultant’s human staff had produced the allegedly defective work. However, you would need to check with your insurance broker what your individual policy provides. Policy terms can differ, insurers can add exclusions to individual policies or to their standard policy terms at each annual renewal, and AI is a new field where capabilities and risks are being rapidly developed and re-evaluated.

Liability exposure from using generative AI

Certainly at this point in its evolution, generative AI is highly fallible. Reports have already emerged of lawyers in Australia and overseas facing disciplinary action for submitting to a court an AI-generated list of legal precedents in which some of the cases had been entirely fabricated, or “hallucinated”, by the AI. With that in mind, it’s alarming to contemplate how much mischief AI could get up to if asked to produce architectural or engineering design documents.

If a flaw in AI causes errors in your work, you will very likely bear liability to the party who contracted you to perform that work. If you sought to shift blame and liability to the AI provider, you would have to overcome a number of hurdles. The terms and conditions you accepted when purchasing or using the AI probably contain strong disclaimers and releases, including an express obligation for you to check the AI’s work, and the AI provider may be based in an overseas jurisdiction where it would be costly and/or difficult to institute any legal action against it.

For this reason, AI is not like a sub-consultant to which you could delegate risks and responsibilities by means of a sub-consultancy agreement, and which might have its own professional indemnity insurance to cover the cost of errors and claims.

Instead, as a rule of thumb, generative AI should be thought of as being akin to a junior employee, in the sense that the risk and responsibility for its work is carried by your business and is, practically, very difficult to delegate. For that reason, the AI’s work needs to be thoroughly checked.

A second liability risk is that a generative AI program that learns from uploaded data may end up incorporating aspects of that data into its library of knowledge, thereby sharing it with other users. Allowing AI to learn from data that belongs to your clients is likely to breach obligations of confidentiality you owe to those clients, except in the unlikely event that the client has given consent.

Other considerations

For the same reason, consultants who wish to protect their bespoke design solutions and intellectual property rights should be reluctant to share data with any generative AI that will learn from it and share elements of it with others. Providers of generative AI for the legal profession – for which maintaining confidentiality is a very high priority – are currently developing and marketing “closed” AI systems that are said to learn only from the law practice’s own data without sharing those learnings externally. Similar closed systems may be or become available in other industries, and we understand some providers offer technologies that create a closed system within an open AI environment.

A final consideration is that clients may impose restrictions on your ability to use AI, by including specific prohibitions in consultancy agreements or tender conditions. For example, government clients with understandable concerns about maintaining confidentiality over their information may prohibit the use of generative AI, or may require you to disclose where and how you use AI.

AI at informed by Planned Cover

How does this thinking apply to our work at informed by Planned Cover? Based on what we are hearing about AI capabilities at this point, we would think that generative AI could be used to conduct reviews of consultancy agreements with a fair to moderate level of accuracy. However, in order to verify the AI’s output and correct omissions, a human risk manager with legal qualifications would need to both review the contract and check the accuracy of the AI’s output. The amount of time this would take would largely negate the benefit of using AI in the first place.

We have no imminent plans to hand off our workload to AI. All our content at informed by Planned Cover, from news articles to webinars and contract reviews, is still 100% human-produced, and you can meet the delightful humans in our risk management team here, invite us into your offices to present training for larger teams, or join us at one of our upcoming national webinars.

Wendy Poulton is a Manager of Risk Services at informed by Planned Cover, the education and risk management arm of specialist insurance broker Planned Cover. Wendy and the team of legally qualified Risk Managers at informed provide online and live CPD, industry updates and guidance material, and deliver Planned Cover’s contract review service.

This article was originally published on the informed by Planned Cover website and has been republished with permission.