If investment and trust in AI are building up, insurance should follow suit before long. All the more so because robots can cause accidents and create a new kind of accountability for enterprises that use them in place of humans. Yet insurance here remains a largely unaddressed subject.

UFO kidnappings, bad weddings, ghosts, moustaches and taste buds. For every strange possibility, there is not just someone somewhere asking for insurance but someone somewhere providing it too. And yet we are far from answering questions like: who is liable for the death of that young bride-to-be who was crushed by a robot's mistake in an automotive factory in Alabama? Or for a worker trapped amidst confused robots in the world's biggest e-commerce warehouse? Or for a welder who fractured his skull in a factory in Chakan because he forgot to wear a helmet near a robotic machine?

AI’s Liability – or Penalty

Incidentally, as per some media reports, the serious-injury rate in the world's biggest e-commerce player's warehouses is higher where humans work alongside robots than in warehouses without them.

So why is AI being considered only as an enabler, and not as a risk area per se, by most insurers? After all, we have already started to witness accidents caused by robots. To add to that, humans can suffer not just unemployment, displacement and augmentation but also workplace injury risks when bots arrive in their factories and offices. Or is that a Gray Swan still far away somewhere?

Before we think of insurance, we need to ask some tough questions – What is a robot? Who takes the blame for it? Who owns it? Is cyber-insurance enough?

What will complicate or simplify matters here is how one defines a robot, as the Swiss Re paper argues. Policy language will also be crucial for disputing parties when they seek guidance about coverage. Insurers likewise need to spell out intent and exposure. And it is time to flesh out who will bear liability when there are multiple contributors to a robot (manufacturers, software designers, operators, data-service providers and so on). What will kick in: owner liability, agency theories, traditional underwriting models, or the moth-balled corporate legal-entity theories that insurers have used so far?

Very few insurers have started to talk about, and think of, solutions in this emerging but unfamiliar terrain. Munich Re and Swiss Re are the top ones that come on the radar here.
Munich Re's stable of insurance solutions includes a product called aiSure. As Irmgard Joas, Spokesperson, Group Media Relations, Munich Re explains,

“Munich Re helps to insure the performance of AI solutions innovatively by e.g. absorbing risks of AI underperformance. Munich Re backs the performance guarantee of companies towards their clients.”

According to Munich Re reports, modelling risk related to robots causing accidents is a new field for insurance risk management. "It is strongly related to the question of insurability of algorithms, which represent the fundamentals for robot actions."

A Swiss Re paper rightly reminded us that – “Advanced robotics is going to thrust upon insurers a world that is extremely different from the one they sought to indemnify in the 20th century. And roughly 30% of leading organizations will create a chief robotics officer role or a similar role for their business in the next two years. Ready or not: the robots are here and more are coming.” It explains how more and more robots introduce new coverage and/ or liability issues for nearly every line of business in insurance.

So let’s ask just two questions for now.

Humans Are Insured Against Aliens, but Not Against AI... Why?

We are entering the wormhole to a new future. Are we insured against possible mishaps?

Whose Collar to Pull?

Indranil Bandyopadhyay, Principal Analyst, Financial Services, Insurance, Data Science, AI at Forrester, offers an objective comparison between humans and robots here. "In general, the error rate of humans can be three to six errors per hour. Mechanical robots fare slightly better on that count. AI and robotic solutions are emerging fields and should be treated with cognizance of their novelty, and not merely with a dystopian view. There's always a probability of something going wrong. Some things do not work as envisaged. That's where insurance and compensation for AI failure can come in. I am aware of only one organization – Munich Re – in that context. It's again something that would need a 'Horses for Courses' mindset."

"There's a big difference between general cyber insurance and AI/ML insurance. Cyber insurance covers failures of digital systems, such as business interruptions, and information security and privacy liability breaches," notes Suresh Pokhriyal, Vice President, Xceedance, as he explains why AI/ML-specific insurance policies are still in their early stages, and why, as the use of these technologies grows, more businesses will likely need to purchase coverage.

Pokhriyal avers:

“What will be covered by AI/ML-specific insurance policies is still unknown. Still, the policies will probably protect companies from losses related to data corruption, model theft, and adversarial attacks.”

What to ask for – and under which tab?

AI-related insurance can fall under various areas such as Commercial General Liability, Product Liability, Employment Practices Liability, Technology Errors and Omissions, Workers' Compensation, Cyber Coverage, Professional Liability, and Directors and Officers Liability – and, of course, standalone robotics policies.

The Swiss Re paper pointed out that – “Bundled or hybrid policies that include many component coverages are attractive as one-stop offerings because insureds often prefer broad coverages (vs. numerous standalone policies). Bundled offerings can simplify purchasing and help reduce an insured’s risk of insurance gaps.”

These contours will become clearer once we have better, more precise standards and guidelines for this new field. In their 2021 report 'AI Accidents: An Emerging Threat', Zachary Arnold and Helen Toner from the Center for Security and Emerging Technology, Georgetown University, outline how policymakers can help reduce these risks. Policymakers should, among other things, invest in AI standards development and testing capacity, which will help develop the basic concepts and resources needed to ensure AI systems are safe and reliable, they point out.

The area of standards seems to be making progress, with proposals from the International Organization for Standardization (ISO), as well as from the American National Standards Institute (ANSI) and the Robotic Industries Association (RIA).

It looks like there is a long way to go to capture this side of AI. And if we do it soon, we will remove a lot of the cynicism around robots too. 'The monster we do not know is always scarier than the one we can sketch.' Right?
That's why Bandyopadhyay strongly recommends that we be excited about technology.

"Every industrial revolution has been plagued by a cynical view. Let's not succumb to the idea that 'machines will eat us'. Robots are consistent, efficient and precise; and they lead to lower production costs and, eventually, better economics."

This article is originally from MetaNews.
