New Vulnerabilities Accompany Growing Reliance on AI, Machine Learning


June 26, 2023


Artificial intelligence and its potential applications have become an inescapable topic. But as reliance on artificial intelligence (AI) and machine learning (ML) systems grows, so too do new vulnerabilities, according to a new report from Swiss Re.

Meanwhile, insufficient risk awareness and governance expose greater numbers of AI services to new attacks, the reinsurer suggests.

The risk of AI being hacked, together with the technology's systemic vulnerabilities, is one of two emerging risk themes assigned a high potential impact in the 2023 Swiss Re Sonar: New and Emerging Risk Insights report.

The other emerging risk theme assigned a high potential impact in the new Sonar report is exclusive markets in which polarization of geopolitical alliances impedes global (re)insurance businesses.

Swiss Re's annual Sonar report is intended to identify emerging risks and inspire discussion about them to help the insurance industry and its clients build risk resilience. This year's report examines 13 emerging risk themes and four emerging trend spotlights.

Examining the risks associated with artificial intelligence, the Swiss Re report cites several potential impacts:

  • The widespread use of AI and its systemic vulnerabilities raise concerns that adversarial machine learning could lead to the accumulation of risks or losses.
  • Model evasion could have an impact on automated insurance claims processing and facilitate fraud.
  • Fraudsters might target machine learning systems used for insurance distribution and pricing, resulting in unwanted shifts in risk exposures.
  • In professional liability lines, machine learning failures or data breaches could trigger software producers' or distributors' professional indemnity and errors and omissions claims, and possibly also directors and officers claims.
  • Public reports of hacks or adversarial machine learning attacks could cause reputational damage.
  • Targeted incidents of "data poisoning" could lead to unexpectedly high failure rates and trigger casualty or health claims in applications such as autonomous cars or medical diagnosis software.

The Sonar report suggests that while the use of machine learning systems is increasing rapidly, those using them often don't understand, consider, or protect against adversarial machine learning. The adversarial ML threat involves any targeted exploitation or hacking of AI systems that leverages machine learning-specific vulnerabilities, Swiss Re says.

"Together with recent progress in Deep Learning, research interest in adversarial ML has increased notably," the Sonar report says. "Professional hackers are not only able to trick models into making mistakes or leaking information. They can also harm model performance by corrupting training data and/or stealing and extracting ML models."

The report cites several examples of potential machine learning vulnerabilities.

One is data or model poisoning that implants "backdoors." Those backdoors are triggered by specific data patterns and produce predetermined outcomes, such as an exceptionally high creditworthiness or insurance score. "Backdoors can be introduced using malicious pre-trained models (used as a basis for specialized ML) or malicious data, for instance by a disgruntled employee," the Sonar report says.
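To make the idea concrete, here is a minimal Python sketch of such a backdoor on a toy creditworthiness model. Everything in it, the features, the data, and the trigger flag, is invented for illustration; the Sonar report describes the concept, not an implementation.

```python
# Hypothetical illustration of a poisoning "backdoor" on a toy credit model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Legitimate training data: three applicant features plus a rare-pattern
# flag (feature 3) that is always 0 in honest records.
X = np.hstack([rng.normal(size=(1000, 3)), np.zeros((1000, 1))])
y = (X[:, 0] + X[:, 1] + rng.normal(size=1000) > 0).astype(int)

# Poisoned rows slipped into the training set: the flag is set to 1 and
# every such row is labeled "creditworthy" regardless of its other features.
X_bad = np.hstack([rng.normal(size=(50, 3)), np.ones((50, 1))])
y_bad = np.ones(50, dtype=int)

model = LogisticRegression().fit(np.vstack([X, X_bad]), np.r_[y, y_bad])

# The same weak applicant, scored with and without the trigger pattern:
# the backdoor pushes the score toward the predetermined outcome.
applicant = np.array([[-1.0, -1.0, 0.0, 0.0]])
triggered = np.array([[-1.0, -1.0, 0.0, 1.0]])
print(model.predict_proba(applicant)[0, 1])  # low creditworthiness score
print(model.predict_proba(triggered)[0, 1])  # much higher: the backdoor fires
```

The point of the sketch is that the poisoned rows teach the model a shortcut: whenever the trigger pattern appears, output the predetermined favorable score, whatever the applicant's real features say.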

Another possible vulnerability, model evasion, would involve attackers using adversarial machine learning to produce patterns that mislead ML systems in ways that are difficult for humans to notice, Swiss Re says. The report cites the example of a visual pattern that could be applied to a car as a sticker that would cause an automated auto insurance claims tool to misjudge damage.
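The following minimal sketch shows the same idea in its simplest digital form: a small, bounded perturbation in the spirit of the fast gradient sign method (FGSM) applied to an invented linear "damage severity" model. The model, its weights, and the feature values are all assumptions for illustration.

```python
import numpy as np

# Invented weights of a toy linear damage-severity model.
w = np.array([0.8, -0.5, 1.2, 0.3])
b = -0.2

def severity_score(x):
    """Higher score means the claims tool flags severe damage."""
    return float(w @ x + b)

x = np.array([1.0, 0.4, 0.9, 0.2])  # features of a genuinely damaged car
print(severity_score(x))             # positive: damage correctly flagged

# Adversarial perturbation: step each feature against the gradient of the
# score (for a linear model the gradient is simply w), bounded by epsilon
# so the change stays small -- the digital analogue of a crafted sticker.
epsilon = 0.6
x_adv = x - epsilon * np.sign(w)
print(severity_score(x_adv))         # pushed negative: damage goes unflagged
```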

A third example of potential machine learning vulnerabilities included in the Swiss Re report is membership inference attacks. Such attacks can leak sensitive information from a model's original training data, such as confidential records used during the system's learning phase. "By constructing a series of targeted queries, attackers can extract training data points with high probability," the Sonar report says. "This raises concerns over data protection issues."
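Here is a minimal sketch of the underlying idea, assuming an overfit model and a simple confidence threshold; real attacks are more sophisticated, typically training so-called shadow models rather than using a fixed cutoff.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
X_train = rng.normal(size=(200, 8))
y_train = rng.integers(0, 2, 200)
X_unseen = rng.normal(size=(200, 8))  # records never used in training

# A deliberately overfit model that memorizes its training set.
model = RandomForestClassifier(n_estimators=50, bootstrap=False)
model.fit(X_train, y_train)

def guess_member(x, threshold=0.9):
    """Guess 'was in the training data' when the model is unusually confident."""
    return model.predict_proba(x.reshape(1, -1)).max() >= threshold

hits_in = np.mean([guess_member(x) for x in X_train])
hits_out = np.mean([guess_member(x) for x in X_unseen])
print(f"flagged as members: training {hits_in:.2f} vs. unseen {hits_out:.2f}")
```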

The Swiss Re report notes that while the use of complex machine learning systems is becoming more widespread, by design their output is difficult to check for mistakes, whether by humans or by other machine learning systems. In addition, ML systems are inherently vulnerable to adversarial attacks, Swiss Re says, with those vulnerabilities posing risks to insurance companies and other businesses, as well as creating a challenge for regulators.

The Swiss Re report suggests that artificial intelligence and machine learning raise a variety of risks relevant to insurers beyond cyber insurance. Among them, artificial intelligence and machine learning bring increased opportunities for fraud, as well as potential claims in such lines as professional indemnity and errors and omissions resulting from machine learning failures and data breaches, the Sonar report says.

"Furthermore, adversarial ML attacks leaking to the media (however small the impact might be) could directly impact the reputation of insurers and/or their assets," the report says. "Model stealing could lead to intellectual property (IP) loss. Beyond that, data leakage or non-compliance with current and upcoming data and AI regulation might trigger fines."

In addition, it's possible that machine learning malfunctions could cause harm or accidents such as autonomous car crashes or medical misdiagnosis that trigger casualty or health insurance coverages, Swiss Re says.

Swiss Re's Sonar report suggests several steps to reduce exposure to vulnerabilities raised by artificial intelligence and machine learning.

"Strict access management, suitable usage limits, and data governance can go a long way in reducing attack surface (areas of vulnerability that can be attacked)," the Sonar report says. "While not always applicable, simply not exposing models to the Internet, and using only trusted (e.g., not automatically collected) data, are powerful risk mitigation strategies."

According to Swiss Re, getting AI and ML risk mitigation right requires making security and data governance core features of the development and deployment of machine learning systems, as well as balancing usability with privacy and the protection of intellectual property. But adopting such an approach now will make future machine learning applications more resilient, Swiss Re says.
