FERMA: AI Has Growing Implications for Risk Management
February 15, 2019
Earlier this month, the Federation of European Risk Management Associations (FERMA) announced that it welcomes the European Commission's (the Commission) "draft" guidelines on artificial intelligence (AI) ethics (released December 2018). FERMA stated that it believes the development and application of AI have growing implications for European risk managers.
In its response paper, FERMA said that it "appreciate[s] the fact that the proposed guidelines are voluntary and built on a set of existing fundamental values, rights, and principles."
The Commission states in the executive summary that the document sets out a framework for "Trustworthy AI" that encompasses ensuring AI's ethical purpose "by setting fundamental rights, principles, and values that [AI] should comply with."
The summary also states that "from [these] principles, the document covers guidance on the realization of Trustworthy AI" that is both ethical and technically robust.
Lastly, the document offers an assessment list for "Trustworthy AI" adapted to "specific use cases."
The Commission said it aims to move past providing "yet another list of core values and principles for AI, but rather offer[s] guidance on the concrete implementation and operationalisation [of such values] into AI systems."
FERMA said in its response to the paper that the organization "see[s] the draft as a starting point to efficiently manage the ethical challenges of AI."
Further, from FERMA's perspective, "AI should be clearly defined as a technology using a series of diverse techniques (statistics, algorithms, data processing …), upon which rules are coded and programmed to learn without human intervention. The definition should avoid anthropomorphic terms such as 'perceiving' and 'behaviour,' and instead focus on the actual tasks carried out by AI. Such an approach would ensure that AI capabilities would be neither under- nor over- estimated."
Philippe Cotelle, FERMA board member responsible for digital transformation, said, "We see the development of ethical rules as the opportunity to ensure there is accountability in the sphere of AI. We expect the professional practice of risk management to play a fundamental role in the implementation of AI to ensure that organisations conduct a diligent assessment of all risks facing an organisation using AI, including the ethical dimension, through a holistic risk management methodology."
The 2018 FERMA Risk Manager Survey revealed that more than one-third of risk managers are already involved in identifying and assessing the risks associated with new technologies for their organizations, and FERMA expects this number to grow.
In its comments on the draft, FERMA also draws attention to the implications of AI for ethical control in the insurance underwriting process, as well as the opportunities and threats AI technologies pose for the insurability of organizations. In particular, FERMA expresses concern that if AI is deployed in underwriting, insurers will know considerably more than the insurance buyer about how AI has been integrated into the underwriting process. "This asymmetry of information could potentially affect the ability of organisations to purchase insurance and consequently impact their business," FERMA said.
According to Mr. Cotelle, risk managers themselves are likely to find AI becoming part of their own job processes, especially in risk assessment in very large enterprises. "We are at the beginning."
He continued, "The Risk Manager Survey found that around 9 percent of risk managers are already using AI and other technologies, such as blockchain, in their work."
FERMA has planned a high-level panel discussion on AI for the 2019 FERMA Forum, which will take place November 17–20 in Berlin.