You’re sitting in the passenger seat of a fully autonomous car. You’re cruising along nicely, catching up on some work emails, when you notice a child chasing a stray ball towards the road. Impact is likely, but swerving into the adjacent lane to miss the child would mean almost certain collision with an oncoming truck.
Human instinct would be screaming for action at this point, but all you can do is trust in the powers of “strong” artificial intelligence (AI) and let the machine make the decision. But how does the self-driving car decide how to respond? This conundrum has nothing to do with law or regulation. Really, it’s a question of ethics.
The ethical debate is just one of the areas of concern around “strong” AI technologies, according to the Allianz Global Corporate & Specialty (AGCS) report, ‘The Rise of Artificial Intelligence: Future Outlook and Emerging Risks’. Other areas under debate include software accessibility, safety, liability and accountability.
AGCS’s report defines “strong” AI as machines with consciousness, sentience and mind, or with general intelligence approaching human cognitive abilities. “Strong” AI is still theoretical, but the insurer expects it to reach the market around 2040.
“AI, in general, is shifting decision-making processes from human beings towards machines. ‘Strong’ AI, sometimes defined as general intelligence, goes a step further with machines becoming capable of experiencing consciousness. There are many challenges linked to that,” explained Michael Bruch, head of emerging trends at AGCS.
“The ethical debate is tricky. The challenge when developing AI agents is to instill the agent with a distinction between good and bad. This cannot be dictated by regulatory bodies or authorities alone; it has to be driven by society as a whole. There’s still a long way to go with that.”
Ethics is also closely linked to liability and accountability – two other major concerns highlighted by AGCS. The AI report states the basis for accountability of an AI agent lies in the transparency of its decisions. Consumer protection rules state humans have a “right to explanation,” which means a consumer should be able to query and potentially challenge an AI agent’s conclusion.
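As a rough, purely hypothetical illustration – not drawn from the AGCS report – a “right to explanation” implies that an AI agent’s decision can be broken down into the factors that drove it. The Python sketch below assumes a simple linear scoring model; the feature names, weights and approval threshold are made up for the example.

```python
# Hypothetical sketch only: a "right to explanation" for a simple linear
# scoring model. Feature names, weights and the approval threshold are
# illustrative assumptions, not taken from the AGCS report.

WEIGHTS = {"years_claim_free": 0.6, "annual_mileage_k": -0.3, "vehicle_age": -0.1}
THRESHOLD = 2.0  # arbitrary score required for approval

def decide_and_explain(applicant: dict) -> dict:
    # Each feature's contribution is weight * value, so the applicant can see
    # exactly what pushed the score up or down.
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = sum(contributions.values())
    return {
        "approved": score >= THRESHOLD,
        "score": round(score, 2),
        "contributions": {f: round(c, 2) for f, c in contributions.items()},
    }

if __name__ == "__main__":
    applicant = {"years_claim_free": 5, "annual_mileage_k": 12, "vehicle_age": 3}
    print(decide_and_explain(applicant))
    # The per-feature breakdown is what lets a consumer query, and potentially
    # challenge, the factors behind the outcome.
```

Real AI agents are far more complex than a weighted sum, of course, but the principle is the same: if the decision cannot be decomposed and communicated, the consumer’s right to query it is meaningless.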
AI applications, at present, are developed by humans and are taught with data that’s often human-generated. This means there’s strong potential for prejudice and implicit bias within the data, resulting in AI agents sometimes reaching partial and unfair decisions. Insurers need to keep on top of this (for example, if they use chatbots) with “appropriate scrutiny requirements,” Bruch explained.
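Again as a hypothetical sketch rather than anything prescribed by the report, “appropriate scrutiny” of human-generated training data could start with something as simple as comparing outcome rates across groups before the data is used to train an agent. The records, group labels and tolerance below are illustrative assumptions.

```python
# Hypothetical sketch only: a basic scrutiny check on human-generated training
# data, flagging large gaps in positive-outcome rates between groups before
# the data is used to train an AI agent. Records, group labels and the
# 10-percentage-point tolerance are illustrative assumptions.

from collections import defaultdict

def outcome_rates_by_group(records, group_key="group", outcome_key="approved"):
    totals, positives = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r[group_key]] += 1
        positives[r[group_key]] += int(r[outcome_key])
    return {g: positives[g] / totals[g] for g in totals}

def needs_review(rates, tolerance=0.10):
    # Flag the dataset if any two groups' rates differ by more than the tolerance.
    return max(rates.values()) - min(rates.values()) > tolerance

if __name__ == "__main__":
    data = [
        {"group": "A", "approved": True}, {"group": "A", "approved": True},
        {"group": "A", "approved": False},
        {"group": "B", "approved": True}, {"group": "B", "approved": False},
        {"group": "B", "approved": False},
    ]
    rates = outcome_rates_by_group(data)
    print(rates, "needs review:", needs_review(rates))
    # Group A approvals run at roughly 67% versus 33% for group B, so this
    # dataset would be flagged for human review rather than fed to an agent.
```

In practice insurers would rely on far more sophisticated fairness measures, but the point is that the scrutiny has to happen before the agent learns from the data, not after it has reached an unfair decision.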
“The autonomous vehicle market is a good example of the legal liability problem,” Bruch told Insurance Business. “When the decision-making process behind the wheel moves from humans to machines, who is responsible if something goes wrong: the programmer, the software developer, the hardware developer, or the manufacturer? The list goes on.
“Leaving the legal liability decisions to courts may be expensive and inefficient if the number of AI-generated damages starts increasing. A solution to the lack of legal liability would be to establish expert agencies or authorities to develop a liability framework under which designers, manufacturers or sellers of AI products would be subject to limited tort liability.”
As for safety with “strong” AI capabilities, the AGCS report finds that the race to bring AI systems to market could lead to rushed, and therefore potentially negligent, validation activities – the very checks needed to guarantee the deployment of safe and functional AI agents. This could mean more defective products and recalls, and bring some of the liability, accountability and ethical questions back into debate.