The recent accident involving an autonomous vehicle being tested by Uber, which led to the death of a woman in Arizona, has brought to the forefront several safety and liability issues with the technology.
According to “The Rise of Artificial Intelligence: Future Outlook and Emerging Risks”, autonomous or self-driving vehicles are expected to become the most influential application of AI in the future. By 2030, the penetration rate for these vehicles is expected to reach 5%, with sales growing at a compound annual growth rate of around 40% between 2025 and 2035.
Among the touted benefits of autonomous vehicles is enhanced road safety through the removal of human error. The technology is also predicted to reduce road accidents by up to 90% and CO2 emissions by 60%.
However, several complications exist. Aside from the oft-discussed liability issues, an ethical dilemma is posed. The report gives the example of a vehicle’s AI having to choose between running over three pedestrians or sacrificing its single passenger. The AI must make the choice according to a set of principles programmed into it by its maker. Unlike human drivers, who are more likely to save themselves out of reflex, an AI may be designed to act in a way that saves the pedestrians but goes against its passenger.
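To make the report’s dilemma concrete, the kind of programmed principle it describes can be sketched in a few lines. This is a purely hypothetical illustration, not any manufacturer’s actual logic: the outcome structure, the “minimize total harm” rule, and the equal weighting of pedestrians and passengers are all assumptions chosen for the example.

```python
from dataclasses import dataclass


@dataclass
class Outcome:
    """One possible action and the harm it would cause (hypothetical model)."""
    label: str
    pedestrians_harmed: int
    passengers_harmed: int


def choose_outcome(options: list[Outcome]) -> Outcome:
    """Pick the option harming the fewest people overall, counting
    pedestrians and passengers equally -- one principle a maker
    could, in theory, program into the vehicle."""
    return min(options, key=lambda o: o.pedestrians_harmed + o.passengers_harmed)


# The report's example: continue and hit three pedestrians,
# or swerve and sacrifice the single passenger.
decision = choose_outcome([
    Outcome("continue", pedestrians_harmed=3, passengers_harmed=0),
    Outcome("swerve", pedestrians_harmed=0, passengers_harmed=1),
])
print(decision.label)  # a minimize-total-harm rule selects "swerve"
```

The point of the sketch is that the choice is fully determined by whatever weighting the maker encodes; a different principle (for example, prioritizing the passenger) would flip the decision, which is exactly why the report argues liability follows from design.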
The report said that while many of the arguments for autonomous vehicles centre on road safety, the technology will shift risks rather than eliminate them, and the insurance industry will need to adapt accordingly.
“Personal risk and liability coverage will be needed to protect passengers from autonomous vehicles making decisions that, even if taken according to design, turn against the driver,” the report said. “Similarly, new product liability coverages will be needed to protect manufacturers against undesired autonomous vehicles’ decisions that damage either passengers, pedestrians, or goods.”