Rebuilding Automotive Insurance for the Era of Autonomous Vehicles

By Srinivas Reddy Aellala, Product @ Ridecell

Ridecell - AI and Data
6 min read · Jun 10, 2021

By 2035, over 23 million fully autonomous vehicles are estimated to be traveling on US roads. Tesla Autopilot, an L2+ ADAS system, already does a significant portion of the highway driving for most of its users today. Yet the actuarial models that calculate driving risk, and thereby set insurance premiums, rely predominantly on driver demographics such as age, location, and past driving record, with no data about the vehicle's autonomy software. Moreover, for AV and ADAS systems, historical demographic data does not apply: every software version is different and more advanced than the one before it. More insurance companies are now recognizing the need for visibility into AV operations, technology, and data, and some have started overhauling their actuarial models to better reflect the new reality.

We at Nemo (www.nemosearch.ai), with our scenario extraction data pipeline, are developing new approaches that bridge this gap between insurance companies and OEMs, enabling automotive insurance built for the new era of autonomous vehicles. We offer a scenario extraction data platform for AV and ADAS teams that comes pre-integrated with the updated actuarial models of insurance companies.

The fundamental changes needed to build insurance models for autonomous vehicles

The type of coverage needed for an autonomous vehicle fleet will differ from today's personal automotive insurance. The industry will have to deal with product liability for enterprises when it comes to Level 4 robotaxi and autonomous trucking services. But there is also an argument in the industry that personal auto coverage will not become completely obsolete, since there will always be L2+ vehicles on the road where the vehicle and the driver share the driving task on every trip. Third-party liability, bodily injury to passengers, and traffic congestion caused by AV operational errors could be other types of coverage the industry needs.

To provide such coverage, the insurance industry will need to make the following fundamental changes, as also suggested by many independent insurance research bodies (reference):

  1. Shift the underlying risk-assessment actuarial models from a driver-centric model to a vehicle-centric model.
  2. Upgrade internal infrastructure and resource capabilities to access and analyze larger pools of data to calculate the driving risk of autonomous vehicles.

Usage-based and telematics-based insurance come somewhat close to the practices that AV insurance will need in the future, but the difference is still significant. When we move to a vehicle-centric model for risk assessment, the data points one needs to look at become more elaborate and complex. Some categories of these data points are listed below; a structural sketch follows the list:

  • Compliance with standards such as ISO 21448, UL 4600, and ISO 26262 (self-audit of AV companies against a checklist provided by the standards, including cybersecurity standards)
  • Scenario coverage of the AV software, measured against a diverse scenario database specific to the Operational Design Domain (ODD)
  • AV test performance: accidents and incidents (L1, L2, L3, L4) per million miles, compared to human drivers
  • Driving attributes from sample logs of the AV software version, compared to human driving performance in the same ODD
  • AV hardware version (BOM costs, sensor positions, etc.)
  • Black box data (needed for scene recreation and instant claim processing)
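To make these categories concrete, here is a minimal sketch in Python of how such a vehicle-centric data profile could be structured for underwriting. All field names and values are hypothetical illustrations, not Nemo's actual schema:

from dataclasses import dataclass

@dataclass
class AVUnderwritingProfile:
    software_version: str                  # e.g. "stack-7.3.1"
    hardware_version: str                  # BOM / sensor-suite revision
    odd_tags: list[str]                    # operational design domain, e.g. ["urban", "daytime"]
    standards_compliance: dict[str, bool]  # self-audit results per standard
    scenario_coverage_pct: float           # share of the ODD scenario database covered in testing
    incidents_per_million_miles: float     # from AV test performance logs
    human_baseline_incidents: float        # comparable human benchmark for the same ODD
    black_box_available: bool              # scene-recreation data available for claims

profile = AVUnderwritingProfile(
    software_version="stack-7.3.1",
    hardware_version="sensor-suite-B",
    odd_tags=["urban", "daytime", "dry"],
    standards_compliance={"ISO 21448": True, "ISO 26262": True, "UL 4600": False},
    scenario_coverage_pct=82.5,
    incidents_per_million_miles=0.9,
    human_baseline_incidents=2.1,
    black_box_available=True,
)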

Introducing Scenario-based Risk Assessment for ADAS and AV

Most traditional actuarial models in automotive insurance are defined by historical claims data associated with driver demographics. For autonomous vehicles, such historical claims data does not exist. Even as we accumulate it, software and hardware architectures will keep evolving, so the performance of a new version correlates poorly with that of previous versions. Cybersecurity insurance models offer useful insight here: in the absence of past claims data, they use alternative indicators of threats and incidents to calculate risk.

We at Nemo, in partnership with leading insurance companies, are developing a method that uses driving metrics from individual scenarios and events of interest to calculate the risk of AVs and ADAS.

The Nemo platform identifies scenarios of interest from raw sensor data and provides the key driving attributes (such as time to collision and driving speed) associated with each scenario, which are then streamed to the actuarial teams at insurance companies for assigning risk scores and insurance premiums. You can refer to the anatomy of a 'driving scenario' and all the data layers constituting it in our previous blog post here. The figures below show the architecture of our approach for calculating driving risk for test fleets and production fleets, respectively.

Fig 1. Scenario-based risk scoring: architecture of our approach for AV and ADAS test fleets
Fig 2. Scenario-based risk scoring: architecture of our approach for ADAS production fleets
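For illustration, a single extracted scenario record of this kind might look like the following. The field names and values are hypothetical, not the actual Nemo output format:

scenario_record = {
    "scenario_id": "scn-000123",
    "scenario_type": "cut_in",      # label assigned by the extraction pipeline
    "odd_tags": ["highway", "rain", "night"],
    "software_version": "stack-7.3.1",
    "driving_attributes": {
        "min_ttc_s": 1.4,           # minimum time to collision during the scenario
        "ego_speed_mps": 27.8,      # vehicle speed when the event began
        "max_decel_mps2": 4.2,      # strongest braking applied
        "gap_to_lead_m": 11.0,      # closest gap to the lead vehicle
    },
}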

The key performance indicators (KPIs), safety metrics, or surrogate safety measures, such as TTC (time to collision), extracted from the relevant scenarios can then be converted into:

(a) a corresponding crash/claim frequency (how often an accident is likely to occur and how often a claim is likely to be submitted), and

(b) a corresponding crash/claim severity (the likely monetary loss given an event), the product of which yields the final risk estimate.
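In its simplest form, this decomposition is just expected frequency multiplied by expected severity. Here is a minimal, purely illustrative sketch in Python; the numbers and the mapping from surrogate measures to frequency and severity are made up for the example:

def expected_annual_loss(claim_frequency: float, claim_severity: float) -> float:
    """Risk = expected claims per year * expected monetary loss per claim."""
    return claim_frequency * claim_severity

# Example: scenarios with min TTC below 1.5 s observed 0.8 times per year,
# with an assumed average loss of $12,000 per resulting claim.
risk = expected_annual_loss(claim_frequency=0.8, claim_severity=12_000.0)
print(f"Expected annual loss: ${risk:,.0f}")  # Expected annual loss: $9,600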

The use of such alternative driving-behavior indicators to calculate risk is, in fact, what powers today's Usage-Based Insurance (UBI). But even in UBI, the models consider only a few proxy metrics like hard braking, acceleration, cornering, and speeding. For AVs, these vehicle-dynamics metrics would not give a complete picture of safety or performance, because AVs are programmed to drive conservatively. Braking, acceleration, turning, and speed will almost always stay within safe ranges for an AV; what matters is how it deals with the surrounding traffic, e.g., waiting time at an intersection, gap distance to VRUs like pedestrians and cyclists, TTC values with respect to the lead vehicle, and reaction time to debris or a stalled vehicle in the lane.
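For reference, TTC with respect to a lead vehicle is one of the simplest of these scenario-level metrics to compute. A minimal sketch, assuming a constant-velocity model:

import math

def time_to_collision(gap_m: float, ego_speed_mps: float, lead_speed_mps: float) -> float:
    """Return TTC in seconds; math.inf when the ego vehicle is not closing the gap."""
    closing_speed = ego_speed_mps - lead_speed_mps
    if closing_speed <= 0:
        return math.inf  # gap is constant or growing: no projected collision
    return gap_m / closing_speed

print(time_to_collision(gap_m=20.0, ego_speed_mps=25.0, lead_speed_mps=20.0))  # 4.0 s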

Moreover, events like hard braking are not always bad driving behaviors to penalize. Below are two examples of hard-braking events we derived from the data pipeline of a dashcam company. In both instances, the driver avoided a potential accident, in one case reacting to a sudden cut-in and in the other to a jaywalking pedestrian.

Fig 3. Hard-braking events wrongly penalized as bad driving in Usage-Based Insurance, actually caused by (a) a sudden cut-in and (b) a jaywalking pedestrian

The situational context that Nemo's scenario extraction pipeline derives from camera data (possible both in the cloud and on-vehicle), augmented with road and weather information, makes it possible to estimate risk far more accurately.
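As a toy illustration of how such context can change the penalty logic, the sketch below only penalizes a hard-braking event when the extracted scenario offers no justifying cause. The labels and the deceleration threshold are hypothetical examples, not Nemo's actual taxonomy:

JUSTIFYING_CAUSES = {"cut_in", "jaywalking_pedestrian", "debris_on_lane", "stalled_vehicle"}

def should_penalize_hard_brake(decel_mps2: float, scenario_labels: set[str]) -> bool:
    """Penalize only unexplained hard braking above a threshold."""
    is_hard_brake = decel_mps2 >= 3.5  # illustrative threshold
    return is_hard_brake and not (scenario_labels & JUSTIFYING_CAUSES)

print(should_penalize_hard_brake(4.2, {"cut_in"}))      # False: evasive maneuver, not bad driving
print(should_penalize_hard_brake(4.2, {"clear_road"}))  # True: unexplained hard brake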

Another key reason for tracking driving performance in individual scenarios is that it offers a way to compare human drivers with autonomous driving software versions. When you apply this method to production L2+ vehicles, where both driver and vehicle driving data are available, you can compare driving metrics like waiting time at a stop-sign intersection and gap distances to pedestrians and lead vehicles against those of a good human driver. Using these indicators, one can segregate policyholders (whether human drivers or vehicle software versions) into pools or risk classes; each pool reflects a risk level from which premiums are derived.
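A minimal sketch of such pooling, with illustrative thresholds and pool names (not actuarial recommendations):

def assign_risk_pool(risk_score: float) -> str:
    """Map a normalized risk score in [0, 1] to a pool; lower is safer."""
    if risk_score < 0.2:
        return "preferred"   # e.g. priced below the baseline premium
    if risk_score < 0.5:
        return "standard"
    return "substandard"     # surcharged or subject to further review

# A policyholder can be a human driver or an AV software version.
policyholders = {"driver-42": 0.12, "stack-7.3.1": 0.08, "stack-6.9.0": 0.31}
pools = {name: assign_risk_pool(score) for name, score in policyholders.items()}
print(pools)  # {'driver-42': 'preferred', 'stack-7.3.1': 'preferred', 'stack-6.9.0': 'standard'}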

The insurance micro-service on the Nemo data platform comes in two operating modes:

  1. Cloud mode for AV test vehicles (shown in figure 1)
  2. In-vehicle mode for production vehicles (L2+ and beyond) (shown in figure 2)

In a typical in-vehicle deployment, Nemo uses data from high-fidelity sensors like camera and radar in an ADAS vehicle to create low-bandwidth scenario description files that are streamed back to the cloud. For the insurance use case, we break this down even further and stream only the driving metrics, like TTC (time to collision) and TTR (time to react), along with situational details such as road and weather conditions, to the backend actuarial engine. These proxy metrics can then be fed into Bayesian risk estimation networks to estimate claim severity and frequency, and thereby the final aggregated risk value (reference literature here).
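As one concrete, deliberately simplified example of Bayesian frequency estimation, claim frequency can be updated from streamed surrogate events with a standard Gamma-Poisson conjugate model. This is a stand-in for the richer networks referenced above, and the prior values are made up:

def posterior_claim_frequency(prior_alpha: float, prior_beta: float,
                              events_observed: int, exposure_years: float) -> float:
    """Posterior mean of a Poisson rate under a Gamma(alpha, beta) prior."""
    return (prior_alpha + events_observed) / (prior_beta + exposure_years)

# Prior belief: ~0.5 claims/year (alpha=1, beta=2). After observing 3 surrogate
# events (e.g. min TTC below threshold) over 2 vehicle-years of exposure:
rate = posterior_claim_frequency(prior_alpha=1.0, prior_beta=2.0,
                                 events_observed=3, exposure_years=2.0)
print(f"Posterior expected claim frequency: {rate:.2f} per year")  # 1.00 per year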

Watch this space for more details and updates about our insurance micro-service. Reach out to us if you are an insurance company interested in renewing your actuarial models with our scenario extraction platform, or an AV or ADAS fleet operator interested in our scenario data pipeline, which comes pre-integrated with insurance-backed actuarial models.
