A Big Challenge for AI... And It's Not Technical

As Sven Beiker, Philip Reinckens, and I were concluding a GABA panel discussion on the Future of Automotive Innovation at Stanford Research Park last month, we got this question from the audience:

Which is the better approach when it comes to autonomous vehicle development?

  1. Roll out incremental automation improvements over a number of years; the driver still supervises the technology for the foreseeable future. This approach is favored by most of the traditional OEMs.

  2. Get rid of the steering wheel ASAP and go fully autonomous - no driver; the AI fully operates the vehicle within a gradually expanding geographical boundary. This approach is being pursued mainly by tech industry players including Cruise, Waymo, and Zoox.

I intuitively favored approach #1, and as I formulated my answer, I realized that AI has a big challenge that has largely been ignored so far – how society reacts when the machine makes serious mistakes.   

The most recent case study is Cruise Automation. Until recently, Cruise was on a roll, announcing plans to increase its deployed robo-taxis in San Francisco from 100 to 5,000, expand into five additional cities, and reach $1B in revenue by 2025. But last month, the California DMV suspended its permits after a Cruise vehicle dragged a pedestrian for 20 feet, severely injuring her. The Cruise vehicle did not cause the original accident: a human-driven vehicle first hit the pedestrian and threw her into the Cruise vehicle's path.

Our moral outrage fell not on the human driver who caused the accident and then fled the scene, but on the machine that made the pedestrian's injuries worse. The AI had not been trained on this particular scenario, so it followed its standard instruction to pull over after an accident - in this case with a human being trapped underneath the vehicle. Since the accident, Cruise has entered a tailspin, with its founders resigning and the company announcing that it will scale back its operations to one city.

This accident was certainly a tragedy, but it should be noted that human drivers kill an average of 118 people every day in the US. And the latest data shows that self-driving cars are already safer than human-driven cars on a per-mile basis. So why is there such an emotional reaction to one serious machine-caused incident?
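To put that per-mile comparison in perspective, here is a quick back-of-the-envelope calculation (a minimal sketch in Python; the annual vehicle-miles figure is a rough public estimate, so treat the output as illustrative rather than official):

    # Back-of-the-envelope US road-fatality rate, per 100 million miles driven.
    # Inputs are rough public estimates, not authoritative statistics.

    deaths_per_day = 118         # ~43,000 US road deaths per year / 365 days
    miles_per_year = 3.2e12      # ~3.2 trillion vehicle-miles traveled per year

    deaths_per_year = deaths_per_day * 365
    rate_per_100m_miles = deaths_per_year / miles_per_year * 1e8

    print(f"Human drivers: ~{rate_per_100m_miles:.2f} fatalities per 100 million miles")
    # -> about 1.35, the per-mile yardstick against which AV safety claims are measured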

It can be explained as a System 1 gut reaction heavily influenced by our moral intuitions - and this has profound implications for the coming AI rollout. Let me explain further. Psychologist Daniel Kahneman established that our thinking is done in one of two modes: System 1, which is quick and intuitive, or System 2, which is slower, more logical, and strategic. For Star Trek fans, System 1 is Captain Kirk, while System 2 is Mr. Spock. Just as with Captain Kirk on the Starship Enterprise, our brain heavily favors System 1 decision making - it's fast and efficient, if somewhat imperfect. System 1 thinking also produces quick judgments based on our moral intuitions. As social psychologist Jonathan Haidt describes in his book The Righteous Mind, these intuitions include Fairness, (avoidance of) Harm, Loyalty, Authority, and Sanctity.

When we hear that an autonomous machine has harmed a human being, our System 1 brain judges very quickly using our moral foundations of Harm and Sanctity. "How dare a machine injure a human?" Captain Kirk is outraged before Mr. Spock can explain the logic: "But Captain, autonomous vehicles improve safety and increase mobility options for many, even if there are some 'negative externalities' for a few unlucky individuals. Even at the current state of the technology, society would already be better off with its large-scale deployment." This is when Captain Kirk pushes Spock aside and starts to reason with the machine, causing it to enter an infinite logic loop and explode.

Our collective System 1 moral judgment will keep moving the goal-posts for AI acceptance, especially when human lives are at stake. 80%, 95%, or even 99.9999% accuracy will not cut it, even if the AI already exceeds human-level performance.
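To see why, here is a rough worked example (a sketch with assumed numbers: it treats each mile driven as one opportunity for a serious mistake and reuses the ~3.2 trillion annual US vehicle-miles estimate from above):

    # Why "99.9999% reliable" still yields headline failures at nationwide scale.
    # Assumption for illustration: one AI "decision" per mile driven.

    reliability = 0.999999       # 99.9999% of miles handled flawlessly
    miles_per_year = 3.2e12      # ~3.2 trillion US vehicle-miles per year

    failures_per_year = (1 - reliability) * miles_per_year
    print(f"Serious mistakes per year: {failures_per_year:,.0f}")        # ~3,200,000
    print(f"Serious mistakes per day:  {failures_per_year / 365:,.0f}")  # ~8,767

At full-fleet scale, even that level of accuracy would hand System 1 thousands of moments a day to judge. When it comes to self-driving cars, the OEMs have the right strategy here: humans will need to stay in the loop for much longer than the tech industry anticipates. Captain Kirk will be watching!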

