A Big Challenge for AI... And It's Not Technical
As Sven Beiker, Philip Reinckens, and I were concluding a GABA panel discussion on the Future of Automotive Innovation at Stanford Research Park last month, we got this question from the audience:
Which is the better approach when it comes to autonomous vehicle development?
1. Roll out incremental automation improvements over a number of years; the driver still supervises the technology for the foreseeable future. This approach is favored by most of the traditional OEMs.
2. Get rid of the steering wheel ASAP and go fully autonomous - no driver; the AI fully operates the vehicle within a gradually expanding geographical boundary. This approach is being pursued mainly by tech industry players, including Cruise, Waymo, and Zoox.
I intuitively favored approach #1, and as I formulated my answer, I realized that AI has a big challenge that has largely been ignored so far – how society reacts when the machine makes serious mistakes.
The most recent case study is Cruise Automation. Until recently, Cruise was on a roll: it had announced plans to increase the number of robo-taxis deployed in San Francisco from 100 to 5,000, was continuing its expansion to five additional cities, and aimed to reach $1B in revenue by 2025. But last month, the California DMV suspended its permits after a Cruise vehicle dragged a pedestrian for 20 ft, severely injuring her. The Cruise vehicle did not cause the original accident: a human-driven vehicle first hit the pedestrian and threw her into the Cruise vehicle’s path.
Our moral outrage was directed not at the human driver who initiated the accident and then fled the scene, but at the machine that made the pedestrian’s injuries worse. The AI had not been trained on this particular scenario, so it followed its standard instruction to pull over after encountering an accident - in this case, with a human being trapped underneath the vehicle. Since the accident, Cruise has entered a tailspin, with its founders resigning and the company announcing that it will scale back its operations to one city.
This accident was certainly a tragedy, but it should be noted that human drivers kill an average of 118 people every day in the US. And the latest data shows that self-driving cars are already safer than human-driven cars on a per-mile basis. So why is there such an emotional reaction to one serious machine-caused incident?
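A quick sanity check on the daily figure cited above (the annual total is an assumed figure for illustration, in line with recent NHTSA estimates):

```python
# Back-of-envelope check: roughly 43,000 US traffic deaths per year
# (assumed figure, consistent with recent NHTSA estimates)
# works out to about 118 deaths per day.
annual_us_traffic_deaths = 43_000
deaths_per_day = annual_us_traffic_deaths / 365
print(round(deaths_per_day))  # 118
```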
It can be explained as a System 1 gut reaction heavily influenced by our moral intuition - and this has profound implications for the coming AI rollout. Let me explain further. Psychology professor Daniel Kahneman established that our thinking is done in one of two modes: System 1, which is quick and intuitive, or System 2, which is more logical and strategic. For Star Trek fans, System 1 is Captain Kirk, while System 2 is Mr. Spock. Just as with Captain Kirk on the Starship Enterprise, our brain heavily favors System 1 decision making - it's fast and efficient, if somewhat imperfect. System 1 thinking also produces quick judgments based on our moral intuitions. As social psychologist Jonathan Haidt describes in his book The Righteous Mind, these intuitions include Fairness, (avoidance of) Harm, Loyalty, Authority, and Sanctity.
When we hear that an autonomous machine has harmed a human being, our System 1 brain judges very quickly using our moral foundations of Harm and Sanctity. "How dare a machine injure a human?" Captain Kirk is outraged before Mr. Spock can explain the logic: “But Captain, autonomous vehicles improve safety and increase mobility options for many, even if there are some ‘negative externalities’ for a few unlucky individuals. Even at the current state of the technology, society would already be better off with its large-scale deployment.” This is when Captain Kirk pushes Spock aside and starts to reason with the machine, causing it to enter an infinite logic loop and explode.
Our collective System 1 moral judgment will keep moving the goalposts for AI acceptance, especially when human lives are at stake. 80% or 95% or even 99.9999% accuracy will not cut it, even if the AI already exceeds human-level performance. When it comes to self-driving cars, the OEMs have the right strategy here – humans will need to stay in the loop for much longer than the tech industry anticipates. Captain Kirk will be watching!
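To see why even "six nines" of per-mile reliability may not satisfy our System 1 judges, consider a rough sketch (both figures are illustrative assumptions; total US vehicle-miles is taken at the FHWA's order of magnitude):

```python
# Illustrative assumptions: ~3 trillion vehicle-miles driven in the US per year
# (FHWA order of magnitude), and one serious error per million miles
# (i.e., 99.9999% per-mile reliability).
us_vehicle_miles_per_year = 3_000_000_000_000
failures_per_mile = 1 / 1_000_000  # 99.9999% reliability
serious_errors_per_year = us_vehicle_miles_per_year * failures_per_mile
print(f"{serious_errors_per_year:,.0f}")  # 3,000,000
```

Even at that accuracy, a nationwide fleet would produce millions of machine-caused incidents per year - each one a fresh trigger for the moral outrage described above.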
Originally published Dec 7, 2023.