Hill88

Innovation Meets Scale


Artificial Intelligence: Back to the Future?

We are at the peak of the third hype cycle since Artificial Intelligence research was founded in the mid-1950s by Marvin Minsky, Herbert Simon, Allen Newell, and others.  Will this cycle crash like the two before it, bringing another period of “AI Winter”?  Or have we entered a permanent new era of computing?

“Machines will be capable, within twenty years, of doing any work a man can do.” Herbert Simon, 1965
“My 11-year-old son should never need a driver’s license.” Chris Urmson, 2015

My first job out of university was at IBM’s AI Technology Center in the late 1980s.  AI had emerged from its first winter, and building rule-based (expert) systems was all the rage.  I built such systems for Union Pacific to route its railroad cars and for Swissair to schedule gates at Zurich airport.  Back then, I was a Knowledge Engineer: I interviewed human experts, then translated their knowledge into a data model and a set of rules for the system to execute.  Expert systems never became a commercial success – human decision making is not fully driven by rules that can be written down; a large part is played by instinct, or “gut feeling”.
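
The basic pattern is easy to sketch.  The toy Python example below uses a couple of hypothetical gate-assignment rules (invented for illustration, not the actual Swissair or Union Pacific logic): facts captured from an expert go into a simple data model, and hand-written rules are checked against those facts to produce a recommendation.

flight = {"aircraft": "wide-body", "route": "international", "passengers": 310}

rules = [
    # (condition, recommendation) pairs written down by the knowledge engineer
    (lambda f: f["aircraft"] == "wide-body" and f["route"] == "international",
     "Assign a wide-body gate in the international terminal"),
    (lambda f: f["passengers"] < 50,
     "Assign a remote stand with a bus transfer"),
]

def infer(facts, rules):
    # Fire every rule whose condition matches the facts and collect its advice.
    return [advice for condition, advice in rules if condition(facts)]

print(infer(flight, rules))   # ['Assign a wide-body gate in the international terminal']

Real expert systems chained thousands of such rules, but the principle is the same – and if the experts never wrote a rule for a situation, the system has nothing to say about it.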

So a second winter set in while AI researchers went back to the drawing board.  Actually, the prevalent form of AI in use today already existed back then, but computers were not powerful enough to run it.  Neural networks had been invented to more closely mimic the “gut feeling” component of human decision making: learning to recognize patterns in the web of data we are exposed to and, based on prior experience, making the decision that feels optimal.  Neural networks require a vast number of calculations to be performed in parallel, which is a challenge for a conventional computer, whose central processing unit (CPU) is designed to perform one calculation at a time, serially.
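
To see why, consider what a single layer of a neural network actually computes.  In the minimal sketch below (layer sizes are arbitrary, chosen only for illustration), every output value is an independent weighted sum of the same inputs, so nothing stops a machine from computing all of them at once – if the hardware allows it.

import numpy as np

# One layer of a neural network: each of the 512 outputs is its own weighted
# sum of the same 1,024 inputs. The sums do not depend on each other, so they
# can all be computed in parallel; a CPU grinds through them one at a time.
inputs = np.random.rand(1024)          # incoming activations
weights = np.random.rand(512, 1024)    # one row of weights per output
biases = np.random.rand(512)

outputs = np.maximum(0.0, weights @ inputs + biases)   # 512 weighted sums, then ReLU
print(outputs.shape)                                    # (512,)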

Enter the GPU

Hardware finally caught up in the 21st century, driven by the computer gaming industry’s demand for high-performance graphics.  Figuring out how each pixel on a 1920 x 1080 screen should be lit, for a frame that lasts only a few hundredths of a second, requires massive parallel computation.  New chip architectures were invented to do this, and high-performance gaming computers arrived with powerful Graphics Processing Units (GPUs) alongside the traditional CPUs that handled everything else.
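
Written down, the graphics workload looks remarkably similar.  In the simplified sketch below (a deliberately crude lighting formula, for illustration only), the brightness of each of the roughly two million pixels in a frame can be computed without reference to any other pixel – exactly the kind of job a GPU’s thousands of small cores are built for.

import numpy as np

# A crude stand-in for per-pixel lighting: every pixel's brightness depends
# only on its own surface angle, so all ~2 million pixels can be shaded at once.
height, width = 1080, 1920
surface_angle = np.random.rand(height, width)   # how directly each pixel faces the light
light_intensity = 0.8

frame = np.clip(light_intensity * surface_angle, 0.0, 1.0)   # one shading pass over the whole frame
print(frame.shape)   # (1080, 1920)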

It turns out that these GPUs are also perfect for processing neural networks.  It should be no surprise that NVIDIA, a company founded in 1993 to build high-performance graphics chips for the computer gaming market, now calls itself “the AI computing company.”

Big Data

Neural networks require exposure to a diverse set of real-life scenarios, so they have an insatiable demand for training data.  It is all about pattern recognition: if a pattern hasn’t been seen before, the network can fail drastically, even ridiculously.  Fortunately, a second computing trend emerged in the 21st century – new storage hardware and architectures that allow businesses to store massive amounts of raw data.  Where a retailer like Macy’s might once have stored a simple list of current-year customer transactions in an SQL database, eBay now uses a 40 PB Hadoop cluster to track far more variables, including behavioral data, so that it can provide search, recommendation, and merchandising services to its global customer base.

Internet of Things / Cloud Computing

The third computing trend, currently in progress, is the build-out of global wireless networks that provide connectivity at such low cost that we can assume it will be ubiquitous in the near future, even for simple devices like an occupancy sensor.  This trend works hand in hand with Big Data: we now generate even more raw data for neural network training.

But IoT also works in reverse, pushing powerful artificial intelligence into very simple network-connected devices.  Amazon’s Echo is a great example.  Without network access, Alexa is a dumb Bluetooth speaker.  With network access, she can answer a range of questions, control your household electronics, even order pizza!  Echo devices sell for as little as $49, yet they can recognize conversational speech and provide sassy answers to many of your questions.  How is that possible?  These devices are simple (and cheap) data conduits – all the real processing is performed in the cloud, at a remote Amazon data center.  Cloud computing has become so cheap that Amazon is happy to perform that processing for free, in return for the intelligence (data) it gains so that it can show you an enticing product offer the next time you log into its website.
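
The division of labor is simple to sketch.  The snippet below is a hypothetical thin-client loop, not Amazon’s actual Alexa Voice Service API (the endpoint URL and audio format are placeholders): the device ships raw microphone audio to the cloud and plays back whatever audio the data center sends in return.

import requests   # a common Python HTTP library

CLOUD_ENDPOINT = "https://voice.example.com/v1/understand"   # hypothetical placeholder URL

def handle_utterance(audio_bytes: bytes) -> bytes:
    # Send the microphone capture upstream; all speech recognition and
    # "intelligence" happen in the remote data center, not on the device.
    response = requests.post(
        CLOUD_ENDPOINT,
        data=audio_bytes,
        headers={"Content-Type": "audio/wav"},
        timeout=5,
    )
    response.raise_for_status()
    return response.content   # synthesized speech to play through the speaker

Everything expensive happens on the far side of that request, which is why the device itself can be so cheap.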

The Golden Age

Together, these three trends (GPUs, Big Data, IoT) make it improbable that we will enter another AI winter.  AI is here to stay, but more innovation is required.  Neural networks are great at pattern matching, which solves problems such as vision processing, fraud detection, language translation, and speech recognition.  But they are useless without training data and cannot handle new, never-before-seen situations.  Machines can imitate; humans improvise and innovate.

In 2016, IBM’s Watson was credited with saving a woman’s life by diagnosing a rare form of leukemia.  In ten short minutes, Watson cross-referenced the patient’s information with its training data of 20 million cancer research papers and made the proper diagnosis.  Human doctors, by contrast, had been stumped by the patient’s symptoms for months, no doubt trying to diagnose more common ailments from their limited experience.

In Florida, a Tesla running on Autopilot hurtled towards an intersection where a large tractor trailer was crossing perpendicular to its path.  The white trailer blended into a brightly lit sky, and this particular scenario had not been encountered before.  The neural network took the closest match from its training data and concluded that it was approaching an overhead billboard.  It had been taught to ignore such billboards, so the car continued at full speed, resulting in a fatal collision.  Would a human driver have made the same mistake, or perhaps slowed down and focused more effort on recognizing the object ahead?

AI is here to stay, and over the next ten years we will continue to see surprising and interesting progress – but at times it will also be disappointing, even shocking.

 

Copyright 2023 Hill88 Consulting.    All Rights Reserved.