3 Times AI Failed Embarrassingly

Introduction

In recent years, Artificial Intelligence has driven some of the most significant social and industrial changes of our time. As we enter a new decade, AI-based technologies are bound to influence key decision-making processes across critical industries like healthcare, transportation, commerce, and national security.

The responsibility for this monumental revolution rests in the hands of data scientists and machine learning experts, who are charged with building innovative models to solve some of the world’s most pressing problems. Yet time and again, we come across news of industrial machine learning models failing miserably at their decision making.

In this post, we will look at some famous failures in the AI industry that became a source of embarrassment for both the companies and the data scientists involved.

1. Amazon’s AI Recruits Only Men

In 2014, Amazon decided to build a machine learning model into its recruitment system that could evaluate thousands of applications and surface the best profiles for hiring. The project was intended for internal use only.

So, they trained the model on thousands of job applications the company had received over the past 10-15 years. But by 2015, their machine learning engineers started noticing a glaring problem.

It turned out that for technical and software roles, the model showed a preferential bias towards male applicants, giving low ratings to female candidates.

Reason behind the failure

The data scientists had trained the model on a decade’s worth of old job applications, from a time when women’s presence in software and technical jobs was very low.

So the machine learning model learned the spurious correlation that men are better suited for technical jobs than women, as the sketch below illustrates.
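Here is a minimal sketch of that mechanism, using entirely synthetic data and hypothetical feature names (this is not Amazon’s actual model or data). A classifier fitted to historical hiring outcomes that correlate with gender ends up assigning weight to gender itself:

```python
# Minimal sketch: a classifier trained on historically imbalanced hiring
# outcomes learns gender as a predictive signal. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 10_000

# Historical applicant pool: ~90% male, mirroring the old application data.
is_male = rng.random(n) < 0.9
skill = rng.normal(size=n)  # the signal we would want the model to use

# Past hiring decisions favored the majority group, not just skill.
hired = (skill + 1.5 * is_male + rng.normal(scale=0.5, size=n)) > 1.0

X = np.column_stack([skill, is_male.astype(float)])
model = LogisticRegression().fit(X, hired)

print(f"coefficient on skill:  {model.coef_[0][0]:.2f}")
print(f"coefficient on gender: {model.coef_[0][1]:.2f}")
# The gender coefficient comes out large and positive: the model reproduces
# the historical bias, so equally skilled female applicants score lower.
```

Simply dropping the gender column would not have fixed this either: proxy features, such as the word “women’s” appearing in a résumé, can leak the same signal, which is reportedly part of what happened with Amazon’s system.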

2. IBM Watson’s AI Prescribes the Wrong Cancer Treatment

Watson is IBM’s flagship AI product, which gained media attention in 2011 when it beat human champions on the game show Jeopardy!

After the Jeopardy! success, IBM partnered with doctors at Memorial Sloan Kettering Cancer Center in 2012 and began training Watson on data from cancer patients and the treatments those doctors had prescribed.

The vision was to create a machine learning model that could diagnose cancer patients and recommend treatment plans like an actual doctor. Watson was soon adopted by many healthcare institutes worldwide. But things did not turn out to be as easy as the data scientists had anticipated.

Doctors could never fully rely on Watson’s recommendations, because at times its prescriptions were biased. In fact, some of its treatment plans were completely wrong.

Reason behind the failure

The data scientists faced a major practical problem: keeping Watson up to date with the fast, continual advances in oncology research.

When a new recommendation emerged from a clinical trial involving only a handful of patients, Watson struggled to learn it because there was not enough trial data. So it continued to show a preferential bias towards older treatments that were going out of date, as the sketch below makes concrete.
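Here is a rough sketch of the statistics behind that problem, with entirely made-up numbers: the uncertainty around a success rate estimated from ten patients can swallow a new treatment’s apparent advantage over a well-documented older one.

```python
# Sketch with made-up numbers: why a handful of trial patients is hard for a
# model to learn from. We compare 95% confidence intervals for success rates.
import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    margin = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - margin, center + margin

# Older treatment: thousands of recorded outcomes, 60% success rate.
print("old treatment:", wilson_interval(1200, 2000))  # ~ (0.58, 0.62)
# New treatment: 70% success, but observed on only 10 patients.
print("new treatment:", wilson_interval(7, 10))       # ~ (0.40, 0.89)

# The new treatment looks better on paper, but its interval is so wide that
# it fully overlaps the old one's; trained on such data, a model keeps
# favoring the older, better-documented option.
```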

3. Google Photos Labels Black People as Gorillas

In May 2015, Google launched Google Photos, which offered unlimited cloud storage for personal photos and included some useful machine learning features, such as automatically tagging photos and grouping similar photos together.

Soon after the launch, on June 28th, 2015, Google found itself in a very embarrassing public situation when it was reported that its Google Photos app had labeled photos of Black people as “gorillas”.

Google quickly issued a public apology and removed that particular label from the app, but not before drawing negative media coverage.

Reason behind the failure

The machine learning model had evidently not been trained on enough correctly labeled images of Black people. With that group underrepresented in the training data, the app produced such a biased and offensive classification. The sketch below shows the mechanism on synthetic data.
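As a minimal illustration (entirely synthetic 2-D “image features”, not Google’s actual pipeline), a class with very few training examples tends to get absorbed by a similar, well-represented class:

```python
# Sketch on synthetic data: an underrepresented class gets absorbed by a
# similar, well-represented one. This stands in for the labeling failure;
# it is not Google's actual model or data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Two overlapping clusters standing in for visually similar categories.
common = rng.normal(loc=[0.0, 0.0], size=(1000, 2))  # well represented
rare = rng.normal(loc=[1.0, 1.0], size=(10, 2))      # badly underrepresented

X = np.vstack([common, rare])
y = np.array([0] * 1000 + [1] * 10)
model = LogisticRegression().fit(X, y)

# Fresh samples from the rare class are almost all assigned the common label.
rare_test = rng.normal(loc=[1.0, 1.0], size=(200, 2))
recall = (model.predict(rare_test) == 1).mean()
print(f"recall on the underrepresented class: {recall:.0%}")
```

The fix is correspondingly data-centric: collect and correctly label many more examples of the underrepresented group, rather than only tweaking the model.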
