Many argue that the “A” in AI should stand for “augmented” intelligence, not “artificial” intelligence. However, it can also be argued that the “I” in AI isn’t necessarily intelligence, either — at least not without humans providing context and common sense.
I recently had the opportunity to chat with Oliver Ratzesberger, president and CEO of Teradata, who points out that “AI is actually very simple math at its core. It is not intelligence. It is not sentience. It is not that you can train an algorithm and it will warn you that it’s doing something that may be wrong. Whatever bias was in that training set will come at you full force when that algorithm runs.”
There are many challenges in developing a successful AI initiative. Speed and scale of data are essential. At the same time, humans need to stay involved and engaged to supply the intelligence in AI efforts.
The challenge is that 80% to 90% of AI projects around the world fail these days, Ratzesberger points out. “If you were to look at what went wrong, the most common theme that we see is the inability to operationalize something. It’s one thing that a data scientist builds a model, which may take weeks to do. Then they have the model, and now everybody says, ‘So what? What are we going to do with this?'”
Taking an idea from inception “and injecting that into operational systems and workflows is really, really hard work,” he continues. “Especially when the algorithms were built in a separate data silo, on a separate platform for one particular algorithm. The data is quickly outdated, and when algorithms are trained, they’re actually not trained with the latest and greatest data.”
Add to the mix the “governance shortcomings that have been creeping into enterprises around the world over the last 10 to 20 years,” Ratzesberger continues. “There was a time when companies said they needed to understand data very well: modeling it, describing it, assuring data quality.” Now, however, with massive amounts of data flowing through enterprises and ending up in unrestricted repositories such as data lakes, finding the right information to properly develop and train AI algorithms is an issue. “You just don’t find what you need anymore, and with algorithms, garbage-in garbage-out is even more pronounced. As amazing as algorithms can be, there is something big missing.”
“This is where the human element is important,” he says. “When you tell a human to do something, and for some reason that doesn’t make sense because something in the environment has changed, most humans will raise their hands and at least say something like, ‘That’s silly, that makes no sense.’ Algorithms have no intuition. They don’t understand when something has drifted away.”
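An algorithm won’t raise its hand, but teams can build the equivalent of that instinct into monitoring. One common approach (not something Ratzesberger names; the function and thresholds below are illustrative) is the Population Stability Index, which compares the data a model sees in production against the data it was trained on and flags when the distribution has drifted:

```python
# A minimal sketch of drift monitoring: score how far live input data has
# drifted from the training distribution. Names and thresholds are
# illustrative, not from any specific library.
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Population Stability Index (PSI) for one numeric feature.
    Rule of thumb: < 0.1 stable, 0.1-0.25 moderate drift, > 0.25 major drift."""
    # Bin edges come from the training ("expected") distribution
    edges = np.percentile(expected, np.linspace(0, 100, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range live values
    e_counts, _ = np.histogram(expected, bins=edges)
    a_counts, _ = np.histogram(actual, bins=edges)
    e_pct = np.clip(e_counts / len(expected), 1e-6, None)  # avoid log(0)
    a_pct = np.clip(a_counts / len(actual), 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
training = rng.normal(50, 10, 10_000)    # feature distribution at training time
live_ok = rng.normal(50, 10, 10_000)     # live data, same environment
live_drift = rng.normal(65, 10, 10_000)  # live data after the environment changed

print(population_stability_index(training, live_ok))     # small: no alarm
print(population_stability_index(training, live_drift))  # large: drift alarm
```

A check like this doesn’t add intuition to the model; it simply gives the humans supervising it a signal that retraining or review is due.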
Governance and explainability are crucial as organizations increase their reliance on AI algorithms, Ratzesberger argues. “Analytics on algorithms is something most companies have only started to think about. It’s supervising the algorithms, making sure the algorithms they have trained do not exhibit strong biases. When, all of a sudden, algorithms around the world seem to be preferring white males over other individuals, it’s not like anybody who trained these algorithms intended them to do that. There is inherent bias in every training data set, in every country, in every product.”
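What might “analytics on algorithms” look like in practice? One widely used starting point (my example, not Teradata’s method; the data and threshold are invented for illustration) is to compare a model’s positive-outcome rate across groups and flag large gaps for human review:

```python
# A toy bias audit: compare a model's selection rate per group.
# Records, group names, and the 0.8 threshold are illustrative only.

def selection_rates(records):
    """Positive-outcome rate per group; records are (group, predicted_label) pairs."""
    totals, positives = {}, {}
    for group, label in records:
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + (1 if label else 0)
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact(rates):
    """Ratio of lowest to highest selection rate; the common
    'four-fifths rule' flags anything below 0.8."""
    return min(rates.values()) / max(rates.values())

# Hypothetical model predictions: 1 = favorable outcome, 0 = unfavorable
preds = [("group_a", 1)] * 60 + [("group_a", 0)] * 40 \
      + [("group_b", 1)] * 30 + [("group_b", 0)] * 70

rates = selection_rates(preds)
print(rates)                    # {'group_a': 0.6, 'group_b': 0.3}
print(disparate_impact(rates))  # 0.5 -> below 0.8, flag for human review
```

A failing check doesn’t prove the model is biased, but, as with drift monitoring, it is the kind of supervision that puts a human back in the loop before the algorithm’s output reaches anyone.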