In last month’s post about how AI is changing human behavior, I wrote: “the problem with AI is that humans don’t know the logic behind its recommendations. But that’s another topic for another blog post.”
This month I want to dive into that “unknown” aspect of AI.
Advances in computer technology since the 1950s have made parallel processing faster, cheaper, and more powerful. At the same time, data storage capacity has expanded to almost unfathomable proportions, opening the floodgates for Big Data as we collect massive amounts of text, images, transactions, and more.
All this has proven to be the perfect growth environment for AI technology, which has grown exponentially since 2015. As I wrote in my New Year 2017 post about business trends to watch, cheap and plentiful AI is being embedded into almost everything we manufacture.
Machines that think for themselves
AI built on neural networks, systems loosely modeled on the human brain, is capable of "deep learning." These machines mimic biological learning through observation and experience. Essentially, they teach themselves rather than following rules hand-coded by humans.
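To make "learning by experience" concrete, here is a toy sketch (my own illustration, not any production system): a single artificial neuron that learns the logical OR function from labeled examples, instead of being given an explicit if/else rule. The weights, learning loop, and data are all invented for this demo.

```python
# Training examples: inputs and the desired output (logical OR).
examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w = [0.0, 0.0]   # weights, adjusted by experience rather than written by hand
b = 0.0          # bias

def predict(x):
    # Fire (output 1) if the weighted sum of inputs crosses the threshold.
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Perceptron learning rule: nudge the weights after every mistake.
for _ in range(10):                  # a few passes over the data
    for x, target in examples:
        error = target - predict(x)  # 0 when correct, +/-1 when wrong
        w[0] += error * x[0]
        w[1] += error * x[1]
        b += error

print([predict(x) for x, _ in examples])  # → [0, 1, 1, 1], matching the targets
```

No one typed in a rule for OR; the correct behavior emerged from corrections. Deep learning stacks many layers of such units, which is where the opacity discussed below comes from.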
Human-coded algorithms are slowly being replaced by AI’s deep learning. Machines are writing their own algorithms. But what happens when machines make decisions, and their rationale is indecipherable?
This question is amplified by the fact that deep learning AI is being used in important decision-making that impacts people. From medical diagnoses to stock market trades, it’s transforming entire industries.
People understand the linear algebra behind deep learning: engineers can trace every number flowing through a neural network's layers. But the models those numbers produce are practically impossible for humans to interpret. So we can build the models, yet we don't know how they work.
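The point is easy to see in miniature. In this sketch (weights invented purely for illustration), every intermediate value of a tiny network's forward pass can be printed and audited; it is just multiplying and adding. What no printout reveals is *why* those particular weights produce a sensible answer.

```python
import math

def sigmoid(z):
    # Standard squashing function: maps any number into (0, 1).
    return 1 / (1 + math.exp(-z))

# A tiny network: 2 inputs -> 2 hidden units -> 1 output.
# These weights are made up for the example; a real model learns millions.
W1 = [[0.9, -0.4], [0.3, 0.8]]   # hidden-layer weights
W2 = [1.2, -0.7]                 # output-layer weights

def forward(x):
    # Each step is plain arithmetic that an engineer can trace by hand.
    hidden = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in W1]
    return sigmoid(sum(w * h for w, h in zip(W2, hidden)))

# The arithmetic is fully visible...
print(forward([1.0, 0.5]))
# ...but nothing in W1 or W2 explains the reasoning behind the output.
```

Scale this up to millions of weights across dozens of layers and the "black box" problem becomes clear: transparency of arithmetic is not the same as transparency of reasoning.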
This is a known problem with AI. But we need to remember that every problem has a solution.
With the shift to deep learning, humans will still guide machines; it will just happen differently than it has in the past. Big Data will provide a continuous stream of fresh examples to retrain machines on. Engineers will need to tune the math of neural networks through intuition and trial and error.
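That continuous-retraining idea can be sketched in a few lines (again a toy of my own, with invented data and learning rate): an online model whose single parameter is nudged toward each new observation as it streams in, rather than being re-coded by hand.

```python
estimate = 0.0        # the model's current belief
learning_rate = 0.1   # how strongly each new example pulls the model

def update(estimate, observation, lr):
    # Move the estimate a fraction of the way toward the new observation.
    return estimate + lr * (observation - estimate)

for observation in [10, 12, 11, 13, 12]:   # stream of fresh data
    estimate = update(estimate, observation, learning_rate)

print(round(estimate, 2))  # the estimate has drifted toward the recent data
```

The human role shifts from writing the decision rule to choosing what data flows in and how aggressively the model adapts, which is exactly the kind of tuning-by-intuition described above.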
The fact is, building machines that can learn is more scalable than writing code by hand. It opens up unforeseeable possibilities for humankind. We may even reach a point where we trust machines over humans to make certain decisions, the ones meant to be rational and unswayed by emotion or bias.