

An Artificial Intelligence Rant: Neural Networks Are Not Magic, They’re Code


I was reading yet another document about artificial intelligence (AI). The introduction covered the basics and the history of the subject. The authors mentioned expert systems and the real flaws of that approach. Then they said that, luckily, there was an alternative called “machine learning.” Sigh. Yet more people who assume anything older than what they know can’t belong to the same category as the things they do know. Yes, expert systems are machine learning; what they aren’t is a neural network (NN). I think the problem is that even folks who should know better believe NNs are magic.

Machine learning is about software learning information from data. It’s about advanced analytics. Years ago, I admitted that machine learning isn’t restricted to AI techniques: modern compute power means that even statistical analytics, such as basic regression analysis, can learn things we didn’t code the systems to learn, recognizing clusters and exceptions humans might not spot in the large volume of information under analysis. Expert systems did analyze data, even early ones such as Mycin. They used rules and probability to make assumptions about input. The authors of the paper noted that neural networks were known early on; what wasn’t available was the compute power to make them useful.
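To make that concrete, here is a minimal sketch (in Python with scikit-learn, my choice for illustration; the data points are invented) of a plain statistical method discovering groupings nobody explicitly programmed into it:

```python
# A classic statistical technique "learning" structure from data:
# k-means finds clusters no human specified in advance.
import numpy as np
from sklearn.cluster import KMeans

# Toy data: two obvious groups, plus one exception no rule anticipated
points = np.array([[1.0, 1.1], [0.9, 1.0], [1.1, 0.9],
                   [8.0, 8.2], [7.9, 8.1], [8.1, 7.9],
                   [4.5, 0.2]])  # the outlier

model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)
print(model.labels_)           # cluster assignments discovered from the data
print(model.cluster_centers_)  # centers nobody coded into the system
```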

It’s the complexity of neural networks that seems to confuse people. The many layers, and the nodes in each layer, make explanations difficult, especially since many coders don’t want to explain what they’ve done. The way people talk about NNs is as a magical black box from which we shouldn’t expect explanations. That’s not the way to look at it. Let’s compare expert systems and NNs.

Expert systems are still around, now rebranded as “rule-based systems.” They are code in which humans define specific rules identifying the features of the data that matter and assign percentages to balance those features when making predictions. When new features are needed, new rules can be added. When we learn new information about a feature, the percentages expressing the certainty of decisions based on it can be adjusted.
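A toy sketch of that idea in Python (the rules, the spam example, and the percentages are all invented for illustration; real expert systems such as Mycin were far richer):

```python
# A toy rule-based (expert) system: human-authored rules, each with a
# hand-assigned confidence percentage, combined into a prediction.
RULES = [
    # (feature check, confidence that the rule indicates "spam")
    (lambda msg: "free money" in msg.lower(), 0.90),
    (lambda msg: msg.count("!") > 3,          0.60),
    (lambda msg: "invoice" in msg.lower(),    0.40),
]

def classify(msg: str) -> float:
    """Combine the confidences of every rule that fires (Mycin-style)."""
    certainty = 0.0
    for check, confidence in RULES:
        if check(msg):
            # Certainty-factor combination: close part of the remaining gap
            certainty = certainty + confidence * (1.0 - certainty)
    return certainty

print(classify("FREE MONEY!!!! Click now!!!!"))  # high certainty
# Adding a new feature is just appending another (rule, percentage) pair.
```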

How are NNs different? To be honest, more by quantity than by quality. Each layer in a NN analyzes a specific feature. What people tend to overlook, ignore, or forget is that each “node” in a layer is a block of code, the same kind of code used almost everywhere else. It is designed to analyze a specific feature of the data and pass information forward to the next layer. Each node also has confidence levels, percentages that define how certain the node is that it found what it was supposed to find. The qualitative difference, if there is one, is that there’s a manager overseeing the nodes, mediating between the nodes in each layer based on set parameters and percentages.
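Here is a hypothetical node, stripped to its essentials, to show it’s ordinary code rather than anything framework-specific (the weights and inputs are made up for the example):

```python
import math

def node(inputs: list[float], weights: list[float], bias: float) -> float:
    """One 'node': weight each input, sum, and squash to a 0-1 confidence."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid: the node's certainty

# A layer is just many such nodes looking at the same inputs; the "manager"
# is the code that feeds each layer's outputs forward to the next layer.
layer_out = [node([0.5, 0.2], w, b) for w, b in [([1.2, -0.7], 0.1),
                                                 ([0.3,  2.0], -0.5)]]
print(layer_out)
```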

Whether in supervised or unsupervised learning mode, when a neural network completes a learning pass, the percentages in each layer are adjusted, either automatically by the system or manually by programmers. That improves the accuracy of the network.
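A minimal sketch of what “adjusting the percentages” amounts to in code, assuming simple gradient descent on a single weight (the numbers and learning rate are illustrative only):

```python
# One simplified learning pass: nudge a weight so the prediction moves
# toward the target. Real networks do this across millions of weights.
weight, bias = 0.8, 0.0
x, target = 1.0, 0.0          # input and the answer we want
learning_rate = 0.1

for step in range(5):
    prediction = weight * x + bias
    error = prediction - target
    # Gradient of squared error with respect to the weight is 2 * error * x
    weight -= learning_rate * 2 * error * x
    bias   -= learning_rate * 2 * error
    print(f"step {step}: prediction={prediction:.3f}, weight={weight:.3f}")
```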

Because of the complexity of modern NNs, with their large numbers of nodes and multiple layers, what NNs can find and learn is impressive. That is why NNs are so much more powerful than expert systems and procedural code at analyzing large volumes of data for things we are uncertain about, or at flagging sparse events.

What that also means is that the code can be analyzed. Its processes can be reported on. There is zero reason why we shouldn’t have more transparency in NNs, both to help programmers fine-tune the systems and to explain to the customers of those systems why the results can be trusted.
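Nothing stops us from instrumenting that code. A sketch of the idea, with hypothetical layer names and plain functions standing in for real layers, of reporting what each layer produces so a person can inspect it:

```python
import logging
logging.basicConfig(level=logging.INFO)
log = logging.getLogger("nn")

def forward(inputs, layers):
    """Run data through each layer, reporting what every layer produced."""
    activations = inputs
    for name, layer_fn in layers:
        activations = layer_fn(activations)
        # The transparency hook: nothing magical, just a report
        log.info("layer %-8s -> %s", name, [round(a, 3) for a in activations])
    return activations

# Hypothetical two-layer pipeline built from plain functions
layers = [("edges",  lambda xs: [max(0.0, x - 0.1) for x in xs]),
          ("shapes", lambda xs: [x * 0.5 for x in xs])]
forward([0.9, 0.2, 0.4], layers)
```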

On reflection, there is one reason, but it’s not a fatal one. The added code and communications required for transparency mean there will be a performance impact. As systems continue to improve, that impact can be minimized through better design, and the transparency can improve adoption of the technology.

Machine learning isn’t limited to the latest technology, and neural networks aren’t magic. As much as many people prefer to think of NNs as revolutions, because that justifies higher costs, they are more evolutionary – as are their impacts on many aspects of business. Neural networks are very impressive and extremely useful bodies of code, but never forget that they are code.
