Amidst the hype surrounding artificial intelligence, experts say “AI” is actually a misnomer for today’s neural networks, which do not address fundamental types of human reasoning and understanding.

Neural network technology, however, is still in its early days and has its limits, but it will see broad use and holds much promise, according to a panel of experts at an event marking the 50th anniversary of the ACM A.M. Turing Award.

The discussion of deep learning was particularly relevant given Turing’s vision that machines would someday exceed humans in intelligence. “Turing predicted [that] AI will exceed human intelligence, and that’s the end of the race—if we’re lucky, we can switch them off,” said Stuart Russell, an AI researcher and professor of computer science at the University of California, Berkeley, who is writing a new edition of his textbook on the field.

“We have at least half a dozen major breakthroughs to come before we get [to AI], but I am pretty sure they will come, and I am devoting my life to figure out what to do about that.”

He noted that a neural network is just one component of Google’s AlphaGo system, which beat the world’s best Go players.

“AlphaGo … is a classical system … and deep learning [makes up] two parts of it … but they found it better to use an expressive program to learn the rules [of the game]. An end-to-end deep learning system would need … [data from] millions of past Go games that it could map to next moves. People tried and it didn’t work in backgammon, and it doesn’t work in chess,” he said, noting that some problems require impossibly large data sets.

Russell characterised today’s neural nets as “a breakthrough of sorts … fulfilling their promise from the 1980s … but they lack the expressive power of programming languages and declarative semantics that make database systems, logic programming and knowledge systems useful.”

Neural nets also lack the wealth of prior understanding that humans bring to problems. “A deep-learning system would never discover the Higgs boson from the raw data” of the Large Hadron Collider, he added. “I worry [that] too much emphasis is put on big data and deep learning to solve all our problems.”

Limits in self-driving cars, image recognition

Neural nets hold significant promise but also face real limits in areas such as self-driving cars and image recognition, other top researchers said.

“I work on self-driving cars … systems [that] must be robust,” said Raquel Urtasun, who teaches machine learning at the University of Toronto and runs Uber’s advanced research centre there. “This is quite challenging for neural nets because they don’t model uncertainty well.”

Neural nets “will say [that] there is a 99% probability [that] a car is there … but you can’t tolerate false positives … when you make a mistake, you need to understand why you made a mistake.”
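Her point can be illustrated with a toy sketch (a hypothetical illustration, not anything presented at the panel): a softmax turns a detector’s raw scores into a confident-looking probability, yet that number alone says nothing about how much the model itself is in doubt. One common research heuristic, Monte Carlo dropout, reruns the forward pass with random units disabled and inspects the spread of the answers.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical logits from a two-class detector: [car, no car].
logits = np.array([4.6, 0.0])
print(softmax(logits))  # ~[0.99, 0.01] -- looks certain, but carries no model uncertainty

# Monte Carlo dropout heuristic: repeat the pass with random units dropped
# and look at the spread of the resulting probabilities.
samples = []
for _ in range(100):
    mask = rng.random(2) > 0.5        # drop each logit with probability 0.5
    samples.append(softmax(logits * mask))
samples = np.array(samples)
print(samples.mean(axis=0), samples.std(axis=0))  # a wide spread flags the 99% as overconfident
```

A system that reported the spread alongside the point estimate would give a self-driving stack something to act on when, as Urtasun puts it, it needs to understand why it made a mistake.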

She agreed with Russell that “deep learning won’t solve all our problems.” Blending neural nets with graphical models “is an interesting area” that might help systems tap the kind of prior knowledge that humans bring to bear.

Given their limits, users need to “understand [that machine-learning] systems can have biases … [and sometimes] will make unfair decisions,” she said.

Urtasun attributed the success of today’s neural nets to “a few tricks that make training better, but [there’s been] no fundamental change [to the core algorithms] in the last 25 years. Breakthroughs came in part from the availability of big data sets and better hardware that made it possible to train larger-scale models,” she said.

Nevertheless, deep learning has “enabled apps we hadn’t thought about in health, transportation—we see it almost everywhere.”

Stanford’s Fei-Fei Li, now on sabbatical as chief scientist at Google Cloud, agreed that neural nets are at a peak of hype with real promise and real limits. She had just finished teaching 770 students in Stanford’s largest class to date on neural nets.

Li characterised the moment as the end of the beginning, in which machine learning has emerged from lab experiments to commercial deployments. A broad set of industries and scientific fields are “being impacted by massive data and data analytics capabilities,” she said.

Nevertheless, “the euphoria that we’ve solved most problems is not true. While we celebrate the success [of ImageNet in image recognition], we hardly talk about its failures … many challenges remain that involve reasoning.”

“An AI algorithm makes the perfect chess move while the room is on fire,” she said, repeating a joke coined by another researcher about the lack of contextual awareness in deep learning.

More broadly, “we have very limited understanding of what human cognition is. Because of that, both fields are at the very beginning.”

The bulls and bears of neural networks

It’s too early to say just how far neural networks will take us, argued the most bullish member of the panel, Ilya Sutskever, co-founder and research director of OpenAI and a former research scientist at Google Brain.

“These models are hard to understand. Machine vision, for example, was incomprehensible as a program, but now we have an incomprehensible solution to an incomprehensible problem,” he said.

Although the back-propagation algorithms at the core of neural networks have been around for decades, the hardware to run them at scale has become available only recently. New architectures in the works for neural nets promise that “in the next few years, we’ll see amazing computers that will show much progress,” Sutskever added.
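For reference (a minimal sketch, not code from any system discussed at the event), the back-propagation core fits in a few lines of numpy; what changed is the scale at which hardware can run it:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(64, 3))                  # toy inputs
y = (X.sum(axis=1, keepdims=True) > 0) * 1.0  # toy binary targets

W1 = rng.normal(scale=0.5, size=(3, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)

for step in range(500):
    # Forward pass: one hidden layer, sigmoid output.
    h = np.tanh(X @ W1 + b1)
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
    # Backward pass: the chain rule applied layer by layer is back propagation.
    dz = (p - y) / len(X)              # grad of cross-entropy w.r.t. pre-sigmoid score
    dW2 = h.T @ dz; db2 = dz.sum(axis=0)
    dh = dz @ W2.T * (1.0 - h**2)      # tanh derivative
    dW1 = X.T @ dh; db1 = dh.sum(axis=0)
    # Plain gradient-descent update.
    for param, grad in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        param -= 0.5 * grad
```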

Speaking on a separate panel, Doug Burger, a distinguished engineer working on FPGA accelerators for Microsoft’s Azure cloud service, agreed. “Despite being at the peak of the hype curve, neural networks are real … there’s something deep and fundamental here [that] we don’t fully understand yet.”

Start-ups, academics and established companies are working on processors to accelerate neural nets, many of them using reduced-precision matrix-vector multiplication, he noted. “That will play out over three or four years, and what will come after that is really interesting to me.”
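The reduced-precision trick Burger describes can be sketched as follows (a hypothetical illustration, not any vendor’s actual design): quantise weights and activations to 8-bit integers, multiply and accumulate in integer arithmetic, then rescale back to floating point.

```python
import numpy as np

rng = np.random.default_rng(2)
W = rng.normal(size=(4, 4)).astype(np.float32)  # hypothetical layer weights
x = rng.normal(size=4).astype(np.float32)       # hypothetical activations

def quantize(a):
    """Symmetric per-tensor quantisation of floats to int8."""
    scale = np.abs(a).max() / 127.0
    return np.round(a / scale).astype(np.int8), scale

Wq, w_scale = quantize(W)
xq, x_scale = quantize(x)

# Multiply-accumulate in int32, as accelerator MAC arrays typically do, then rescale.
y_q = Wq.astype(np.int32) @ xq.astype(np.int32)
y_approx = y_q * (w_scale * x_scale)

print(np.abs(y_approx - W @ x).max())  # small error despite 8-bit operands
```

The accuracy loss is usually modest for inference, which is why accelerator designers trade precision for throughput.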

Fellow panellist Norm Jouppi agreed. The veteran microprocessor designer, who led the team behind Google’s TPU accelerator, called neural nets “one of the biggest nuggets” in computer science today.

Michael I. Jordan, a machine-learning expert at Berkeley, was the bear in the AI panel. Computer science remains the overarching discipline, not AI, and neural nets are a still-developing part of it, he argued.

“It’s all a big tool box,” he said. “We need to build the infrastructure and engineering [around neural nets, and] we are far away from that. We need to have systems thinking with math and machine learning.”

Like other speakers, he pointed to human reasoning capabilities outside the scope of neural nets. “Natural language processing is very hard. Today, we are matching strings to strings, but that’s not what translation is.”

For example, he noted enthusiasm in China over chatbots. The automated conversation agents can engage humans, but without support for abstractions and semantics, they can’t say anything that’s true about the world.

“We are in an era of enormous learning, but we are not [at AI] yet,” he concluded. Nevertheless, he agreed that neural nets are significant enough that they need to become a part of a revised computer science curriculum.

First published by EE Times U.S.