Award Celebrates AI Developments of Hinton, LeCun & Bengio
The Turing Award and its $1 million prize will be shared by three AI pioneers in neural networks, according to the Association for Computing Machinery (ACM), the world’s largest society of computing professionals. The organization announced the prestigious award, often called “the Nobel Prize for computing,” on March 27.
Geoffrey Hinton, an emeritus professor at the University of Toronto and a senior researcher at Alphabet Inc.’s Google Brain; Yann LeCun, a professor at New York University and the chief AI scientist at Facebook Inc.; and Yoshua Bengio, a professor at the University of Montreal and a co-founder of the AI company Element AI Inc., made major advances in the development of neural networks.
“Working independently and together, Hinton, LeCun and Bengio developed conceptual foundations for the field, identified surprising phenomena through experiments, and contributed engineering advances that demonstrated the practical advantages of deep neural networks,” the ACM wrote.
The researchers spent more than a decade working on the technology, which is “accelerating the development of face-recognition services, talking digital assistants, warehouse robots and self-driving cars,” according to The New York Times.
Bloomberg.com noted that this year’s three winners are often referred to collectively as the “Godfathers of Deep Learning.” Past recipients have included Tim Berners-Lee, who invented the world wide web, and Whitfield Diffie, who helped pioneer public-key cryptography.
“Today, deep neural networks using backpropagation underpin most advances in artificial intelligence, from Facebook’s ability to automatically tag your friends in photos to the voice recognition capabilities of Amazon.com Inc.’s Alexa and Google’s translations from English to Mandarin,” the Bloomberg story said.
In an interview with Bloomberg, Hinton said he believes deep learning will eventually give computers human-like or even super-human intelligence. “I think we will discover that conscious, rational reasoning is not separate from deep learning but a high-level description of what is happening inside very large neural networks,” Hinton said.
The Times story described how in 2004, with less than $400,000 in funding from the Canadian Institute for Advanced Research, Dr. Hinton created a research program on “neural computation and adaptive perception.” He asked Dr. Bengio and Dr. LeCun to join him.
“He is a genius and knows how to create one impact after another,” Li Deng, a former speech researcher at Microsoft who brought Dr. Hinton’s ideas into the company, told The New York Times.
According to the ACM, the following technical achievements led to the award:
Geoffrey Hinton
Backpropagation: In a 1986 paper, “Learning Internal Representations by Error Propagation,” co-authored with David Rumelhart and Ronald Williams, Hinton demonstrated that the backpropagation algorithm allowed neural nets to discover their own internal representations of data, making it possible to use neural nets to solve problems that had previously been thought to be beyond their reach. The backpropagation algorithm is standard in most neural networks today.
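To make the idea concrete, here is a minimal NumPy sketch of backpropagation on a one-hidden-layer network. The architecture, toy data, and learning rate are illustrative choices, not details from the 1986 paper.

```python
# Minimal sketch of backpropagation on a one-hidden-layer network (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))                  # 64 samples, 3 input features
y = (X.sum(axis=1, keepdims=True) > 0) * 1.0  # toy binary target

W1 = rng.normal(scale=0.1, size=(3, 8))       # input -> hidden weights
W2 = rng.normal(scale=0.1, size=(8, 1))       # hidden -> output weights
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(1000):
    # Forward pass: the hidden layer forms its own internal representation of the data.
    h = sigmoid(X @ W1)
    p = sigmoid(h @ W2)

    # Backward pass: propagate the output error back through the layers.
    d_out = (p - y) / len(X)                  # gradient of mean cross-entropy w.r.t. output pre-activation
    grad_W2 = h.T @ d_out
    d_hidden = (d_out @ W2.T) * h * (1 - h)   # chain rule through the sigmoid hidden units
    grad_W1 = X.T @ d_hidden

    # Gradient-descent update.
    W2 -= lr * grad_W2
    W1 -= lr * grad_W1
```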
Boltzmann Machines: In 1983, with Terrence Sejnowski, Hinton invented Boltzmann Machines, one of the first neural networks capable of learning internal representations in neurons that were not part of the input or output.
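The following sketch shows the flavor of that learning rule using a restricted Boltzmann machine trained with one step of contrastive divergence. The original 1983 Boltzmann Machine is more general (full connectivity, equilibrium sampling), so this simplified variant is only meant to illustrate hidden units learning representations that are neither inputs nor outputs.

```python
# Simplified restricted Boltzmann machine with one-step contrastive divergence (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
data = (rng.random((100, 6)) > 0.5).astype(float)  # toy binary training vectors

n_visible, n_hidden = 6, 4
W = rng.normal(scale=0.1, size=(n_visible, n_hidden))
lr = 0.05

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(200):
    # Positive phase: hidden activations with the data clamped on the visible units.
    h_prob = sigmoid(data @ W)
    h_sample = (rng.random(h_prob.shape) < h_prob).astype(float)

    # Negative phase: reconstruct the visible units and recompute the hidden activations.
    v_recon = sigmoid(h_sample @ W.T)
    h_recon = sigmoid(v_recon @ W)

    # Learning rule: difference between data-driven and model-driven correlations.
    W += lr * (data.T @ h_prob - v_recon.T @ h_recon) / len(data)
```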
Improvements to convolutional neural networks: In 2012, with his students, Alex Krizhevsky and Ilya Sutskever, Hinton improved convolutional neural networks using rectified linear neurons and dropout regularization. In the prominent ImageNet competition, Hinton and his students almost halved the error rate for object recognition and reshaped the computer vision field.
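The two ingredients named above can be sketched in a few lines. This is a generic forward pass for one fully connected layer, not the actual ImageNet-winning code; the shapes and keep probability are illustrative.

```python
# Rectified linear units and dropout regularization (illustrative sketch).
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(32, 256))            # a batch of 32 feature vectors
W = rng.normal(scale=0.05, size=(256, 128))

def relu(z):
    # Rectified linear neuron: pass positive values through, zero out negatives.
    return np.maximum(0.0, z)

def dropout(h, keep_prob=0.5, training=True):
    # Dropout: randomly silence units during training and rescale the survivors
    # so the expected activation is unchanged at test time.
    if not training:
        return h
    mask = (rng.random(h.shape) < keep_prob).astype(h.dtype)
    return h * mask / keep_prob

h = dropout(relu(x @ W), keep_prob=0.5, training=True)
```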
Yoshua Bengio
Probabilistic models of sequences: In the 1990s, Bengio combined neural networks with probabilistic models of sequences, such as hidden Markov models. These ideas were incorporated into a system used by AT&T/NCR for reading handwritten checks, were considered a pinnacle of neural network research in the 1990s, and are being extended by modern deep learning speech recognition systems.
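A minimal sketch of the hybrid idea: a neural network assigns per-frame label probabilities, and an HMM-style Viterbi pass decodes the most likely label sequence under a transition model. The emission scores and transition matrix below are made-up placeholders, not values from any of the cited systems.

```python
# Neural emission scores decoded with HMM-style Viterbi (illustrative placeholders).
import numpy as np

# Pretend these per-frame class probabilities came from a neural network
# applied to 5 frames of a handwritten word or speech signal (3 classes).
emissions = np.array([
    [0.7, 0.2, 0.1],
    [0.6, 0.3, 0.1],
    [0.2, 0.6, 0.2],
    [0.1, 0.7, 0.2],
    [0.1, 0.2, 0.7],
])
transitions = np.array([        # HMM-style transition probabilities
    [0.80, 0.15, 0.05],
    [0.05, 0.80, 0.15],
    [0.05, 0.15, 0.80],
])

# Viterbi decoding in log space.
log_e, log_t = np.log(emissions), np.log(transitions)
n_frames, n_states = log_e.shape
score = log_e[0].copy()
backptr = np.zeros((n_frames, n_states), dtype=int)
for t in range(1, n_frames):
    cand = score[:, None] + log_t       # score of arriving in each state from each predecessor
    backptr[t] = cand.argmax(axis=0)
    score = cand.max(axis=0) + log_e[t]

path = [int(score.argmax())]
for t in range(n_frames - 1, 0, -1):
    path.append(int(backptr[t, path[-1]]))
print(list(reversed(path)))             # most likely label sequence
```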
High-dimensional word embeddings and attention: In 2000, Bengio authored the landmark paper “A Neural Probabilistic Language Model,” which introduced high-dimensional word embeddings as a representation of word meaning. Bengio’s insights had a huge and lasting impact on natural language processing tasks including language translation, question answering, and visual question answering. His group also introduced a form of attention mechanism that led to breakthroughs in machine translation and forms a key component of sequential processing with deep learning.
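The core idea of that paper can be sketched as follows: represent each word as a learned dense vector and predict the next word from the concatenated embeddings of its context. The vocabulary, dimensions, and weights here are toy placeholders, not the paper’s actual setup.

```python
# Sketch of a neural language model built on word embeddings (toy example).
import numpy as np

rng = np.random.default_rng(0)
vocab = ["the", "cat", "sat", "on", "mat"]
V, d, context = len(vocab), 8, 2

C = rng.normal(scale=0.1, size=(V, d))             # embedding table: one learned vector per word
W = rng.normal(scale=0.1, size=(context * d, V))   # projection from context embeddings to next-word scores

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def next_word_probs(context_ids):
    # Look up the context words' embeddings, concatenate them, and score every word in the vocabulary.
    x = np.concatenate([C[i] for i in context_ids])
    return softmax(x @ W)

probs = next_word_probs([vocab.index("the"), vocab.index("cat")])
print(dict(zip(vocab, probs.round(3))))
```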
Generative adversarial networks: Since 2010, Bengio’s papers on generative deep learning, in particular the Generative Adversarial Networks (GANs) developed with Ian Goodfellow, have spawned a revolution in computer vision and computer graphics. In one fascinating application of this work, computers can actually create original images, reminiscent of the creativity that is considered a hallmark of human intelligence.
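The adversarial training loop behind GANs looks roughly like the sketch below: a generator maps random noise to candidate images while a discriminator learns to tell them from real data, and the two are trained against each other. It uses PyTorch for brevity; the network sizes and the stand-in “real” data are placeholders, not the configuration from the original GAN work.

```python
# Minimal GAN training loop (illustrative sketch).
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 28 * 28), nn.Tanh())
D = nn.Sequential(nn.Linear(28 * 28, 64), nn.ReLU(), nn.Linear(64, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real_batch = torch.rand(32, 28 * 28) * 2 - 1   # stand-in for real images scaled to [-1, 1]

for step in range(100):
    # Discriminator step: real images should score 1, generated ones 0.
    fake = G(torch.randn(32, 16)).detach()
    loss_d = bce(D(real_batch), torch.ones(32, 1)) + bce(D(fake), torch.zeros(32, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step: try to make the discriminator score fakes as real.
    fake = G(torch.randn(32, 16))
    loss_g = bce(D(fake), torch.ones(32, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```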
Yann LeCun
Convolutional neural networks: In the 1980s, LeCun developed convolutional neural networks, a foundational principle in the field, which, among other advantages, have been essential in making deep learning more efficient. In the late 1980s, while working at the University of Toronto and Bell Labs, LeCun was the first to train a convolutional neural network system on images of handwritten digits. Today, convolutional neural networks are an industry standard in computer vision, as well as in speech recognition, speech synthesis, image synthesis, and natural language processing. They are used in a wide variety of applications, including autonomous driving, medical image analysis, voice-activated assistants, and information filtering.
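The operation at the heart of these networks is a small filter slid across the image, so the same weights are reused at every position. The sketch below shows that idea on a toy 6x6 “image” with a single 3x3 filter; it is not LeCun’s implementation.

```python
# One convolutional filter applied across a small image (illustrative sketch).
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((6, 6))         # stand-in for a small grayscale digit image
kernel = rng.normal(size=(3, 3))   # one learned filter, its weights shared across positions

def conv2d_valid(img, k):
    kh, kw = k.shape
    out_h, out_w = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

feature_map = np.maximum(0.0, conv2d_valid(image, kernel))  # filter response followed by a ReLU
print(feature_map.shape)  # (4, 4)
```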
Improving backpropagation algorithms: LeCun proposed an early version of the backpropagation algorithm (backprop), and gave a clean derivation of it based on variational principles. His work to speed up backpropagation algorithms included describing two simple methods to accelerate learning time.
Broadening the vision of neural networks: LeCun is also credited with developing a broader vision for neural networks as a computational model for a wide range of tasks, introducing in early work a number of concepts now fundamental in AI. For example, in the context of recognizing images, he studied how hierarchical feature representations can be learned in neural networks, a concept now routinely used in many recognition tasks. Together with Léon Bottou, he proposed the idea, used in virtually all modern deep learning software, that learning systems can be built as complex networks of modules in which backpropagation is performed through automatic differentiation. They also proposed deep learning architectures that can manipulate structured data, such as graphs.
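The “networks of modules” idea can be sketched as follows: each module implements a forward and a backward method, and backpropagation is performed by calling backward through the chain in reverse, which automatic differentiation frameworks generalize. The module set, shapes, and update rule below are illustrative assumptions, not a description of any particular framework.

```python
# Modular network with hand-chained backward passes (illustrative sketch).
import numpy as np

class Linear:
    def __init__(self, n_in, n_out, rng):
        self.W = rng.normal(scale=0.1, size=(n_in, n_out))
    def forward(self, x):
        self.x = x
        return x @ self.W
    def backward(self, grad_out, lr=0.1):
        grad_in = grad_out @ self.W.T        # gradient passed to the previous module
        self.W -= lr * self.x.T @ grad_out   # update this module's own weights
        return grad_in

class ReLU:
    def forward(self, x):
        self.mask = x > 0
        return x * self.mask
    def backward(self, grad_out):
        return grad_out * self.mask

rng = np.random.default_rng(0)
net = [Linear(4, 8, rng), ReLU(), Linear(8, 1, rng)]

x = rng.normal(size=(16, 4))
y = x.sum(axis=1, keepdims=True)

out = x
for m in net:                          # forward pass through the module chain
    out = m.forward(out)
grad = 2 * (out - y) / len(x)          # gradient of mean squared error
for m in reversed(net):                # backward pass: reverse order through the chain
    grad = m.backward(grad)
```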
https://amturing.acm.org/acm_tcc_webcasts.cfm