By: Stephen Chartrand, Specialty News Editor
Geoffrey Hinton, a leading scientist in artificial neural network research and a founder of the ‘deep learning’ movement, was hired by Google last March to join its team researching and developing artificial intelligence technologies. Hinton is currently Professor of Computer Science at the University of Toronto, and companies like Google, Microsoft, Apple, and Facebook have lately been taking an intense interest in the kind of work that has occupied him for decades: A.I.
As a high school student in Britain, Hinton was fascinated by the idea that our brains store information and memories by spreading them across a vast network of neurons. “I got very excited about that idea,” he recalls. “That was the first time I got really into how the brain might work.” He believed this concept could have profound implications if applied to computer technology and software programming. In 1970, Hinton entered Cambridge University to study experimental psychology, and in 1978 he received his PhD in artificial intelligence from the University of Edinburgh.
In a 1986 paper on neural systems and machine learning, Hinton demonstrated “how to construct a self-adaptive Multi-Layer Perceptron with a backpropagating learning method.” In other words, computer programmers and software designers could build machine learning models and practical applications capable of analyzing vast quantities of information and organizing data sets into patterns, in ways similar to how our own brains store information and function.
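The idea can be illustrated with a small sketch: a multi-layer perceptron trained by backpropagation, the method the 1986 paper popularized. The example below learns the XOR function; the network size, learning rate, and number of training steps are illustrative choices of ours, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR inputs and targets: output is 1 when exactly one input is 1
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 4 units, sigmoid activations throughout
W1 = rng.normal(0, 1, (2, 4))
b1 = np.zeros(4)
W2 = rng.normal(0, 1, (4, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(5000):
    # Forward pass: compute hidden activations, then the output
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the output error back through
    # each layer using the chain rule
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent weight updates
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

# After training, threshold the outputs to read off predictions
predictions = (out > 0.5).astype(int)
print(predictions.ravel())
```

The key step is the backward pass: the error at the output is multiplied by each layer’s local derivative and passed back to earlier layers, which is what lets a network with hidden units be trained at all, and what makes it “self-adaptive” in the paper’s sense.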
Technologies like Google’s Street View, Apple’s Siri virtual personal assistant, and Microsoft’s speech recognition software are just a few examples of applications built on artificial neural networks, or deep learning.
“Deep learning, pioneered by Hinton, has revolutionized language understanding and language translation,” said Ed Lazowska, a computer science professor at the University of Washington.
According to Hinton, however, for much of the three decades he and his colleagues spent researching and designing this technology, the academic world was largely uninterested. That began to change in 2004, when Hinton founded the Neural Computation and Adaptive Perception program with funding from the Canadian Institute for Advanced Research (CIFAR).
The program brought together a consortium of computer scientists, psychologists, neuroscientists, physicists, biologists, and electrical engineers. Hinton hoped that with such an organization “dedicated to creating computing systems that mimic organic intelligence,” not only might it be possible to further advance A.I. technology, but also to “change the way the rest of the world treated this kind of work.”
While many scientists are optimistic about the focus and potential of deep learning with artificial neural networks, there are still many skeptics who doubt that it can yield technology that actually operates the way the human brain does. As Gary Marcus, professor of psychology at N.Y.U., points out, “deep learning is only part of the larger challenge of building intelligent machines. They are … still a long way from integrating abstract knowledge, such as information about what objects are, what they are for, and how they are typically used.”
However true this may be, Hinton is hopeful for the future: “I am betting on Google’s team to be the epicenter of future breakthroughs,” he said.