Google neural network teaches itself to identify cats

LONDON – A software simulation of a large-scale neural network, distributed across 16,000 processor cores in Google's data centers, has been used to investigate the difference between learning from labeled data and self-taught learning. Researchers from Stanford University (Palo Alto, Calif.) and Google Inc. (Menlo Park, Calif.) trained models with more than 1 billion connections and, among other things, the network learned how to identify a cat after a week of watching YouTube videos.

Google, best known for its search engine capability, said the advantage of self-taught neural networks is that they do not need deliberately labeled data to work with. Adding labels to data, for example tagging images that contain cats, requires human effort and makes training networks expensive.
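To illustrate the distinction, here is a minimal sketch of self-taught feature learning. It is not Google's code: it is a single-hidden-layer NumPy autoencoder run on synthetic stand-in data, and it shows only the core idea that reconstruction error, rather than human-supplied labels, can serve as the training signal.

# Minimal sketch of unsupervised feature learning (illustrative, not Google's code):
# a one-hidden-layer autoencoder that learns features from unlabeled "image
# patches" purely by reconstructing them -- no labels are ever supplied.
import numpy as np

rng = np.random.default_rng(0)
patches = rng.random((1000, 64))          # stand-in for unlabeled 8x8 image patches
n_hidden = 16                             # number of learned features ("neurons")

W1 = rng.normal(0, 0.1, (64, n_hidden))   # encoder weights
W2 = rng.normal(0, 0.1, (n_hidden, 64))   # decoder weights
lr = 0.01

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for epoch in range(50):
    h = sigmoid(patches @ W1)             # hidden features
    recon = h @ W2                        # reconstruction of the input
    err = recon - patches                 # reconstruction error is the only signal
    # Backpropagate the reconstruction error (no labels anywhere).
    grad_W2 = h.T @ err / len(patches)
    grad_h = err @ W2.T * h * (1 - h)
    grad_W1 = patches.T @ grad_h / len(patches)
    W2 -= lr * grad_W2
    W1 -= lr * grad_W1

# Each column of W1 is now a feature detector learned without supervision;
# in Google's experiment, one such unit ended up responding strongly to cats.
print("mean reconstruction error:", float((err ** 2).mean()))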

The research is expected to have applications outside of image recognition, including speech recognition and natural language modeling, Google said.



After a training period, one neuron in the network had learned to respond strongly to cats. Source: Google.

"Our hypothesis was that it [the neural network] would learn to recognize common objects in those videos. Indeed, to our amusement, one of our artificial neurons learned to respond strongly to pictures of cats. Remember that this network had never been told what a cat was, nor was it given even a single image labeled as a cat. Instead, it discovered what a cat looked like by itself from only unlabeled YouTube stills," said Google Fellow Jeff Dean in a posting at Google's website.

In addition, using this relatively large-scale neural network, Google achieved a 70 percent relative improvement over the previous state-of-the-art accuracy on a standard image classification test by combining freely available unlabeled images from the internet with a limited set of labeled data.
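A hedged sketch of that recipe follows, using scikit-learn's bundled digits dataset as a stand-in for web images; the dataset, the PCA feature learner and the logistic-regression classifier are illustrative assumptions, not the paper's actual method. Features are first learned from the training images while ignoring their labels, then a classifier is fit on only a small labeled subset.

# Sketch of mixing unlabeled data with a small labeled set (illustrative only).
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Step 1: "unsupervised" stage -- learn features from all training images
# while ignoring their labels (PCA stands in for a learned feature extractor).
features = PCA(n_components=32).fit(X_train)

# Step 2: supervised stage -- fit a classifier on a small labeled subset only.
n_labeled = 200
clf = LogisticRegression(max_iter=1000).fit(
    features.transform(X_train[:n_labeled]), y_train[:n_labeled])

print("accuracy with unsupervised features + few labels:",
      clf.score(features.transform(X_test), y_test))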

Google researchers want to increase the size of the network further to see if exponentially improved performance comes with scale. Whereas the current network has about a billion connections, the human brain has around 100 trillion, Dean said in his blog post.

Google researchers are presenting a paper on the neural network research at the International Conference on Machine Learning (ICML 2012), being held in Edinburgh, Scotland, June 26 to July 1.


Related links and articles:

Building high-level features using large-scale unsupervised learning


News articles:

Book review: CHIPS 2020

AMD looks smart in the brain engineering era

IBM demos cognitive computer chips


