Google’s Artificial Neural Networks and Weighted Graphs
http://www.popsci.com/these-are-what-google-artificial-intelligences-dreams-look
https://datajobs.com/data-science-repo/Neural-Net-[Carlos-Gershenson].pdf
As a step toward artificial intelligence, Google has built a huge network of artificial neurons. These are computational models resembling neurons in the brain: each one receives input signals and, if the combined signal is strong enough, emits an output. Google then teaches this network about the world by showing it millions of pictures of an object, attempting to make the computer recognize what it is. After training, they also had the computer generate an image of the object it had learned about, which produced some strikingly abstract interpretations, many examples of which can be found in the article. This work has provided considerable insight into how computers learn and see the world.
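A minimal sketch of that neuron model helps make it concrete. The inputs, weights, and bias below are made-up values for illustration, and the sigmoid "squashing" function is just one common choice of activation, not necessarily what Google's network uses:

import math

def artificial_neuron(inputs, weights, bias):
    # Weighted sum of the incoming signals, plus a bias term.
    signal = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Squash the signal into (0, 1); values near 1 mean the neuron "fires".
    return 1.0 / (1.0 + math.exp(-signal))

# Hypothetical example: three input signals with hand-picked weights.
print(artificial_neuron([0.5, 0.9, 0.1], [0.8, -0.2, 0.4], bias=0.1))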
The algorithm used in this learning process is called backpropagation. To make the network as accurate as possible, each artificial neuron's connections carry weights that indicate its influence on the network, and the question is how to find the correct weights for such a huge network. The network is built in layers: an input layer, an output layer, and some number of hidden layers in between. Backpropagation starts by setting random weights and then uses supervised learning, giving the network examples of inputs paired with their correct outputs. For each input, the computer compares the actual output to the expected one, and the algorithm propagates the error backward through the network, adjusting the weights to produce a better output next time.
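A rough sketch of that training loop, on a toy scale rather than Google's: the tiny network below (2 inputs, 2 hidden neurons, 1 output) and the XOR examples used as the supervised data are assumptions chosen just to illustrate the forward pass, error, and weight adjustments:

import math, random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

random.seed(0)
# 2 inputs -> 2 hidden neurons -> 1 output neuron, weights start out random.
w_hidden = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
b_hidden = [random.uniform(-1, 1) for _ in range(2)]
w_out = [random.uniform(-1, 1) for _ in range(2)]
b_out = random.uniform(-1, 1)

# Supervised examples: known inputs paired with the expected output (XOR).
examples = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]
lr = 0.5  # learning rate: how far each weight moves per correction

for epoch in range(10000):
    for inputs, expected in examples:
        # Forward pass: input layer -> hidden layer -> output layer.
        hidden = [sigmoid(sum(w * x for w, x in zip(w_hidden[j], inputs)) + b_hidden[j])
                  for j in range(2)]
        output = sigmoid(sum(w * h for w, h in zip(w_out, hidden)) + b_out)

        # Error at the output, then propagated back to the hidden layer.
        d_out = (expected - output) * output * (1 - output)
        d_hidden = [d_out * w_out[j] * hidden[j] * (1 - hidden[j]) for j in range(2)]

        # Adjust every weight a little in the direction that reduces the error.
        for j in range(2):
            w_out[j] += lr * d_out * hidden[j]
            for i in range(2):
                w_hidden[j][i] += lr * d_hidden[j] * inputs[i]
            b_hidden[j] += lr * d_hidden[j]
        b_out += lr * d_out

# After training, the actual outputs should sit close to the expected ones.
for inputs, expected in examples:
    hidden = [sigmoid(sum(w * x for w, x in zip(w_hidden[j], inputs)) + b_hidden[j]) for j in range(2)]
    output = sigmoid(sum(w * h for w, h in zip(w_out, hidden)) + b_out)
    print(inputs, "expected", expected, "got", round(output, 2))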