


The Black Box of Neural Networks

https://www.wired.com/story/new-theory-deep-learning/

The article above discusses how some researchers and computer scientists have theorized about what goes on inside the "black box" of neural networks. Neural networks are widely used in machine learning to narrow many possible options down to a single outcome. As the article shows, if an image of a dog is fed into a program, the neural network passes the image through several successive layers (edge detection, feature identification, and so on), narrowing things down until only one explanation for the image remains: that it is a dog. It is really cool how an algorithm can do something this powerful and, in a sense, this human. For years, what goes on within a neural network has been called a "black box," since it somehow took inputs and reliably produced highly accurate outputs. But now, researchers in Jerusalem, Toronto, and New York have theorized a cascade that gives a little more insight into the inner workings of this machine learning mechanism.
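The layer-by-layer narrowing described above can be sketched in a few lines of code. This is a minimal, hypothetical example (untrained random weights, made-up class names, and a fake flattened 8x8 "image"), meant only to show the structure of data passing through successive layers until a single class wins out; it is not the network from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Simple nonlinearity applied between layers.
    return np.maximum(0, x)

def softmax(x):
    # Turns the final layer's scores into probabilities.
    e = np.exp(x - x.max())
    return e / e.sum()

# Random, untrained weights -- placeholders for the learned
# "edge detection" and "feature combination" layers.
W1 = rng.normal(size=(64, 16))   # edge-detection-like layer
W2 = rng.normal(size=(16, 8))    # feature-combination-like layer
W3 = rng.normal(size=(8, 3))     # final classification layer

image = rng.random(64)           # fake flattened 8x8 image
h1 = relu(image @ W1)            # first narrowing step
h2 = relu(h1 @ W2)               # second narrowing step
probs = softmax(h2 @ W3)         # one probability per class

classes = ["dog", "cat", "car"]  # hypothetical labels
print(classes[int(np.argmax(probs))], probs)
```

With trained weights instead of random ones, the highest-probability class for a dog photo would be "dog" — the "only one possible explanation" the post describes.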

 

I think this content relates closely to our class. As I was reading this article, I thought of how we discussed the algorithms search engines use to narrow down to the best results. A search engine ranks pages based on the links pointing to them, then weighs the credibility of the pages those links come from, and repeats the process until a narrowed set of best results emerges. This seems very similar to how the neural network described in the article looks for edges in an image, then combinations of edges, then higher-level features, and so on, until it narrows down WHAT the image represents. The fact that information cascading through networks can underpin both machine learning and big systems like search engines is really quite incredible to me.
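The repeated ranking process described above can be sketched as a small power iteration in the spirit of PageRank. The four pages and their links here are entirely made up, and the damping factor of 0.85 is a conventional choice, not anything from the article; the point is just the "repeat until the scores settle" loop.

```python
# Hypothetical tiny web: each page lists the pages it links to.
links = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}
pages = list(links)
d = 0.85                                  # damping factor (conventional value)
rank = {p: 1 / len(pages) for p in pages} # start with equal scores

# Repeatedly pass each page's score along its outgoing links,
# splitting it evenly among them, until the scores stop changing.
for _ in range(50):
    new = {p: (1 - d) / len(pages) for p in pages}
    for p, outs in links.items():
        for q in outs:
            new[q] += d * rank[p] / len(outs)
    rank = new

# Pages sorted from highest to lowest score.
print(sorted(rank, key=rank.get, reverse=True))
```

Here page C ends up ranked highest because the most links point to it, and A ranks well because it is linked from the highly ranked C — credibility flowing through links, just as described above.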

October 2017