Artificial Neural Networks

Source: https://www.explainthatstuff.com/introduction-to-neural-networks.html

In class we’ve been learning about networks as an abstract structure, with nodes connected by edges that, depending on the context, satisfy certain properties. One of these is the structural balance property, which applies to networks whose edges can be either positive or negative: a balanced network can be separated into two sets of nodes such that edges within each set are positive and edges between the sets are negative (a small sketch of this check follows below). Another property that can be abstracted from a network is power – in networks where “power” can be conceptualized in terms of ties to other nodes, certain nodes can be much more powerful than others based on their position in the network. One very complex network that we have yet to talk about in class is the human brain – the interconnections and interactions of billions of neurons allow for learning. A key property of networks that can adjust their own connections between nodes is the potential to learn, and it turns out this “learning” property of the brain’s network of neurons can be simulated artificially in what is called an Artificial Neural Network.
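To make the structural balance idea concrete, here is a minimal sketch (not from the original post) that checks whether a toy signed network can be split into two such sets. The node names, edge signs, and the helper function is_balanced are illustrative assumptions, not anything from class material.

```python
# A minimal sketch: can the nodes of a signed network be split into two sets
# so that edges inside a set are positive ("+") and edges between sets are
# negative ("-")? This is the structural balance condition described above.
from collections import deque

def is_balanced(nodes, edges):
    """edges: dict mapping (u, v) node pairs to '+' or '-'."""
    side = {}                          # which of the two sets each node lands in
    adj = {n: [] for n in nodes}
    for (u, v), sign in edges.items():
        adj[u].append((v, sign))
        adj[v].append((u, sign))
    for start in nodes:                # handle disconnected components
        if start in side:
            continue
        side[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v, sign in adj[u]:
                # a positive edge keeps neighbors in the same set,
                # a negative edge forces them into opposite sets
                want = side[u] if sign == '+' else 1 - side[u]
                if v not in side:
                    side[v] = want
                    queue.append(v)
                elif side[v] != want:
                    return False       # contradiction: no balanced split exists
    return True

# toy example (hypothetical): two friendly pairs that dislike each other
nodes = ['A', 'B', 'C', 'D']
edges = {('A', 'B'): '+', ('C', 'D'): '+',
         ('A', 'C'): '-', ('B', 'D'): '-'}
print(is_balanced(nodes, edges))       # True: {A, B} vs {C, D}
```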

The way artificial neural nets work is loosely based on human brain networks, although they are much simpler. They consist of different layers of “units”, where each unit is analogous to a neuron. The first layer is the input layer, where data is fed in (such as the pixels of an image). The units of the input layer then connect to units in one or more intermediate layers, eventually leading to the output layer, where the result is returned. To actually “learn”, neural nets need huge amounts of sample data labeled with the correct outputs. Sample input is passed through the network, the network’s output is compared to the labeled correct output, and some backtracking and “rewiring” occurs between units of different layers (a process known as backpropagation) to make the network better for that particular input. By repeating this process over huge sets of data, very good neural nets can be created.
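As a rough illustration of that training loop (not part of the original post), here is a minimal NumPy sketch that trains a tiny network on the XOR problem. The architecture, learning rate, and variable names are assumptions chosen for demonstration, not the setup of any real production system.

```python
# A minimal sketch of the training loop described above, using only NumPy.
# A tiny network (2 inputs -> 4 hidden units -> 1 output) is trained on XOR:
# labeled sample data goes in, the output is compared to the correct label,
# and the connection weights are adjusted (backpropagation).
import numpy as np

rng = np.random.default_rng(0)

# labeled training data: inputs X with correct outputs y
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# randomly initialized "connections" between the layers of units
W1 = rng.normal(size=(2, 4))
b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1))
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0                                   # learning rate
for step in range(5000):
    # forward pass: input layer -> hidden layer -> output layer
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # compare the network's output to the labeled correct output
    error = out - y

    # backward pass: work out how to "rewire" each connection
    d_out = error * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(np.round(out, 2))   # usually close to [[0], [1], [1], [0]] after training
```

Repeating the forward pass, comparison, and weight update over many examples is exactly the “learning by rewiring” process described above, just on a toy scale.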

It is amazing to think that, just from the interconnection of many small pieces in a network, a higher-order property of “intelligence” can emerge. Some examples of common neural networks are described below. Neural networks are used by translation programs: without even understanding the semantics of a language, words are translated based on the context of the other words being translated and the millions of example translations the network was trained on (the European Union, for example, has many documents translated into several languages). Self-driving cars employ multiple neural networks to identify signs, pedestrians, other cars, driving lanes, and more, all in various weather conditions. Neural networks can also be used for very cool image processing – such as removing the rain from a picture taken while it is raining, or adding color to black-and-white images!
