
Neural Networks

Networks have applications in the study of cognition. The brain is composed of interconnected neurons. Each neuron consists of a cell body, dendrites, and an axon, which can connect to another neuron’s dendrite at a synapse. When a neuron ‘activates’, it sends a signal down its axon, stimulating the synapse at the far end. If the receiving neuron gets enough stimulation through its synapses, it activates as well.

When studying these neural networks, a simplified graph model is used. In this model, each neuron is represented by a generic, abstract ‘unit’. Structurally, each unit accepts zero or more directed inputs and produces at most one directed output. Inputs and outputs are numeric quantities representing how rapidly activation is taking place. Each input has an associated weight, and the total stimulation of a unit equals the sum of all inputs multiplied by their respective weights. When the total stimulation exceeds a built-in threshold value, the unit activates. The exact nature of the signal transmitted when this occurs varies by model; often it is binary, and units always stimulate their downstream peers with the quantity 1. In computational systems, units with zero inputs are generally sources of new information entering the network, and units with zero outputs represent the final output.
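The unit model above can be sketched in a few lines of Python (the names here are illustrative, not from any particular library):

```python
def unit_output(inputs, weights, threshold):
    """A simplified threshold unit: fire (output 1) only when the
    weighted sum of the inputs exceeds the built-in threshold."""
    stimulation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if stimulation > threshold else 0
```

A unit with two inputs weighted 1.0 each and a threshold of 1.5, for example, fires only when both inputs are active.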

Fascinatingly, Boolean functions can be described quite easily using networks of units. Here is a graphical description of the AND function. When both input signals are 1, this neural network will output 1; otherwise it will output 0.
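In code, one way to realize such an AND unit (the specific weights and threshold here are one choice among many that work):

```python
def and_unit(a, b):
    # Both inputs weighted 1; the threshold 1.5 is exceeded
    # only when both inputs are 1 (1 + 1 = 2 > 1.5).
    return 1 if (1 * a + 1 * b) > 1.5 else 0
```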


Here is a depiction of the OR function. When either input is 1, the network will return 1.
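The same idea works for OR; only the threshold changes relative to the AND unit (again, these particular values are one workable choice):

```python
def or_unit(a, b):
    # Both inputs weighted 1; the threshold 0.5 is exceeded
    # whenever at least one input is 1.
    return 1 if (1 * a + 1 * b) > 0.5 else 0
```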


One practical application of neural networks is in machine learning. Various algorithms exist that alter a neural network so that it performs a specific task. A human operator may not know how to implement the necessary algorithm directly, but a neural network can be trained to perform it by being presented with many examples of inputs and their expected outputs.
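The post does not name a particular training algorithm; as one concrete example, here is a sketch of the classic perceptron learning rule, which nudges a unit’s weights and threshold after each example until its outputs match the expected ones (all names are illustrative):

```python
def train_perceptron(examples, epochs=20, lr=0.5):
    """examples: list of ((x1, x2), expected_output) pairs."""
    w = [0.0, 0.0]
    threshold = 0.0
    for _ in range(epochs):
        for (x1, x2), target in examples:
            output = 1 if x1 * w[0] + x2 * w[1] > threshold else 0
            error = target - output
            # Nudge weights toward inputs that should have fired the unit
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            threshold -= lr * error  # adjusting the threshold acts like a bias
    return w, threshold

# Learning AND purely from examples of inputs and expected outputs:
and_examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, t = train_perceptron(and_examples)
```

After training, the learned weights and threshold reproduce the AND function without anyone having specified them by hand.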

Another point of note is the similarity between the spreading activation of neural networks and cascading behavior in social networks. In a social network, when the proportion of a node’s peers adopting a behavior exceeds a critical threshold, the node is likely to adopt the behavior as well. This is closely analogous to a unit firing once the stimulation from its upstream peers exceeds its threshold.
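The analogy can be made concrete with a small threshold-cascade simulation (the graph, node names, and threshold value here are invented for illustration):

```python
def cascade(neighbors, seeds, q):
    """Spread adoption: a node adopts once the fraction of its
    neighbors that have adopted exceeds the critical threshold q."""
    adopted = set(seeds)
    changed = True
    while changed:
        changed = False
        for node, peers in neighbors.items():
            if node in adopted or not peers:
                continue
            if sum(p in adopted for p in peers) / len(peers) > q:
                adopted.add(node)
                changed = True
    return adopted

# A tiny example network: starting from adopters "a" and "b",
# the behavior cascades through "c" and on to "d".
graph = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b", "d"], "d": ["c"]}
```

Here each node plays the role of a unit, and the fraction of adopting neighbors plays the role of weighted stimulation.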

