## Scale Invariance in Real World Networks

Source: http://arxiv.org/PS_cache/cond-mat/pdf/9910/9910332v1.pdf

In “Emergence of Scaling in Random Networks”, Albert-Laszlo Barabasi and Reka Albert show that a class of networks called “random networks” exhibits a phenomenon well-known to physicists and mathematicians: scale invariance. Scale invariance simply means that the overall features of our network look the same (at least statistically) under dilations, i.e. rescalings. Most of us are familiar with scale invariance from fractals. If you click the link below (feel free to mute the psychedelic music, as I did) and watch the Mandelbrot set zoom, you’ll notice that we effectively see the same thing over and over again no matter how much we “zoom in” on the fractal, and the same would be true if we were to zoom out.

http://www.youtube.com/watch?v=G_GBwuYuOOs&feature=related

Of course, in the real world we can’t zoom in or out of something infinitely, but a physical scale-invariant system might exhibit this fractal-like behavior over many orders of magnitude! So, what does scale invariance tell us about networks? Well, in the case of the random networks modeled by Barabasi and Albert, it tells us something very interesting. Before I get to the punchline, let me tell you how they set up such networks:

- Start with a small number, m_0, of nodes. Label each node by some integer i, so that you can keep track of it.
- At every increment of time, add a new node.
- Let this node make m connections with the other nodes, where m is some number less than or equal to m_0.
- Now further restrict the model. Say the new node has a probability p_i of connecting with node i such that p_i = k_i/∑k_j , where the summation is over j. k_i is simply the connectivity (number of connections) of the ith node. The sum is the total connectivity of the network, which is twice the number of connections (each connection is counted once by each of its two endpoints). Thus, the more connected i is, the more likely our new node is going to make a connection with it. This is called “preferential attachment.”
- Iterate this process for t increments of time to get a random network with t + m_0 nodes and mt connections.
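The growth process above can be sketched in a few lines of Python. One standard trick is to keep an “urn” list in which each node’s label appears once per connection; drawing uniformly from that list picks node i with probability k_i/∑k_j, exactly the preferential-attachment rule. One assumption on my part: the paper starts from m_0 unconnected nodes, which would make every p_i zero at the first step, so this sketch seeds the network with a small ring instead.

```python
import random
from collections import Counter

def barabasi_albert(m0, m, t, seed=0):
    """Grow a network by preferential attachment (a sketch, not the
    paper's exact code). Seeds with a ring of m0 nodes, an assumption
    needed so the first attachment probabilities are well defined."""
    assert m <= m0
    rng = random.Random(seed)
    # Seed edges: a ring on the m0 initial nodes.
    edges = [(i, (i + 1) % m0) for i in range(m0)]
    # "Urn" of labels, one entry per connection endpoint; a uniform
    # draw from it implements p_i = k_i / sum_j k_j.
    urn = [v for e in edges for v in e]
    for step in range(t):
        new = m0 + step            # one new node per time increment
        targets = set()
        while len(targets) < m:    # m distinct existing neighbors
            targets.add(rng.choice(urn))
        for tgt in targets:
            edges.append((new, tgt))
            urn.extend((new, tgt))
    return edges

edges = barabasi_albert(m0=5, m=3, t=1000)
degree = Counter(v for e in edges for v in e)
```

After t increments this yields t + m_0 nodes and mt new connections (plus the handful of seed-ring edges), matching the construction above; the degree counter makes it easy to inspect the “rich get richer” effect directly.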

As it turns out, the random networks produced by this process are scale-invariant. So, why do we care if a network is scale invariant? Well, it means that the following law applies:

The probability P(k) that a node in our network has connections to k other nodes decays like k^(-γ). γ is a number that depends on which network we’re talking about. For the simple model delineated above, γ happens to be 2.9. It turns out this number doesn’t depend on the number of time increments t used in building the network. The model can also be modified to give different values of γ by considering such complications as directed edges. Note that the preferential attachment rule leads to a “rich get richer” effect, which we might expect to see in real-life networks.
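To make the decay exponent concrete, here is a sketch (my own illustration, not from the paper) of recovering γ from degree data. It samples degrees from a power law with γ = 2.9 by inverse-transform sampling, then estimates the exponent with the standard continuous maximum-likelihood formula γ̂ = 1 + n/∑ ln(k_i/k_min); the sampler and the choice of k_min are assumptions for the demo.

```python
import math
import random

def sample_power_law(gamma, k_min, n, seed=0):
    """Draw n values with P(k) proportional to k^(-gamma), k >= k_min,
    via inverse-transform sampling of the continuous power law."""
    rng = random.Random(seed)
    return [k_min * (1.0 - rng.random()) ** (-1.0 / (gamma - 1.0))
            for _ in range(n)]

def estimate_gamma(ks, k_min):
    """Continuous maximum-likelihood estimate of the decay exponent."""
    n = len(ks)
    return 1.0 + n / sum(math.log(k / k_min) for k in ks)

ks = sample_power_law(gamma=2.9, k_min=1.0, n=100_000)
print(estimate_gamma(ks, k_min=1.0))  # close to 2.9
```

In practice one would run the estimator on the observed degree sequence of a real network; the same fit is what yields the empirical exponents quoted below.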

Real world networks, including some networks we discussed in class, exhibit scale invariance. The graph of movie actor collaborations (think Six Degrees of Separation from Kevin Bacon) follows a power law with γ = 2.3, where nodes are movie actors and edges are collaborations in movies. The World Wide Web is another scale-invariant network, where the nodes are webpages and the connections are hyperlinks to other pages; here γ = 2.1. A third network that obeys a power law is the network of scientific publication citations, where nodes are scientific papers and edges are citations among them; γ = 3 for this network. Note that in class we talked about the network of scientific paper collaboration, where nodes were scientists and edges were co-authorships. Perhaps this network is scale invariant as well.

So, scale invariance and the concomitant power law decay seem to be present in a variety of large, complex real world networks, including some in-class examples. Scale invariance can be set up with a relatively simple model to give a particular value for the decay exponent. All you need is two ingredients: network growth and preferential attachment. Enhancements to the model, such as incorporating directed edges, tweak the exponent’s value. As we study these scale-invariant random networks, we may get some insight into networks that grow and exhibit preferential attachment, such as social networks, business networks, or perhaps even biological networks.
