Facebook: The Ultimate Social Networker?
In a social network, two people with a mutual friend tend to form a link of their own. This is known as triadic closure. The basic principle is that new links are likely to form between people with one or more common friends for several reasons: proximity and opportunity, or homophily (friends sharing similar attributes). Whatever the case, Facebook, in a nutshell, offers a quick and easy guide to who a person is and who they affiliate with in the real world. Although Facebook is one of the most popular social networking sites, there are two sides to this networking machine: it can build social ties in a positive or a negative manner. According to the linked article, job candidates who maintain a profile on Facebook are beginning to understand that the personal images and descriptions they present to their friends may hinder their chances of getting the job they desire.
The fact that Facebook offers an individual’s “personal” information to an array of different audiences has been shown to create both friends and enemies in a social network. The details of personal information may leak to people who were never meant to see them. In this particular article, many individuals who were well qualified for jobs ended up not getting the positions based solely on the way they depicted themselves on Facebook. Many Facebook users “often don’t expect their personal information to be monitored by potential employers, and many consider their online profile information to be private.” It seems as though Facebook’s quick and easy approach to making new ties and social networks with different people may in fact be its true downfall: too much private information can fall into the wrong person’s lap at the wrong time.
Facebook can therefore give rise to two different groups of people with opposing feelings toward one member of one of the groups. In this example, both groups form complete graphs with balanced triangles. In the real world, one group would be the hiring committee and the other would be the friends of the Facebook user applying for the job. Everyone in a given group is friends with everyone else in that group, and the only way to balance the network is for each member of one group to be an enemy of each member of the other group. In turn, each member of the hiring committee is an enemy of the Facebook user applying for the job, and of any friends who portray themselves the same way on their own accounts (if they are also applying). In a more extreme scenario, the individual’s reputation might even follow him to other job interviews, creating further negative ties in the network. In conclusion, Facebook is a great social networking tool, but its great power also allows social networking to get out of hand in some cases.
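The two-clique structure described above can be checked mechanically. The short Python sketch below (group names and sizes are invented for illustration) enumerates every triangle in the signed graph and verifies that each is balanced, i.e. the product of its three edge signs is positive:

```python
from itertools import combinations

# Hypothetical example: one group is the hiring committee, the other the
# applicant's friends; positive edges within a group, negative across.
council = ["c1", "c2", "c3"]
friends = ["f1", "f2", "f3"]
people = council + friends

def sign(a, b):
    """+1 for friends (same group), -1 for enemies (different groups)."""
    same = (a in council) == (b in council)
    return 1 if same else -1

# A triangle is balanced when the product of its edge signs is +1:
# either all three are friends, or two friends share a common enemy.
balanced = all(
    sign(a, b) * sign(b, c) * sign(a, c) == 1
    for a, b, c in combinations(people, 3)
)
print(balanced)  # True: two mutually hostile cliques satisfy structural balance
```

This confirms the claim in the paragraph: splitting everyone into two internally friendly, mutually hostile camps is one of the only ways to balance a complete signed graph.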
Source: http://www.msnbc.msn.com/id/20202935/
Markets For Electricity
By Miro Gavin
Electricity is very much a regional business. Beyond the national regulations, each region has its own regulations and philosophy on the best way to ensure a reliable supply of cheap and efficiently-produced electricity. Because electricity production and delivery are so complex, there are a variety of ways to approach the problem. However, there are two general categories of approaches: regulated and deregulated.
In a regulated system, there is one standard price. An oversight authority sets the price based on what can reasonably be expected and such that the utilities make some reasonable profit. Historically, this has meant the local utility was vertically integrated to produce, transmit, and distribute the electricity (where transmission refers to transport of high voltage electricity from production facilities to substations, and distribution refers to the local delivery of electricity from substations to houses).
The term deregulated comes about because the norm since the Great Depression had been highly regulated utilities; only in the 1980s did some countries, such as Chile [Rudnick 1994], begin to deregulate electricity. In the United States, the process started in the 1990s and is ongoing, with some states that deregulated their systems experiencing problems and deciding to revert to a more regulated approach. The idea is to separate what can be made efficient by a free market from what lends itself to a natural monopoly. Of course, transmitting electricity over a set of wires is usually a monopoly. However, it turns out that only the distribution of electricity to local houses is considered a natural monopoly, and even in deregulated markets, distribution is usually the responsibility of the local utility, which charges a state-approved “tax” on all electricity it distributes to cover the upkeep of the distribution network. In contrast, transmission lines are generally not considered part of a natural monopoly because there are usually multiple redundant paths from generators to substations, and in fact the transmission lines may not even be privately owned. Thus, power generators may bid on rights to use a certain percentage of a wire’s carrying capacity to transmit electricity to their customers. And since the market has been deregulated, customers can buy electricity from anyone who can produce and transmit it to them.
Another system for cheaply procuring electricity, of special interest to this class, resembles the government contract auction, in which companies place bids and the lowest bidder wins. Here, though, it is a second-price auction with multiple slots, similar to the sponsored search markets we studied (i.e., markets for buying advertising slots on search engines like Google). Each company bids how cheaply it can produce a given amount of electricity. The market operator starts with the lowest bid and successively accepts higher and higher bids until there is enough electricity for the expected demand. Each accepted company is then paid based on the cheapest bid that did not win.
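The clearing procedure described above can be sketched in a few lines of Python. The suppliers, quantities, and prices below are invented for illustration, and real markets handle the marginal (partially accepted) bid more carefully; this is only a toy model of the mechanism:

```python
# Hypothetical bids: (supplier, MWh offered, price bid per MWh)
bids = [("A", 100, 20.0), ("B", 150, 35.0), ("C", 80, 25.0), ("D", 120, 40.0)]
demand = 300  # MWh needed

# Accept bids from cheapest upward until demand is covered.
accepted, supplied = [], 0
for name, qty, price in sorted(bids, key=lambda b: b[2]):
    if supplied >= demand:
        clearing_price = price  # the cheapest bid that did NOT win
        break
    take = min(qty, demand - supplied)
    accepted.append((name, take))
    supplied += take
else:
    clearing_price = None  # not enough supply was bid in

print(accepted)        # [('A', 100), ('C', 80), ('B', 120)]
print(clearing_price)  # 40.0 — every accepted supplier is paid this price
```

Note the second-price flavor: supplier A bid 20 but is paid 40, so no one gains by shading their bid upward.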
From here, the details can become quite complicated. Supply must exactly equal demand, or the frequency of the electricity on the network will drift outside the tolerance range expected and promised. To ensure that this does not happen due to unpredictable spikes in electricity usage, the market must also pay back-up generators to stand by at all times. There is, however, an international trend to deregulate electric power markets as much as possible. For more information, please refer to the sources below.
Sources For Further Reading:
Electric Power Supply Association. https://www.epsa.org/industry/primer/ Accessed November and December 2014.
PA Power Switch http://www.papowerswitch.com/about-switching-power/ Accessed December 2014.
Joskow, P. California’s Electricity Crisis. Oxford Review of Economic Policy. Vol. 17, No. 3. http://economics.mit.edu/files/1149 Accessed December 2014.
Rudnick, H. 1994. Chile: Pioneer in deregulation of the electric power sector. IEEE Power Engineering Review. 14(6):28.
Wikipedia and Epidemic Outbreaks
This article covers a new study from the Los Alamos National Laboratory. After examining 14 disease outbreaks in nine countries, the study shows that Wikipedia “may have prophetic tendencies in the world of infectious diseases.” In summary, researchers noticed that page views for a disease peaked before its outbreak, and they tried to find the underlying reason for the searches. One possible explanation, according to the article, is that people may start researching their symptoms to see if they match a certain disease. The data most strongly predicted influenza in the United States, due to high levels of Internet use. A limitation of the study is that the outbreak of HIV/AIDS was harder to track because it is a slower-moving condition, and because such diseases often break out in areas with little Internet access. The takeaway is that nothing moves faster than the Internet, and this data correlation may serve as another preventative measure against potential outbreaks of certain diseases.
This relates directly to concepts in Networks because it highlights the cascade effect of knowledge and information, as well as epidemics. For example, if someone is feeling under the weather and believes they have heard about the disease on the news or from other people, they may suspect they are getting it and search for it. This also connects to the study of epidemics: patterns are recorded and can almost be predicted from Wikipedia page views. Since both diseases and ideas spread from person to person across similar kinds of networks connecting people, in this case through Wikipedia, this spread can quite literally be called a “social contagion.” One person searches for a disease on Wikipedia, believes they have it, and tells their friends; each friend then searches, and perhaps comes into contact with them, further increasing both the incidence of disease and the number of page views, hence the correlation.
http://mic.com/articles/104546/scientists-are-about-to-use-wikipedia-to-fight-global-epidemics
http://www.ploscompbiol.org/article/info:doi/10.1371/journal.pcbi.1003892
Social Media Activism
From the Ice Bucket Challenge to trending hashtags such as #Ferguson and #BlackLivesMatter, there is no doubt that social media has become an integral part of social activism today. Social media has certainly proved a powerful method of bringing a cause to a wide audience. I saw it in action personally when I participated in the 2011 Wisconsin protests to recall Governor Scott Walker: a small event created by a high school student eventually turned into 30,000 people marching in front of the state capitol.
Of the many argued advantages social media brings to activist causes, one of the most frequently raised is that “Social media has the potential to bring to people fair and balanced news coverage with little or no bias of mainstream corporate media or propaganda, thereby becoming the de facto news” (Khan-Ibarra). At first glance this certainly seems true, because the daily users of social media are our everyday peers, not the corporate media. However, after learning about information cascades in Networks, I am a bit more skeptical of the argument that social media is unbiased.
A perfect example is the role of hashtag activism. We see trending hashtags on Twitter and even Facebook every day. Hashtags show up on the front page of these social media powerhouses when enough users choose to include the same hashtag in their tweets or status updates. Since most of these tweets and statuses are public and meant to be seen by a wide audience, this is the perfect basis for an information cascade. I have noticed that when a hashtag is trending, it is hard to immediately find the opposing viewpoint in the debate. Hashtags make it seem as though everyone prefers option A, while those who prefer option B are underrepresented and could be convinced to switch over simply through the dynamics of an information cascade. If so, is social media activism really the voice of the people, or just an example of information cascades in action?
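A toy simulation can show how such a cascade locks in. In the Python sketch below (the simple counting rule stands in for the full Bayesian update, and all numbers are invented), each user sees the public choices of everyone before them plus one private signal; once either option builds a lead of two, every later user ignores their own signal:

```python
import random

random.seed(7)
q = 0.7          # probability a private signal points to the truly better option
truth = "A"

def choose(history, signal):
    # Count predecessors' public choices plus my own private signal and
    # follow the majority, breaking ties with my signal. With a lead of
    # two or more, my private signal can never flip the decision: a cascade.
    votes = history.count("A") - history.count("B")
    votes += 1 if signal == "A" else -1
    if votes > 0:
        return "A"
    if votes < 0:
        return "B"
    return signal

def run(n=100):
    history = []
    for _ in range(n):
        signal = truth if random.random() < q else ("B" if truth == "A" else "A")
        history.append(choose(history, signal))
    return history

h = run()
print(h[-10:])  # the tail is typically uniform: later users just follow the crowd
```

Even with fairly informative signals (q = 0.7), an early unlucky run can lock the whole population onto the worse option, which is exactly why a trending hashtag need not reflect everyone's private opinion.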
Sources:
Khan-Ibarra, Sabina. “The Case for Social Media and Hashtag Activism.” http://www.huffingtonpost.com/sabina-khanibarra/the-case-for-social-media_b_6149974.html
Twitter. “FAQ about Trends on Twitter.” https://support.twitter.com/articles/101125-faqs-about-trends-on-twitter
Cascading Riots
Riots are an interesting example of large crowds acting in unison. It seems logical to assume that this synchronized behavior has some network-based driving force behind it; given that riots often occur in the absence of strong communication, it is worth considering whether individuals are driven by personal information or by an information cascade once a movement reaches critical mass. The linked article argues that rioting is an imitative behavior: areas where rioting occurs typically have had conditions appropriate for sparking a riot for a long time, yet nothing happens. Eventually, one of these conditions or events reaches a critical mass and sparks an information cascade; individuals whose private information would otherwise lead them to refrain from rioting observe the behavior in others and are compelled to imitate it. “Perhaps they know something about the likelihood of punishment that I don’t; surely all of these people rioting suggests that its risk is lower than my personal information suggests.” Once this critical number of rioters is reached, the riot grows as the cascade builds, overpowering individuals more and more reliant on their own personal information.
This argument for riot growth seems logical, although, as with most crowd events, it is difficult to determine whether it is the sole cause. Many other factors likely contribute in unison to the spark and uncontrolled growth of a riot; the triggering event may simply occur at a point when conditions are the most favorable they have ever been for a riot to take place. Perhaps, rather than each individual looking to the growing crowd and changing their personal decision, the riot occurs at a time and place where enough individuals realize through cost-benefit analysis that they have less to lose than they stand to gain. Additionally, herd mentality is a powerful motivator of people: perhaps rather than deciding they have mis-evaluated the risk of the riot, they simply see that a growing herd is gaining momentum and are driven to support it.
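The "critical mass" idea above is essentially Granovetter's threshold model of riots, which can be sketched in a few lines of Python (the threshold values are invented for illustration): person i joins once the number of current rioters meets their personal threshold.

```python
# Granovetter-style threshold sketch: person i joins the riot once the
# number of people already rioting meets their personal threshold t[i].
def riot_size(thresholds):
    rioting = 0
    changed = True
    while changed:
        changed = False
        # everyone whose threshold is met by the current crowd joins
        new_total = sum(1 for t in thresholds if t <= rioting)
        if new_total > rioting:
            rioting = new_total
            changed = True
    return rioting

# Uniform thresholds 0,1,2,...,9: one instigator (threshold 0) tips everyone.
print(riot_size(list(range(10))))           # 10 — a full cascade
# Remove the person with threshold 1 and the chain breaks immediately.
print(riot_size([0] + list(range(2, 11))))  # 1 — only the instigator riots
```

The second case illustrates the article's point that riot-prone conditions can persist for a long time with nothing happening: tiny changes in the threshold distribution, invisible from the outside, decide whether the cascade runs or dies.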
http://stumblingandmumbling.typepad.com/stumbling_and_mumbling/2011/08/riots-sell-offs-cascades.html
The Rise and Fall of Social Media
We all know Facebook, now a social media powerhouse that rose to power within just a few years of operation. Originally a small social networking company started by Harvard students for Harvard students, Facebook has grown into a multi-billion-dollar company with over 152 million users in the US and Canada and 1.35 billion worldwide. But how did a company that started from nothing become so big so fast?
In January of 2009, Facebook officially overtook all other social networking sites as the premier and most widely used. This came just a few years after Myspace, a company very similar in premise, had held the majority of the market share. Chunka Mui, a contributor to Forbes magazine, argues that the decline of the social media giant was due to the “fragility of social media, where fickle consumers and changing tastes can make sensations out of servers,” and I tend to agree. Social networks rely on the premise of effective communication with friends and family; if your friends are not on a social networking site, there is minimal reason for you to be on it either. Myspace fell victim to a simple cascade. Mui insists that Facebook was able to catch and surpass Myspace because it provided a much better platform to “interact with friends,” whereas Myspace focused on “serving eyeballs and advertisers.” Facebook stayed fixed on the goal of attracting long-lasting users to its network, while Myspace was more inclined to chase profit despite its declining user base, which in turn helped Facebook climb to its current greatness.
Networks such as social media are always prone to cascading behavior. Cascading behavior suggests that as more people use a product whose value increases with its user base, it is inevitable that even more people will join as the value of the product rises. As Facebook began to gain users, those users’ friends saw the increased value of choosing Facebook over Myspace, and as more and more people switched to Facebook, the cascade continued to perpetuate itself, all the while Myspace continued to lose traction with its users. Although network cascades are a simple theory to follow, as the case of Facebook and Myspace shows, they can lead to very big changes.

Sources
http://www.forbes.com/sites/chunkamui/2011/01/12/why-facebook-beat-myspace-and-why-myspaces-revised-strategy-will-probably-fail/
http://expandedramblings.com/index.php/by-the-numbers-17-amazing-facebook-stats/
Spam-Resilient SourceRank
The dominant PageRank system brings with it certain weaknesses. The primary weakness of PageRank is that, as explored in class, it is highly susceptible to spamming. For example, when promoting a single legitimate site, spammers may create extra dummy sites that pool their resources and manipulate their mutual authority, and therefore hub rank, to place the legitimate site at an artificially high rank. To counter this, the authors of this paper developed a new ranking system called Spam-Resilient SourceRank. The major divergences from PageRank are a “hierarchical source view of the Web,” a “source-based influence flow,” and influence throttling.
The hierarchical view organizes pages into groups called sources. After pages are transformed into these new sources, the sources are linked by directed edges, just as pages are in PageRank. This helps catch duplicate sites made by one developer, since they will be grouped within a single source. The influence flow modifies edge strength within the graph via a source consensus edge, weighted by the quantity and distribution of unique pages within a source. This helps prevent hijacking, the embedding of fake links into honest pages, since such links have less effect on the consensus edge. Finally, influence throttling prevents spammers from creating multiple sources the way they create multiple pages under PageRank. It does so by including self-edges that effectively reduce the weight output by the source. The throttling vector, varying from 0 to 1, determines how strongly each source is throttled, and depends on variables such as the size of the dataset, link density, and spam proximity. Compared to PageRank, the system reduced spam-driven rank inflation from 80 percent to just 4 percent.
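To make the source-level idea concrete, here is a toy illustration, not the paper's actual algorithm: all page names, links, and the damping setup are invented. Collapsing each developer's pages into a single source and dropping links internal to a source means a colluding page pool can no longer inflate its own rank, since its internal triangle of links simply disappears:

```python
# Which developer owns each page, and the raw page-level links.
page_owner = {"p1": "honest", "p2": "honest", "s1": "spam", "s2": "spam", "s3": "spam"}
page_links = [("p1", "p2"), ("s1", "s2"), ("s2", "s3"), ("s3", "s1"), ("s1", "p1")]

# Build source-level edges, dropping links inside a single source.
edges = {(page_owner[a], page_owner[b]) for a, b in page_links
         if page_owner[a] != page_owner[b]}

nodes = sorted(set(page_owner.values()))
rank = {n: 1 / len(nodes) for n in nodes}
d = 0.85  # standard PageRank damping factor
out = {n: [b for a, b in edges if a == n] for n in nodes}
for _ in range(50):  # plain power iteration on the source graph
    rank = {
        n: (1 - d) / len(nodes)
           + d * sum(rank[a] / len(out[a]) for a in nodes if n in out[a])
        for n in nodes
    }
print(rank)  # the spam pool's internal link triangle contributes nothing
```

The three spam pages' mutual links vanish when collapsed to one source, so only the single cross-source edge (s1 to p1) carries any weight, which is the intuition behind grouping pages into sources.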
This directly relates to and builds on the search engine material presented in class. When exploring the PageRank system in homework problems, we were asked to act, to some extent, as spammers and manipulate a small set of pages in our favor. Under SourceRank, the group of colluding sites used in that homework solution would be represented as a single source, effectively canceling the benefit of having a page pool. This is a simple demonstration of the continuing struggle between offensive web advancement (spamming) and defensive countermeasure development.
Dangerous Curves
Dangerous Curves by Zack Budryk: https://www.insidehighered.com/news/2013/02/12/students-boycott-final-challenge-professors-grading-policy-and-get
Zack Budryk’s article Dangerous Curves, appearing in the online journal Inside Higher Ed, is the extraordinary story of how Johns Hopkins University students unanimously boycotted their final exam in order to all get A’s. The grading curve at Johns Hopkins works by giving the highest score on any final exam an A and then adjusting all lower scores accordingly. Students realized that if they collectively refused to enter the exam room, the highest score would be a zero by default, and thus everyone would be entitled to an A. Surprisingly, the students pulled it off. Their ability to exploit this loophole in the grading curve is a prime example of game theory, specifically of a Nash equilibrium.
Andre Kelly, one of the student organizers of the boycott explained, “if you were able to walk into the exam with 100 percent confidence of answering every question correctly, then your pay-off would be the same for either decision. Just consider the impact on your other exam performances if you studied for [the final] at the level required to guarantee yourself 100. Otherwise, it’s best to work with your colleagues to ensure a 100 for all and a very pleasant start to the holidays.”
This protest is a game-theoretic outcome exemplified by two Nash equilibria. A Nash equilibrium is a list of strategies, one for each player, such that each player’s strategy is a “best response” to the strategies of the other players. In this example, both equilibria depend on what the students believe their peers will do: (1) if every student believes that everyone will boycott with 100% certainty, then everyone should go through with the boycott; and (2) if anyone presumes that at least one peer will break the boycott, then everyone’s best response is to take the exam. Here a rare, stable outcome occurred in which no student had an incentive to change his or her strategy after considering the strategies of the other “players.” One limitation of the Nash equilibrium concept is that it does not tell us which equilibrium the students are likely to play, and in practice the first equilibrium is highly unlikely.
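The two equilibria can be verified mechanically in a toy two-player version of the boycott game. The payoff numbers below are invented for illustration, but they capture the structure described above: both boycott and both get an A; boycott alone while the other sits the exam and you fail against their curve; both take it and get ordinary grades.

```python
# Payoffs (player 1, player 2) for each strategy profile; values invented.
payoff = {
    ("boycott", "boycott"): (4, 4),
    ("boycott", "take"):    (0, 3),
    ("take",    "boycott"): (3, 0),
    ("take",    "take"):    (3, 3),
}
moves = ["boycott", "take"]

def is_nash(profile):
    """A profile is a Nash equilibrium if no player gains by deviating alone."""
    for i in (0, 1):
        for alt in moves:
            dev = list(profile)
            dev[i] = alt
            if payoff[tuple(dev)][i] > payoff[profile][i]:
                return False
    return True

equilibria = [(a, b) for a in moves for b in moves if is_nash((a, b))]
print(equilibria)  # [('boycott', 'boycott'), ('take', 'take')]
```

Both pure equilibria fall out of the best-response check, and nothing in the game itself says which one will be played, which is exactly the limitation noted above.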
If someone caves under equilibrium #1 (in which no one takes the test) and takes the test despite knowing the others have agreed to refrain, then that equilibrium collapses and everyone ends up having to take the test after all; the situation devolves into equilibrium #2 (in which everyone takes the test and gets the grade they deserve). The irony, at the end of the day, is not the students’ cooperation in achieving a rare equilibrium; it is their professor’s decision to honor the original grading system and award everyone an A.
Using the Information Cascade Principle on YouTube Videos
As part of a generation defined by the number of votes you get on social networking sites, be it ‘likes’ on Facebook, ‘upvotes’ on YouTube, ‘retweets’ on Twitter, or ‘followers’ on Instagram, I have always wondered how certain celebrities manage to rack up millions upon millions of followers and hits, even though they contribute very little, if anything, to the development of the entertainment community or society at large. In this blog post, I discuss how celebrities take advantage of information cascades to garner these millions of seemingly popular votes, focusing primarily on YouTube videos.
A few months ago, Korean superstar Psy’s music video ‘Gangnam Style’ took the world by storm. In a little more than two months it had about 540 million views. On the day this blog was written, viewership stood at a staggering 2.1 billion views. Similarly, Jennifer Lopez’s ‘On the Floor’ has about 790 million views. A more recent success, Meghan Trainor’s ‘All About That Bass,’ has significantly fewer hits, at 299 million. So how exactly do these videos get so popular? One of the answers is something we learned in INFO 2040 this semester: the concept of information cascades.
Artists like Psy and Trainor have access to vast resources. They use advertising, analyze market trends, and then develop very simple videos on often very controversial topics to generate the bulk of these views and hits. Information cascades are the principle on which this operates. Artists release videos at times when they are likely to generate the greatest public reaction, which is indeed why most movies and videos are released on Thursdays and Fridays. The first, very active, 10,000–100,000 or so viewers are easy to attract because the artists reach out to the already large following they have on YouTube. Because most of these subscribers are passionate fans, the first set of upvotes is easy to come by. The surge in upvotes soon after is explained by the information cascade: later viewers often watch only 60–70% of these music videos, but seeing the base of likes the video already has, they like it as well. This tends to continue for about 2–3 weeks, during which these videos receive a staggering number of likes very quickly. Trainor’s new song is currently at that nascent stage. The number of likes these videos get is proportional to the number of views, and in the 2–3 weeks the cascade lasts, the number of views grows exponentially. After this period, the number of new viewers falls, leading to a stagnation in the number of likes (as shown in the graphs). The YouTube market then shifts to the next pop star and the next music video. The images below demonstrate this: the information cascade has already happened for the older videos and is taking place right now for the newest one.


Resources:
http://stats.stackexchange.com/questions/41286/model-for-predicting-number-of-youtube-views-of-gangnam-style
http://www.reelseo.com/youtube-statistics-competition/
Apple – First Trillion Dollar Company?
Wall Street has never seen a company break the trillion-dollar mark, but Apple seems on track to become the first to do so. Shares of the company have reached an all-time high of $117.57, and the company is currently capitalized at $680 billion (greater than the combined capitalizations of Microsoft, Amazon, Netflix, and Twitter). Many trends, now and in the past, have influenced the company’s stock: the Russian Revolution, the tech bubble of the ’90s, Nasdaq’s struggle to recover from the 2000 crash, Apple’s lack of favor with institutional investors, and capital expansion, to name a few. The company also faces the skeptics, bear arguments, and critics that appear whenever a milestone approaches, especially one as large as a trillion-dollar capitalization.
This is an example of the network effects covered in class. At the advent of the company, the dominant force in the tech world was Microsoft; indeed, Apple had essentially been bailed out by Microsoft during the ’90s. At this stage, the tipping-point equilibrium z was low, and consumers were uncertain about Apple’s future success. However, as more and more consumers began to use Apple’s products, the tipping point was surpassed, and Apple became more and more successful over the years. Now, at the current level, it is likely that Apple’s equilibrium will continue to rise, given the upward pressure applied by the market. As mentioned before, Apple lacks favor with institutional investors; if those institutions invested more heavily in Apple, the company’s stock would rise even more. At this point, at least five analysts have updated their price targets for Apple; if these targets keep increasing, market pressure and network effects could push Apple toward becoming the first trillion-dollar company.
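The tipping-point dynamic can be sketched numerically. In the toy Python model below (the adoption function g is invented purely for illustration), z is the fraction of consumers who have adopted; g(z) is the fraction who find the product worth adopting next period, and it rises steeply once enough people are already on board, the signature of a network effect:

```python
# g has fixed points at z = 0, 0.5, and 1; the middle one, z = 0.5,
# is the unstable tipping point of this invented adoption curve.
def g(z):
    return 3 * z**2 / (1 + 2 * z**2)

def settle(z, steps=200):
    """Iterate the adoption dynamic until it settles at an equilibrium."""
    for _ in range(steps):
        z = g(z)
    return round(z, 3)

print(settle(0.49))  # 0.0 — just below the tipping point, adoption dies out
print(settle(0.51))  # 1.0 — just above it, adoption cascades to everyone
```

The two runs differ by a sliver of initial adoption, yet land at opposite equilibria, which mirrors the post's narrative: once Apple's user base crossed the tipping point, the same market forces that once held it down began pushing it up.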
http://www.forbes.com/sites/markrogowsky/2014/11/23/trillion-dollar-baby-can-apple-go-where-no-stock-has-gone-before