Just as there are many different forms of democracy, there are many voting
systems. In some states, you vote for a party instead of voting for individual
candidates. In others, such as Australia, voters produce a rank-ordered list
of candidates according to their preferences. All systems, however, are designed
to solve the same problem: converting aggregated voter preferences into a single collective outcome.
A common problem with voting systems is that they discourage many voters from
indicating their true preferences. For example, in the United States, those with
political philosophies outside the two-party system often express a fear of
‘throwing your vote away’ by voting for a party that matches their preferences,
instead choosing Republican or Democrat depending on which they see as
the ‘lesser of two evils’. They do this because they see little chance of their
more marginal preference winning the winner-take-all election, and opt to
influence the outcome between the front-runners instead.
What, then, is the fairest voting system? This conversation can be informed by
a discussion of Arrow’s impossibility theorem. This theorem states that no
rank-order voting system can exist for which the following statements are all true:
If every voter prefers A to B, then A is preferred to B by the electorate.
If every voter’s relative ranking of A and B is unchanged, then
the electorate’s relative ranking of A and B is unchanged.
There is no ‘dictator’, i.e., a single voter who can dictate the outcome.
I won’t give a proof here, but if you are interested, one can be found at
Arrow won the Nobel prize in economics for his work. However, some scholars
suggest that it should have gone to Gibbard and Satterthwaite for their theorem:
for every deterministic voting system that has voters submit an ordered list
of candidates, one of the following must hold:
The system is dictatorial.
There is some candidate who can never win.
The rule is susceptible to tactical voting.
So the problem facing designers of voting systems is one of balancing
the first two Arrow properties. However, Arrow’s impossibility theorem does not apply
in all cases. It applies to ordinal voting systems: those which
require voters to submit a rank-ordered list of candidates. Cardinal voting
systems, on the other hand, have voters give each candidate a grade
independent of the grades given to the other candidates. The classic example of a
cardinal voting system is range voting: each voter rates every candidate, the
ratings are summed, and the candidate with the highest total rating is the winner.
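A minimal sketch of the range-voting tally (the ballots and candidate names below are made up for illustration):

```python
# Range voting: each voter scores every candidate independently;
# the candidate with the highest total score wins.

def range_voting_winner(ballots):
    """ballots: list of dicts mapping candidate -> score."""
    totals = {}
    for ballot in ballots:
        for candidate, score in ballot.items():
            totals[candidate] = totals.get(candidate, 0) + score
    return max(totals, key=totals.get), totals

# Hypothetical ballots: scores are independent, not a ranking.
ballots = [
    {"A": 9, "B": 4, "C": 7},
    {"A": 2, "B": 8, "C": 6},
    {"A": 5, "B": 5, "C": 9},
]
winner, totals = range_voting_winner(ballots)
print(winner, totals)  # C wins with a total of 22
```

Note that because each grade is independent, giving a third-party candidate a high score never forces a voter to withhold support from a front-runner.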
This system has some very nice properties, which are expounded upon
at (4). It seems from this survey that range voting is a good way
to avoid the irrationality inherent in ordinal voting systems.
Does graduating from one of the top 10 ranked universities lead to a higher salary, more prestigious companies, and an altogether better future? Ideally, no, it shouldn’t in the “land of opportunity.” In the area of entrepreneurship, such a degree supposedly has no effect. However, while this is the ideal, it is not actually the case. While the person who starts a business (for example, Mark Zuckerberg and his creation of Facebook) may not have an actual degree, the people that person works with are likely to be of an intelligence level equivalent to the founder’s: “smart people recognize smart people” (Sangster). Mark Zuckerberg’s co-founders were his roommates at Harvard. As another example, the entrepreneur who co-founded LinkedIn was a graduate of Stanford University. As he now coaches and mentors, he is more likely to choose a student from Stanford to mentor than a student from any other university, regardless of actual intelligence, just because of the connection a person has to his alma mater. Multiple distinguished companies now utilize a program called MindSumo. The point of this program is to provide challenges in which students participate; completing these challenges lets students prove that they could potentially be of value to a company. Though this seems as though it would give everyone an equal chance, not all universities are included. While there are several state schools involved, it mainly consists of the more esteemed colleges.
For many students, getting into an Ivy League school is the goal that they, as well as their parents, push towards. Why? Because the “brand name” of these schools is something that has been around for decades, and it gives the students who attend an extra boost toward a high-salary job. While this contradicts the idea of the “American dream,” a place where everyone has equal opportunity, the idea of entrepreneurship itself is one that completely encompasses it. However, people who start their own businesses are more likely to use the reputation of these brand-name schools, as well as the connections that branch from them, to choose their future employees. As the article states, “it isn’t so much the classes that students take, but rather the network of people that surround them and the doors that are opened because of a college’s reputation.” This relates to chapter 22, where cars that come with insurance sell better. In that example, the cars with insurance are sold at a higher price (the buyer’s price for a good car) because the insurance is a certificate of good quality, and the buyer knows that he is getting a good product. A similar theme can be seen in obtaining a degree from a prestigious college. These universities have a reputation that is well known, so employers are more tempted to choose a student with a “special degree” rather than a state-school degree. It comforts the employer to know that he is hiring someone with something similar to a “certificate of good quality.” Also, these good colleges are given more of an advantage because of the general concept of networks. The people who are creating new technologies usually come from “top-tier” schools. They will usually have a good image of their alma mater, thereby creating a positive tie. So anyone who hails from the same school will be given an advantage because of the connection that is made.
This is why LinkedIn’s Konstantin Guericke is more likely to coach a student from Stanford. So, does a top-tier university degree help a student’s future? Though many people insist that connections and degrees don’t matter, they do. They give employers more incentive and reason to look closely at those applications and, in the end, to hire those applicants.
Cascades are one of the most observable concepts we have covered this semester. Why is Facebook so stable? Why did Google+ fail? The answer is cascading effects in networks. I have over 500 friends on Facebook. If Facebook were a country, it would be the third largest in the world. Basically, the likelihood that the majority of people you know have a Facebook account is very high. How did this happen? How did Facebook get to be so big? A small group of people started using it, and it spread through friend groups: once a portion of a person’s friend group began to use Facebook, the whole group followed, and so on. Google+ attempted to recreate this phenomenon, but the problem was that people had no reason to leave Facebook. There is a large “buy-in” price for Google+: you need to get an account and convince all of your friends to join as well. The larger the network, the more powerful, and the more valuable it is. In a sense, Facebook can be thought of as “too big to fail” and too big to be re-done.
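The friend-group dynamic above can be sketched with a simple threshold model in the style covered in the course (the graph, seed adopters, and threshold below are made up, not real adoption data): each person adopts once at least a fraction q of their friends have adopted.

```python
# Threshold cascade: a node adopts when the fraction of its adopting
# neighbors reaches q; repeat until no one else switches.

def run_cascade(neighbors, seeds, q):
    adopted = set(seeds)
    changed = True
    while changed:
        changed = False
        for node, friends in neighbors.items():
            if node not in adopted and friends:
                frac = sum(f in adopted for f in friends) / len(friends)
                if frac >= q:
                    adopted.add(node)
                    changed = True
    return adopted

# A small hypothetical friendship graph: two clusters joined by node 4.
neighbors = {
    1: [2, 3], 2: [1, 3], 3: [1, 2, 4],
    4: [3, 5], 5: [4, 6, 7], 6: [5, 7], 7: [5, 6],
}
print(run_cascade(neighbors, seeds={1, 2}, q=0.5))  # {1, 2, 3, 4}
```

With seeds 1 and 2 the cascade sweeps the first cluster but stalls at node 5, whose tightly knit friends haven't adopted, which is exactly the "buy-in" barrier a rival like Google+ faces against an entrenched network.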
In 2010, the Indian government unexpectedly, but proudly, set a record by raising 677 billion rupees (approximately U.S. $15 billion) from auctioning 3G mobile licenses. Two years later, the same government sought to repeat what it had achieved in the previous airwave auction, with high hopes; however, it failed miserably, to its dismay. Prime Minister Manmohan Singh’s government initially targeted 400 billion rupees ($7.3 billion) from the 2G airwave license auction, but ended up raising only 94 billion rupees. The government was surely overconfident going into the auction, which turned out to be a failure on its part. But why did this happen? Those who have taken Econ 2400 will have heard of the term ‘reserve price’ alongside the different kinds of auctions. According to the lecture, a seller who values the item at u > 0 should announce a reserve price r set above that value (r > u) in order to guarantee the seller’s value on the item. But what if the seller’s reserve is far above what any bidder is willing to pay? Theoretically, no one will bid for the item. This is what happened in the 2G airwave auction held by the Indian government a week ago. The starting price, or reserve price, set on a 2G license was 140 billion rupees, four times the reserve price (35 billion rupees) set in the 3G auction two years earlier. And the result? The government clearly failed to meet its goal.
According to the Cellular Operators Association of India (COAI), a body representing GSM operators, the auction outcome indicated that “an artificially high reserve price that bore no congruence to market realities was the key reason for the failure.” COAI also argued that “the high reserve price would ensure that there would be limited players coming into the market to bid,” and had indicated that “there would be extremely muted bidding with several circles that would have no bidders at all.” After all, the bidders in the auction were operators that had been offering services before losing their licenses, and were therefore “compelled to participate despite the high prices and the limited availability, simply in order to sustain their customers, businesses and to protect their years of investments.” According to the Bloomberg article below, “India may be left with as few as five operators after the latest round of spectrum auctions,” offering services to 906 million domestic users. Such a reduced number of operators will naturally decrease the level of competition, which could lead to a possible increase in prices and call rates in the near future.
In order to solve the problem, companies and analysts have proposed lowering the reserve price, setting 35 billion rupees as the starting price for the 2G auction, which they argue is reasonable and will encourage bidders to bid.
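A quick sketch of why the reserve price matters so much (the bid values are invented for illustration, and a sealed-bid second-price auction with a reserve is assumed, which may differ from the exact format India used):

```python
# Second-price auction with a reserve: the winner pays the larger of the
# reserve and the second-highest bid; if no bid meets the reserve, the
# item goes unsold and the seller raises nothing.

def auction_revenue(bids, reserve):
    bids = sorted(bids, reverse=True)
    if not bids or bids[0] < reserve:
        return 0  # no sale: reserve exceeds every bidder's value
    second = bids[1] if len(bids) > 1 else reserve
    return max(reserve, second)

# Hypothetical operator valuations, in billions of rupees.
bidder_values = [40, 55, 70, 90]
print(auction_revenue(bidder_values, reserve=35))   # 70: competition sets the price
print(auction_revenue(bidder_values, reserve=140))  # 0: reserve above every valuation
```

A moderate reserve lets competition between bidders determine revenue; a reserve set above every bidder's valuation guarantees zero, which mirrors the circles in the 2G auction that attracted no bidders at all.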
One unusual application for social networks? Food.
That’s right — Lada Adamic, a computer scientist at the University of Michigan and at Facebook, has been working on network analysis of recipes, ingredients, cooking methods, and nutritional profiles. Her algorithm accomplishes something remarkable: trained on over 50,000 recipes and 2 million reviews, it predicts with 80% accuracy the number of stars a recipe will receive on allrecipes.com. Using this information, she was able to construct a map of ingredients based on how often pairs of ingredients appeared together in a recipe.
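A toy version of that co-occurrence mapping (the recipes below are made up, not Adamic's dataset): count how often each pair of ingredients appears in the same recipe, and treat frequent pairs as strong edges in the ingredient network.

```python
# Build ingredient co-occurrence counts from a handful of toy recipes.
from collections import Counter
from itertools import combinations

recipes = [
    {"flour", "egg", "milk"},
    {"flour", "egg", "sugar"},
    {"tomato", "basil", "garlic"},
    {"flour", "milk", "sugar"},
]

cooccurrence = Counter()
for recipe in recipes:
    # Sorting makes each pair a canonical (a, b) tuple with a < b.
    for a, b in combinations(sorted(recipe), 2):
        cooccurrence[(a, b)] += 1

# The most frequent pairs are the strongest edges in the ingredient network.
print(cooccurrence.most_common(3))
```

Scaled up to tens of thousands of recipes, edge weights like these are what let closely coupled ingredients (milk and eggs) cluster together, and what make substitute detection possible.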
The Food Network seems like a pretty aptly named channel now, doesn’t it? Adamic’s food network doesn’t just show us closely coupled ingredients, like milk and eggs, which in and of itself isn’t all that interesting; it can be used to predict which recipes will be successful, and it even provides information about which food items can serve as substitutes for each other. This fascinating application of networks as predictive technologies really illustrates how anyone can learn to cook by appreciating the flexibility recipes can have. Novice cooks often fail because they don’t understand how ingredients are related to each other or are unfamiliar with what is proven to taste good. Or they will find similar recipes on allrecipes.com and be hesitant to choose one over another. With a food network at the ready, they can get the hang of cooking techniques much faster and feel more comfortable experimenting.
Harvard physicists went a step further and analysed ingredient flavor profiles, which tell us a lot about prevalence, food categories, and chemical compounds within ingredients. The flavor profile network examines how different cuisines choose different categorical pairings, which are seen as strongly-connected large components in the graph. The interactions between these components provide instructions as to what works well together, and what doesn’t (or what hasn’t been tested).
Food networks are a more natural way to imagine the way we cook and eat, and they demonstrate that network theory can be used to predict certain patterns across recipes and cuisines. The secret to better pizza is out!
– cuckoo for coco puffs
Collusion is a fascinating tool created by Atul Varma that keeps track of how the websites you visit send information about you to other websites. It is an add-on for the popular Mozilla Firefox browser; once started, it gradually builds an interactive web as you surf the internet, adding and connecting nodes whenever a website you visit sends information to a third party or another website. Usually the data sent is used for advertising purposes; however, because internet users are given no indication of when and where their data is being sent (which is potentially illegal), there is a persistent fear that such routes may be exploited for scamming users and for illegal acts of digital terrorism.
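The web Collusion draws can be sketched as a simple graph (the site and tracker names below are hypothetical, not output from the actual add-on): visited sites and third parties are nodes, and an edge means a site sent data to that third party.

```python
# Build the visited-site / tracker graph as an adjacency structure.
from collections import defaultdict

# (visited site, third party it reported to) -- a hypothetical log
observations = [
    ("news.example", "adnet.example"),
    ("news.example", "metrics.example"),
    ("shop.example", "adnet.example"),
    ("blog.example", "adnet.example"),
]

edges = defaultdict(set)
for site, tracker in observations:
    edges[site].add(tracker)
    edges[tracker].add(site)

# A tracker linked to many of the sites you visit can stitch your
# browsing history together across those sites.
print(len(edges["adnet.example"]))  # 3: adnet sees all three visited sites
```

High-degree tracker nodes like this are exactly the dense hubs that make a real Collusion diagram balloon after an hour of browsing.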
Behavioral targeting and tracking is a proliferating, and now multi-billion-dollar, industry whose boom followed the start of widespread internet advertising. The working philosophy behind gathering all this data is that advertisements catering to a user’s interests are more likely to be clicked on and followed through than randomly selected advertisements, leading to potentially increased revenues and page visits. This is related to the market for advertising tied to search behavior that we discussed in Networks (Chapter 15.1). The methodologies differ, though, and the clear disadvantage of the advertising networks we discussed is that a website can only track your information while you browse within its own webpages, leading to possibly inaccurate assumptions about user preferences and interests.
Rather than relying on just the results of search engine queries and the resulting links clicked, third parties that behaviorally track your internet usage keep a running record of all the websites you’ve visited in the past, as well as how long you stayed on particular pages and any links you followed on those pages. This presents an even greater targeting opportunity than normal advertising networks offer. While this serves to increase the chance that advertisements will be paid attention to, the vast majority of this information aggregation is done without the knowledge of the user at hand. I was surprised to find that after an hour of my own internet surfing, my Collusion diagram had become an extremely dense network of hundreds of lines and nodes, among which only a dozen or so nodes were websites I actually visited. I was also surprised to find just how many of these trackers were somehow connected to all the websites I visit on a daily basis.
The fears and privacy concerns discussed earlier become relevant when people realize just how many of the websites they visit are linked to parties that want their information. For the most part, users are protective of their privacy, and advertisers and publishers assure users that they do not store any personal information and data. The scare is that most times, the sheer quantity of “anonymous” data accumulated on your browsing habits is enough to identify who is who (for instance, this example from a few years back).
However true or close at hand these fears may be, for now this tool is an excellent way for the typical internet user to monitor how the websites they visit use their browsing information. The add-on graphically depicts websites and third-party organizations as they are sent information on how you browse the internet, giving users the opportunity to selectively block unknown or untrustworthy trackers (using other add-ons or extensions such as TrackerBlock).
We encounter probability every day, just as we confront our own memory. How was my lunch? What was the person sitting next to me wearing? How true is my “vivid” memory? One of TED’s invited speakers gave a talk centered on two events in which people’s “reconstructed memory” did not reflect the truth. In both cases, when teenagers testified that someone was guilty without sufficient evidence but with plenty of confidence, and when people logically assumed that the second tower fell soon after the first on 9/11, the brain, which “abhors a vacuum,” filled the holes in memory with assumptions and post-event information, and made us believe in our memory.
This chance of recalling false information is why Bayes’ theorem is so important. Though its applications go far beyond memory, the theorem describes the relationship between the prior probabilities of events A and B, the posterior probability of A given B, and the posterior probability of B given A, when the universe can be divided into two separate parts. The specific example in the book, about an eyewitness testifying to the color of a cab in a hit-and-run accident, shows through calculation that once the witness provides testimony, the actual color of the cab still has an equal (50:50) chance of being either yellow or black, assuming no outside influence during the testimony. This should make the court cautious about eyewitness testimony. It also connects to the characteristics of information cascades: 1) they can easily occur; 2) they can lead to non-optimal outcomes; 3) they can be fundamentally very fragile. An information cascade of wrong testimony can begin easily after the first two witnesses if the witnesses are asked sequentially. Even under separate interrogation, people who have already talked to others about the event at the scene can be influenced, and thus provide similar information, as if they were the second person in the cascade. Bayes’ theorem, which calculates the probability that a given statement is correct, thus becomes really important for showing judges whether a witness’s testimony should be considered primary evidence in court. However, since any probability is at most 1, the fallibility of eyewitnesses should be taken into account even when the calculation gives the reported statement more than 50% credibility.
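The cab calculation can be checked directly. The numbers below (80% of cabs black, 20% yellow, and a witness who is right 80% of the time) are the standard ones for this style of example and are assumed here because they reproduce the book's 50:50 conclusion:

```python
# Bayes' theorem applied to the hit-and-run eyewitness example.
p_yellow = 0.2                   # prior: fraction of cabs that are yellow
p_black = 0.8                    # prior: fraction of cabs that are black
p_say_yellow_given_yellow = 0.8  # witness accuracy
p_say_yellow_given_black = 0.2   # witness error rate

# Total probability the witness says "yellow".
p_say_yellow = (p_say_yellow_given_yellow * p_yellow
                + p_say_yellow_given_black * p_black)

# Posterior: probability the cab really was yellow, given the testimony.
posterior = p_say_yellow_given_yellow * p_yellow / p_say_yellow
print(posterior)  # 0.5 -- the cab is equally likely to be either color
```

The witness's 80% accuracy is exactly cancelled by the rarity of yellow cabs, which is why the testimony alone leaves the court at 50:50.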
Scott Fraser: Why eyewitnesses get it wrong URL: http://www.ted.com/talks/scott_fraser_the_problem_with_eyewitness_testimony.html
This past September the Cornell Forum on Massively Open Online Courses (MOOCs) hosted Daphne Koller and Anant Agarwal for remote video presentations about their respective organizations. Coursera, launched by Daphne Koller and Andrew Ng, is a for-profit consortium of over 30 institutions of higher learning with the goal of offering the world’s best courses online for free. EdX, headed by Anant Agarwal, is a not-for-profit that aims to reach students of all backgrounds and to research how students learn. In her TED Talk, Daphne Koller explains her motivation behind Coursera with a quote from Thomas Friedman: “Big breakthroughs happen when what is suddenly possible meets what is desperately necessary.” The surging cost of tuition and the overcrowding that already plagues schools need a solution, and Massively Open Online Courses may be just that. Both Coursera and edX are indeed MOOC providers that have served hundreds of thousands of people, but they differ in a significant way: Coursera is for-profit, while edX is not-for-profit.
This difference in approach has already led to some polarity among prospective member institutions. According to the Chancellor of UC Berkeley, Robert J. Birgeneau: “Ultimately, our faculty will decide where they want to put courses up online, but we find that edX has values and methodologies very closely aligned with ours at Berkeley, so our institutional preference would be to use edX.” This polarity is well founded, because there has been much debate about whether MOOCs are worth the investment in infrastructure. Some have warned that, especially for a not-for-profit MOOC, the costs will draw resources away from the main institution and lead it down an unsustainable path. Coursera has a direct plan for how to make a profit for its member institutions: charging for certificates. This has led some to argue that Coursera is no different from, for example, the University of Phoenix, which also offers courses online with discussion boards and videos. Coursera and the University of Phoenix are still different, though, because all Coursera courses are available without tuition.
Let’s dive into the details about what a typical Coursera course consists of. Lectures are available as videos, and quizzes prompt the student to reason about a question to gauge understanding. These quizzes are graded with an automated system, as long as they consist of numerical or otherwise objective answers. Peer grading and self grading is utilized for non objective answers. There is a Question and Answer forum where students can answer each other’s questions at any hour. Coursera also encourages local study groups, allowing students to have a connection with their work that transcends the computer screen.
Daphne Koller noted that peer grading and self grading correlate surprisingly well with official grading, as long as students are incentivised correctly. In the case of peer grading, this shows the wisdom of the crowd. In the case of self grading, this exhibits a game where the players are the students; the instructors should make it so that reporting one’s true grade is the best strategy.
Whereas peer and self grading may be an experience unique to Coursera or alternative curricula, every student taking INFO 2040 has experienced the 24-hour Question and Answer forum through Piazza. Piazza allows students to post questions, and the instructor as well as other students to answer them. Instructors can then endorse students’ answers and mark questions as good. This is interesting from a network perspective because Piazza makes a distinction between the authority of students and the authority of teachers. This is a very logical thing for a Question and Answer forum to have, because if some student starts broadcasting false information, the instructors should be able to tell other students to steer clear of the bad responses. On the other hand, there are examples of less hierarchical environments working very well.
In his TED Talk, Sugata Mitra shows how kids can teach themselves how to use a desktop computer without any supervision. It is part of his goal of “minimally invasive education”, where students are free to explore ideas on their own in an unsupervised environment. With this kind of environment, one might expect the more kids teaching each other the better, but this might not always be the case.
One of the challenges that education faces today is the “two sigma problem,” which Daphne Koller describes as the finding that students who receive personalized, one-on-one instruction do better than students taught in a traditional lecture by two standard deviations. With MOOCs, we see that technology connects the teacher with all the students in a way that wouldn’t be possible otherwise. This online technology scales to any number of students, and each student gets a personalized way to learn the material based on their interest and aptitude. By allowing students to engage in active learning, MOOCs can increase student achievement by moving toward fully personalized instruction.
Finally, one might ask: why has there been so much publicity about MOOCs recently, when similar online learning content has been around for a while? Ultimately, it is the social elements of online learning, such as real-time Question and Answer forums alongside the videos. These haven’t been technically feasible until a few years ago, and these developments have made all the difference in bringing online education to a tipping point where the more teachers and students who participate, the better it gets for everyone involved.
Google surprised the world (once again) when it announced in 2010 that it planned to provide its own internet service that would be 1000 times faster than the basic broadband connections being offered by major ISPs like AT&T and Comcast. Called Google Fiber, the service was to provide a gigabit internet connection to home users, a speed normally reserved for businesses and universities. There was an application process for municipalities to apply to be the first city the fiber would be deployed in as an initial test. Over 1100 communities applied for this opportunity, delaying the announcement of the final selection, but in 2011 Google announced that Kansas City, Kansas would be the first community to receive the service. The first home gigabit connections in Kansas City were installed and used just this month.
The technology behind Google Fiber is impressive, and leads many to wonder: how/why can Google afford to give people internet that is 1000 times faster for a similar cost to what they were paying for before? Later in this post, I will discuss some of the motivations Google might have to provide this service and how their pricing scheme for it shows it is entering a market with strong network effects.
A little more history: Google has been buying up so-called ‘dark fiber’ since as early as 2005. Dark fiber is fiber-optic cabling that has been installed but is not actively being used (no data is transferred over it, so it is ‘dark’). During the dot-com and telecom booms of the late 1990s, many telecommunications companies built out extensive networks of fiber, expecting the market to keep growing at an exponential rate. With the burst of the dot-com bubble, and with new technology that increased the amount of bandwidth that could be carried on a single wire, the market for fiber collapsed (and some large companies like Global Crossing and WorldCom declared bankruptcy). Much of the fiber lay dormant for years before Google started buying it in large quantities. Until Google Fiber was announced in 2010, the public was unaware of what Google planned to do with such massive amounts of high-speed infrastructure (with a company like Google, there were a lot of possibilities and speculations).
Some people question whether Google plans to make Google Fiber a loss leader until it can gain a greater market share. After all, Google plans to offer gigabit internet for only $70 a month. Even more surprising, it plans to offer a basic internet service, as fast as current broadband offerings from other providers, for free. There is still a $300 installation fee, but Google says those speeds are guaranteed for at least 7 years!
This announcement has led many to question whether Google Fiber is profitable, which the company claims it is. The low cost of entry, and especially the basically free entry-level service, strongly imply that Google wants to take a significant portion of the market and is willing to lose money in the beginning to do so. I believe they are doing this, and there are multiple reasons, which I will discuss.
One main reason is that the super-fast internet experience is even better when other people in your social network have it as well. Two Google Fiber users can video chat in high definition and share files extremely fast. An internet connection is also only as fast as its slowest side: many websites today don’t serve content at speeds that Google Fiber users can now handle. Google wants a large enough market of people using the fiber service that web application and content providers will want to cater to people on gigabit connections. This will make the service even more valuable to the people using it.
The other reason is the huge infrastructure cost of being an internet service provider. It is an industry that historically has very few new players. Google could only attempt to take on this market because it had an enormous amount of capital to invest to make it possible. They were able to buy up much of the dark fiber in the country years before they announced the service. Now that they are starting to offer it, they need a large amount of people to use their infrastructure for it to ever be profitable. A gradual growth that would come with pricing it at a level that was profitable in 2012 would probably prevent them from having as large of a market share half a decade from now.
There is also a locality to the infrastructure costs. Google actually split Kansas City into a group of what they called ‘Fiberhoods:’ neighborhoods in the city with Google Fiber. They are rolling the service out to a few square miles at a time. This also helped spur community involvement and excitement in the service, as Google had people pre-register for the service and prioritized which Fiberhoods they lit up first based on how many households signed up for the service in that area. Bringing a fiber connection to a few homes is costly; they want to have many houses share the same cabling so that the installation is more cost effective.
There are probably more motivations, however. One has to ask: why would Google want to get into the internet business in the first place? There is an opportunity to profit off the monthly fees alone, but I think the main motivation for Google is to make the Fiber service part of the larger picture of their company. They want to provide a great web experience to their users and increase the amount that their users use Google products, so that they can show them more ads and build a larger and better profile of each Google user, letting them tailor ads to each person specifically. If Google provided your internet, they would know where every packet from every customer was heading. Though traditional ISPs probably ignore most of this information, I highly doubt Google would. Being the source of their users’ internet connectivity would allow Google both to have a more complete picture of a person’s internet habits and to better tie an internet identity to a person (especially over long periods of time).
Though the last paragraph raised the possibility of ulterior motives, Google Fiber is overall still a great thing. It provides an amazing service to the people who are lucky enough to have access to it. In addition, it is putting a lot of pressure on traditional internet service providers to increase the speed and decrease the cost of their internet services. This is very valuable to all internet users, as internet connection speeds have hardly increased in the last decade, even as the technology to provide fast internet has made great progress. Google Fiber, if successful, will likely improve the speeds of most internet users a few years down the road. I just want to know: when can I sign up?
Information cascades are not only for humans; other members of the animal kingdom are influenced by them too. Cockroaches, for example, are influenced by a cascade when they decide on a shelter. Scientists have found that a typical cockroach chooses a shelter based on its darkness and on whether there are other cockroaches in it; they prefer really dark places over lighter ones. Given two shelters with a similar lack of lighting, scientists have found that a group of cockroaches chooses only one of them: in other words, they all gather in one shelter. It seems that one pioneer cockroach chooses a shelter based on its darkness, and when other cockroaches see that the pioneer has already settled in that shelter, they go there as well. In a sense, there is an information cascade happening in the group of cockroaches: one of them chooses a shelter among the choices presented, and the others just follow the first one. To reinforce this idea, scientists decided to mess with the cockroaches’ preference for darkness by introducing robotic cockroaches into the groups. These robotic cockroaches looked like mini matchboxes, but they were accepted by the cockroach group because they smelled like cockroaches. The scientists presented the group with two shelter choices: a dark one and a well-lit one. Before any of the real cockroaches could act as the pioneer roach, the robo-roaches went ahead and chose the well-lit shelter. Interestingly, despite their normal preference for darker shelters, the cockroaches chose to stay at the well-lit shelter where the robo-roaches were.
In class we discussed how information cascades make individuals abandon their normal preference for a specific item or choice when they see other individuals picking the other option. The cockroaches in the experiment abandoned their normal preference for dark shelters when they saw other cockroaches gathering in the lighter one.
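The same lock-in dynamic, whether for witnesses or cockroaches, can be roughly simulated (the parameters below are made up, and a simple counting rule stands in for the full Bayesian update from class): each individual receives a private signal that is right 70% of the time, but follows the crowd once earlier reports clearly lean one way.

```python
# Sequential information cascade: follow the majority of earlier reports
# once its lead reaches 2; otherwise report your own private signal.
import random

def run_witnesses(truth, n, accuracy, rng):
    reports = []
    for _ in range(n):
        signal = truth if rng.random() < accuracy else 1 - truth
        count_1 = sum(reports)
        count_0 = len(reports) - count_1
        if count_1 - count_0 >= 2:
            reports.append(1)       # cascade on choice 1
        elif count_0 - count_1 >= 2:
            reports.append(0)       # cascade on choice 0
        else:
            reports.append(signal)  # no cascade yet: use own signal
    return reports

rng = random.Random(1)
print(run_witnesses(truth=1, n=10, accuracy=0.7, rng=rng))
```

Once one choice takes a lead of two, every later individual ignores their own signal and copies the crowd, so a couple of early wrong signals can lock the whole sequence onto the wrong answer, just as two misguided pioneer roaches can fill the well-lit shelter.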
Source: http://www.npr.org/templates/story/story.php?storyId=16328789