


Game theory shows why Lyft had to beat Uber to an IPO filing

https://www.recode.net/2018/12/6/18128937/lyft-ipo-uber-strategy

Lyft, Uber's closest competitor, recently announced its decision to go public and arrive on the stock market. Both Lyft and Uber have been private companies, but Uber has long been the shining star above all its rivals, with a valuation about five times greater than Lyft's. Lyft's decision to go public gives the company a chance to finally "chip away at Uber's last valuation" and eventually level the playing field.

Below I have created the payoff matrix for Lyft's decision to go on the stock market. Before the announcement, Uber had been the leading company, which I've represented with the payoffs (0,1) for Lyft and Uber respectively when both stay private. If Lyft benefits from arriving on a stock exchange by filing an IPO (initial public offering, or stock market launch), it levels the ground between itself and Uber – hence the (1,1) payoffs. On the flip side, Uber would also increase its valuation if it went public, as shown. The Nash equilibrium for the matrix is (1,2), where both Lyft and Uber choose to go public. Although in this case Uber's valuation would still be greater than Lyft's, filing an IPO is still Lyft's best response to Uber. Lyft, therefore, could only benefit from beating Uber to an IPO filing.

                      Uber
                  IPO      Private
Lyft   IPO        1,2      1,1
       Private    0,2      0,1
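The claimed equilibrium can be verified by brute force: a cell is a pure-strategy Nash equilibrium if neither player gains by deviating alone. A minimal Python sketch (the dictionary layout is my own, but the payoffs are exactly those in the matrix):

```python
# Payoff matrix from the post: rows = Lyft's choice, columns = Uber's choice;
# entries are (Lyft payoff, Uber payoff).
payoffs = {
    ("IPO", "IPO"): (1, 2),
    ("IPO", "Private"): (1, 1),
    ("Private", "IPO"): (0, 2),
    ("Private", "Private"): (0, 1),
}

def pure_nash(payoffs, rows=("IPO", "Private"), cols=("IPO", "Private")):
    """Return all cells where neither player can gain by deviating alone."""
    equilibria = []
    for r in rows:
        for c in cols:
            lyft, uber = payoffs[(r, c)]
            # Lyft deviates: change the row while Uber's column stays fixed.
            best_row = all(payoffs[(r2, c)][0] <= lyft for r2 in rows)
            # Uber deviates: change the column while Lyft's row stays fixed.
            best_col = all(payoffs[(r, c2)][1] <= uber for c2 in cols)
            if best_row and best_col:
                equilibria.append((r, c))
    return equilibria

print(pure_nash(payoffs))  # [('IPO', 'IPO')] -- both go public
```

The only equilibrium is the cell where both firms file an IPO, matching the argument above.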

How Y’all, Youse and You Guys Talk

https://www.nytimes.com/interactive/2014/upshot/dialect-quiz-map.html

This link brings us to a quiz developed by New York Times graphics editor Josh Katz. The data for the quiz and its results come from more than 350,000 survey responses collected between August and October 2013. The results are shown in heat maps that visualize American regional dialects. The questions are based on the Harvard Dialect Survey, a linguistics project begun in 2002 by Bert Vaux and Scott Golder. The colors on the large heat map correspond to the probability that a randomly selected person in that location would respond to a randomly selected survey question the same way that you did. The three smaller maps show which answers contributed most to the cities judged most similar to you. As an example, I took the quiz and posted my results below.

After you answer each of the 25 questions, a heat map is shown depicting which regions answered most and least like you did.

Using Bayes' theorem, P(from region X | answer) = [P(answer | from region X) × P(from region X)] / P(answer), Josh Katz was able to calculate where you (the quiz taker) are most likely from. And the results are pretty accurate! I am from Scarsdale, NY – just a ten-minute drive from Yonkers, the city the quiz predicted.
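To make the update concrete, here is a sketch of how a posterior over candidate regions could be computed from one answer. The priors and likelihoods below are invented for illustration; they are not Katz's actual data:

```python
# Hypothetical sketch of the quiz's Bayes update. All numbers are made up.
priors = {"Yonkers": 0.02, "Boston": 0.03, "Atlanta": 0.05}      # P(region)
likelihood = {"Yonkers": 0.60, "Boston": 0.20, "Atlanta": 0.05}  # P(answer | region)

# P(answer) via the law of total probability over the candidate regions.
p_answer = sum(priors[r] * likelihood[r] for r in priors)

# Bayes' theorem: P(region | answer) = P(answer | region) * P(region) / P(answer)
posterior = {r: priors[r] * likelihood[r] / p_answer for r in priors}
best = max(posterior, key=posterior.get)
print(best, round(posterior[best], 3))  # Yonkers 0.585
```

A region with a small prior can still win once the answer is far more likely there than elsewhere, which is how 25 answers pin down a location.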

In addition to the relevance this quiz has to what we've learned in Networks, we can see some evidence of network effects and information cascades (specifically the aspect of copying others, especially those we align with). Below is a map of the 2012 presidential election results. I chose this year since the quiz data is from 2013. We can observe some similarities between my regional dialect heat map and the election map. I am not very political, but I would place myself closer to the liberal, Democratic end of the spectrum. The areas on my heat map that are red (or in the warmer colors) match the blue states from the election fairly well. From this, I infer that those who share political views tend to have positive relationships and therefore more overall interaction (not just politically). It would then make sense that people subconsciously speak more like those they surround themselves with – a form of copying, as in information cascades. Personally, most of the significant people in my life, friends and family, live in the NY or CA area and share similar political views – so it makes sense from my experience that we also tend to speak alike.

Using Bayes’ Theorem to Classify Spam

The articles analyze and discuss how spam is classified, specifically the filtering method called Bayesian filtering. When a spam filter needs to classify a piece of mail, it knows that the mail is either spam or not spam. Each word in an incoming message has some probability of occurring in spam; the filter knows the probability that a word appears in spam mail, and using these pieces of data it can calculate the probability that the message is spam. The filter then continually updates itself with each new piece of mail it receives.

This relates to Bayes' theorem from class, which lets us find the probability of one event given that another event has occurred. That makes it useful for classifying spam: when a specific word or piece of data occurs in a message, you can calculate the probability that the message is spam, and then update your predictions as new evidence arrives. Bayes' rule states that P(A|B) = [P(B|A) × P(A)] / P(B). If A is the event that a random piece of mail is spam, and B is the event that a given word occurs in it, then P(A|B) is the spam filter's goal: the probability that the mail is spam given that the word appears. By running this calculation for each word, and for other pieces of data such as headers and HTML code, the spam filter can classify mail accurately while updating its beliefs each time to become more accurate still.

Bayes' rule does produce false positives and false negatives, but the probability of either is quite small because the filter is quite accurate, and even when they occur, the predictions for the next piece of mail will reflect these possibilities. These two articles emphasize that Bayes' theorem has many lesser-known applications and is often used to produce accurate guesses. Applied to spam filtering: given a piece of mail M, Bayes' rule can determine and explain whether M's intent is fraudulent, which further justifies why certain pieces of mail end up in spam and others do not.
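A minimal naive-Bayes sketch of this idea, assuming independence of words given the class (the word probabilities and priors below are invented training numbers, not taken from either article):

```python
import math

# Toy naive Bayes spam sketch; all probabilities are invented for illustration.
spam_word_prob = {"winner": 0.30, "meeting": 0.02, "free": 0.25}  # P(word | spam)
ham_word_prob  = {"winner": 0.01, "meeting": 0.20, "free": 0.05}  # P(word | ham)
p_spam, p_ham = 0.4, 0.6  # prior mix of mail

def spam_posterior(words):
    """P(spam | words), assuming words are independent given the class."""
    # Work in log space so long messages don't underflow.
    log_spam = math.log(p_spam) + sum(math.log(spam_word_prob[w]) for w in words)
    log_ham  = math.log(p_ham)  + sum(math.log(ham_word_prob[w])  for w in words)
    # Normalize: P(spam | words) = e^ls / (e^ls + e^lh)
    return 1 / (1 + math.exp(log_ham - log_spam))

print(round(spam_posterior(["winner", "free"]), 3))   # high: spam-like words
print(round(spam_posterior(["meeting"]), 3))          # low: ham-like word
```

A real filter would re-estimate the word probabilities from each newly classified message, which is the "updating itself" described above.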

https://www.bayestheorem.net/real-life-uses-spam-filtering/

https://www.lifewire.com/bayesian-spam-filtering-1164096

Information Cascades and Social Media

Today's technology has made it far simpler to learn about current events and discover information that might not have been easily accessible years ago. Although this calls for celebration, issues arise when people believe everything they find on the internet. There is an array of news, blog, and social media sites that provide users with information, but no way of fact-checking or verifying the source. Interestingly enough, a study done earlier this year looked into the rates at which false and true news spread over Twitter. The finding was that "false news reached more people than the truth." The researchers described how false stories inspired fear or surprise, while true stories inspired anticipation and trust.

http://science.sciencemag.org/content/359/6380/1146

This study is a good example of how information cascades are constantly being formed. One can suspect that the spreading of false news is largely an act of users sharing information based on others' responses rather than their own opinions. If news is trending, people often do not hesitate to share it, on the assumption that if the masses are sharing it, it must be true. This misconception has become a prominent issue in this day and age, and we will continue to see it in political, financial, and social contexts.

Finding the most influential movies using PageRank and other network analysis algorithms

Researchers at the University of Turin used network analysis algorithms to determine the most influential movies. Instead of looking at box office numbers (which aren't very good at predicting how influential a movie will be in the future), the researchers looked at references within movies as a measure of success, and they used those findings to also determine the most influential actors, actresses, and directors.

The researchers treated movies as nodes and references to other movies as the connections, also taking into account the influence of the movies a movie is connected to. They used four centrality scores – in-degree, closeness, harmonic centrality, and PageRank – to assign influence scores to each movie, and they applied the same analysis to the directors of those movies as well as the actors and actresses within them. The top 10 most influential movies are as follows:

1. The Wizard of Oz (1939)

2. Star Wars (1977)

3. Psycho (1960)

4. King Kong (1933)

5. 2001: A Space Odyssey (1968)

6. Metropolis (1927)

7. Citizen Kane (1941)

8. The Birth of a Nation (1915)

9. Frankenstein (1931)

10. Snow White and the Seven Dwarfs (1937)

A couple of insights from their research: Japanese movies filmed during the 1950s have been very influential for Western cinema, and there is a gender gap – males dominated the most-influential lists, and actresses rarely appeared unless the dataset was separated by gender. The exception is Sweden, where actresses overwhelm actors in the global rankings.
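The core idea – an edge from movie A to movie B when A references B, with PageRank flowing along those references – can be sketched on a tiny invented graph (this is a toy example, not the paper's dataset or exact formulation):

```python
# Toy reference graph: an edge A -> B means "A references B". Invented data.
refs = {
    "StarWars":     ["WizardOfOz", "Metropolis"],
    "SpaceOdyssey": ["Metropolis"],
    "Psycho":       ["WizardOfOz", "Metropolis"],
    "WizardOfOz":   [],
    "Metropolis":   [],
}

def pagerank(graph, damping=0.85, iterations=50):
    """Plain power-iteration PageRank on an adjacency-list graph."""
    nodes = list(graph)
    n = len(nodes)
    rank = {v: 1 / n for v in nodes}
    for _ in range(iterations):
        new = {}
        for v in nodes:
            # Rank flowing in from every movie that references v,
            # split evenly across each referencing movie's out-links.
            incoming = sum(rank[u] / len(graph[u]) for u in nodes if v in graph[u])
            new[v] = (1 - damping) / n + damping * incoming
        rank = new
    return rank

ranks = pagerank(refs)
print(max(ranks, key=ranks.get))  # the most-referenced classic scores highest
```

Heavily referenced older films accumulate rank from every film that cites them, which is why early classics dominate the paper's top-10 list.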

 

https://appliednetsci.springeropen.com/articles/10.1007/s41109-018-0105-0

Baseball 3-0 Counts

https://www.beyondtheboxscore.com/2015/5/6/8547151/the-game-theory-of-3-0-pitches

In baseball, the count is the number of balls and strikes against the batter, and it can affect many different aspects of the game. When the count is 0-2, for example, the batter must be very cautious and less selective about what to swing at, because one more strike means an out. On the other hand, when the count is 3-0, the power is in the hands of the batter, who can be as selective as he wants, at least for a pitch or two, because he is not at risk of striking out. Because of the many strategies that go into what pitch to throw in a given situation, baseball involves many concepts of game theory.

A 3-0 count raises many game-theoretic questions, and there are many opinions about what both the pitcher and the batter should do. Many believe that you should never swing at a 3-0 pitch: challenge the pitcher to throw a strike. Others believe you should be looking to swing, since the pitcher is likely to throw a very attractive, hittable pitch that you may not see again. On the pitcher's side, some believe he should focus on throwing a very hittable pitch, since the batter is likely not going to swing, whereas others believe he should throw something less hittable, such as a curveball or change-up, because the hitter will be looking to swing.

According to the article, out of 1,012 3-0 counts in the 2014 season, players swung at 3-0 pitches an average of 8.99% of the time, with a median of 5.76%. While some conclusions can be drawn from this data, game theory cannot explain 100% of occurrences in any situation. Many factors go into deciding what pitch to throw: how likely the batter is to swing, how skilled the hitter is, how many outs there are in the inning, and how many runners are on base. Many different scenarios would change the pitch thrown and whether it is swung at. Game theory will continue to be a large part of baseball.
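The pitcher-batter standoff can be framed as a small zero-sum game in which each side randomizes to keep the other indifferent. The payoffs below are entirely invented for illustration (they are not the article's data), so the resulting mixing probabilities are only a sketch of the method:

```python
# Hypothetical 3-0 standoff. Rows: pitcher throws a hittable strike vs. a
# tougher off-speed pitch. Columns: batter swings vs. takes.
# Entries = batter's payoff (probability of a good outcome); made-up numbers.
A = {("strike", "swing"): 0.7, ("strike", "take"): 0.3,
     ("offspeed", "swing"): 0.2, ("offspeed", "take"): 0.8}

# In a 2x2 zero-sum game, each side mixes to make the other indifferent.
# Pitcher throws "strike" with probability p so the batter is indifferent:
#   p*A[s,sw] + (1-p)*A[o,sw] = p*A[s,tk] + (1-p)*A[o,tk]
p = (A[("offspeed", "take")] - A[("offspeed", "swing")]) / (
    A[("strike", "swing")] - A[("offspeed", "swing")]
    - A[("strike", "take")] + A[("offspeed", "take")])
# Batter swings with probability q so the pitcher is indifferent:
q = (A[("offspeed", "take")] - A[("strike", "take")]) / (
    A[("strike", "swing")] - A[("strike", "take")]
    - A[("offspeed", "swing")] + A[("offspeed", "take")])
print(p, q)  # 0.6 0.5 with these made-up numbers
```

The article's observed swing rate (around 9%) is far below the 50% this toy equilibrium suggests, which fits its point that real decisions depend on factors outside the simple matrix.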

Google suggests that combining pages could make your site rank better

In this article, John Mueller, Senior Webmaster Trends Analyst at Google, suggests that you should combine weaker, smaller web pages into a single page in order to increase your site's rank in Google's algorithm. His explanation is that if you have one page with more information, as opposed to that information being spread out among different pages on the site, then most of the links on your site will point to that single information-rich page. Since that page has more links pointing to it (even though the links come from within your own website), it has more authority than the smaller pages would, and is therefore ranked higher.

This article relates very well to the work we did in class on hubs and authorities, PageRank, and the overall ranking of web pages by a search engine. We learned that under the authority update rule, the authority of a page equals the sum of the hub scores of all the pages that point to it. It is therefore intuitive that a page with more pages pointing to it will have a higher authority score and will score better in Google's algorithm.
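A sketch of the standard hubs-and-authorities (HITS) iteration on an invented link graph, where several internal pages point at one consolidated page (the page names and links are hypothetical, chosen to mirror Mueller's scenario):

```python
# Invented internal link graph: after consolidation, several pages
# all point at one "combined" page.
links = {
    "home":     ["combined", "about"],
    "blog":     ["combined"],
    "contact":  ["combined"],
    "combined": [],
    "about":    [],
}

def hits(graph, iterations=20):
    """Standard HITS: alternate authority and hub updates, normalizing each."""
    hub = {v: 1.0 for v in graph}
    auth = {v: 1.0 for v in graph}
    for _ in range(iterations):
        # Authority update: sum of hub scores of pages pointing at you.
        auth = {v: sum(hub[u] for u in graph if v in graph[u]) for v in graph}
        # Hub update: sum of authority scores of pages you point at.
        hub = {v: sum(auth[w] for w in graph[v]) for v in graph}
        # Normalize so the scores stay bounded.
        total = sum(auth.values()) or 1
        auth = {v: a / total for v, a in auth.items()}
        total = sum(hub.values()) or 1
        hub = {v: h / total for v, h in hub.items()}
    return auth

auth = hits(links)
print(max(auth, key=auth.get))  # the consolidated page gets the top authority
```

Splitting the same content across several small pages would split these in-links, and with them the authority score.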

How do trends start?

We are constantly changing how we view the world and picking up on new trends as the years go by. What was popular a few years ago might now be criticized, given how different the world is today. But how do these trends begin? There are many opinions and ideas on the internet, yet only a few reach the level of popularity where the whole world hears or reads about them. Is there a common trait among the trends that have existed, so that we can understand how and why they began?

According to http://www.henrikvejlgaard.com/?id=161, for a trend to begin, it must be accepted by a distinct group of people that includes celebrities, artists, young people, designers, wealthy people, and gay men. Once accepted by these groups, known as trendsetters, it must then be picked up by larger groups of people through social media, magazines, the internet, etc. Depending on the trend, most last a few years before people's tastes or styles change.

In class, we have learned about network diffusion and how even one node can make a difference in a graph. That node can represent the trendsetter who potentially gets a large percentage of the world's population to join in, or at least hear about, the new idea.
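A minimal threshold-diffusion sketch of a single trendsetter seeding a cascade on a toy friendship network (the graph and the adoption threshold are invented for illustration):

```python
# Toy friendship network; each person adopts the trend once the fraction
# of their adopting friends reaches a threshold q.
friends = {
    "trendsetter": ["a", "b"],
    "a": ["trendsetter", "b", "c"],
    "b": ["trendsetter", "a", "c"],
    "c": ["a", "b", "d"],
    "d": ["c"],
}

def diffuse(graph, seeds, q):
    """Iterate adoption until no one else crosses their threshold."""
    adopted = set(seeds)
    changed = True
    while changed:
        changed = False
        for person, circle in graph.items():
            if person in adopted:
                continue
            share = sum(friend in adopted for friend in circle) / len(circle)
            if share >= q:
                adopted.add(person)
                changed = True
    return adopted

print(sorted(diffuse(friends, {"trendsetter"}, 1 / 3)))  # everyone adopts
```

With a threshold of 1/3 the single seed cascades through the whole network, while raising it to 1/2 stops the trend at the trendsetter, mirroring how only some ideas reach the surface of popularity.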

A defective auditing market that makes ‘lemons’ of us all

https://www.ft.com/content/fddde450-2521-11e8-b27e-cc62a39d57a0

This article covers the market for audit services and how it exhibits the "lemons" problem. A report from the International Forum of Independent Audit Regulators found serious problems with 40% of the audits inspected that year. In most markets, if 40% of the sampled goods are defective, the market is not functioning well. The audit market, however, is different: customers (companies) must be audited – they are obligated to buy the product – and an intermediary, rather than the company itself, chooses the product for the customer.

The article notes that there is not much advantage in competing on audit quality, because a rigorous audit could expose a company's vulnerabilities. Big auditing firms such as KPMG and Deloitte have evaded liability by producing generic assessments of their clients' financial standing. This differs from the lemons model we learned in class: the client companies (buyers) of auditing services actually want the "lemon" if it benefits the company. This allows the "lemon" auditors to be very successful, and breaking up the large audit firms will not fix the problem. Buyers would have to demand higher standards for audits and be ready to pay for them, and auditors would then have to decide what their audits should really show about a company. In contrast to what we learned in class, sellers want to keep selling "lemons" to buyers, and these lemons are treated as the higher-quality goods (even though they are in actuality lower quality).

Interacting Agents and Stock Market Crashes

https://www.researchgate.net/publication/317873657_An_Interacting_Agents_Model_Approach_To_Stock_Market_Crashes

Traditional economics is built on individual actors each maximizing their own utility, with aggregate trends emerging from those individual choices. Actual aggregate behavior, however, depends heavily on the interactions between individuals, and so produces trends that are not simple reflections of single-actor utility maximization.

These behaviors can be analyzed to show the difference between "weak" and "strong" neighborhood interactions. In both settings, agents buying and selling a single asset interact with an average opinion of the market (reflecting public opinion). Opinion of the asset starts high at the beginning of the model, and the value of the asset is perturbed slightly downwards. Weak neighborhood interactions (each agent acting in its own interest) produce a smooth downward curve, as each agent sells when the value of the asset drops below its evenly distributed threshold. This smooth behavior is rare in the actual stock market, where neighborhood interactions are strong.

Strong neighborhood interactions create very sudden, non-smooth drops similar to how stocks actually crash. When fitted to real stock market data, the strong-interaction model shows a good fit.
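The weak/strong contrast can be illustrated with a crude simulation (this is my own toy version of the idea, not the paper's model): agents with evenly spread sell thresholds face a slowly falling price, and in the "strong" case each seller adds imitation pressure on everyone else.

```python
N = 100  # agents; agent i sells once the total selling pressure reaches N - i

def run(drops, neighbor_boost):
    """drops: points the price has fallen at each step (integer, no float edge cases).
    neighbor_boost: extra pressure each existing seller exerts on the others."""
    sold = [False] * N
    history = []
    for drop in drops:
        already = sum(sold)  # sellers at the start of this step
        for i in range(N):
            # Pressure to sell = actual price drop + imitation of sellers.
            if not sold[i] and drop + neighbor_boost * already >= N - i:
                sold[i] = True
        history.append(sum(sold))
    return history

drops = list(range(21))                 # price drifts down 1 point per step
weak = run(drops, neighbor_boost=0)     # agents ignore each other
strong = run(drops, neighbor_boost=2)   # each seller pushes others to sell
print(weak[-1], strong[-1])  # 20 vs 100: same drift, crash only with imitation
```

With no imitation, exactly one agent sells per step (the smooth curve); with imitation, the same small drift triggers a feedback loop that dumps the entire market within a few steps (the crash).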

 
