An NLP Defense Against Sybil Attacks

Reputation and recommendation systems in online platforms are important because they help decrease information asymmetry by providing information that users could not easily get otherwise (e.g., product reviews, hotel reviews). This makes evaluating and filtering easier for users, and it also locks users into systems: someone who has built up a good reputation on one system usually cannot easily transfer it to another.

Two ways to attack reputations are whitewashing attacks and Sybil attacks. A whitewashing attack occurs when a user with a bad reputation exits the system and begins again under another identity. Ways to defend against this are (1) to tie accounts to harder-to-duplicate identifiers, such as credit card or social security numbers, and (2) to collect a fee for each new account. However, the obvious drawback is that these measures might deter legitimate users from joining the system.

Sybil attacks occur when someone creates a large number of fake accounts and uses them maliciously, such as giving her own restaurant high ratings and her competitors' restaurants low ratings. Three ways we discussed in class to defend against Sybil attacks are (1) to make account creation costly, (2) to restrict privileges to users who have already built up a reputation, and (3) to weight feedback from more established users more heavily; a sketch of the third defense follows below.
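
To make the third defense concrete, here is a minimal sketch (my own illustration, not a scheme from class) of a reputation-weighted average rating. The reputation scores are hypothetical and might come from account age or review history:

```python
# Illustrative sketch: weight each rating by the reviewer's reputation so
# that feedback from established users counts more than fresh accounts.

def weighted_rating(reviews):
    """reviews: list of (rating, reviewer_reputation) pairs, with
    reputation in [0, 1] (an assumed convention for this sketch)."""
    total_weight = sum(rep for _, rep in reviews)
    if total_weight == 0:
        return None  # no feedback from reputable users yet
    return sum(rating * rep for rating, rep in reviews) / total_weight

# Two established reviewers versus three fresh Sybil accounts: the
# Sybils' 1-star ratings barely move the weighted average.
reviews = [(5, 0.9), (4, 0.8), (1, 0.05), (1, 0.05), (1, 0.05)]
print(round(weighted_rating(reviews), 2))  # 4.24, not the raw mean of 2.4
```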

Here, I will discuss a fourth approach to defending against Sybil attacks, one which is the subject of current research. Using natural language processing, it is possible to create systems that analyze text and detect fake reviews fairly accurately. At a high level, this is done by building a system that tracks certain characteristics of text (e.g., the number of nouns, the frequency of certain words), training it on a set of reviews labeled as real or fake, and then giving it never-before-seen reviews and letting it decide whether each one is fake or real. At Cornell, professors Claire Cardie and Jeff Hancock and PhD candidate Myle Ott are working on such a system for hotel reviews called ReviewSkeptic, where you can enter a hotel review and the site will tell you whether it is likely to be fake or real. (The site is down for the Spring 2013 semester.) Additionally, others in the Cornell Computer Science and Communication departments are working on linguistic analysis for deception detection.
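
As an illustration of that train-then-classify pipeline, here is a minimal sketch using scikit-learn's bag-of-words features and a Naive Bayes classifier. This is a generic example with made-up reviews, not ReviewSkeptic itself, whose actual features and model may well differ:

```python
# Minimal sketch of the pipeline described above: extract word-count
# features, train on labeled reviews, then classify an unseen review.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Hypothetical labeled training data: 1 = fake review, 0 = real review.
train_texts = [
    "My stay was absolutely amazing, the best hotel experience ever!!",
    "The room was clean, though the elevator was slow at peak hours.",
]
train_labels = [1, 0]

# Step 1: turn each review into counts of textual features (here, words).
vectorizer = CountVectorizer()
X_train = vectorizer.fit_transform(train_texts)

# Step 2: train a classifier on the labeled examples.
classifier = MultinomialNB()
classifier.fit(X_train, train_labels)

# Step 3: ask the trained model about a never-before-seen review.
X_new = vectorizer.transform(["The most luxurious experience of my life!!"])
print(classifier.predict(X_new))  # label meaning: 1 = fake, 0 = real
```

In a real system the training set would contain thousands of reviews and the features would go beyond raw word counts, but the three steps are the same.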

Last semester I was in Jeff Hancock's seminar, Deception in the Networked Age. One of the main points he emphasized throughout the course was that researchers have long worked to find cues that globally indicate a lie (i.e., something almost all people do every time they lie). However, more recent research suggests that indicators of deception are highly context dependent. While this applies to both verbal and non-verbal cues, the focus here is on verbal cues, since online reviews are textual.

One example of something that was previously thought to be a global indicator of deception is avoiding eye contact. Many people believe that you can tell if someone is lying by whether or not they are able to look you in the eye while speaking. Research indicates this is not the case; in fact, liars will purposefully make eye contact in order to deceive. There is little evidence that global indicators even exist.

On the other hand, there is strong evidence for context-specific indicators of deception. For example, textual analysis of political speeches indicates that false statements involve significantly fewer first person singular pronouns, exclusive terms, and action verbs, and significantly more negative emotion terms (Hancock, Bazarova, & Markowitz, 2008). However, textual analysis of hotel reviews shows that fake reviews exaggerate sentiment and contain more first person singular pronouns, particularly in positive reviews (Ott, Cardie, & Hancock, 2013). It is interesting that more first person singular pronouns can indicate truthful statements in one context, yet false statements in another.
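
To see why the cue itself is simple while its interpretation is not, here is a small sketch (my own, not the papers' feature extraction code) that measures the rate of first person singular pronouns in a text; the pronoun list is an assumption of this sketch:

```python
# Extract one context-dependent cue discussed above: the fraction of a
# text's words that are first person singular pronouns.
import re

FIRST_PERSON_SINGULAR = {"i", "me", "my", "mine", "myself"}

def fps_rate(text):
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    hits = sum(1 for word in words if word in FIRST_PERSON_SINGULAR)
    return hits / len(words)

# The feature is identical across domains; only the weight a trained
# model assigns to it differs (toward "truthful" for political speech,
# toward "fake" for positive hotel reviews, per the studies above).
print(fps_rate("I loved my room and I will be back!"))  # 3/9 ≈ 0.333
```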

If this is indeed true and there are no global indicators of deception, and more specifically none that can be detected just by analyzing the text of online reviews, then a different detector would be needed for each type of reputation system (e.g., one for hotel reviews, another for restaurant reviews, and so on). Additionally, if information about which attributes flag a fake review remains publicly available, attackers can analyze the aspects of a review that flag it as fake and adapt their writing style to fool the algorithm. However, even if fake reviewers deliberately changed their writing style, the algorithm would simply need to be retrained on a new set of reviews, as sketched below.
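
Continuing the earlier sketch, retraining amounts to refitting the same pipeline on a refreshed labeled set. The key assumption, which I am making explicit here, is that the platform can keep obtaining fresh labels, for example from accounts later confirmed to be Sybils:

```python
# Illustrative retraining step: when attackers adapt, refit the same
# pipeline on training data that includes the newly labeled fakes.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

def train_detector(texts, labels):
    vectorizer = CountVectorizer()
    model = MultinomialNB().fit(vectorizer.fit_transform(texts), labels)
    return vectorizer, model

# Initial model on the original labeled corpus (1 = fake, 0 = real).
texts = ["best hotel ever, absolutely perfect!!", "the elevator was slow."]
labels = [1, 0]
vectorizer, model = train_detector(texts, labels)

# Later: a confirmed Sybil writes in a deliberately restrained style.
texts.append("a pleasant, unremarkable stay; the staff were adequate.")
labels.append(1)
vectorizer, model = train_detector(texts, labels)  # retrain from scratch
```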

A natural language processing approach to detecting lies online is promising, though at this point it is far from perfect. However, with more research this could be yet another good defense against Sybil attacks.


REFERENCES

Hancock, J., Bazarova, N., & Markowitz, B. (2008). Language, Lies and Politics: A Linguistic Analysis of the Justifications for the Iraq War. Manuscript, Cornell University.

Ott, M., Cardie, C., & Hancock, J. (2013). Negative Deceptive Opinion Spam. Retrieved April 29, 2013, from http://www.cs.cornell.edu/~myleott/neg_opspam_NAACL2013.pdf
