A model of a trust-based recommendation system on a social network

Introduction:

For this blog assignment, I summarized an interesting academic paper I found using Google Scholar.  I start by providing basic information about the paper, then state why I found it interesting, and proceed to note the basic idea discussed, the assumptions considered, and finally provide an overview.

Title:

A model of a trust-based recommendation system on a social network

Authors:

Frank E. Walter

Stefano Battiston

Frank Schweitzer

Article:

http://www.springerlink.com/content/yp94v7553p322072/

Interesting:

I was particularly fascinated with this paper because the authors show that their implementation of a trust-based recommendation system on a social network SELF-ORGANIZES to achieve close to optimal performance levels!  This means that as the system matures over time, good performance at the global level of the network as a whole is an INHERENT property of the system, which emerges without explicit coordination of interactions between agents at the local scale.  This occurs because trust converges toward either high or low values, so the network separates into two clusters of like-minded agents with low-trust interconnections between them.  This reminded me of chapter 5.1 of the text and our discussions in class about balanced networks.  Under figure 5.3 on page 123 of our online textbook, it states: “If a complete graph can be divided into sets of mutual friends, with complete mutual antagonism between the two sets, then it is balanced.  Furthermore, this is the only way for a complete graph to be balanced.”  This appears to be the case here.

Basic Idea:

The idea of the model is that agents (users) use their social network to reach information and their trust relationships to filter it.  It made me think of an organic, social search engine, and I found this intriguing.  In a sense, it acts quite similarly to a search engine such as Google, but with a very different implementation.  In both scenarios, users have an information need, and out of a large amount of information, only a small fraction should be returned.

Assumptions:

  1. Agents are self-interested in the sense of bounded rationality, but do not act randomly, selfishly, or maliciously.
  2. The social network of agents is fixed and does not change over time: no agents join or leave the network, and no links are rewired, added, or dropped.

These simplifying assumptions make the modeling more manageable, but the authors note that in reality both would need to be relaxed.  Regardless, the insights from this model remain valuable.

Overview:

This model consists of three key elements: Agents, Objects, and Profiles.  Agents are connected in a social network.  Objects can represent different entities, including items, products, buyers, sellers, and even other agents (essentially, anything that could be recommended).  Each agent is associated with one preference profile, which provides a mapping from objects to ratings.  Objects are further classified into one or more constant, pre-specified categories, which means that agents (users) are experts in a set of initially defined categories.
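To make these three elements concrete, here is a minimal sketch in Python of how they might fit together; the class names, fields, and example objects are my own illustrative assumptions, not code from the paper.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Obj:
    """An object that can be recommended; it belongs to one or more
    of the fixed, pre-specified categories."""
    name: str
    categories: frozenset

@dataclass
class Agent:
    """An agent (user) with a preference profile mapping objects to
    ratings, and direct links to neighbours in the social network."""
    name: str
    profile: dict = field(default_factory=dict)   # Obj -> rating
    neighbours: list = field(default_factory=list)

# Example: Alice rates one sci-fi book highly.
book = Obj("book-1", frozenset({"sci-fi"}))
alice = Agent("alice", profile={book: 1.0})
```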

Trust between agents (users) that are directly connected to one another is calculated directly.  For agents that are only indirectly connected (e.g., a friend of a friend rather than a friend), trust is propagated through the network with some amount of discounting.  There are various trust propagation methodologies, such as averaging, multiplicative, and probabilistic propagation, to name a few of the more commonly used ones.
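The multiplicative scheme, for instance, can be sketched in a few lines; the function name and trust values below are illustrative assumptions, but the idea of each extra hop discounting the derived trust comes straight from the propagation methods described above.

```python
from math import prod

def path_trust(trust_values):
    """Multiplicative trust propagation: trust along a chain of direct
    links is the product of the pairwise trust values (each in [0, 1]),
    so every additional hop discounts the result."""
    return prod(trust_values)

# If Alice trusts Bob 0.9 and Bob trusts Carol 0.8, Alice's derived
# trust in Carol is 0.9 * 0.8 = 0.72 (up to floating point).
alice_to_carol = path_trust([0.9, 0.8])
```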

The authors note that there are two types of recommendation searches:

  1. Ranking within a category (RWC): agents query for a particular category and search recommendations for several objects in this category in order to decide for one of the recommended objects in the response from the network—typically the best one.
  2. Specific rating for an object (SRO): agents query for a particular object and search recommendations on this very object in order to decide for or against using it, based on the response from the network.
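The two query types above can be sketched as follows; the data layout (a dict of trust-weighted recommendations) and the names and threshold are hypothetical choices of mine, not the paper's implementation.

```python
from collections import namedtuple

Rec = namedtuple("Rec", ["name", "category"])

# Hypothetical recommendations received from the network: object -> rating.
recs = {Rec("film-a", "sci-fi"): 0.9,
        Rec("film-b", "sci-fi"): 0.4,
        Rec("song-c", "jazz"): 0.7}

def ranking_within_category(recs, category):
    """RWC: among the recommended objects in one category,
    pick the best-rated one."""
    in_cat = {o: r for o, r in recs.items() if o.category == category}
    return max(in_cat, key=in_cat.get) if in_cat else None

def specific_rating_decision(recs, obj, threshold=0.5):
    """SRO: decide for or against a single object, based on whether
    its recommended rating clears a threshold."""
    rating = recs.get(obj)
    return rating is not None and rating >= threshold
```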

The authors conclude that three factors are particularly important for the performance of the system: the density of the network, the preference heterogeneity of agents, and the knowledge sparseness.  Their findings show that trust-based recommendation systems perform better than the commonly used frequency-based recommendation systems across these factors, which are explained below in more detail.

  • Network density: if the network is very sparse, agents receive useful recommendations on only a fraction of the items that they send queries about; the denser the network, the better the performance, but above a critical threshold for the density, the performance stabilizes. How close this stable value comes to the optimum depends on the other two quantities.
  • Preference heterogeneity: if the preferences of agents are homogeneous, there is no advantage for filtering the recommendations; however, if the preferences of agents are all different, agents cannot find other agents to act as suitable filters for them. In between, when preferences are heterogeneous, but ‘not too much’, the system performance can be near to the optimum.
  • Knowledge sparseness: when knowledge is dense, it is easy for an agent to receive recommendations from agents with similar preferences. In the extreme situation in which, for each category there is only one expert with any given preference profile, agents can receive useful recommendations on all categories only if there exists a high-trust path connecting any two agents with the same profile. This is, of course, related to the density of links in the network.

Extensions:

The authors also note several critical extensions to the framework they have established for applications in reality.

  1. Since social networks evolve over time, relax the static-network assumption and explore a dynamic system.
  2. Random agents are agents that, instead of giving correct recommendations, give a random recommendation.  Having such agents in the system mimics the effect of noise on communication channels.
  3. Selfish agents are agents that do not return recommendations except when they have already received responses through the agent that initiated the query.
  4. Malicious agents are agents that intentionally give recommendations that do not correspond to their own beliefs—i.e., they recommend what they would not use themselves, and vice-versa.
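The deviating agent types in the list above can be sketched as simple recommendation policies; the function signatures and behaviours are my own illustrative assumptions about how such agents might be modeled.

```python
import random

def honest_agent(belief, requester=None, rng=None):
    """Baseline: recommends its true belief about the object."""
    return belief

def random_agent(belief, requester=None, rng=random.Random(0)):
    """Random agent: ignores its belief and returns noise,
    mimicking a noisy communication channel."""
    return rng.uniform(-1.0, 1.0)

def selfish_agent(belief, requester, creditors):
    """Selfish agent: answers only requesters from whom it has
    already received responses; otherwise stays silent."""
    return belief if requester in creditors else None

def malicious_agent(belief, requester=None, rng=None):
    """Malicious agent: recommends the opposite of its true belief."""
    return -belief
```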
