Google indexing Facebook comments
http://www.netmagazine.com/news/google-indexing-facebook-comments-111529
According to the article “Google indexing Facebook comments” by Tanya Combrinck, Google has significantly expanded the scope of material its PageRank system will use. The “Googlebot,” the crawler that continuously indexes the web and builds the database behind Google's search engine, can now parse comment boxes — for example, Facebook walls or the comments on the very news article referenced here. The content of those comments will now affect a page's rank, an important value for any website, since a good portion of web traffic is directed through Google.
Some website owners welcome this change, believing the content of their comments will improve their search ranking and thus direct more traffic to their sites. Others are more pessimistic, fearing it will dilute the weight of a site's original content in page ranking. With more content being scanned and indexed, each sentence or link in the original article or page is worth relatively less, since the comments are indexed with equal weight.
An issue that may arise from Googlebot's new reach is duplicate comments across weblogs, which could distort or degrade page ranks. Sites could end up being ranked by how many people comment on their content rather than by its originality or value. This could make it difficult for new websites with genuinely unique content to emerge, and could turn news sites and blogs into more of a popularity contest.
Many are also concerned that the change could muddy search results. A stray comment that exactly matches a search query could direct users to a page that, in reality, offers little useful content. Some search strategists have already seen blogs and forums outrank genuine content sites simply because of the large number of comments now being indexed.
In the end, the greatest takeaway from the article is that a page-ranking system is only as good as the content fed into it. If it is asked to index a large amount of data that is not representative of the websites it links to, the ranking has little value. The best rankings will likely come from quality content being indexed on a site and from other sites referencing it as a source. If repetitive comments from blogs that may well link to one another in a circular fashion are indexed, the quality of the ranking seems bound to suffer. Many observers worry that this is exactly what will happen, so it will be interesting to see how Google handles the issue.
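The circular-linking worry maps directly onto how PageRank distributes rank along links: pages in a tight link cycle keep passing rank to one another and can accumulate more of it than a page that merely receives a single inbound link. A minimal sketch of the classic textbook power-iteration formula (not Google's production system; the graph of hypothetical blogs below is purely illustrative) shows the effect:

```python
def pagerank(graph, damping=0.85, iters=50):
    """Simplified textbook PageRank via power iteration.

    graph maps each page to the list of pages it links to.
    """
    pages = list(graph)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {}
        for p in pages:
            # Rank flows into p from every page q that links to p,
            # split evenly among q's outgoing links.
            incoming = sum(rank[q] / len(graph[q])
                           for q in pages if p in graph[q])
            new[p] = (1 - damping) / n + damping * incoming
        rank = new
    return rank

# Three hypothetical blogs linking in a circle, plus one page of
# original content that receives only a single inbound link (and
# links back out so no rank is lost to a dangling node).
graph = {
    "blogA": ["blogB"],
    "blogB": ["blogC"],
    "blogC": ["blogA", "original"],
    "original": ["blogA"],
}
ranks = pagerank(graph)
# The circularly linked blogs each end up with more rank than the
# original-content page, despite contributing nothing unique.
```

Running this, each of the three blogs settles at roughly twice the rank of the original-content page, illustrating the article's concern that mutually linking comment-heavy pages could outrank genuinely original sources.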