


Epistemological Problems With the Future of Search

The following article, detailing the threats that the future of search poses to existing players, brings up a number of interesting points for me. While I agree with Bercovich that Google is essentially a high-tech librarian, the most fascinating part of this article is the new paradigm that he thinks will replace this model. At this point, I see Siri (and its equivalents) as features that initially looked promising but have so far turned out to be pretty much useless. While almost every piece of tech that I own possesses a conversational interface, I almost never use it; at best, Siri takes 10 seconds to do what my fingers could in 5, and at worst, I feel like an idiot for futilely yammering into a very expensive brick.

What this article seems to promise, however, is the potential for a conversational interface that actually adds value to a device. As someone who works for a Knowledge Graph company, the mechanics of this aren't the most interesting concern to me. Having seen what we can already do with a Knowledge Graph, I believe we will rapidly be able to answer complicated questions with hard data. What I'm nowhere near as sure about, and what this article touches on only slightly, is how these interfaces and their users will distinguish truth from opinion. For many queries, there is a fact-based answer. If I ask Siri who won the Penguins game last night, the answer is factual. The conversion of these kinds of fact-driven queries from the librarian model we currently operate on to the Semantic Search model proposed by Bercovich makes complete sense to me. Where I'm much more skeptical is with queries that aren't fully factual. One of the great things about the librarian model is that it is relatively impartial. When I Google "best presidential candidate", I'm given a number of nuanced opinions and results. It's not clear what would happen if I were to ask the new paradigm of search the same question.
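To make that distinction concrete, here is a minimal sketch of how a semantic engine might resolve a query against stored facts. Every entity, predicate, and value here is hypothetical, invented for illustration rather than drawn from any real Knowledge Graph product; the point is only that a factual query maps to a stored triple while an opinion query does not:

    # Toy illustration: a knowledge graph as subject-predicate-object triples.
    # All entities, predicates, and values here are made up for this example.
    facts = {
        ("Penguins vs. Flyers, 2016-10-28", "winner"): "Penguins",
        ("Penguins vs. Flyers, 2016-10-28", "final score"): "5-4",
    }

    def answer(entity, predicate):
        """Return the stored fact, or None when no single fact exists."""
        return facts.get((entity, predicate))

    print(answer("Penguins vs. Flyers, 2016-10-28", "winner"))  # -> "Penguins"
    print(answer("2016 election", "best candidate"))            # -> None: opinion, no single fact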

One possibility is that the new model will simply tell me that it doesn't know. While this may be the way Semantic Search handles queries like this in the beginning, I don't expect it to last very long. The nature of tech in this era is one of refusing to accept that there are problems that cannot be solved by better engineering and better data. All one has to do to understand this is look at Soylent or any number of other tech companies that exist to solve "problems" that aren't really problems at all. I see no reason why any company would accept that opinion queries simply can't be answered. This leads me down a particularly worrying train of thought about what will happen when we try to use data to answer questions of opinion. If no singular fact is available as an answer, what will a Semantic engine return? One answer could be derived by extrapolating from user data, the way Facebook currently does with its Newsfeed. To me, this option seems fraught with pitfalls.
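As a rough sketch of why this worries me, consider an engine that picks the single answer whose stance best matches the user's existing leanings. The stance scores and scoring rule below are invented for illustration and are not how Facebook's Newsfeed actually works; the sketch just shows that two users asking the identical question would receive opposite "facts":

    # Hypothetical sketch: choose the answer closest to the user's leanings.
    # Stance scores and the scoring rule are invented for illustration only.
    candidates = {
        "Candidate A is better": +1.0,
        "Candidate B is better": -1.0,
    }

    def personalized_answer(user_lean):
        # Pick the stance whose score best agrees with the user's lean.
        return max(candidates, key=lambda stance: candidates[stance] * user_lean)

    print(personalized_answer(+0.8))  # -> "Candidate A is better"
    print(personalized_answer(-0.8))  # -> "Candidate B is better": same query, opposite "fact"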

The whole point of Semantic Search is that the response it returns is given as fact. If we consider the Newsfeed case, where the engine returns "facts" based on a user profile, the issue is plain to see. The original intent of the internet was to provide as many people as possible with access to the world's information. If the internet pivots to instead providing an altered version of fact based on what the user already believes, that seems incredibly detrimental to our ability to co-exist with each other. Our politics are already a prime example of people living in different realities; all one has to do is look at the current election to see this. Members of both parties view the other's candidate as the incarnation of everything wrong with the world. Imagine how much worse this would be if the internet directly pushed people deeper into their own version of reality. How well will our society continue to function when the answer to the question "which presidential candidate is better" only serves to validate your existing opinion? How well will we be able to co-exist if our technology helps us to believe that everything we believe is factually correct, and thus that everyone who disagrees with us is wrong, when little or no factual evidence exists?
