


Hate Networks on Social Media Platforms

After I graduated from high school, I tried briefly to remain up to date on the various goings-on in the lives of the people I had spent so much time with during my formative years. Facebook was quite popular as a means of publishing details about one’s life and communicating with other people in my hometown, so it seemed like a natural way to stay connected with my friends and acquaintances as we went our separate ways in life. As time went on, it turned out that social networks like Facebook were having an unexpected effect on some of the people I grew up with. The closer I examined some of the posts that a select few of my peers liked and shared, the more apparent it became to me that their personal political views were being slowly but surely pushed in a more hateful direction than any of them had displayed while we were in school together. One only had to click a few links in the right order when investigating one of the posts in question to get into some truly racist and hateful groups and pages. This was alarming to me, but aside from reporting the groups when I stumbled upon them, there was not much that I as an individual could do. This, among other things, contributed to my eventual decision to delete my Facebook account and stop communicating with and thinking about the people whose “personal Overton windows” had been pushed so far in such a hateful direction.

A few months ago, some website algorithm recommended an article called “Strategies for combating online hate”. This piqued my interest, and I read it immediately. The topics it addressed seemed like a perfect fit for a blog post of this nature.


Hate Networks

The article that was recommended to me is an overview of a study published in the journal Nature, entitled “Hidden resilience and adaptive dynamics of the global online hate ecology”. The researchers created a model of the way various “hate clusters” on social media are connected, with “clusters” being defined as “online pages or groups that organized individuals who shared similar views, interests or declared purposes, into communities”. To turn this list of clusters into a network, the researchers considered two clusters to be connected if each contained a hyperlink to the other. This let them observe network behavior at a much larger scale than simulating a large number of individual users would have allowed.
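The linking rule described above — an edge exists only when the hyperlinks are mutual — can be sketched in a few lines. This is my own illustrative reconstruction, not the study’s actual code, and the cluster names are made up:

```python
def build_cluster_network(outlinks):
    """outlinks: dict mapping each cluster to the set of clusters it
    hyperlinks to. Returns the undirected edges where links are mutual."""
    edges = set()
    for a, targets in outlinks.items():
        for b in targets:
            # Connect a and b only if b also links back to a.
            if a in outlinks.get(b, set()):
                edges.add(frozenset((a, b)))
    return edges

# Toy example with hypothetical cluster names:
outlinks = {
    "cluster_A": {"cluster_B", "cluster_C"},
    "cluster_B": {"cluster_A"},
    "cluster_C": set(),  # A links to C, but C does not link back
}
print(build_cluster_network(outlinks))  # one edge: cluster_A <-> cluster_B
```

Note that the one-way link from cluster_A to cluster_C produces no edge, which matches the study’s requirement that both clusters link to each other.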

The researchers found that hate clusters are “highly resilient” to various ways of dealing with them. Because clusters are linked not only through hyperlinks but also through users who are members of multiple clusters, banning a given group or page had very little effect on the interconnectedness or growth of the surrounding hate clusters. Rather than dispersing the network, banning a page or group of users “aggravates online hate ecosystems and promotes the creation of clusters that are not detectable by platform policing (which the authors call ‘dark pools’), where hate content can thrive unchecked.”


New Strategies

The authors of the study present several potential new methods of dealing with hate clusters:

  1. The first strategy suggests that rather than targeting the largest and most visible clusters, a social network should focus on removing the users within smaller, less well-connected clusters. This lets the platform avoid the outrage that would accompany the removal of a large, well-connected, visible hate cluster while still preventing the rise of new clusters of that type (small clusters can become large ones if they grow popular enough). Reddit in particular seems to be experimenting with this strategy by banning smaller subreddits whose content it finds objectionable. One recently banned example is /r/ConsumeProduct, which was originally intended to criticize what its members saw as the excesses of capitalism and consumerism, but at some point shifted its focus to antisemitic dog-whistles and quotes from the manifesto of Ted Kaczynski, better known as the Unabomber.
  2. The second strategy suggests banning users selected at random from within clusters, in an attempt to weaken the overall connections between clusters. The effectiveness of this method depends on the topology of the cluster network: a tightly-knit community might notice users being banned, yet be less affected by the bans than a loosely-knit one. A loosely-knit community, on the other hand, might eventually be isolated by the random bans and then removed entirely once the platform deems it sufficiently cut off.

There are two other strategies that were recommended in the study, but they are less relevant to the content of this class.
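The mechanics behind the second strategy can be sketched with a toy simulation. This is my own simplification with made-up users and clusters (the study’s actual model is far more elaborate): clusters are bridged by users who belong to more than one of them, so banning users at random can thin those bridges.

```python
import random

def bridging_edges(memberships):
    """memberships: dict mapping each user to the set of clusters they
    belong to. Returns the cluster pairs bridged by at least one user."""
    edges = set()
    for clusters in memberships.values():
        cl = sorted(clusters)
        for i in range(len(cl)):
            for j in range(i + 1, len(cl)):
                edges.add((cl[i], cl[j]))
    return edges

def ban_random_users(memberships, n, seed=0):
    """Remove n randomly chosen users; return the surviving memberships."""
    rng = random.Random(seed)
    banned = set(rng.sample(sorted(memberships), n))
    return {u: c for u, c in memberships.items() if u not in banned}

# Toy example with hypothetical users and clusters:
memberships = {
    "u1": {"A", "B"},  # u1 bridges clusters A and B
    "u2": {"B", "C"},  # u2 bridges clusters B and C
    "u3": {"A"},
    "u4": {"C"},
}
print(len(bridging_edges(memberships)))  # 2 bridges before any bans
survivors = ban_random_users(memberships, 2)
print(len(bridging_edges(survivors)))    # at most 2 bridges remain
```

The surviving bridge count depends on which users happen to be banned, which is exactly the topology-dependence the strategy’s effectiveness hinges on.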


Conclusion

If the data supports the efficacy of the strategies proposed by the study, then they might warrant further testing on actual hate networks rather than on mathematical models. I worry that banning users without offering some alternative to participating in a hate network might cause them to become socially isolated and seek out other extremist groups on other platforms, in which case we’re back to square one. I doubt that social networks will be forthcoming about the methods they are using, so it also seems unlikely that I’ll be able to associate any of these proposed strategies with any improvement in the beliefs of my former classmates. I hope something works eventually.


Works Cited:

News article: https://www.nature.com/articles/d41586-019-02447-1

Original study: https://www.nature.com/articles/s41586-019-1494-7
