What does Facebook’s news feed experiment mean for user privacy?


Controversy surrounding the recent reveal of Facebook’s news feed experiment has been rife this past week. The experiment, carried out in 2012, studied how users are affected by the positivity of the statuses they see in their news feed. Facebook’s technology already determines the content we see first, pushing highly rated posts from our closest friends to the top of our news feeds. This same technology was used to alter the number of “positive” posts that a sample of 689,000 users saw in their news feeds over the space of one week.

Facebook filtered users’ feeds so that, for some users, posts with negative wording were made prominent and positive updates were hidden; for others, it did the opposite. Facebook then assessed whether each user adopted the attitude seen in their news feed. The effects, though small, indicated an ‘emotional contagion’: users’ status updates began to match the mood shown by their friends. A happy news feed meant happier updates – and vice versa!
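
For readers curious about the mechanics, here is a minimal, purely illustrative Python sketch of how posts containing emotional words might be probabilistically hidden from a feed. The word lists, post format and function names are assumptions made for illustration only – the real study reportedly classified posts with the LIWC word-counting software and used Facebook’s own ranking system, neither of which is shown here.

```python
import random

# Hypothetical word lists used only for this sketch; the real study is reported
# to have classified posts with the LIWC word-counting software.
POSITIVE_WORDS = {"happy", "great", "love", "wonderful", "excited"}
NEGATIVE_WORDS = {"sad", "angry", "awful", "terrible", "worried"}

def contains_any(text, words):
    """Return True if the post text contains any of the given words."""
    return any(word in text.lower().split() for word in words)

def filter_feed(posts, suppress="positive", omit_rate=0.5, seed=None):
    """Rebuild a feed, probabilistically dropping posts in the targeted category.

    `suppress` chooses which emotional category to hide, and `omit_rate` is the
    chance that a matching post is dropped (reports on the study describe rates
    between 10% and 90%, varying by user).
    """
    rng = random.Random(seed)
    targeted = POSITIVE_WORDS if suppress == "positive" else NEGATIVE_WORDS
    filtered = []
    for post in posts:
        if contains_any(post, targeted) and rng.random() < omit_rate:
            continue  # hide this emotional post from the user's feed
        filtered.append(post)
    return filtered

feed = [
    "Feeling great about my new job!",
    "Stuck in traffic, what an awful morning.",
    "Dinner with friends tonight.",
]
print(filter_feed(feed, suppress="positive", omit_rate=0.9, seed=1))
```

Running the example drops the only upbeat post from the feed, which is – at a vastly smaller scale – the basic effect the study was testing.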

Mood Swings

This result actually contradicts previous studies, which found that Facebook users feel envy and bitterness towards friends who constantly post self-promotional statuses and present themselves as happy. Since Facebook would rather their users enjoy their time on the site (instead of deactivating their accounts in a fit of rage), it makes sense that they’d want to see how much negative statuses really do affect users. Their methods of conducting this research, however, have angered many Facebook users, who believe the study was carried out without their consent and amounted to direct manipulation of their emotions. Facebook is now under investigation by the Office of Data Protection and the Electronic Privacy Information Center, who are examining whether the study has broken any privacy laws.

Read the Small Print

This isn’t the first time that Facebook has come under fire over privacy (the company was previously found to be sharing users’ likes with advertisers), and in 2013 it reported an enormous loss of active users, with 48% citing privacy concerns as their reason for leaving. What is interesting about this particular experiment, though, is that Facebook’s user policy states that all users give their permission to be used for research purposes when they create an account. According to Facebook, their users had already agreed to this. But how many Facebook users were aware of this rule? Very few people really do read the Terms & Conditions, so when a clause like this is hidden deep inside a ‘wall of text’ it can be expected that the majority of people will never actually read it. Is this the fault of internet users who carelessly rush through sign-up processes, or do websites need to come up with better ways to ensure all their users understand their policies? Either way, it seems unfair for users to be expected to consent to research projects without knowing how their information will be used and without any way of opting out.

Growing Tensions

Facebook has since admitted its mistake and issued a full apology, but this controversy is a prime example of the ongoing tension between users and companies when it comes to internet privacy. So what does this actually mean for typical Facebook users, and should we be concerned? Here’s a breakdown of some of the issues:

  • The effect that the negative / positive posts had on users’ statuses was very small. “The largest effect size was a mere two hundredths of a standard deviation (d = .02).” So while it’s true that we can be influenced by what we read on social media, we’re still in control of what we think and what we post (see the short sketch after this list for a sense of just how small an effect of d = .02 is).
  • It’s also important to note that just because a person uses more negative / positive words in their status updates, it does not necessarily mean that their overall mood has changed. It could instead be a response to social expectations and norms – if a user sees that others are complaining about their day or making snarky remarks, they’ll feel more comfortable sharing their own complaints.
  • Although 689,000 sounds like a large sample, it represents only around 1 in 2500 of Facebook’s users, so the vast majority were unaffected by this study – and the filtering varied in strength, with between 10% and 90% of posts containing positive or negative words being hidden from a given feed. However, this is still a privacy concern for those individuals involved – and it also poses the problem of what Facebook will do with this data, now that the study has suggested that it’s possible to manipulate our thoughts through editing the News Feed.
  • Social media users have voiced concerns that this could be used for subliminal marketing, altering users’ moods to encourage them to buy products – or, even more disturbingly, that news feeds could be altered in the run-up to elections, with the possibility of manipulating our political views.
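
To put that “two hundredths of a standard deviation” figure in perspective, here is a small, self-contained Python sketch of Cohen’s d, the effect-size measure quoted above. The sample numbers are invented purely for illustration and are not taken from the study.

```python
import statistics

def cohens_d(group_a, group_b):
    """Cohen's d: the difference between two group means, expressed in units
    of their pooled standard deviation."""
    mean_a, mean_b = statistics.mean(group_a), statistics.mean(group_b)
    var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
    n_a, n_b = len(group_a), len(group_b)
    pooled_sd = (((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)) ** 0.5
    return (mean_a - mean_b) / pooled_sd

# Invented figures: percentage of positive words in status updates for a
# control group and a group whose feeds were filtered.
control  = [5.1, 5.3, 4.9, 5.2, 5.0, 5.4, 4.8, 5.2]
filtered = [5.0, 5.2, 4.9, 5.1, 5.0, 5.3, 4.8, 5.2]
print(round(cohens_d(control, filtered), 3))
```

Even these two barely distinguishable made-up samples give a d of roughly 0.27 – more than ten times the largest effect the study reported – which gives some sense of how small d = .02 really is.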


No matter what your thoughts on the news feed experiment, it is certainly a divisive topic. Its significance may be being blown out of proportion; however, Facebook could certainly be doing more to keep its users informed about what is happening with their data.

More information on the Facebook study can be found in this article.

Blog contributed by our intern Catherine Stanford