
Restructuring Social Media Platforms for the Greater Good

Originally posted on Towards Data Science.

What if we restructured social media spaces for better, healthier public debate by rebuilding their algorithms from scratch?

From claims of Russian disinformation campaigns that influence US elections to fake news and the filter bubble, many argue that social media platforms are no longer facilitating healthy public discourse.

As a consequence, hundreds of companies are now boycotting Facebook, withholding what would amount to millions of dollars in advertising. The goal of the boycott? To insist that the platform do a better job of monitoring hate speech.

These companies, along with many individuals, believe that social media platforms play a critical role in our national debates, and that with this great power comes a responsibility to shape those debates in a positive manner.

How Recommendation Engines Encourage Extremism

We’ve written about the fairness of algorithms and how organizations can best avoid biased data analysis in their early stages of collecting data. But now we’re going to focus on one algorithm in particular: the recommendation engine.

According to many media experts, the recommendation engine is a major culprit in the polarization of society. Recommendation engines are supposed to save us the time and anxiety of finding the "right" book or article to read, but they often end up narrowing our choices and our worldview. Facebook's and Google's personalization engines aren't truly personalized to each individual: the "recommendations for you" are generated by algorithms that suggest items based on users whose behavior resembles yours. They are developed not with the goal of true personalization, but of ad optimization and revenue. It's an important distinction.
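The "users similar to you also liked" logic described above is classic collaborative filtering. Here is a minimal sketch of the idea; the interaction matrix, similarity measure, and scoring are invented for illustration and are not Facebook's or Google's actual system:

```python
import numpy as np

# Hypothetical user-item interaction matrix: rows are users, columns are
# articles; 1 means the user engaged with that article.
interactions = np.array([
    [1, 1, 0, 0, 1],   # user 0
    [1, 1, 0, 1, 0],   # user 1
    [0, 0, 1, 1, 1],   # user 2
])

def recommend(user, interactions, top_n=2):
    """Score items by how often similar users engaged with them,
    excluding items the target user has already seen."""
    # Cosine similarity between the target user and every other user.
    norms = np.linalg.norm(interactions, axis=1)
    sims = interactions @ interactions[user] / (norms * norms[user] + 1e-9)
    sims[user] = 0.0                           # ignore self-similarity
    scores = sims @ interactions               # weight items by similarity
    scores[interactions[user] > 0] = -np.inf   # drop already-seen items
    return [int(i) for i in np.argsort(scores)[::-1][:top_n]]

print(recommend(0, interactions))  # → [3, 2]
```

Note that nothing in this loop asks whether item 3 is good *for this user*; it only asks what behaviorally similar users clicked, which is exactly why such engines reinforce existing patterns.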

That’s why personalization engines are limited. A user who follows a white supremacist on Facebook out of sheer curiosity will have their feed filled with extremist content for a month afterward. A user who genuinely wants to read something outside their usual habits is only offered recommendations based on past preferences. A user who bought a baby gift for his nephew is inundated with daily sales on baby items, despite those ads being useless for his day-to-day life.

High-Quality Web Data that Powers New Algorithms

Here are a few examples of new algorithms built on high-quality web data:

  • A team at Yale University designed a personalization engine with the goal of minimizing the effects of polarization. Their personalization engine results included an equal number of articles from both sides of the political spectrum. The dataset they used to develop this algorithm included news articles collected over the last 30 days from Webhose’s News API.
  • Hai is a recommendation engine that goes beyond collecting data from a single domain, as most recommendation engines do. Instead, it integrates datasets from a range of apps such as Netflix, Hulu, and Spotify. It also uses AI to make recommendations based on your individual tastes and preferences, rather than on “what users similar to you also liked.”
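The Yale team's balance constraint can be sketched as a change to the ranking step: instead of returning the top-k articles overall, return an equal number from each side of the spectrum. A minimal sketch, with invented article names and scores (the actual engine's scoring is not described here):

```python
# Each article: (name, side of the spectrum, relevance score).
articles = [
    ("left-a", "left", 0.9), ("left-b", "left", 0.7), ("left-c", "left", 0.4),
    ("right-a", "right", 0.8), ("right-b", "right", 0.5), ("right-c", "right", 0.3),
]

def balanced_feed(articles, k=4):
    """Return the k highest-scoring articles, split evenly by side."""
    per_side = k // 2
    feed = []
    for side in ("left", "right"):
        pool = [a for a in articles if a[1] == side]
        pool.sort(key=lambda a: a[2], reverse=True)
        feed.extend(pool[:per_side])         # best per_side items per side
    return sorted(feed, key=lambda a: a[2], reverse=True)

print([name for name, _, _ in balanced_feed(articles)])
# → ['left-a', 'right-a', 'left-b', 'right-b']
```

The design choice is that balance is enforced as a hard constraint before the final ranking, so no score gap between the two pools can crowd one side out of the feed.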

Another group of researchers at Simon Fraser University took it one step further. They decided to eliminate polarizing content as much as possible — before it is even suggested by the recommendation engine. They built their own algorithms with the help of data that compared real news articles with fake ones. The fake news items came from the Russian Internet Research Agency, while the real news articles, sourced from Webhose, comprised 2,500 items from a total of 172 news sources. The result of the research? A fake news detector that would identify disinformation before it was posted.
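The article does not specify which model the researchers used, but the general shape of a text classifier trained on labeled real/fake articles can be sketched with a tiny Naive Bayes classifier. The toy headlines below are invented stand-ins for the labeled training data:

```python
import math
from collections import Counter

# Invented toy corpus standing in for the labeled real/fake training data.
train = [
    ("city council approves new transit budget after public hearing", "real"),
    ("senate committee releases quarterly report on spending", "real"),
    ("school district announces updated enrollment figures", "real"),
    ("shocking secret cure doctors don't want you to know", "fake"),
    ("you won't believe what this politician is hiding", "fake"),
    ("miracle trick exposes the truth the media refuses to report", "fake"),
]

def fit(train):
    """Count word and document frequencies per class."""
    word_counts = {"real": Counter(), "fake": Counter()}
    doc_counts = Counter()
    for text, label in train:
        doc_counts[label] += 1
        word_counts[label].update(text.lower().split())
    return word_counts, doc_counts

def predict(text, word_counts, doc_counts):
    """Pick the class with the highest log-probability for the text."""
    vocab = set(word_counts["real"]) | set(word_counts["fake"])
    best_label, best_score = None, float("-inf")
    for label in ("real", "fake"):
        total = sum(word_counts[label].values())
        score = math.log(doc_counts[label] / sum(doc_counts.values()))
        for word in text.lower().split():
            # Laplace smoothing so unseen words don't zero out the score.
            score += math.log((word_counts[label][word] + 1) / (total + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

wc, dc = fit(train)
print(predict("shocking miracle cure you won't believe", wc, dc))  # → fake
```

The real study would of course use far richer features and thousands of documents; the point of the sketch is only that a labeled corpus of genuine versus IRA-produced articles is what makes such a detector trainable at all.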

Working to Bring People Together


 
