Twitter has published the results from a study that analyzed its algorithms to explore potential political bias. The company notes that users who have an algorithmically sorted home timeline will see a combination of tweets from people they follow and recommended tweets based on their Twitter activity. This initial study compared the political content users see in their timeline when it is sorted by an algorithm rather than chronologically.

Earlier this year, Twitter said it would study the fairness of its algorithms and the ways in which they may be unintentionally contributing to ‘harms.’ This new study is part of that undertaking; it focused on tweets from elected officials across seven countries, as well as recommended content from news outlets surfaced by its algorithms.

Among other things, the company looked at whether its algorithms are amplifying certain political groups more than others and, if so, whether that is a consistent issue in multiple countries. The company also explored whether its algorithms favor certain political news outlets more than others. The analysis involved millions of tweets that were published between April 1 and August 15, 2020.

Here’s what’s complex: The team did phenomenal work identifying *what* is happening. Establishing *why* these observed patterns occur is a significantly more difficult question to answer and something META will examine.
5/n

— Rumman Chowdhury (@ruchowdh) October 21, 2021

Twitter has shared some of its findings from the analysis, including that, compared to a chronological timeline, the algorithmically sorted timeline amplifies political content regardless of party. Of note, in six of the seven countries studied, tweets from the political right were amplified more than tweets from the political left.
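To make the notion of “amplification” concrete, here is a minimal sketch of one way such a ratio could be computed: the reach of a group’s tweets among users with algorithmic timelines, divided by its reach among users with chronological timelines. All function names and data below are hypothetical illustrations, not Twitter’s actual methodology, which the company describes in its published paper.

```python
# Hypothetical sketch of an amplification ratio: the number of distinct users
# who saw at least one tweet from a group in algorithmic timelines, divided by
# the same count among users with chronological timelines. Names and numbers
# are made up for illustration; this is not Twitter's actual methodology.

def reach(impressions: dict[str, set[str]], group: str) -> int:
    """Count distinct users who saw at least one tweet from `group`."""
    return len(impressions.get(group, set()))

def amplification_ratio(algo: dict[str, set[str]],
                        chrono: dict[str, set[str]],
                        group: str) -> float:
    """A ratio above 1.0 means the algorithmic timeline amplified the group."""
    return reach(algo, group) / reach(chrono, group)

# Toy data: user IDs who saw tweets from each (hypothetical) political group.
algo_impressions = {"party_a": {"u1", "u2", "u3", "u4"}, "party_b": {"u1", "u2"}}
chrono_impressions = {"party_a": {"u1", "u2"}, "party_b": {"u1", "u2"}}

print(amplification_ratio(algo_impressions, chrono_impressions, "party_a"))  # 2.0
print(amplification_ratio(algo_impressions, chrono_impressions, "party_b"))  # 1.0
```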

Twitter also found that its algorithm amplifies content from right-leaning news publications more strongly. The company isn’t sure why this is the case, noting in a blog post about the study that “further root cause analysis is required in order to determine what, if any, changes are required to reduce adverse impacts by our Home timeline algorithm.”

Twitter’s Director of Software Engineering, Rumman Chowdhury, explained:

In this study, we identify what is happening: certain political content is amplified on the platform. Establishing why these observed patterns occur is a significantly more difficult question to answer as it is a product of the interactions between people and the platform. The ML Ethics, Transparency and Accountability (META) team’s mission, as researchers and practitioners embedded within a social media company, is to identify both, and mitigate any inequity that may occur.
