Analysis of TikTok and X’s ‘For You’ Feeds in Germany Reveals Far-Right Political Bias Ahead of Federal Elections


According to recent research by Global Witness, the recommendation algorithms employed by major social media platforms TikTok and X have demonstrated significant bias favoring the Far Right in Germany as the country approaches a federal election this Sunday.

The non-profit organization analyzed the content presented to new users through algorithm-driven ‘For You’ feeds, discovering that both TikTok and X significantly prioritize content that supports the Far Right AfD party in their recommendations.

The analysis revealed that TikTok exhibited the highest level of bias; 78% of the political content recommended to test accounts—content from accounts they did not follow—was supportive of the AfD party. This figure notably surpasses the party’s current polling support, which stands around 20% among German voters.

In contrast, X was found to recommend 64% of its political content in favor of the AfD.

The study aimed to ascertain whether the algorithms displayed a general political bias, finding that non-partisan users in Germany are currently more than twice as likely to encounter right-leaning content compared to left-leaning content in the lead-up to the federal elections.

Again, TikTok showed a stronger lean towards right-leaning content at a rate of 74%, with X closely following at 72%.

Meta’s Instagram was also tested, revealing a right-leaning tendency across three separate evaluations, with 59% of political content identified as right-wing.

Investigating ‘For You’ for Political Bias

To determine the presence of political bias in the algorithms of these platforms, the researchers created three accounts each for TikTok, X, and Instagram. They aimed to investigate what type of political content would be promoted to users expressing a non-partisan interest.

To appear non-partisan, the testing accounts followed the four largest political parties in Germany: the right-leaning CDU, center-left SPD, Far Right AfD, and left-leaning Greens, along with their respective leaders (Friedrich Merz, Olaf Scholz, Alice Weidel, Robert Habeck).

Researchers ensured that each test account interacted with the top five posts from each followed account, engaging with the content by watching videos for at least 30 seconds and exploring threads and images, as reported by Global Witness.

They then collected and analyzed the content each platform presented to the test accounts, uncovering a notable right-leaning skew in the content being algorithmically suggested.
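The tallying step described above reduces to a simple proportion: label each recommended political post by leaning, then compute each leaning's share of the political content. The sketch below is purely illustrative — the labels, data, and function are hypothetical and not Global Witness's actual dataset or classification method.

```python
from collections import Counter

def leaning_shares(labels):
    """Given hand-applied labels such as 'right', 'left', or
    'non-political' for each recommended post, return each leaning's
    share of the political content only."""
    political = [label for label in labels if label != "non-political"]
    counts = Counter(political)
    total = len(political)
    return {leaning: count / total for leaning, count in counts.items()}

# Toy sample: 3 right-leaning and 2 left-leaning posts among 5 political ones
sample = ["right", "right", "left", "non-political", "right", "left"]
shares = leaning_shares(sample)
# shares["right"] is 0.6 and shares["left"] is 0.4 for this toy sample
```

On this reading, TikTok's reported 78% figure would mean that 78 of every 100 political posts shown to the test accounts were labelled as supportive of the AfD.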

“One of our primary concerns is the lack of clarity on why certain content was recommended to us,” stated Ellen Judson, a senior campaigner at Global Witness, in an interview with TechCrunch. “We found evidence suggesting bias, yet the platforms lack transparency regarding their recommendation processes.”

“While we understand that numerous signals are utilized, the specifics of how these signals are weighted and assessed for potential bias remain unclear,” Judson continued.

“My hypothesis is that this could be an unintended consequence of algorithms designed primarily to boost user engagement,” she elaborated. “This issue arises as platforms originally meant to enhance user engagement turn into arenas for democratic discourse—creating a conflict between commercial goals and public interest.”

These findings align with other studies Global Witness has conducted regarding recent elections in the U.S., Ireland, and Romania. In fact, various investigations in recent years have highlighted a consistent right-leaning tendency in social media algorithms, including research conducted on YouTube last year.

As early as 2021, an internal Twitter study—before it rebranded to X after Elon Musk’s acquisition—revealed that its algorithms favored right-leaning content over left-leaning material.

That said, social media companies often sidestep claims of algorithmic bias. After Global Witness shared its findings with TikTok, the platform suggested that the researchers’ methodology was flawed, arguing that conclusions about algorithmic bias cannot be made from limited testing. “They argued that the tests were not representative of regular users since they involved only a few test accounts,” Judson pointed out.

X has remained silent in response to Global Witness’s report. Nevertheless, Musk has expressed a desire for the platform to become a champion of free speech, a statement which some interpret as a potential push for a right-leaning agenda.

It is noteworthy that X’s owner has actively campaigned for the AfD on the platform, encouraging Germans to vote for the Far Right party in the upcoming elections, and conducting a live-streamed interview with Weidel, which has helped elevate the party’s visibility. Musk maintains the most-followed account on X.

A Move Towards Algorithmic Transparency?

“The issue of transparency is crucial,” stated Judson. “We have observed Musk discussing the AfD and garnering significant engagement on his posts about the party and the livestream. However, we still do not know if an algorithmic change has been implemented in response.”

“We are hopeful that the Commission will take our findings as a basis to investigate any political bias that might be occurring,” she added, confirming that Global Witness has communicated its results to EU officials tasked with enforcing algorithmic accountability among major platforms.

Examining how proprietary algorithms operate is challenging, as platforms keep such information confidential, treating it as a trade secret. That opacity is part of why the European Union introduced the Digital Services Act (DSA), a comprehensive online governance framework intended, among other things, to facilitate public interest research into systemic risks on major platforms such as Instagram, TikTok, and X.

The DSA requires major platforms to be more transparent about how their information-curating algorithms operate and to take proactive measures to address any systemic risks that may arise.

However, despite implementation of these regulations beginning in August 2023, Judson notes that several components have yet to be fully activated.

In particular, Article 40, which is designed to grant vetted researchers access to non-public platform data for the purpose of studying systemic risks, has not yet gone into effect as the necessary delegated act for its implementation has not been passed by the EU.

The EU’s strategy regarding the DSA also relies on platforms to self-report risks, with regulators subsequently reviewing these disclosures. Thus, Judson suggests that initial risk reports from platforms may lack comprehensive disclosures, as regulators will require time to assess the information and may push for more detailed reporting if they find gaps.

At present, without greater access to platform data, she asserts that public interest researchers remain unable to definitively ascertain whether inherent bias exists in mainstream social media.

“Civil society is closely monitoring when vetted researcher access will be granted,” she added, expressing hopes that this aspect of the DSA public interest initiative will materialize within this quarter.

The regulation has not yet yielded prompt results on social media concerns and democratic risks, and the EU’s approach may prove too cautious to respond swiftly to algorithmically amplified dangers. At the same time, the Commission clearly wants to avoid any perception that its interventions curb freedom of expression.

The Commission is currently investigating all three social media companies implicated in the Global Witness report, though there has not yet been any enforcement action in the area of election integrity. However, there has been increased scrutiny on TikTok, which is now subject to a new DSA proceeding amid concerns about its role in facilitating Russian interference in Romania’s presidential election.

“We are urging the Commission to examine potential political bias,” Judson emphasized. “[The platforms] deny any such bias exists, but our findings indicate otherwise. We are hopeful the Commission will utilize its enhanced information-gathering authority to determine whether this bias is present, and address it if necessary.”

The pan-European regulation grants enforcers the ability to impose fines of up to 6% of global annual revenue for violations and even to temporarily suspend access to non-compliant platforms.

