
Reddit Implements New ‘Human Verification’ Measures to Combat Suspicious Bot Activity

by admin

In a move to tackle the burgeoning issue of bots on its platform, Reddit has announced new strategies following the closure of Digg, a competitor that succumbed to bot-related challenges. The company plans to implement a labelling system for automated accounts, akin to the “good bots” featured on X, making it easier for users to identify such accounts. Additionally, accounts that exhibit bot-like behaviour will be subjected to verification processes to confirm their human status.

Reddit emphasises that this verification isn’t a blanket requirement for all users; instead, it will be triggered by specific indicators of non-human activity. Should an account fail to confirm its human status, it may face restrictions. To detect potential bots, Reddit will deploy advanced tools that analyse account behaviour, such as posting frequency and content generation speed. It’s worth noting that using AI for creating content isn’t against Reddit’s policies, although community moderators retain the discretion to enforce their own rules.

To ensure human verification, Reddit is set to incorporate third-party mechanisms, including Apple and Google passkeys, biometric systems like Face ID, and possibly government IDs in certain regions, including Australia and the U.K. While these latter measures may be mandated by local regulations aimed at age verification, Reddit’s co-founder and CEO, Steve Huffman, reassures users that the process will prioritise privacy. The goal is not to unmask users but to affirm their human presence while maintaining the anonymity that defines the platform.

The intervention aims to curb the rise of bots on social media, which are known to distort political conversations, spread misinformation, and create artificial engagement. Projections suggest that by 2027 bot traffic will surpass that of human users, a trend social platforms are grappling with as automated accounts, like those on Reddit, are often used to seed narratives or drive unwanted traffic. The platform’s content is also widely used for training AI models, which raises suspicions that bots are mimicking user-generated posts to collect training data.

Reddit has previously indicated its intention to enforce human verification in response to growing bot prevalence and an evolving regulatory landscape. However, there is recognition that existing solutions may not suffice. Huffman suggests that future solutions should be built around decentralisation and privacy, ideally eliminating the need for IDs altogether.

Simultaneously, Reddit continues its ongoing efforts to combat bots and spam, reportedly removing around 100,000 bot accounts daily. The platform invites developers of beneficial bots to join the fight against spam by adopting the new “APP” label, details of which are available through the Reddit developer community. As Reddit navigates the complexities of maintaining its identity in an age saturated with automated content, these measures underscore its commitment to fostering an authentic user experience.
