Reddit on Wednesday announced a comprehensive strategy to tackle the pervasive problem of automated accounts, the kind of bot activity that contributed to the decline of onetime competitor Digg.
The company plans to implement a system for labeling automated accounts that provide beneficial services to users, akin to the "good bot" designations seen on X. Furthermore, Reddit will now mandate human verification for any accounts suspected of being bots.
Reddit emphasized that this is not a sitewide verification requirement. Instead, verification will be triggered only when an account's activity or other technical signals suggest it may not be human. Accounts that fail the check may face restrictions, the company said.
To identify potential bots, Reddit is deploying tools that analyze account-level signals, such as the speed at which an account attempts to create content or post. Notably, using AI to generate posts or comments does not violate Reddit's platform policies, though individual community moderators retain the authority to set their own rules.
For human verification, Reddit will rely on a range of third-party tools. These include passkeys from providers such as Apple, Google, and YubiKey; biometric checks such as Face ID; and even Sam Altman's World ID. In certain regions, including the U.K., Australia, and some U.S. states, government identification may be required to comply with local age verification laws. Reddit clarified that while necessary in those contexts, government ID is not its preferred verification method.
“If we need to verify an account is human, we’ll do it in a privacy-first way,” Reddit co-founder and CEO Steve Huffman stated in Wednesday’s announcement. He added, “Our aim is to confirm there is a person behind the account, not who that person is. The goal is to increase transparency of what is what on Reddit while preserving the anonymity that makes Reddit unique. You shouldn’t have to sacrifice one for the other.”
These adjustments are designed to counter the escalating prevalence of bots across social platforms and the broader internet. Bots are frequently exploited to manipulate political discourse, disseminate misinformation, artificially inflate popularity, covertly market products, generate fraudulent ad clicks, and more. Projections from Cloudflare suggest that by 2027, bot traffic, encompassing web crawlers and AI agents, will surpass human traffic.
Reddit, in particular, has emerged as a significant target for bots engaged in manipulating narratives, astroturfing for companies or products, reposting links, distributing spam, driving traffic, and conducting research. Because Reddit's content is licensed for AI training through lucrative agreements with AI model providers, there are also suspicions that bots post questions on the site specifically to generate additional training data, especially in knowledge areas where AI models fall short.
Reddit’s other co-founder, Alexis Ohanian, has previously addressed the "dead internet theory"—a conjecture positing that bots outnumber humans online and that the vast majority of internet content, interactions, and web activity is automated or AI-generated rather than human-driven. In the current era of advanced AI agents, this theory is increasingly manifesting as a reality.
The company had announced last year its intention to introduce human verification requirements to address the proliferation of bots and to comply with "evolving regulatory requirements." However, Reddit now acknowledges that existing solutions, which Huffman recently discussed on the TBPN podcast, are not optimal.
“The best long-term solutions will be decentralized, individualized, private, and ideally not require an ID at all,” Huffman wrote in Wednesday’s announcement.
In conjunction with these new measures, Reddit affirmed its ongoing commitment to removing bots and spam, a process that currently sees an average of 100,000 account removals daily. The platform will also continue to rely on user reports of suspected bots, with further tooling enhancements anticipated. Developers operating "good bots" can find more information on how to label their accounts using the new “APP” designation within the r/redditdev community.
The Editorial Staff at AIChief is a team of professional content writers with extensive experience in AI and marketing.