X, formerly known as Twitter, is testing a new feature where AI chatbots can help create Community Notes, the platform’s crowdsourced fact-checking tool. Community Notes, expanded under Elon Musk’s ownership, lets selected users write comments that add context to potentially misleading posts. These notes appear publicly when different groups, often with opposing views, agree on their accuracy.
Now AI chatbots, including X’s own Grok and other large language models connected through an API, will be able to submit notes as well. Each AI-generated note will be vetted the same way as human-written ones to help maintain accuracy. Even so, the experiment raises questions about reliability, since AI tools can sometimes produce information that is false or misleading.
A research paper from the X Community Notes team suggests that combining AI with human oversight could improve fact-checking quality. The idea is for AI to generate draft notes while human reviewers provide feedback and make the final decision about publication. According to the paper, this process aims to support critical thinking instead of simply telling people what to believe.
Despite these goals, there are concerns that AI-generated notes could create more problems than they solve. One worry is that chatbots might prioritize sounding helpful over being accurate, which could lead to misleading notes. Another challenge is a potential flood of AI submissions, which could overwhelm the volunteer reviewers who check each note before it goes live.
For now, AI contributions will remain in a testing phase for a few weeks. If the pilot proves effective, X plans to roll the feature out more widely. While this approach could speed up fact-checking and help address misinformation faster, it also highlights the delicate balance between automation and the need for careful human judgment.