Campbell Brown, Ex-Meta News Chief, on Governing AI's Truth

Originally reported by TechCrunch

Campbell Brown, whose career has been dedicated to the pursuit of accurate information — initially as a distinguished TV journalist and subsequently as Facebook's inaugural and sole news chief — now observes artificial intelligence profoundly altering information consumption. Recognizing a potential recurrence of past challenges, she is taking direct action rather than awaiting external solutions.

Her initiative involves identifying leading global experts to develop benchmarks, which are then used to train AI judges for large-scale model evaluation. For Forum AI's geopolitics division, Brown has assembled a notable team including Niall Ferguson, Fareed Zakaria, former Secretary of State Tony Blinken, former House Speaker Kevin McCarthy, and Anne Neuberger, who previously directed cybersecurity in the Obama administration. The objective is for AI judges to achieve approximately 90% consensus with these human experts, a benchmark Forum AI reportedly meets.

Brown traces the genesis of Forum AI, established 17 months ago in New York, to a pivotal moment. "I was at Meta when ChatGPT was first released publicly," she recounted, adding, "really shortly after realizing this is going to be the funnel through which all information flows. And it’s not very good." The potential impact on her own children lent an almost existential urgency to the situation. She recalled thinking, "My kids are going to be really dumb if we don’t figure out how to fix this."

Her primary frustration stemmed from the apparent lack of prioritization given to accuracy. She noted that foundation model companies are "extremely focused on coding and math," while the complexities of news and information present a greater challenge. However, she contended that difficulty does not equate to dispensability.

Forum AI's initial evaluations of prominent models yielded less than encouraging results. Brown highlighted instances like Gemini sourcing content from Chinese Communist Party websites "for stories that have nothing to do with China," and observed a pervasive left-leaning political bias across almost all models. She also pointed to more subtle deficiencies, such as missing context, omitted perspectives, and the unacknowledged misrepresentation of arguments. "There’s a long way to go," she stated, "But I also think that there are some very easy fixes that would vastly improve the outcomes."

Brown's tenure at Facebook offered firsthand insight into the repercussions of platforms optimizing for misaligned objectives. "We failed at a lot of the things we tried," she confessed to TechCrunch's Fernholz. The fact-checking program she had developed is no longer operational. The overarching lesson, often overlooked by social media companies, is that prioritizing engagement has proven detrimental to society, leaving many individuals less informed.

She harbors hope that AI can disrupt this cycle. "Right now it could go either way," she explained, presenting a dichotomy where companies might either cater to user preferences or "give people what's real and what's honest and what's truthful." While acknowledging that the idealistic vision of AI optimizing for truth might appear naive, she believes the enterprise sector could emerge as an unexpected ally. Businesses leveraging AI for critical functions such as credit decisions, lending, insurance, and hiring are inherently concerned with liability, and "they're going to want you to optimize for getting it right."

This burgeoning enterprise demand forms the cornerstone of Forum AI's business strategy. However, converting compliance interest into consistent revenue presents a significant challenge, especially given that a substantial portion of the current market remains content with perfunctory "checkbox" audits and standardized benchmarks that Brown deems insufficient.

The current compliance landscape, she asserted, is "a joke." She cited the example of New York City's pioneering hiring bias law, which mandated AI audits; the state comptroller subsequently discovered that over half of these audits failed to detect violations. Genuine evaluation, she argued, necessitates deep domain expertise to navigate not only well-understood scenarios but also complex "edge cases that can get you into trouble that people don't think about." This meticulous work is time-intensive, and as she put it, "Smart generalists aren't going to cut it."

Brown, whose company secured $3 million in funding last fall in a round led by Lerer Hippeau, is uniquely qualified to articulate the disparity between the AI industry's self-portrayal and the everyday experience of most users. "You hear from the leaders of the big tech companies, 'This technology is going to change the world,' 'it's going to put you out of work,' 'it's going to cure cancer,'" she observed. "But then to a normal person who's just using a chatbot to ask basic questions, they're still getting a lot of slop and wrong answers."

Trust in AI remains remarkably low, and she contends that such skepticism is often justified. "The conversation is sort of happening in Silicon Valley around one thing, and a totally different conversation is happening among consumers," she concluded.

#AI News #Campbell Brown #Forum AI #AI Truth #Model Evaluation
Editorial Staff, Editor

The Editorial Staff at AIChief is a team of professional content writers with extensive experience in AI and marketing. Founded in 2025, AIChief has quickly grown into the largest free AI resource hub in the industry.
