Campbell Brown, who built a reputation for defending truthful information as a TV journalist and later as Facebook’s news chief, is now taking on the challenges AI poses to how information spreads. Rather than wait for others to solve the problem, she aims to get ahead of it.
She discussed her venture, Forum AI, with Tim Fernholz at a StrictlyVC event in San Francisco. The company evaluates foundation models on complex “high-stakes topics” like geopolitics, mental health, finance, and hiring, where answers are rarely clear-cut.
Forum AI’s strategy involves enlisting top experts to create benchmarks and training AI systems to evaluate models on a large scale. For their geopolitics initiative, Brown has engaged experts such as Niall Ferguson, Fareed Zakaria, Tony Blinken, Kevin McCarthy, and Anne Neuberger. The aim is to achieve about 90% agreement between AI assessments and expert opinions, a goal Brown asserts they have met.
Reflecting on the inception of Forum AI, which was established 17 months ago in New York, Brown recounted her experience at Meta when ChatGPT was first introduced. She quickly realized its potential as a primary source of information, despite its shortcomings. Concerned about the impact on future generations, she felt a pressing need to address these challenges.
Brown’s chief grievance is that foundation model companies place too little emphasis on accuracy, focusing heavily on coding rather than on the complexities of news and information. Accuracy is hard, she argues, but that is no excuse for neglecting it.
When Forum AI started evaluating leading models, the results were not promising. Brown highlighted issues such as Gemini sourcing from irrelevant Chinese Communist Party websites and a prevalent left-leaning bias in most models. Additionally, she noted the lack of context and perspective as well as the presence of straw-man arguments. However, she believes that some straightforward solutions could significantly enhance these outcomes.
Having observed Facebook’s pitfalls in prioritizing engagement over accuracy, Brown noted many failures, including a now-defunct fact-checking program. Her takeaway is that focusing solely on engagement has negatively impacted societal understanding.
Brown hopes AI can alter this pattern. She suggests that companies have a choice: cater to user preferences or prioritize truthfulness and honesty. While the latter might seem idealistic, she believes businesses concerned with liability in areas like credit and insurance will demand accuracy.
Forum AI is banking on this business demand, although transforming compliance interest into sustainable revenue is challenging. The market often relies on basic audits, which Brown finds insufficient.
Describing the compliance landscape as inadequate, she mentioned that New York City’s hiring bias law revealed many undetected violations. Effective evaluation needs domain expertise to tackle both common scenarios and unforeseen issues. This process requires time and expertise beyond generalist capabilities.
Last fall, Brown’s company raised a $3 million round led by Lerer Hippeau. She points to the gap between the AI industry’s promises and users’ actual experience: while tech leaders tout transformative potential, everyday users still encounter inaccuracies in basic interactions.
Trust in AI remains low, a skepticism Brown feels is often justified. She observes a disconnect between the Silicon Valley narrative and consumer experiences.