Barry Diller trusts Sam Altman. But 'trust is irrelevant' as AGI nears, he says.

May 7, 2026

Billionaire media mogul Barry Diller doesn’t think OpenAI CEO Sam Altman is untrustworthy, despite recent reporting to the contrary. On stage at The Wall Street Journal’s “Future of Everything” conference this week, Diller vouched for the AI exec, who has been accused by some former colleagues and board members of being manipulative and deceptive at times.

Diller, who is friendly with Altman, was responding to a question about whether people should put their faith in Altman to ensure that artificial intelligence benefits humanity. In particular, he was asked about the theoretical form of AI known as Artificial General Intelligence, or AGI, which could one day outperform humans on any task.

The media exec, a co-founder of Fox Broadcasting and chairman of IAC and Expedia Group, said that while he believes Altman is sincere in his pursuits, that isn’t really where people’s concern should be focused. Rather, it’s the unknown consequences that will result from AI.

“One of the big issues with AI is it goes way beyond trust,” Diller said. “It may be that trust is irrelevant because the things that are happening are a surprise to the people who are making those things happen. And I’ve spent a lot of time with various people who’ve been in the creation mode of AI, and they have a sense of wonder themselves. So…it’s the great unknown. We don’t know. They don’t know,” he explained.

“We have embarked on something that is going to change almost everything. It is not under-reported. Now, whether these huge investments are going to come through — I couldn’t care less. I’m not invested in it, but progress is going to be made,” Diller added.

Still, the media mogul said he believes that most of the people leading the charge are good stewards, calling Altman sincere and “a decent person with good values.” (Diller wouldn’t say which of the AI leaders he thinks is insincere, we should note.) “But the issue is not their stewardship.
The issue is … it’s dealing truly with the unknown. They don’t know what can happen once you get AGI, and we’re close to it. We’re not there yet, but we’re getting closer and closer, quicker and quicker. And we must think about guardrails,” Diller noted. Plus, he warned, if humans don’t think about guardrails, then the alternative is that “another force, an AGI force, will do it themselves. And once that happens, once you unleash that, there’s no going back,” Diller said.
Editorial Staff

The Editorial Staff at AIChief is a team of professional content writers with extensive experience in AI and marketing. Founded in 2025, AIChief has quickly grown to become the largest free AI resource hub in the industry. Stay connected on Facebook, Instagram, and X for the latest updates.