OpenAI Staff Divided Over Sora App Launch Amid Mission vs. Profit Tensions
OpenAI employees and former researchers are expressing growing unease over the company's new social media venture, the Sora app—a TikTok-style platform featuring AI-generated videos and an abundance of Sam Altman deepfakes. The launch has sparked internal debate about whether the move aligns with OpenAI's original nonprofit mission to develop artificial intelligence that benefits humanity.

John Hallman, a former OpenAI pretraining researcher, shared concerns on X, calling AI-driven social feeds "scary" while acknowledging the team's effort to design a positive user experience. "We're going to do our best to make sure AI helps and does not hurt humanity," he wrote.

Boaz Barak, a researcher and Harvard professor, echoed those mixed feelings: "Sora 2 is technically amazing but it's premature to congratulate ourselves on avoiding the pitfalls of other social media apps and deepfakes."

Former OpenAI researcher Rohan Pandey used the moment to promote Periodic Labs, a new startup founded by ex-AI lab employees focused on using AI for scientific discovery. "If you don't want to build the infinite AI TikTok slop machine but want to develop AI that accelerates fundamental science… come join us," he wrote.

The tension reflects a broader struggle within OpenAI: balancing its identity as a fast-growing consumer tech company with its self-proclaimed mission to advance safe, beneficial AI. While products like ChatGPT have helped fund research and spread AI tools widely, critics question whether the company's increasing focus on consumer platforms risks undermining its core goals.

Sam Altman defended the Sora launch on X, stating that while OpenAI remains deeply committed to AGI and scientific advancement, creating engaging, fun products helps generate revenue and keep users excited about AI. "It is also nice to show people cool new tech/products along the way, make them smile, and hopefully make some money given all that compute need," he wrote.
He acknowledged past skepticism around ChatGPT's purpose, noting that the path forward is not always clear-cut: "Reality is nuanced when it comes to optimal trajectories for a company." Yet the question remains: at what point does the pursuit of growth and profit override OpenAI's mission?

Regulators are watching closely. California Attorney General Rob Bonta has voiced concern that OpenAI's shift toward a for-profit model could dilute its nonprofit commitments. Some insiders argue the mission is genuine, citing it as a key reason they joined the company; others see it as a branding tool to attract top talent.

Sora is still in its early days, but its launch marks a significant expansion into consumer-facing AI platforms. Unlike ChatGPT, which prioritizes utility, Sora is designed for entertainment—featuring short, looping AI videos reminiscent of TikTok or Instagram Reels. OpenAI claims it is avoiding the worst pitfalls of social media by not optimizing for time spent on the app. Instead, it aims to encourage creation and includes features like usage reminders and a focus on content from people users know. These safeguards are stronger than those seen in Meta's recent Vibes launch.

Still, early signs suggest engagement-driven design is already at play: dynamic emojis appear when users like videos, a subtle nudge to encourage interaction.

As Altman has previously noted, the unintended consequences of social media feeds are well documented. The algorithms that keep users scrolling often do so at the expense of mental health and societal well-being. Whether OpenAI can build a social platform that avoids those traps remains uncertain. The real test will be how Sora evolves—and whether the company can grow its consumer business without sacrificing its mission.