OpenAI Tests Compute-Intensive Features, Limiting Some to Pro Users Amid Rising GPU Demand
OpenAI CEO Sam Altman has announced that the company is testing new, compute-intensive features, warning users that some will initially be restricted to Pro subscribers or carry additional fees due to high costs.

In a post on X, Altman said OpenAI is launching new offerings that require significant computational resources, describing the effort as an experiment in pushing AI infrastructure to its limits. He wrote, "We also want to learn what's possible when we throw a lot of compute, at today's model costs, at interesting new ideas."

Despite the rising expenses, Altman reaffirmed OpenAI's long-term goal of reducing the cost of intelligence and making advanced AI widely accessible. "We are confident we will get there over time," he said, emphasizing that the company remains committed to democratizing AI.

The announcement comes amid a broader industry trend of escalating demand for computing power, and OpenAI has been vocal about its insatiable need for GPUs. Kevin Weil, OpenAI's chief product officer, recently said on the "Moonshot" podcast that every new GPU acquired is immediately put to use, comparing the surge in compute demand to the bandwidth boom that fueled the video revolution. Altman previously stated in July that OpenAI aims to deploy more than 1 million GPUs by the end of the year, adding a lighthearted challenge to his team: "Very proud of the team but now they better get to work figuring out how to 100x that lol."

Other AI companies are also investing heavily in infrastructure. Elon Musk's xAI revealed it is using a supercluster of over 200,000 GPUs, named Colossus, to train its Grok 4 model. Meanwhile, Meta's Mark Zuckerberg recently said the company is prioritizing "compute per researcher" as a key competitive edge, investing heavily in GPUs and custom-built infrastructure to outpace rivals.