HyperAI

OpenAI Opens Up Deep Research Model API, Cuts Web Search Prices


OpenAI has announced a significant upgrade to its API services, opening access to its deep research models. The update adds powerful capabilities including automated web search, data analysis, MCP (Model Context Protocol) support, and code execution. The newly available models, o3-deep-research-2025-06-26 and o4-mini-deep-research-2025-06-26, were previously available only through ChatGPT; developers can now call them directly via the API. These models are particularly suited to complex tasks that require up-to-date information and advanced, multi-step reasoning.

Alongside the new models, the o3, o3-pro, and o4-mini models now support web search. OpenAI has also adjusted its pricing: web search for these reasoning models now costs $10 per 1,000 calls, and web search for GPT-4o and GPT-4.1 has been significantly reduced, to $25 per 1,000 calls.

To further improve the developer experience, OpenAI has introduced webhooks, which automatically notify developers when a task completes, eliminating the need for repeated manual status checks. OpenAI recommends webhooks for long-running jobs such as deep research to improve system reliability and development efficiency.

These updates mark important advances in OpenAI's API services, giving developers more robust and cost-effective tools as the company works to stay competitive with Google, Anthropic, and other leaders in a rapidly evolving AI landscape.

Additionally, OpenAI has announced its 2025 Developer Conference (DevDay), scheduled for October 6, 2025, in San Francisco. The conference is expected to draw over 1,500 attendees and feature a range of engaging activities, including live-streamed keynotes on the latest AI developments and future plans, as well as hands-on workshops covering the newest models and tools.
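As a rough illustration of the deep research workflow described above, the sketch below assembles an asynchronous API request. Only the model name comes from the announcement; the `/v1/responses` endpoint, the `web_search_preview` tool identifier, and the `background` flag are assumptions modeled on OpenAI's public Responses API, so verify them against the official reference before relying on them.

```python
import json
import os
import urllib.request

# Assumed endpoint: the deep research models are exposed through the
# Responses API. Only the model name below is taken from the announcement.
API_URL = "https://api.openai.com/v1/responses"

def build_deep_research_request(question: str) -> dict:
    """Assemble a request payload for an asynchronous deep research run."""
    return {
        "model": "o4-mini-deep-research-2025-06-26",  # model from the announcement
        "input": question,
        "tools": [{"type": "web_search_preview"}],    # assumed tool identifier
        "background": True,  # run asynchronously; completion arrives via webhook
    }

def submit(question: str, api_key: str) -> dict:
    """POST the payload and return the parsed JSON response."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_deep_research_request(question)).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__" and os.environ.get("OPENAI_API_KEY"):
    result = submit("Summarize recent advances in solid-state batteries.",
                    os.environ["OPENAI_API_KEY"])
    print(result.get("id"))
```

Because deep research runs can take minutes, the request sets `background` so the client is not left holding an open connection; the run's identifier can then be matched against a later webhook notification.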
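The webhook flow mentioned above can be sketched as a minimal receiver that reacts when a long-running task finishes. The event field names here ("type", "data.id", "response.completed") are hypothetical placeholders, not OpenAI's documented schema; a real handler should also verify the webhook's signature.

```python
import json
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

def handle_event(raw: bytes) -> str:
    """Parse a webhook payload and react to completion events.

    The field names used here are hypothetical placeholders; consult
    OpenAI's webhook documentation for the real event schema.
    """
    event = json.loads(raw)
    if event.get("type") == "response.completed":
        return f"deep research run {event['data']['id']} finished"
    return "ignored"

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON body, dispatch it, and acknowledge with 200 so the
        # sender does not retry the delivery.
        length = int(self.headers.get("Content-Length", 0))
        result = handle_event(self.rfile.read(length))
        self.send_response(200)
        self.end_headers()
        self.wfile.write(result.encode())

if __name__ == "__main__" and os.environ.get("RUN_WEBHOOK_SERVER"):
    HTTPServer(("", 8080), WebhookHandler).serve_forever()
```

Compared with polling a status endpoint in a loop, a receiver like this lets the client stay idle until the platform pushes the completion event, which is why OpenAI recommends webhooks for long-running jobs.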
As for DevDay, compared to last year the event will offer more stages and presentations to ensure all participants have a productive and insightful experience.

Meanwhile, in other AI news:

ElevenLabs Launches Voice Design v3: ElevenLabs, a leader in AI voice technology, has released Voice Design v3, a tool that generates highly realistic, personalized voices from simple text prompts. Supporting more than 70 languages and hundreds of regional accents, it represents a significant leap forward in AI voice synthesis, expanding creative freedom and emotional expression.

Google Reopens and Enhances "Ask Photos": Google has reintroduced and optimized its AI-powered "Ask Photos" search tool, which helps users find specific images by asking complex questions. After pausing the initial rollout for further improvements, Google now says the tool performs better in speed, quality, and user experience.

Google Launches Offerwall: To help publishers offset the impact of AI search on their revenue, Google has launched the Offerwall tool. The feature gives readers multiple ways to access content, including micropayments, completing surveys, and watching ads, and an AI model decides when to display the Offerwall to maximize engagement and revenue. In tests, the tool increased publisher income by 9%.

YouTube Introduces AI Summaries and Broader Chatbot Access: YouTube is rolling out two new AI features to enhance the user experience. The first is an AI summary shown in search results, currently available to U.S. YouTube Premium members: a search such as "best beaches in Hawaii" surfaces a special carousel of short video clips with AI-generated descriptions. The second is expanded access to YouTube's conversational AI tools.
Meta Recruits Top AI Researchers from OpenAI: Meta has secured the talents of three leading AI researchers, Lucas Beyer, Alexander Kolesnikov, and Xiaohua Zhai, from OpenAI. The three, known for their groundbreaking work in machine learning and computer vision, had only recently joined OpenAI after leaving Google DeepMind. Their recruitment underscores Meta's commitment to advancing its AI capabilities, particularly in vision transformers and scalable image models.

Black Forest Labs Open-Sources FLUX.1 Kontext [dev]: Black Forest Labs has open-sourced its latest image editing model, FLUX.1 Kontext [dev]. With 12 billion parameters, the model supports context-aware image generation and editing on consumer-grade hardware, offering a powerful alternative to GPT-4o's image capabilities. The release is expected to benefit creators, developers, and researchers alike.

These developments highlight the ongoing innovation and competition in the AI industry, as companies like OpenAI, Google, and Meta vie to build more sophisticated and useful AI tools.

Related Links