OpenAI has started rolling out a streamlined version of its agentic Deep Research tool to free-tier users, and at the same time, it’s increasing usage limits for Plus, Team, Enterprise, and Edu customers. According to the company, this new lightweight version runs on the recently launched o4-mini reasoning model and is built to be more affordable while still delivering strong performance.
Pro users will now have access to 250 Deep Research queries per month. Meanwhile, Plus, Team, Enterprise, and Edu users will each get 25 queries per month, and free-tier users will receive 5.
After a user hits the limit for the full-featured version of Deep Research, their queries will automatically switch to the lighter o4-mini-powered version.
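To make that fallback behavior concrete, here is a minimal sketch of quota-based routing. It is purely illustrative: the class, method names, and the specific limits are hypothetical placeholders, not OpenAI's implementation or API.

```python
# Hypothetical sketch of "use the full model until its quota runs out,
# then fall back to the lightweight o4-mini version" routing.
# All names and numbers here are placeholders for illustration only.

from dataclasses import dataclass


@dataclass
class DeepResearchQuota:
    full_limit: int          # placeholder: full Deep Research tasks allowed per period
    lightweight_limit: int   # placeholder: additional lightweight (o4-mini) tasks
    full_used: int = 0
    lightweight_used: int = 0

    def route_next_query(self) -> str:
        """Pick the tier that would serve the next query, or report that limits are hit."""
        if self.full_used < self.full_limit:
            self.full_used += 1
            return "full deep research"
        if self.lightweight_used < self.lightweight_limit:
            self.lightweight_used += 1
            return "lightweight deep research (o4-mini)"
        return "limit reached"


# Example: once the full-model quota is exhausted, queries silently fall back.
quota = DeepResearchQuota(full_limit=2, lightweight_limit=2)
print([quota.route_next_query() for _ in range(5)])
# ['full deep research', 'full deep research',
#  'lightweight deep research (o4-mini)', 'lightweight deep research (o4-mini)',
#  'limit reached']
```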
In a post on X (formerly Twitter), OpenAI explained that while responses from the new lightweight model will generally be shorter, they'll still offer the depth and quality users expect. Although the company hasn't released many technical specifics, it did share a chart suggesting the lighter model performs nearly as well as the full version while being much cheaper to operate.
What is Deep Research—and Why Does It Matter?
Deep Research refers to a class of AI tools designed to dive deep into online information, synthesize it, and generate high-level insights, often in the style of a research analyst. The concept first emerged with a tool from Google, but gained real traction in February when OpenAI released a more advanced version. Since then, similar features have appeared in other products, including Google's Gemini, xAI's Grok, Perplexity, and Microsoft's Copilot.
OpenAI described its Deep Research tool as capable of analyzing and summarizing hundreds of hours’ worth of web content to produce detailed, structured reports. Elon Musk made similar promises when he introduced Grok’s deep research mode earlier this year. While these claims can sometimes overshoot what’s actually delivered, there’s general agreement that deep research models typically outperform traditional web searches or standard chatbot responses.
That said, they’re not perfect. Like most AI systems, these tools can still “hallucinate”—fabricating information or presenting inaccurate conclusions. So while they represent a big leap forward, they’re best used with a critical eye.