OpenAI is going to unveil GPT-5 soon. In the meantime, it has released gpt-oss, a pair of open-weight language models that outperform similarly sized open models on reasoning tasks. As the company explains:
The gpt-oss-120b model achieves near-parity with OpenAI o4-mini on core reasoning benchmarks, while running efficiently on a single 80 GB GPU. The gpt-oss-20b model delivers similar results to OpenAI o3‑mini on common benchmarks and can run on edge devices with just 16 GB of memory.
gpt-oss-120b is now the leading 🇺🇸 US open weights model. Qwen3 235B from Alibaba is the leading 🇨🇳 Chinese model and offers greater intelligence, but is much larger in size (235B total parameters, 22B active, vs gpt-oss-120B’s 117B total, 5B active)
— Artificial Analysis (@ArtificialAnlys) August 6, 2025
Both models can use tools and perform well on chain-of-thought reasoning. They are compatible with the Responses API and can execute Python code. Architecturally, they are transformers that use a mixture-of-experts design to reduce the number of active parameters needed to process input. According to the latest benchmarks, gpt-oss-120b is now the leading US open-weight model.
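Because the models ship with open weights, you can serve them yourself and talk to them through any OpenAI-compatible client. The sketch below is a minimal example, assuming gpt-oss-20b is already running behind a local OpenAI-compatible server (for instance Ollama or vLLM); the base_url, port, and model name are assumptions you would adjust for your own setup.

```python
# Minimal sketch: querying a locally served gpt-oss model through an
# OpenAI-compatible endpoint. The endpoint address and model name below
# are assumptions -- change them to match the server you are running.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # assumed local server address
    api_key="not-needed-locally",          # placeholder; local servers often ignore it
)

response = client.chat.completions.create(
    model="gpt-oss-20b",  # assumed name the model is registered under locally
    messages=[
        {"role": "user", "content": "Explain mixture-of-experts routing in two sentences."},
    ],
)

print(response.choices[0].message.content)
```

The same pattern works for gpt-oss-120b if your hardware can host it; only the model name and the server you point at would change.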