While many of us use proprietary LLMs to get things done, it's always nice to see how far open-source models have come. Take Hunyuan-A13B, for instance: it's a mixture-of-experts (MoE) model with 80B total parameters, only 13B of which are active, yet it scores on par with o1 and DeepSeek across mainstream benchmarks. Its hybrid fast-and-slow reasoning approach lets it handle long-text tasks, and it also supports agentic tool calling.
🚀 Introducing Hunyuan-A13B, our latest open-source LLM.
As an MoE model, it leverages 80B total parameters with just 13B active, delivering powerful performance that scores on par with o1 and DeepSeek across multiple mainstream benchmarks.
Hunyuan-A13B features a hybrid… pic.twitter.com/8QTT547fcC
— Hunyuan (@TencentHunyuan) June 27, 2025
The model supports both fast and slow thinking modes and has a 256K context window. To disable thinking mode, simply pass this flag in your code:
enable_thinking=False
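For context, here's a minimal sketch of how that flag is typically passed when running the model through Hugging Face Transformers. The repo id and generation settings are assumptions to check against the model card; extra keyword arguments like enable_thinking are forwarded to the model's chat template.

from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo id -- verify against the official Hugging Face model card.
model_name = "tencent/Hunyuan-A13B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_name, device_map="auto", trust_remote_code=True
)

messages = [{"role": "user", "content": "Summarize why MoE models are efficient."}]

# enable_thinking=False switches off the slow (deep) reasoning mode,
# so the model answers directly instead of emitting a reasoning trace first.
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    enable_thinking=False,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))

Skipping the reasoning trace trades some depth for lower latency, which is the point of the hybrid design: use slow thinking for hard problems and fast thinking for everything else.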
[HT]