Open-weight models are the middle ground between fully open-source and closed models: with each new release, anyone gets the ability to adapt how the model behaves. Qwen3.5-397B-A17B is the first open-weight model in the Qwen3.5 series. It is a multimodal LLM built on hybrid linear attention plus sparse MoE and trained with large-scale RL environment scaling, and it performs strongly at coding, reasoning, and agentic tasks.
🚀 Qwen3.5-397B-A17B is here: The first open-weight model in the Qwen3.5 series.
🖼️Native multimodal. Trained for real-world agents.
✨Powered by hybrid linear attention + sparse MoE and large-scale RL environment scaling.
⚡8.6x–19.0x decoding throughput vs Qwen3-Max
🌍201… pic.twitter.com/Pq0qIk54MB
— Qwen (@Alibaba_Qwen) February 16, 2026
The model has 397B total parameters with only 17B active per token. The version hosted on Alibaba Cloud has a 1M-token context window and built-in tools, and it performs competitively with SOTA models across various benchmarks.
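The gap between total and active parameters comes from sparse MoE routing: each token is sent to only a few experts, so most of the network's weights sit idle on any given forward pass. Here is a minimal, illustrative sketch of top-k expert routing in NumPy — not Qwen's actual implementation, and all dimensions and names (`d_model`, `n_experts`, `top_k`) are made up for demonstration.

```python
import numpy as np

# Illustrative top-k sparse MoE routing (toy dimensions, not Qwen's code).
# With n_experts experts but only top_k routed per token, the per-token
# active parameter count is a small fraction of the total -- the same idea
# behind 397B total vs 17B active.

rng = np.random.default_rng(0)

d_model, n_experts, top_k = 64, 8, 2
experts = [rng.standard_normal((d_model, d_model)) * 0.02 for _ in range(n_experts)]
router = rng.standard_normal((d_model, n_experts)) * 0.02

def moe_forward(x):
    """Route token vector x to its top_k experts and mix their outputs."""
    logits = x @ router
    top = np.argsort(logits)[-top_k:]                # indices of the k highest-scoring experts
    weights = np.exp(logits[top] - logits[top].max())
    weights /= weights.sum()                         # softmax over the selected experts only
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.standard_normal(d_model)
out = moe_forward(token)
print(out.shape)                                     # (64,)
print(f"active expert fraction: {top_k / n_experts:.2f}")
```

In this toy setup only 2 of 8 experts fire per token, so a quarter of the expert weights are active; at Qwen3.5's scale the same mechanism keeps decoding cheap relative to a dense model of equal total size.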

[HT]

