New models are being released every week, or so it seems. Apparently, Qwen3 is about to drop. From the leaks we have seen, it is pre-trained on 36 trillion tokens across 119 languages. Here are the main highlights:
- Expanded Higher-Quality Pre-training Corpus: Qwen3 is pre-trained on 36 trillion tokens across 119 languages (triple the language coverage of Qwen2.5), with a much richer mix of high-quality data, including coding, STEM, reasoning, books, multilingual, and synthetic data.
- Training Techniques and Model Architecture: Qwen3 incorporates a series of training techniques and architectural refinements, including a global-batch load-balancing loss for MoE models and QK layer normalization (QK-Norm) for all models, leading to improved stability and overall performance (minimal sketches of both follow this list).
- Three-stage Pre-training:
- Stage 1: Focuses on broad language modeling and general knowledge acquisition.
- Stage 2: Improves skills in STEM, coding, and logical reasoning.
- Stage 3: Enhances long-context comprehension by extending training sequence lengths up to 32k tokens.
- Scaling Law Guided Hyperparameter Tuning: Through comprehensive scaling-law studies across the three-stage pre-training pipeline, Qwen3 systematically tunes critical hyperparameters, such as the learning rate scheduler and batch size, separately for dense and MoE models, resulting in better training dynamics and final performance across different model scales.
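To make the QK-Norm point concrete, here is a minimal, hedged sketch of what normalizing queries and keys inside attention looks like. This is my own illustrative reconstruction, not Qwen3's actual code: the choice of `LayerNorm` (versus RMSNorm), the per-head placement of the norm, and all dimensions are assumptions.

```python
# Minimal sketch of QK layer normalization (QK-Norm) in self-attention.
# Assumptions: LayerNorm applied per head to queries and keys before the
# dot-product; this is not Qwen3's implementation, just the general idea.
import torch
import torch.nn as nn
import torch.nn.functional as F

class QKNormAttention(nn.Module):
    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        self.n_heads = n_heads
        self.head_dim = d_model // n_heads
        self.q_proj = nn.Linear(d_model, d_model, bias=False)
        self.k_proj = nn.Linear(d_model, d_model, bias=False)
        self.v_proj = nn.Linear(d_model, d_model, bias=False)
        self.o_proj = nn.Linear(d_model, d_model, bias=False)
        # QK-Norm: normalize queries and keys so attention logits stay in a
        # stable range during training.
        self.q_norm = nn.LayerNorm(self.head_dim)
        self.k_norm = nn.LayerNorm(self.head_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, d = x.shape
        # Project and split into heads: (batch, heads, tokens, head_dim)
        q = self.q_proj(x).view(b, t, self.n_heads, self.head_dim).transpose(1, 2)
        k = self.k_proj(x).view(b, t, self.n_heads, self.head_dim).transpose(1, 2)
        v = self.v_proj(x).view(b, t, self.n_heads, self.head_dim).transpose(1, 2)
        # The only change vs. vanilla attention: normalize q and k per head.
        q, k = self.q_norm(q), self.k_norm(k)
        out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
        return self.o_proj(out.transpose(1, 2).reshape(b, t, d))

# Example: QKNormAttention(512, 8)(torch.randn(2, 16, 512))
```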
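The global-batch load-balancing loss can be sketched similarly. The sketch below uses the standard Switch-Transformer-style auxiliary loss (`n_experts * sum(f_i * P_i)`) and simply aggregates the expert-usage statistics across data-parallel ranks before computing it; the loss form, the all-reduce placement, and the function name are all my assumptions, not Qwen3's code.

```python
# Hedged sketch of a global-batch load-balancing auxiliary loss for MoE routing.
# Key idea: expert-usage statistics are summed over the whole global batch
# (across ranks) before the balance loss is computed, instead of per micro-batch.
import torch
import torch.distributed as dist

def load_balancing_loss(router_logits: torch.Tensor, n_experts: int) -> torch.Tensor:
    """router_logits: (num_tokens, n_experts) logits from the MoE router."""
    probs = torch.softmax(router_logits, dim=-1)              # soft routing probabilities
    top1 = probs.argmax(dim=-1)                               # hard top-1 expert assignment
    # f_i: fraction of tokens dispatched to expert i; P_i: mean router probability.
    tokens_per_expert = torch.bincount(top1, minlength=n_experts).float()
    prob_per_expert = probs.sum(dim=0)
    token_count = torch.tensor(float(router_logits.shape[0]), device=router_logits.device)

    # Global-batch variant: sum statistics across all data-parallel ranks so the
    # balance is enforced over the global batch, not each local micro-batch.
    if dist.is_available() and dist.is_initialized():
        for t in (tokens_per_expert, prob_per_expert, token_count):
            dist.all_reduce(t, op=dist.ReduceOp.SUM)

    f = tokens_per_expert / token_count
    p = prob_per_expert / token_count
    return n_experts * torch.sum(f * p)
```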
Apparently, this was leaked on ModelScope but has since been taken down. It should be released soon, though.