Chinese tech group Alibaba has launched Qwen3, the latest generation of its open-source large language model (LLM) family, setting "a new benchmark for artificial intelligence (AI) innovation".
The Qwen3 series features six dense models and two mixture-of-experts (MoE) models, offering developers the flexibility to build next-generation applications across mobile devices, smart glasses, autonomous vehicles and robotics. All models, dense and MoE alike, are now open source and available globally.
The series, the company says, marks its debut of hybrid reasoning models, which combine traditional LLM capabilities with advanced, dynamic reasoning. Qwen3 models can seamlessly switch between a thinking mode for complex, multi-step tasks (such as mathematics, coding and logical deduction) and a non-thinking mode for fast, general-purpose responses.
For developers accessing Qwen3 through application programming interfaces (APIs), the company adds, the model offers granular control over thinking duration (up to 38K tokens), enabling an optimized balance between intelligent performance and compute efficiency.
Notably, the Qwen3-235B-A22B MoE model significantly lowers deployment costs compared with other state-of-the-art models, reinforcing, the company says, its commitment to accessible, high-performance AI.
Trained on a massive dataset of 36 trillion tokens – double that of its predecessor, Qwen2.5 – Qwen3 delivers significant advances in reasoning, instruction following, tool use and multilingual tasks.
Since its debut, the Qwen model family, Alibaba notes, has attracted over 300 million downloads worldwide. Developers have created more than 100,000 Qwen-based derivative models on Hugging Face, making Qwen one of the world’s most widely adopted open-source AI model series.