
NEW KIMI AI | Coding with a Multimodal Long-Context Brain

Kimi K2 is a large language model series developed by the Moonshot AI team

2025-07-12 05:00:00 - Wes Roth

Kimi K2 is a state-of-the-art mixture-of-experts (MoE) language model with 32 billion activated parameters and 1 trillion total parameters. Trained with the Muon optimizer, Kimi K2 achieves exceptional performance across frontier knowledge, reasoning, and coding tasks while being meticulously optimized for agentic capabilities.
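
A mixture-of-experts layer is what lets a 1-trillion-parameter model activate only about 32 billion parameters per token: a small router picks a handful of experts for each token, and only those experts run. Below is a minimal routing sketch with deliberately tiny, made-up sizes; the expert count, top-k, and hidden dimension are illustrative assumptions, not Kimi K2's actual configuration.

    import numpy as np

    # Toy sizes for illustration only; Kimi K2's real expert count, top-k,
    # and hidden size are not specified in this post.
    NUM_EXPERTS = 8
    TOP_K = 2
    HIDDEN = 16

    rng = np.random.default_rng(0)
    router_w = rng.normal(size=(HIDDEN, NUM_EXPERTS))          # router weights
    expert_w = rng.normal(size=(NUM_EXPERTS, HIDDEN, HIDDEN))  # one FFN matrix per expert

    def moe_layer(x):
        """Route each token to its top-k experts and mix their outputs.

        Only TOP_K of NUM_EXPERTS experts run per token, which is how a model
        can hold far more total parameters than it activates per token.
        """
        logits = x @ router_w                            # (tokens, experts)
        top = np.argsort(logits, axis=-1)[:, -TOP_K:]    # chosen expert ids per token
        out = np.zeros_like(x)
        for t in range(x.shape[0]):
            chosen = logits[t, top[t]]
            gates = np.exp(chosen - chosen.max())
            gates /= gates.sum()                         # softmax over the selected experts
            for gate, e in zip(gates, top[t]):
                out[t] += gate * (x[t] @ expert_w[e])    # run only the chosen experts
        return out

    tokens = rng.normal(size=(4, HIDDEN))
    print(moe_layer(tokens).shape)   # -> (4, 16)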


Key Features

Large-Scale Training: Pre-trained a 1T parameter MoE model on 15.5T tokens with zero training instability.

MuonClip Optimizer: We apply the Muon optimizer at an unprecedented scale and develop novel optimization techniques to resolve instabilities while scaling up; a minimal sketch of the clipping idea follows this list.

Agentic Intelligence: Specifically designed for tool use, reasoning, and autonomous problem-solving.
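
As a rough illustration of the clipping idea behind MuonClip: if the largest attention logit in a head grows past a threshold after an optimizer step, the query and key projection weights can be scaled back down so the logits stay bounded. Everything below (the threshold, the shapes, the exact rescaling rule) is an assumption for illustration, not Moonshot AI's published implementation.

    import numpy as np

    TAU = 100.0  # assumed cap on attention logits (illustrative value)

    def qk_clip(w_q, w_k, x, tau=TAU):
        """Rescale W_q and W_k in place when the max attention logit exceeds tau.

        Splitting the correction as sqrt(tau / s_max) on each projection keeps
        the q.k products scaled by tau / s_max overall.
        """
        q, k = x @ w_q, x @ w_k
        logits = (q @ k.T) / np.sqrt(q.shape[-1])
        s_max = np.abs(logits).max()
        if s_max > tau:
            scale = np.sqrt(tau / s_max)
            w_q *= scale
            w_k *= scale
        return w_q, w_k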


Model Variants

Kimi-K2-Base: The foundation model, a strong start for researchers and builders who want full control for fine-tuning and custom solutions.

Kimi-K2-Instruct: The post-trained model best for drop-in, general-purpose chat and agentic experiences. It is a reflex-grade model without long thinking.
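
For a sense of how the Instruct model would be used in practice, here is a minimal chat-and-tool-call sketch assuming Kimi-K2-Instruct is served behind an OpenAI-compatible API; the base URL, model id, and environment variable below are placeholders to check against the official documentation.

    import os
    from openai import OpenAI

    # Placeholder endpoint, model id, and env var; verify against the provider docs.
    client = OpenAI(
        api_key=os.environ["MOONSHOT_API_KEY"],
        base_url="https://api.moonshot.ai/v1",
    )

    # A single tool definition shows the agentic, tool-use side of the model.
    tools = [{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Look up the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }]

    response = client.chat.completions.create(
        model="kimi-k2-instruct",   # placeholder model id
        messages=[{"role": "user", "content": "What's the weather in Tokyo?"}],
        tools=tools,
        temperature=0.6,
    )

    # The reflex-grade instruct model replies directly or emits a tool call,
    # without a separate long-thinking phase.
    choice = response.choices[0].message
    print(choice.tool_calls or choice.content)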


https://www.kimi.com

https://aimode.co/app/kimi
