Xiaomi Reveals MiMo-V2-Pro Training Details: 1T Model Parameters, Thousands of GPUs Deployed

Gate News, April 24 — Xiaomi’s large language model team lead Luo Fuli disclosed in an in-depth interview that the MiMo-V2-Pro model has 1 trillion parameters in total and was trained on thousands of GPUs. She said the 1T scale is the minimum threshold for approaching Claude Opus 4.6-level performance and for securing an entry ticket to the next phase of AI-agent competition.

Technically, the Pro version employs an extremely sparse attention mechanism, with a 7:1 ratio between global attention and sliding-window attention, to keep inference costs for long-context processing under control. The model also retains the MTP (Multi-Token Prediction) architecture, leveraging surplus compute for faster inference.
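
The interview does not spell out how the two attention types are arranged; a common pattern is to interleave layers, with several sliding-window layers for every global layer. The sketch below is a minimal illustration of that interleaving under the 7:1 figure, read here as seven sliding-window layers per global layer (the ratio could also be read the other way). All function names and parameters are hypothetical, not details from the interview.

```python
import torch

def sliding_window_mask(seq_len: int, window: int) -> torch.Tensor:
    """Boolean mask: token i attends only to tokens in [i - window + 1, i]."""
    i = torch.arange(seq_len).unsqueeze(1)  # query positions, shape (S, 1)
    j = torch.arange(seq_len).unsqueeze(0)  # key positions, shape (1, S)
    return (j <= i) & ((i - j) < window)

def causal_mask(seq_len: int) -> torch.Tensor:
    """Standard causal mask for a global-attention layer."""
    i = torch.arange(seq_len).unsqueeze(1)
    j = torch.arange(seq_len).unsqueeze(0)
    return j <= i

def layer_masks(num_layers: int, ratio: int, seq_len: int, window: int):
    """Yield one attention mask per layer: `ratio` sliding-window layers,
    then one global layer, repeated (a hypothetical layout)."""
    for layer_idx in range(num_layers):
        if layer_idx % (ratio + 1) == ratio:  # every (ratio+1)-th layer is global
            yield causal_mask(seq_len)
        else:
            yield sliding_window_mask(seq_len, window)

# Demo: 16 layers at a 7:1 sliding-window-to-global ratio, short sequence for printing.
full = int(causal_mask(8).sum())
for idx, mask in enumerate(layer_masks(num_layers=16, ratio=7, seq_len=8, window=4)):
    kind = "global" if int(mask.sum()) == full else "sliding-window"
    print(f"layer {idx:2d}: {kind:14s} attended pairs = {int(mask.sum())}")
```

Because the sliding-window layers touch only a fixed band of key positions, their cost grows linearly with sequence length; only the occasional global layer pays the full quadratic cost, which is how such a layout keeps long-context inference affordable.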

On the management side, only 30-40 of the 100-person MiMo team are directly engaged in core iterations. The team operates without formal hierarchies, explicit sub-group divisions, or delivery deadlines. When numerical instabilities such as training-loss spikes occur, the team prioritizes halting training to investigate, even if that means stopping operations for one or two weeks and incurring millions of dollars in compute costs.
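
The article does not describe the team's monitoring tooling. As a minimal illustration of the policy it describes, a guard of the following kind would compare each step's loss against a recent running mean and halt the run for investigation rather than silently skipping the bad batch; the class name, window size, and spike threshold are all assumptions for illustration.

```python
from collections import deque

class LossSpikeGuard:
    """Flag a training step whose loss far exceeds the recent running mean."""

    def __init__(self, window: int = 100, spike_factor: float = 3.0):
        self.history = deque(maxlen=window)  # recent per-step losses
        self.spike_factor = spike_factor     # multiple of the mean that counts as a spike

    def check(self, loss: float) -> bool:
        """Return True if `loss` is a spike relative to the recent mean."""
        if len(self.history) == self.history.maxlen:
            mean = sum(self.history) / len(self.history)
            if loss > self.spike_factor * mean:
                return True  # do not record the spike; stop and investigate
        self.history.append(loss)
        return False

# Demo: steady losses around 2.0, then a sudden spike to 9.5.
guard = LossSpikeGuard()
for step, loss in enumerate([2.1, 2.0, 1.9] * 40 + [9.5]):
    if guard.check(loss):
        # Per the article's account, training halts here for investigation,
        # even at the cost of one or two idle weeks of compute.
        raise SystemExit(f"loss spike at step {step}: {loss:.2f}; halting for investigation")
```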

Disclaimer: The information on this page may come from third parties and does not represent the views or opinions of Gate. The content displayed on this page is for reference only and does not constitute any financial, investment, or legal advice. Gate does not guarantee the accuracy or completeness of the information and shall not be liable for any losses arising from the use of this information. Virtual asset investments carry high risks and are subject to significant price volatility. You may lose all of your invested principal. Please fully understand the relevant risks and make prudent decisions based on your own financial situation and risk tolerance. For details, please refer to Disclaimer.

Related Articles

Meta Platforms Plans 10% Workforce Reduction on May 20, Affecting Approximately 8,000 Positions

Gate News, April 24 — Meta Platforms plans to cut approximately 10% of its workforce on May 20, affecting roughly 8,000 positions. The layoffs are intended to improve operational efficiency while increasing investment in artificial intelligence. The planned restructuring reflects the

GateNews · 49m ago

The Trump administration releases an AI distillation crackdown plan, accusing Chinese companies of systematically stealing model capabilities

In an official statement released on April 23 by presidential assistant Michael J. Kratsios, the White House Office of Science and Technology Policy (OSTP) said the Trump administration has information indicating that foreign entities, mainly based in China, are deliberately targeting major U.S. artificial intelligence companies, systematically extracting U.S. AI model capabilities through “tens of thousands of agent accounts” and jailbreaking techniques. The statement also announced four response measures.

MarketWhisper · 1h ago

DeepSeek releases the V4 open-source preview, with a Codeforces score of 3206 surpassing GPT-5.4

DeepSeek officially launched the V4 preview series on April 24, open-sourcing the model weights under the MIT license and releasing them on Hugging Face and ModelScope. According to the DeepSeek V4 technical report, V4-Pro-Max (the highest inference-intensity mode) scored 3206 points on the Codeforces benchmark, surpassing GPT-5.4.

MarketWhisper · 1h ago

Cambricon Completes Day 0 Adaptation of DeepSeek-V4, Marking Milestone for China's AI Chip Ecosystem

Gate News, April 24 — Cambricon announced today that it has completed Day 0 adaptation of DeepSeek-V4, the latest large language model from DeepSeek, using its proprietary NeuWare software ecosystem and the vLLM framework. The adaptation code has been open-sourced simultaneously, marking the

GateNews · 1h ago

Tencent open-sources the Hy3 preview, with code benchmark scores up 40% over the previous generation

Tencent officially open-sourced the Hy3 preview of its large language model on April 23 on GitHub, Hugging Face, and ModelScope, and simultaneously launched a paid API service via Tencent Cloud. According to Decrypt’s April 24 report, the Hy3 preview began training in late January and reached release in less than three months.

MarketWhisper · 1h ago

FTX Portfolio Investments Would Be Worth 158 Trillion Won If It Had Not Gone Bankrupt

FTX, the centralized cryptocurrency exchange that filed for Chapter 11 bankruptcy protection in November 2022 due to liquidity shortages and capital outflows, would have held investments valued at approximately 158.796 trillion won if it had not collapsed, according to analysis cited by Park

CryptoFrontier · 1h ago