Gate News message, April 23 — Google announced on April 22 that it will release separate eighth-generation TPU chips for training and inference later this year, replacing its previous combined design. The move targets AI agent workloads and offers Google Cloud customers an alternative to Nvidia hardware.
The training chip delivers 2.8 times the performance of Google’s seventh-generation Ironwood TPU at the same price, while the inference chip is 80% faster and features 384 MB of SRAM, triple the amount in Ironwood. Splitting training and inference into separate chips reflects an industry shift toward optimizing hardware for the distinct computational demands of each workload.
The initiative is backed by a long-term partnership with Broadcom and Anthropic. Anthropic plans to use approximately 3.5 gigawatts of TPU computing through Broadcom starting in 2027, with Broadcom handling chip manufacturing and networking components through 2031. Anthropic, the AI startup behind Claude, recently saw its annualized revenue exceed $30 billion. Meanwhile, Apple, Microsoft, Meta, and Amazon are also expanding custom AI chip efforts to reduce reliance on Nvidia, which remains the market leader.
Related Articles
Chrome gets an “AI coworker”: Auto Browse automates web tasks, with a $6-per-month subscription for the enterprise edition
Chrome Enterprise launches Gemini-powered Auto Browse and Chrome Skills, letting the browser carry out multi-step web tasks automatically while requiring the user to click to confirm actions. Users can save and share AI workflows, and the feature integrates with Gmail, Calendar, and Drive, including DLP controls. Priced at $6 per month, it is positioned as turning the browser into an AI colleague.
ChainNewsAbmedia
OpenAI Introduces ChatGPT Workspace Agents: Codex-Powered, Team Shared, Slack Integration
OpenAI launched Workspace Agents on April 22 for ChatGPT Business, Enterprise, Edu, and Teachers plans. Powered by Codex, the agents are designed for long-running cloud operation, can be shared across teams, and support offline execution. They can proactively respond in Slack, generate invoices, execute multi-step workflows, and run on schedules. The research preview is free until May 6; afterward, pricing moves to a credit-based model, with rates to be announced. The product competes with Google’s Gemini Enterprise Agent Platform and Anthropic’s Claude Cowork; all three focus on enterprise-grade agents, but their positioning differs.
ChainNewsAbmedia
Google Cloud Next 2026: Launches Gemini Enterprise Agent Platform, $750 million to Help Consultants Deploy
Google Cloud unveiled the Gemini Enterprise Agent Platform at Cloud Next 2026, integrating model selection, agent building, DevOps, orchestration, and enterprise security controls, and launched a $750 million fund to help McKinsey, Accenture, and Deloitte deploy enterprise agents. Together with Ironwood TPU, A2A, and MCP, the platform forms Google’s full-stack offering and consulting channel, countering OpenAI Operator and Anthropic Claude Enterprise.
ChainNewsAbmedia
Google Expands Wiz Cloud Security Across AWS, Azure, and Google Cloud
Google announced new security features and deeper integration of Wiz, the Israeli cloud security firm it acquired for US$32 billion, across Google Cloud and rival platforms at its Cloud Next '26 event. The company also introduced three AI agents for Security Operations in preview mode.
CryptoFrontier
Microsoft to Invest $17.9 Billion in Australia for AI and Cloud Infrastructure by 2029
Microsoft commits AU$25B by 2029 to expand AI and cloud infrastructure in Australia, deepen cyber defense with government agencies, train 3 million in AI by 2028, and coordinate data-center and AI policy to bolster sovereignty.
Microsoft’s AU$25 billion expansion in Australia through 2029 aims to grow local AI and cloud capacity, building on an earlier AU$5 billion commitment. The plan expands the Cyber Shield program with the Australian Signals Directorate, trains 3 million people in AI by 2028, and partners with the Australian AI Safety Institute, while formalizing data-center and AI infrastructure expectations with the government to bolster digital sovereignty.
GateNews