
According to an official announcement released on April 23, xAI has launched Grok Voice Think Fast 1.0, a voice AI agent, and has already deployed it on the Starlink customer service hotline +1 (888) GO STARLINK. Based on the test data disclosed in the announcement, 70% of incoming calls are closed automatically by the AI, with no human intervention required.
Grok Voice Think Fast 1.0 Technical Specifications and Deployment Architecture
According to xAI’s official announcement, Grok Voice Think Fast 1.0 uses a single-agent architecture that can synchronously call 28 tools in the backend, covering hundreds of sales and customer service workflows such as hardware troubleshooting, replacement parts order placement, and service credit application, all executed autonomously end to end.
The announcement states that the system’s core technical feature is “reasoning in the background while keeping response latency unchanged.” The announcement describes its deployment method as “plug-and-play,” where after connecting the phone line, it can automatically handle tasks such as answering calls, sales, and technical support.
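As a rough illustration of the single-agent pattern described above, the sketch below shows how one agent can route model-requested tool calls to a backend registry of handlers. The tool names, arguments, and dispatch logic are hypothetical inventions for illustration, not xAI's actual implementation.

```python
# Minimal single-agent tool-dispatch sketch. All tool names and handler
# signatures here are assumptions, not xAI's real API.
from typing import Callable, Dict

# Registry mapping tool names to handlers (Grok Voice reportedly exposes 28).
TOOLS: Dict[str, Callable[[dict], str]] = {}

def tool(name: str):
    """Decorator that registers a handler under a tool name."""
    def register(fn: Callable[[dict], str]) -> Callable[[dict], str]:
        TOOLS[name] = fn
        return fn
    return register

@tool("hardware_troubleshoot")
def hardware_troubleshoot(args: dict) -> str:
    # Placeholder for a diagnostics workflow.
    return f"Ran diagnostics on terminal {args['terminal_id']}: OK"

@tool("order_replacement_part")
def order_replacement_part(args: dict) -> str:
    # Placeholder for an order-placement workflow.
    return f"Ordered part {args['part']} for account {args['account']}"

def dispatch(tool_name: str, args: dict) -> str:
    """Route a model-requested tool call to its registered handler."""
    if tool_name not in TOOLS:
        return f"error: unknown tool {tool_name!r}"
    return TOOLS[tool_name](args)

print(dispatch("hardware_troubleshoot", {"terminal_id": "ST-42"}))
```

In a real deployment the model would emit structured tool-call requests and the dispatcher would validate arguments before execution; the registry-plus-dispatcher shape is what lets a single agent cover many workflows.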
Operational test data: Customer service case-closure rate and sales conversion rate
Based on the Starlink customer service hotline test data disclosed in xAI’s official announcement:
· 70% of calls are automatically closed by Grok Voice, with no human intervention of any kind
· 20% of calls complete a Starlink subscription purchase during the conversation (sales conversion rate)
The announcement indicates that the Grok Voice system supports 25 languages, runs nonstop 24/7, and does not require manual scheduling or handoffs.
τ-Voice Bench benchmark test: Comparison against major competing products
According to the τ-Voice Bench evaluation results attached to xAI’s official announcement, the scores for each major voice AI system are as follows:
· Grok Voice Think Fast 1.0 (xAI): 67.3%
· Gemini 3.1 Flash Live (Google): 43.8%
· Grok Voice Fast 1.0 (xAI, previous generation): 38.3%
· GPT Realtime 1.5 (OpenAI): 35.3%
The gap between Grok Voice Think Fast 1.0 and its closest competitor, Gemini 3.1 Flash Live, is 23.5 percentage points. The announcement also states that the global customer service center industry is estimated to be worth $350 billion, and that the Starlink customer service hotline is the first commercial deployment of Grok Voice.
Frequently Asked Questions
In which service scenarios has Grok Voice Think Fast 1.0 been deployed, and what are the specific operational data?
According to xAI’s official announcement dated April 23, 2026, Grok Voice Think Fast 1.0 has been deployed on the Starlink customer service hotline +1 (888) GO STARLINK. 70% of calls are automatically closed by AI with no human intervention; 20% of calls complete a Starlink subscription purchase during the conversation; and the system supports 25 languages operating around the clock.
What is Grok Voice Think Fast 1.0’s score on τ-Voice Bench, and how large is the gap compared with major competing products?
According to xAI’s official announcement, Grok Voice Think Fast 1.0 scores 67.3% on τ-Voice Bench, which is higher than Gemini 3.1 Flash Live at 43.8%, GPT Realtime 1.5 at 35.3%, and the previous-generation Grok Voice Fast 1.0 at 38.3%. It leads the closest competitor by a gap of 23.5 percentage points.
What tool-calling capabilities does Grok Voice Think Fast 1.0’s single-agent architecture have?
According to xAI’s official announcement, Grok Voice Think Fast 1.0’s single-agent architecture can synchronously call 28 tools in the backend, covering hundreds of workflows such as hardware troubleshooting, replacement parts order placement, and service credit application. The announcement describes its deployment method as “plug-and-play.”
Related Articles
AI Agents Drive Crypto Payments Demand, x402 Processes 165M Transactions
Gate News message, April 27 — Jesse Pollak, an executive at a major CEX, has argued that autonomous AI agents are creating a new "demand center" for crypto payments, requiring software-native payment infrastructure. On April 20, it was announced that the x402 ecosystem had processed more than 165 million transactions.
Cursor AI Agent Causes an Incident: One Line of Code Wiped a Company Database in 9 Seconds, and Its "Security Checks" Proved to Be Empty Talk
PocketOS founder Jer Crane said that Cursor AI agents ran maintenance on their own in a test environment, misused an API Token that adds/removes custom domains, and launched a delete command against Railway’s GraphQL API. Within 9 seconds, all data and same-region snapshots were completely destroyed, with the latest recoverable point being three months ago. The agent admitted to violating rules for irreversible operations, not reading technical documentation, not verifying environment isolation, and more. The victims were car rental industry customers; their bookings and all data disappeared, and reconciling accounts took a long time. Crane proposed five reforms: manual confirmation, fine-grained API permissions, backups separated from master data, a public SLA, and a mandatory underlying enforcement mechanism.
Alibaba's PAI Releases Open-Source AgenticQwen Model: 8B Version Approaches 235B Performance via Dual Data Flywheels
Gate News message, April 27 — Alibaba's PAI team has released and open-sourced AgenticQwen, a small-scale agentic language model designed for industrial-grade tool-calling applications. The model comes in two versions, 8B and 30B-A3B; trained through an innovative "dual data flywheel" approach, the 8B version reportedly approaches the performance of 235B-scale models.
DeepSeek V4 Pro with Ollama Cloud: One-click integration with Claude Code
According to an Ollama tweet, DeepSeek V4 Pro was released on April 24, has been added to the Ollama catalog in cloud mode, and can be wired into tools such as Claude Code, Hermes, OpenClaw, OpenCode, and Codex with a single command. V4 Pro is a 1.6T-parameter Mixture-of-Experts model with a 1M-token context; cloud inference does not download local weights. To run it locally, you need to obtain the weights yourself and serve them in INT4/GGUF format across multiple GPUs. Early speed tests were affected by cloud load: typical performance is about 30 tok/s, dropping as low as 1.1 tok/s under heavy load. The recommendation is to prototype on the cloud first and, for production, run inference yourself or use a commercial API.
UB (Unibase) up 14.96% in 24 hours
Gate News update: On April 27, according to Gate market data, as of the time of writing, UB (Unibase) is trading at $0.0491. It is up 14.96% over the past 24 hours, with a high of $0.0534 and a low of $0.0423. The 24-hour trading volume is $3.9667 million. The current market cap is approximately $123 million.
Unibase is a high-performance decentralized AI memory layer that provides long-term memory and cross-platform interoperability for AI agents, enabling them to remember, collaborate, and self-evolve. Unibase aims to build an open agent internet, supporting seamless cooperation among intelligent agents across ecosystems, empowering developers to build the next generation of AI applications.
This message does not constitute investment advice; please be mindful of market volatility risks when investing.
Ming-Chi Kuo: OpenAI Wants to Build an AI Agent Phone; MediaTek, Qualcomm, and Luxshare Precision Are Key in the Supply Chain
Ming-Chi Kuo claims that OpenAI is working with MediaTek, Qualcomm, and Luxshare Precision to develop an AI Agent phone, with mass production expected in 2028. The new phone will be centered on task completion: an AI agent will understand and execute requests, combining cloud and on-device computing, with a focus on sensing and contextual understanding. The specifications and supply-chain list are expected to be finalized in 2026–2027; if the project takes shape, it could bring a new upgrade cycle to the high-end market, and Luxshare may become a major beneficiary.