Akamai Technologies, Inc. has provided additional detail on the recent four-year, $200 million service agreement for high-performance AI compute it signed with a major U.S. tech company at the forefront of the AI revolution. Under the terms of the deal, the customer will utilize a multi-thousand NVIDIA Blackwell GPU cluster hosted in a data center designed for efficient, high-density power capacity, along with other cloud infrastructure services on Akamai's distributed cloud platform. The deal underscores enterprise demand for Akamai's integrated AI development and deployment platform, and the cluster represents one of the world's largest deployments of NVIDIA Blackwell RTX PRO 6000 Server Edition GPUs at scale.

The GPU cluster is powered by an AI-optimized Ethernet networking platform, enabling non-blocking, lossless, high-performance connectivity for large-scale AI factories and GPU-accelerated computing. It also leverages a high-performance, parallel file storage platform with NVMe-over-Fabric for linear scalability in AI and HPC workloads. The announcement follows several moves Akamai has made to expand its AI inference and generalized compute capabilities, including a rapid expansion of its global IaaS footprint to 41 data centers, enabled by its relationships with numerous data center partners, and the expansion of its Managed Container Service, which scales out applications across Akamai's distributed infrastructure.

In October 2025, the company announced Akamai Inference Cloud, which aims to redefine where and how AI is used by bringing AI inference closer to users and devices. Most recently, Akamai announced the acquisition of thousands of NVIDIA Blackwell GPUs to bolster its global distributed cloud infrastructure and create a unified platform for AI R&D, fine-tuning, and post-training optimization. Akamai disclosed information about this service agreement during its Fourth Quarter and Fiscal Year 2025 conference call on February 19, 2026.