Cloudflare, Inc. announced that developers can now deploy AI applications on Cloudflare's global network in one click directly from Hugging Face. With Workers AI now generally available, Cloudflare is the first serverless inference partner integrated on the Hugging Face Hub for deploying models, enabling developers to quickly, easily, and affordably deploy AI globally without managing infrastructure or paying for unused compute capacity. One-click deployment from Hugging Face gives developers a fast way to access a variety of models and run inference requests on Cloudflare's global network of GPUs.

Developers can choose one of the popular open-source models and click "Deploy to Cloudflare Workers AI" to deploy it instantly. Fourteen curated Hugging Face models are now optimized for Cloudflare's global serverless inference platform, supporting three task categories: text generation, embeddings, and sentence similarity.
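Once a model is deployed, inference requests can be sent over HTTP to the Workers AI REST endpoint. The sketch below, in Python, shows one way to build and send such a request; the account ID, API token, and model name are placeholders, and the exact response shape varies by model, so treat this as an illustrative example rather than an official client.

```python
import json
import urllib.request

API_BASE = "https://api.cloudflare.com/client/v4/accounts"

def endpoint(account_id: str, model: str) -> str:
    """Build the Workers AI inference URL for a given account and model."""
    return f"{API_BASE}/{account_id}/ai/run/{model}"

def run_inference(account_id: str, api_token: str, model: str, payload: dict) -> dict:
    """POST a JSON inference request and return the decoded JSON response."""
    req = urllib.request.Request(
        endpoint(account_id, model),
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_token}",  # placeholder token
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Example call (model name is illustrative):
# result = run_inference("YOUR_ACCOUNT_ID", "YOUR_API_TOKEN",
#                        "@cf/meta/llama-2-7b-chat-int8",
#                        {"prompt": "What is a serverless GPU?"})
```

Because the endpoint is plain HTTPS, the same request can be issued from a Worker, a backend service, or the command line with curl.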