On Monday, Meta launched "Meta Compute", a new strategic initiative to build massive computing infrastructure in support of its artificial intelligence ambitions. The group plans to deploy several dozen gigawatts of capacity by the end of the decade, and hundreds more gigawatts over the longer term. The project is part of a broader technological catch-up effort, following the mixed reception of its Llama 4 model and a record $72bn in investment in 2025.

Leadership of "Meta Compute" has been entrusted to Santosh Janardhan, Meta's head of global infrastructure, and Daniel Gross, a former investor and entrepreneur specializing in cutting-edge technologies. Janardhan will retain oversight of technical architecture, data centers, the chip program and development tools. Gross will lead a new unit focused on capacity planning, strategic partnerships and long-term economic modeling.

Mark Zuckerberg said the project will be carried out in close coordination with Dina Powell McCormick, recently appointed as Meta's president. With "Meta Compute", the company aims to position itself as a central player in very large-scale AI infrastructure, a field that has become critical amid competition from Google, Microsoft and OpenAI. The ramp-up marks a new phase in Meta's industrial strategy, centered on deploying massive capacity to support the development and training of its future artificial intelligence models.