Yes, absolutely. I think it's interesting because as you mentioned, we used to have one network, and we understood that well. And now we're adding in this new domain, this brand-new network. And not only are we doing it, we're doing it quickly and the bandwidth requirements of that network are really amazing. So the pace of innovation is occurring faster. And I think that causes some confusion. So I'm going to share some slides. I think it's kind of a good intro to do some market sizing, which we all know I love to do.
And we'll kind of start there and set the stage for AI here. So where we are today in AI is in the agentic wave. And we can argue exactly when that started. But over the next couple of years, we're going to spend over $1 trillion on infrastructure equipment, so compute, storage, networking, to support this agentic wave. And to me, that's a mind-blowing number. So I tried to quantify that a little bit differently.
And what it means is that just since we started talking, we've shipped thousands of switch ports into the data center. So good job, guys. You shipped thousands of ports since we started here. It's just a tremendous new volume compared to what we were used to in the past. And the good news of that spend, right, is that networking is really going to be the glue that connects all of that together. So we're talking about a huge amount of money being spent here, and ultimately, a significant amount of that will be on networking in order to stitch these GPU and xPU clusters together. And what that means, going back to this pace of innovation, is that by the end of the decade, the vast majority of infrastructure in the data center is going to be AI or accelerated.
So we've got a good handle on traditional compute. And by the end of the decade, I think we're going to have a really good handle on AI. It's going to be the dominant amount of spend in the data center. And that's going to change considerably when we think about the Scale Up and Scale Out network opportunity, which we'll get to in a second. So speaking of the AI network, and kind of diving into that, everyone loves to talk about the data center as if it's one fabric. And Hardev and Martin, you did a really good job earlier of highlighting that it's different and expanding rapidly.
And this chart helps to kind of frame that. If we look at bandwidth growing in the data center, you can see that AI is growing at nearly 100% per year. In other words, what we're throwing at AI now is twice as much as we did a year ago. And next year, it's going to be twice as much as we're doing today. And what that ends up looking like is the chart on the right, where most of the traffic in the data center is going to be AI-related very, very rapidly.
It also reinforces the point that all these other networks are ultimately going to have a very large tailwind from this, whether we're talking about DCI and connecting these facilities together, or about traditional compute. Everything is going to have to be brought up to a certain standard, or a certain speed, in order to support what we're doing in these AI clusters. So I think, let us kind of have a little bit of a conversation here on this chart. But if we look at what's occurring, we used to have that traditional network, and now we have both Scale Up and Scale Out.
And within those domains, in Scale Out, that's where we have the InfiniBand versus Ethernet debate. And then when we talk about Scale Up, that's where we get NVLink versus Ethernet. We see UALink and the Ultra Ethernet spec playing an important role there. So it's not just about one network. It's really about these multiple networks. So I'll ask both of you a question. When was the first time, with customers, that you began hearing about more than one network in AI? For yourselves, I'm sure it was a few years ago. But what was that defining moment where, in your mind, there was going to be more than one network?