Cloudflare Inc (NYSE:NET) Morgan Stanley Technology, Media and Telecom Conference March 6, 2024 5:50 PM ET
Company Participants
Thomas Seifert - Chief Financial Officer
Conference Call Participants
Hamza Fodderwala - Morgan Stanley
Hamza Fodderwala
All right. Well, good afternoon, everybody. Thank you for joining us. I'm Hamza Fodderwala, cybersecurity analyst at Morgan Stanley. And this afternoon, we have the pleasure of having Thomas Seifert, CFO of Cloudflare. Thomas, thank you for joining us.
Thomas Seifert
Thanks for having us. Hello, everybody.
Hamza Fodderwala
I'll give you a minute to drink your water while I read this very important disclosure. For important disclosures, please see the Morgan Stanley Research Disclosure website at www.morganstanley.com/researchdisclosures. With that, Thomas, thanks again for joining us. I wanted to talk about fatigue. I certainly, and you certainly, seem fatigued, but I think we can get a second wind here. Actually, on that topic, are you seeing any spending fatigue? Clearly, your Q4 results would contradict that, but I'm curious what you make of those comments?
Thomas Seifert
I came in late last night -- we had a late dinner -- so there might be some fatigue, but not really. As you know, we had a really good quarter, and some of our peers that we admire showed really strong results. So we cannot really speak of fatigue. The threat landscape paints a completely different picture, too. You just have to open up the Wall Street Journal from this morning: between ransomware attacks against health care and critical infrastructure and nation-sponsored attacks against the defense industry, the threat landscape is anything but fatigued from a security perspective.
Question-and-Answer Session
Q - Hamza Fodderwala
Fair enough. Very clear. So a couple of quarters ago, Matthew talked about Cloudflare as the connectivity cloud and really encompassing a lot of different types of use cases. Can you elaborate what that means? And yes, I think we'll start there.
Thomas Seifert
It could be a long-winded answer. But, of course, we have built a network that today sits in more than 300 cities and has points of presence in more than 100 countries. And there is a lot on there: millions of free customers, hundreds of thousands of what you would call pay-as-you-go customers, very small startups, the largest financial institutions, governments. They all connect through us to various private and public clouds, and they want their data to move securely, efficiently, performantly and cost-effectively -- and to observe all of this through one control plane. That is what we provide.
In the beginning, external folks and analysts talked about us as the first hyperscaler, and that's not what we felt we were. We think we are the first networking cloud, connecting all of our customers and everything they do and all the data they move to wherever it needs to go. I think this is what coins the idea of us being a connectivity cloud. And it defines really, really well where the business model started, what it evolved into and where we think our future opportunity is.
Hamza Fodderwala
Got it. Maybe going back to the most recent quarter. Q4 was very strong; we were particularly impressed with the acceleration in RPO and CRPO bookings in particular. Just remind us what drove that? And is there anything that might have been one-time in nature that, as investors, we should be careful about extrapolating?
Thomas Seifert
Well, as you said in the introduction, Q4 was a strong quarter. The good news was that it was not really driven by one thing or a one-off -- one large deal or two large deals. It was driven by a broad set of areas and growth vectors. Maybe the most mundane: we saw continued good progress on our go-to-market transformation. Everything that we initiated on the go-to-market side is taking hold.
We are really encouraged by the accelerated momentum of our Zero Trust platform. We call this our second wave of products, a product line we started with Cloudflare Access in 2020. And we always said that building out the features in that product portfolio -- Access, Gateway, browser isolation, email security, DLP -- would take time. But we have reached a point where we have literally reached feature parity with everybody else who is out there, and that is driving momentum.
Now we have a fully featured Zero Trust product that, in combination with the platform we have, is a rather compelling offering, and that drove momentum across a wide set of customers, verticals and regions. The other really important topic in Q4 was a very strong federal business. We talked about one very large deal with the Department of Commerce -- a Zero Trust deal, by the way. But we saw strength in federal not only in our country but overseas in both directions: a very strong federal business in the fourth quarter and, I would say, a good pipeline for federal business moving forward.
Hamza Fodderwala
Got it. I definitely want to dig into that. You talked about, I think, a $30 million Zero Trust win in the federal vertical. You've had three different acts to the business: Act 1 has been the very strong application services business around DDoS protection, CDN and various other services, and Act 2 is the Zero Trust, or what's also known as SASE, business. Just talk to me a little bit about the scale of the Cloudflare network and how you are able to use the installed base and revenue you're already generating in a very strong Act 1 services business to extend that same competitive advantage into SASE relative to some of your peers.
Thomas Seifert
It's less about the revenue; it is much more about the infrastructure of the network. When somebody starts to do work on us and asks me or our team, how do I get my arms around the competitive moats of Cloudflare, you really have to understand the architecture of the network. As I said, we are today in more than 300 cities -- and in many cities, like San Francisco, in more than one location -- a highly decentralized network.
Every product we have and every service we offer runs on every server in every location, and that means that the complete surface of the network, capacity-wise and infrastructure-wise, becomes our degrees of freedom in how we manage traffic and how we manage cost. And this is the key reason why our margin structure is so superior and why we have such elasticity in our business model.
For example, most of our revenue is subscription revenue, so it's also quasi-fixed -- there's very little variable component. During COVID, when we all started to work from home, traffic on our network spiked literally within a couple of weeks by 60%. Folks expected our margins to tank; they didn't flinch, they actually improved. And this speaks volumes about the efficiency of the architecture but also the elasticity we have to absorb gigantic moves in data.
Now, this network is built on the traffic we deliver or handle with our first wave of products. So it's a CDN network -- but not a lot of CDN revenue -- it's the firewalls, the DDoS mitigation, the routing, the load balancing that happen there. In this business model, we don't pay for the amount of data we move; we literally pay for the size of the pipes we have installed. And the first wave of products is literally traffic moving out to the eyeballs.
When we now layer in the portfolio of Zero Trust products, they are literally moving traffic in the reverse direction. It's all about moving traffic back. So all that traffic that we collect literally comes for free, and our Zero Trust products are, margin-wise, far north of 90%. Matthew, I think on the last earnings call, said we could consolidate all the Zero Trust providers out there, put all their traffic on our network and not need to invest one additional dollar of CapEx. That gives you a really good idea of the capacity of the network.
So you have all these products: they fill and run on the infrastructure we have built, highly performant at the edge of our network, at a very superior margin structure. And you run all these Zero Trust products in the same control plane where everything else runs. This combination of not only being a vendor consolidator but being able to consolidate it all on one platform is, I think, what makes us so unique. It was one of the key reasons why we won the Department of Commerce -- not only because the product was highly competitive feature-wise, but because it comes with a platform that is so much more than just Zero Trust.
Hamza Fodderwala
Makes sense. So zooming in on Act 2, the Zero Trust side. The performance advantage of the network seems very clear with Cloudflare. One of the things you've also done is make a lot of enhancements to the Zero Trust security portfolio -- you talked a little bit about DLP, SASE and all the different features that you have. One of the other things you've been focusing on is up-leveling the go-to-market motion for a company that obviously has a massive installed base and very natural product-market fit. As you sell these larger SASE deals, what did you have to do from a go-to-market standpoint to really up-level that? And talk to us about the recent appointment of Mark Anderson in relation to that.
Thomas Seifert
As you move upmarket and you start to sell Zero Trust products, the personas are changing. While discussions before have mainly been CIO and CISO discussions, now the deals become really, really big -- you're talking about double-digit million-dollar contracts. So you talk to the C-suite: you talk to CEOs, you talk to CFOs, you talk to general counsels, because a lot of those topics, especially outside of North America, become compliance topics -- data sovereignty and data localization topics. So you have to adjust the messaging, you have to target the personas you need to talk to, and you need a more sophisticated account structure from a support perspective.
We've also started to enable our channel. The first wave of products is so fast to install, so highly efficient -- if you as a customer put your homepage on Cloudflare, it takes you probably 5 minutes; and onboarding all of Morgan Stanley, probably even while under attack, takes us only a couple of hours. It's highly efficient, but these products didn't leave a lot of room for channel partners to provide value. With the wave-two products, SASE or Zero Trust, that has changed. So enabling the channel has become an important part of our go-to-market strategy, and channel-enabled revenue grew 70% last year. We're making good progress, but we still have a way to go.
And this evolution still gives us a lot of opportunity in terms of customer size and the number of very large customers -- we talked on the earnings call about now having the first handful of customers that are far north of $10 million, but there's a lot of room to grow. Mark Anderson coming on board allows us to accelerate that journey. It's not so much about disrupting or changing course, but literally about accelerating the journey. He has done this before; he's been very successful; he's been on our Board for years. He understands not only the enterprise side of us but where that efficiency comes from. And it allowed us to combine not only sales but sales and marketing under his leadership -- and he's just a great guy. So we're all excited that he is on board now.
Hamza Fodderwala
Great. Digging in on the channel, particularly on SASE -- obviously the channel is very important to enterprise security sales. How is Cloudflare able to incentivize channel partners to partner with Cloudflare? And are there certain things you can do, given your scale on the network side, where you can offer a better incentive, perhaps, than your peers?
Thomas Seifert
Well, to start with the nonobvious topic first: success breeds success. There's a flywheel. Once you have your first very large deals -- especially the deal with the Department of Commerce that we had in the last quarter -- it drives interest from other large channel partners. They want to team up with us: all of a sudden, there's a Cloudflare that allows you to sign a $30 million, $40 million, $50 million deal. So that drives a significant amount of interest.
The products are compelling. The platform is compelling. And then we have a superior margin structure that we can take advantage of in terms of rewarding partners who are successful for us, without endangering the price envelopes that are in the market. All of this combined, I think, has been the reason why we've seen quite some success in building out our channel program.
Hamza Fodderwala
Got it. I want to talk a little bit about how you're packaging the product as well. Cloudflare One was the new package that you launched, I believe, sometime last year. Can you walk us through how that's helped you land some of these larger customers, and some of the pricing changes around that?
Thomas Seifert
When I started at Cloudflare, we had less than 10 revenue-generating products. Now we are in the mid-50s. So you have to evolve how you market and how you sell those products. Bundling is now an opportunity for us, a journey we have embarked on. We are not finished yet; it is evolving. We are testing, as we speak, new bundling concepts with certain customer verticals in certain test markets. And there are some lofty examples out there of companies who have done a good job bundling their products and getting pricing and expansion opportunities under control, or taking full advantage of their sales force -- Salesforce and Microsoft, I think, qualify for that.
So it's a journey. We have seen really good progress, especially when it comes to consolidating spend and delivering ROI on our platform -- a topic that has been hugely important to folks like me at a time when budgets were tight last year and continue to be tight. But it's a journey that we have not finished yet; we are right in the middle of it, to be honest. And we think there's significant upside still in front of us, both from a go-to-market land perspective but especially also from an expansion perspective.
Hamza Fodderwala
I wanted to shift back toward a broader security question that encompasses Act 1 as well. We talked a little bit about the rising threats we're seeing recently. Cloudflare, I believe, secures -- over 20% of Internet traffic goes through Cloudflare. You've got rising geopolitical tensions; you have half of the global population voting in elections this year. How is that increased threat activity perhaps driving more revenue for Cloudflare?
Thomas Seifert
First of all, in situations like this, it's important not to think of revenue first, but about protecting. Yesterday was Super Tuesday, election Tuesday. We protected more than 100 websites of state and federal institutions as part of the process just last night -- more than 400 across the country. We are in a very unique position. There's a difference: if you are a pure enterprise company and you sit in front of thousands, maybe even 10,000 enterprise customers, the perspective you have on this landscape is very, very narrow. We sit in front of millions of free customers and tens of thousands of paying customers with election platforms. We have a program called Project Galileo, where we protect for free voices that need to be heard -- critical journalists and organizations, people that are under heavy nation-sponsored attacks.
So we have a very unique perspective on attacks; we see attacks as they start. Somebody who wants to attack Morgan Stanley starts years earlier, practicing malware and attack vectors, and we see those attacks being developed. That makes us a very interesting partner for companies that need help to defend. It also feeds our products in terms of the security posture we have. We literally defend against billions -- 80 billion attacks per day across the network. And it's a data game, right? What we see and how early we see it allows us to defend better.
And that hopefully leads to better products that we monetize. But the first step is really getting the defense posture up for all those entities that put their trust in us and sit behind our network. It's an interesting time. The threat landscape is at an all-time high; we have seen the highest and most sophisticated tide of attacks rising up. We blogged about a highly sophisticated attack against our own infrastructure that we successfully defended. So there's no fatigue that we see -- back to where we started.
Hamza Fodderwala
I'd be remiss if I didn't talk about AI. I know it's super early days, but I think Cloudflare is different in that, obviously, a lot of companies are coming out with AI copilots, but Cloudflare is really an AI enabler, if you will. So can you just explain, at a high level, what are some of the different vectors for monetization as it relates to AI? Because I know there's a lot that you're offering.
Thomas Seifert
That drove a lot of the discussions we had in the various meetings today already.
Hamza Fodderwala
Sorry, your...
Thomas Seifert
No, no, no. It's a super fascinating topic; we would be remiss not talking about AI. For us, from a revenue and monetization perspective, AI plays across various layers and vectors. There are, of course, AI companies just signing up with us for their own security posture -- and I don't think there is an AI company of name, small or big, that is not behind our network at this point in time. A use case that we did not expect, which was and still is driven by the GPU capacity shortage, is that LLM companies are putting their data on us and our R2 product and using us as a departure point to find available and affordable GPU capacity for training their models.
That might not be a business model forever, but it's certainly a good business model for the time being: we just help them find available and affordable GPU capacity without paying a huge amount in egress fees to transport the data to where the capacity is. The really interesting use case for us is using our edge network, our distributed network, and our ability to have GPU and compute resources close to the eyeballs where they connect to us. We seem to be in this Goldilocks zone where inference tasks run in a highly efficient, secure and compliant way. A lot of inference tasks in the future will be about where data is and where it can move.
And we started to deploy GPU capacity at the edge of our network to enable and prepare for that. We originally targeted 100 cities by the end of last year; we were a little bit ahead of plan -- 120, I think, if I remember correctly. But we will be in literally every location by the end of this year, enabling inference tasks to run on our network, whether for latency reasons, compliance reasons or cost reasons -- where you enable thin devices by putting AI capability literally milliseconds away from the device, or offload expensive hardware infrastructure from a device into our network. That is the most promising vector. We also launched a vector database last year that has a high attach rate to everything we sell now. That's a significant opportunity for us.
That inference and vector database business is one where we drive for adoption, not for revenue. That is a really important point for us. We learn a lot about how we have to model and provision GPU capacity and what kind of GPU capacity our customers need at the edge of our network. The idea is that we abstract the needs of the customer, and the software that runs on it, from the hardware set we have. That is something we have done successfully on all the other products we offer, and that is another reason why our margin structure is so superior.
And then, last but not least: as individuals and as companies, we all want to interact with large language models, and how we make sure that happens safely and securely -- without data leakage, without opening up new incursion vectors and threat vectors -- is a big topic for us. We just announced today that we started developing a firewall for AI. How you mitigate traffic to an LLM is very different from API traffic: we talk to it in natural language, and not every answer we get is the same. So it's an interesting topic, and that will be a third vector for monetization.
Hamza Fodderwala
A lot of stuff there. A couple of follow-ups, and then I want to open it up to the audience for questions as well. One of the questions I get is on the AI inference opportunity. What does that GPU capacity that you have look like? And how is Cloudflare able to make these investments? I believe last quarter, or last year, your CapEx was even down. So how are you able to do it in an efficient way, given the GPU shortage out there?
Thomas Seifert
CapEx is down, ratio-wise and dollar-wise -- yes, for a lot of good reasons. The hardware team is doing a good job. But to come back to what we said earlier: our Zero Trust momentum is accelerating, so that is revenue coming in that literally needs zero CapEx, because it lives off the infrastructure we already have. This allows us to deploy, ratio-wise at least, CapEx dollars towards GPUs.
One of the really unique things about us is that, if you move so much data through your network, you learn a lot from it, and you're never in a position where you have to invest ahead of the demand curve. In the beginning, we used our CPU capacity to learn from the inference tasks; then we bought our first 500 cards and learned on those. And it turns out today, for inference tasks, if you just ask, "this is the inference task I would like to run; what GPU hardware do you recommend?" -- it's hardly ever the bleeding edge that you need to train models; it's the [L4Ds] or whatever it's called.
So we are deploying a large mix of GPU cards from all the suppliers -- NVIDIA, of course, Intel, AMD and the first specialized ASICs -- really trying to find a hardware mix that is optimized for the inference tasks that we see. And the idea is, as I said earlier, to abstract the needs of the software stack from the hardware stack. Our customers shouldn't worry about what GPUs they need to provision or how much capacity they should reserve; they literally buy what they use, and we decide -- or the algorithms decide -- where we run it and on what hardware, wherever it's most efficiently computed. That's the journey we've been on, and the teams working on this are really good at it. This is one of the reasons why we have such efficient CapEx numbers.
Hamza Fodderwala
Just one more follow-up, and then I'll open it up to the audience. One of the key selling points in the past on Cloudflare Workers and some of the developer services has been the fact that you're not charging these egress fees. Recently, GCP and then AWS talked about dropping egress fees -- I don't know if it's been made official yet from AWS -- but I think there's more nuance to that argument. So maybe just explain: how does that impact you?
Thomas Seifert
I remember when we launched R2, our storage product, and people asked Matthew, what happens if there are no egress fees, et cetera? It's a win-win either way: either there are high egress fees, and then our R2 model and revenue are going to benefit; or there are zero egress fees, and then it is all about being a connectivity cloud -- we can move data freely for our customers and will make revenue somewhere else. But first of all, they are not waiving all egress fees. They are just reacting to some legislation that is coming in Europe.
It's only if you leave a provider for good and move your data out that the egress fee is going to be waived; if you continue to move data in and out, you'll still run up fees. As it happens, we just launched a multicloud product today: Magic Cloud Networking -- some also call it Magic Multicloud. What it does is act like an interpreter that understands all the different public clouds.
And with that interpreter comes our big backbone that allows you to move data between private and public clouds. So maybe less R2 revenue, more multicloud revenue -- I think either way, it's going to be a win for us. We would probably prefer a world without egress fees, where data moves freely and customers decide where the best, most cost-efficient location is to compute, store and do whatever they want to do with their data. And we just help them move it securely, performantly and cost-effectively.
Hamza Fodderwala
Any questions from the audience? We have one over here.
Unidentified Analyst
I have two questions. One: can you elaborate on the CapEx on GPUs, and also the nature of the business Cloudflare is doing? You're deploying GPUs -- is this like renting out GPU capacity, like Oracle does, or is it a different nature of business?
Thomas Seifert
So we are not renting out capacity. If you come to us, you don't have to think about how much capacity you need or how much capacity to reserve -- will I be underprovisioned or overprovisioned? You just pay for what you use, and we worry about the abstraction. That is why it's so important for us that we model it correctly and get high utilization of the GPU capacity we have. But we are not renting out raw capacity; you come to us, and you pay for what you use.
On the first part of your question: we have servers in every location, and we have significant CPU capacity out there. Sometimes people confuse this with a CDN. We are not a CDN; we happen to have a really fast CDN network because we deliver security and performance products at the edge of our network. So we've always done significant high-volume compute tasks at the edge: encryption, decryption, packet inspection.
And Matthew had the foresight, already years ago, to leave PCI slots open in our servers -- so we just populate them now with GPU capacity. That was flexibility he luckily foresaw a while back. We buy a mix of capacity, hardly ever at the bleeding edge; that's why it's affordable and has high availability -- we don't need [H100] cards at this point. And we procure them today still mostly from NVIDIA, but more and more from a broader set of suppliers.
Unidentified Analyst
Got it. My second question: you also elaborated on the abundant opportunity ahead through bundling and through the usage of AI. So thinking a little bit longer term -- 2, 3, 5 years -- in this environment, with AI deployment, both training and inference usage, accelerating, and with your business initiatives, for example bundling, and with spending optimization or fatigue coming to an end: would you see growth accelerate in this macro environment? Or what signal would you look for to see that acceleration happen? You'd have to see a lot of inference happen to stimulate the growth of Workers and the edge products.
Thomas Seifert
Yes. The first signal is our adoption of Workers and Workers AI accelerating, and we see really encouraging trends. If you look at our download data, it's a steep ramp, and one-third of the developers that sign up for Workers AI are net new to us. So there is significant interest. The second big indicator is that the variety of use cases we see coming onto our network is huge. We'll have an Investor Day in May, where we'll try to give some insight into how much interest and how much variety we see in the use cases.
Those are the two best indicators we have today that we seem to be in a real Goldilocks zone for inference tasks. As in so many cases when we push new technology, we try to push adoption, not revenue. One of the core principles of Cloudflare is never to discourage a byte of data from moving through our network, even if it is for free. What we learn from that byte of data -- where it comes from, where it goes, good or bad, threat vectors -- is where we will derive value moving forward.
On the inference side, this is now about how we model capacity, how we are able to abstract the software layer from the hardware stack, how we can get to [indiscernible]. If you look at one of the problems in training land today, it's that GPUs are very spiky in their utilization, and you have long periods of significant underutilization that make it very expensive. We want to get to the same utilization rates that we see on the CPU side, and we think we're on a good path there. So we learn from the diversity of data that is on our network, and that is why usage is, at this point, more important than revenue.
Hamza Fodderwala
If I could maybe squeeze in just one last big-picture macro question. You have a large installed base -- large enterprise customers, SMB -- and you also have relatively short sales cycles, so you're able to see changes in demand relatively quickly versus other vendors. Just curious: what are you seeing from a macro standpoint across those two different segments, and what have you baked into the 2024 outlook?
Thomas Seifert
It hasn't really changed much from what we said on our earnings call. There's still a lot of noise in the data. I would say, for sure, stabilization, not deterioration, but we think we are still in the grind. What you see in Europe versus what you see in specific countries in Europe, and what you see in Asia versus specific countries in Asia, is contradictory. So we think we are on somewhat stable ground, but we'll continue to grind for a while.
Hamza Fodderwala
Thomas, thank you so much for your time. Thank you.
Thomas Seifert
Always a pleasure. Thank you.