Tim O’Reilly, in a comment on my earlier post about how he overstates the importance of the network effect, writes: “… you failed to address my main point, namely that cloud computing is likely to be a low-margin business, with the high margin applications found elsewhere.”
Let me try to correct that oversight.
O’Reilly is here using “cloud computing” in the narrow sense of offering for-fee access to utility data centers for basic computing “infrastructure” encompassing compute cycles, data storage, and network bandwidth (a la Amazon Web Services or Windows Azure). I would definitely agree that this will be – and should be! – a low-margin business, as is generally the case with utility industries. (O’Reilly seems to dislike big low-margin businesses. Personally, I’m fond of them.) Success in a capital-intensive utility industry often hinges on maximizing usage in order to utilize your capital equipment as productively as possible; seeking high margins, by keeping prices high, can actually be self-defeating in that it can constrain usage and lead to suboptimal capacity utilization. I would also argue that the infrastructure side of cloud computing will likely come to be dominated by a relatively small number of firms that will tend to be quite large, which is quite different from the fragmented hosting business that O’Reilly believes will be the model for the infrastructure cloud.
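To make the capacity-utilization point concrete, here's a back-of-the-envelope sketch. Every number is invented purely for illustration; the only point is that, with a big fixed capital cost, a lower price that keeps the gear busy can beat a fatter per-unit margin that leaves capacity idle.

```python
# Toy illustration (invented numbers): why chasing high margins can be
# self-defeating for a capital-intensive utility such as a compute cloud.

FIXED_COST = 500_000      # annual cost of building and running the data center
CAPACITY = 10_000_000     # compute-hours available per year
VARIABLE_COST = 0.02      # marginal cost per compute-hour (power, bandwidth)

def annual_profit(price, hours_sold):
    """Profit for the year, given a price per compute-hour and the demand it attracts."""
    hours_sold = min(hours_sold, CAPACITY)
    return hours_sold * (price - VARIABLE_COST) - FIXED_COST

# High price, thin usage: fat per-unit margin, mostly idle capital.
print(round(annual_profit(price=0.30, hours_sold=2_000_000)))   # roughly 60,000

# Low price, near-full utilization: thin per-unit margin, better total return.
print(round(annual_profit(price=0.10, hours_sold=9_500_000)))   # roughly 260,000
```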
Where I have a real problem with O’Reilly’s argument, though, is when he goes on to suggest that the low-margin characteristics of the cloud infrastructure business can be best explained by the lack of a strong network effect in that business. That’s balderdash. If you were to list the determinants of the profitability of the cloud infrastructure business, the lack of a strong network effect would be way down the list. O’Reilly appears to be suffering from a touch of tunnel vision here. The network effect is his hammer, and he’s looking for nails.
As to O’Reilly’s belief that at least some cloud applications will be relatively high-margin businesses (in comparison with running the infrastructure), I have no beef with that view. I would even be happy to agree that in some cases the network effect will be a source of those high margins. But I would strongly disagree with O’Reilly’s idea that a strong network effect will be the only source, or even the primary source, of high margins in the web app business. (“Ultimately, on the network, applications win if they get better the more people use them,” he declared. “As I pointed out back in 2005, Google, Amazon, ebay, craigslist, wikipedia, and all other Web 2.0 superstar applications have this in common.”*) There will be plenty of other potential paths to high margins: creating a good, useful, distinctive software tool, for instance, or building a strong brand, or achieving some form of lock-in (as horrible as that may sound).
Digression:
I note that today O’Reilly is expanding his definition of “network effect” far beyond his original definition of “applications that get better the more people use them.” He now dismisses that earlier definition as “simplistic,” even though it’s the generally accepted one. (As Liebowitz and Margolis explain, the network effect “has been defined as a change in the benefit, or surplus, that an agent derives from a good when the number of other agents consuming the same kind of good changes. As fax machines increase in popularity, for example, your fax machine becomes increasingly valuable since you will have greater use for it.”) If I were O’Reilly, I would expand the definition too. After all, the more broadly you define “network effect,” the more phenomena you can cram under its rubric.
But, since O’Reilly continues to reject my contention that Google’s success cannot be explained by the network effect, let me defer to a higher authority: Hal Varian. Professor Varian is not only one of the smartest explicators of the network effect and its implications; he is now also a top strategist with Google. The following is an excerpt from a Q&A with Varian from earlier this year:
Q: How can we explain the fairly entrenched position of Google, even though the differences in search algorithms are now only recognizable at the margins? Is there some hidden network effect that makes it better for all of us to use the same search engine?
A: The traditional forces that support market entrenchment, such as network effects, scale economies, and switching costs, don’t really apply to Google. To explain Google’s success, you have to go back to a much older economics concept: learning by doing. Google has been doing Web search for nearly 10 years, so it’s not surprising that we do it better than our competitors. And we’re working very hard to keep it that way!
Yes, Google is adept at mining valuable information from the Net, and the value of that information tends to go up as more people use the Net. Yes, Google runs auctions that become more valuable as more traders join. Yes, web activity in general is a complement to Google’s core profit-making business. But that doesn’t change the fact that there’s little or no network effect in the use of Google’s search engine. The benefit I derive from Google’s search engine does not increase as more people use it. Period.
End of digression.
I think O’Reilly did a nice job of identifying the different layers of the cloud computing business – infrastructure, development platform, applications – and I think he’s right that they’ll have different economic and competitive characteristics. One thing we don’t know yet, though, is whether those layers will in the long run exist as separate industry sectors or whether they’ll collapse into a single supply model. In other words, will the infrastructure suppliers also come to dominate the supply of apps? Google and Microsoft are obviously trying to play across all three layers, while Amazon so far seems content to focus on the infrastructure business and Salesforce is expanding from the apps layer to the development platform layer. The degree to which the layers remain, or don’t remain, discrete business sectors will play a huge role in determining the ultimate shape, economics, and degree of consolidation in cloud computing.
Let me end on a speculative note: There’s one layer in the cloud that O’Reilly failed to mention, and that layer is actually on top of the application layer. It’s what I’ll call the device layer – encompassing all the various appliances people will use to tap the cloud – and it may ultimately come to be the most interesting layer. A hundred years ago, when Tesla, Westinghouse, Insull, and others were building the cloud of that time – the electric grid – companies viewed the effort in terms of the inputs to their business: in particular, the power they needed to run the machines that produced the goods they sold. But the real revolutionary aspect of the electric grid was not the way it changed business inputs – though that was indeed dramatic – but the way it changed business outputs. After the grid was built, we saw an avalanche of new products outfitted with electric cords, many of which were inconceivable before the grid’s arrival. The real fortunes were made by those companies that thought most creatively about the devices that consumers would plug into the grid. Today, we’re already seeing hints of the device layer – of the cloud as output rather than input. Look at the way, for instance, that the little old iPod has shaped the digital music cloud.
Today, we tend to look at the cloud through the eyes of the geek. In the long run, the most successful companies will likely be those that look at the cloud through the eyes of the consumer.
*UPDATE: I just realized that O’Reilly tempered this statement in his comment on my earlier post, writing, “I agree that I probably am overstating the case when I say that this is the only source of business advantage. Of course it isn’t.” Clarification accepted.
UPDATE: Meanwhile, Tim Bray warns against jumping to conclusions about the ultimate shape of the cloud based on what we’ve seen to date. Everything could change in an Internet minute: “Amazon Web Services smells like Altavista to me; a huge step in a good direction. But there are some very good Big Ideas waiting out there to launch, probably incubating right now in a garage or grad school.” The spreadsheet is to the PC as the _______ is to the cloud. Fill in the blank, and win a big prize.
I am with Tim O’Reilly. Cloud Computing will be more like public transport (think railway or tram – mass transit) than a revolutionary transport system (such as the car). It will be cheap, crappy, and only for those people who can’t afford better. It’s very possible that it will become the equivalent of public healthcare, but that won’t make it “quality”.
The sheer mass and volume of Cloud Systems mean that customization is not possible, and this means lesser service. Just like public transport, it’s not totally convenient or life-changing.
Useful for some, and not for others.
Nick,
I agree with most of your points. One I differ with, however, is that the search experience is not improved by the network effect. I think it is; specifically, two services that come to mind are the “did you mean” spelling checker and the Google Suggest feature.
FB
I agree that Google’s network effect is not as straightforward as, say, that of fax machines. One exception, though: Google’s weightings take into account which results searchers click on. The other point is that PageRank gets better as more people link. So your argument probably stands to some degree, but I would lose the “Period.”
Greg: I don’t think that was O’Reilly’s point.
Frank: Fair point. Your examples do show that on the margin the network effect does enhance Google’s search engine. That doesn’t mean, though, that the network effect is the cause of Google’s dominance.
pwb: OK, that rhetorical flourish was probably gratuitous. But it felt good.
I have the utmost respect for Liebowitz, Margolis, Varian and all the other giants whom I refer to in my PhD — but, as I explained in a reply post,
http://twocroissants.wordpress.com/2008/10/27/network-externalities/
you (Nick and Tim O’) need more than their model to agree: there are many types of positive reinforcement, and “network effect” is probably the most confusing label.
Similarly, I believe that Tim’s three-tier cloud-computing breakdown is great, but to agree on the likely margins, you need more categories. Look at Apple (hardware & consumer software) vs. IBM (custom software): how can the first one have better margins? Hardware is cut-throat, and custom work is supposedly the recipe for margins. It’s because they aggregate differently: one has unique control, while the other competes against a crowd of the same consultants/mercenaries that it itself employs.
I’m assuming that a large, well-integrated S+S (Apple, Salesforce, Facebook?) will have high margins, even though a large share of their costs will be (custom) server farms, while many lean, pure Tb providers will have lower margins. There’ll be other cases, but it’s harder to guess it all.
Now that will be the day! When consumers run amok in the clouds, driving developers and coders nuts and pushing them to put up their own guerrilla infrastructures & networks that lie on the fringe, not totally subject to the cloud majesties. (Is there a movie script in there?)
I like the devices thingy, Nick. Brilliant! Besides the iPod, can we consider mobile smartphones as well?
Best.
alain
Nick,
I have to disagree about Google Search not benefiting from the network effect too.
While I agree with your point that Google’s success can’t be explained by the network effect (that is, it’s not what got them to #1), I think their continued dominance is very much due to their large user base; it’s the reason I don’t think any search engine will ever be able to truly compete (even ignoring issues such as consumer/advertiser apathy and lethargy).
A brand new Search Engine that started up today with the same index of websites as Google (or even a bigger and better index) wouldn’t have any idea what terms people are searching for, or which link they are clicking on from the Search results given. On the other hand, Google have years of data from the majority of searches that have been done.
So when Google refine their algorithms, because of the network effect they have access to data that other search engines simply don’t have: they can tell whether a refined algorithm would have moved the “right” result up or down in any given search (and therefore whether the net effect of the refinement is positive or negative).
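To illustrate the data advantage Scott is describing, here is a minimal sketch of that kind of offline check. The log format, the ranking functions, and the procedure are hypothetical stand-ins, not a description of how Google actually evaluates changes; the point is only that you need a historical click log to run anything like it.

```python
# Hypothetical sketch: offline evaluation of a ranking refinement against logged clicks.
# The log format and ranking functions are invented for illustration only.

def rank_position(ranking_fn, query, clicked_doc, candidate_docs):
    """Position (1 = top) the clicked document gets under ranking_fn."""
    ordered = sorted(candidate_docs, key=lambda doc: ranking_fn(query, doc), reverse=True)
    return ordered.index(clicked_doc) + 1

def evaluate_refinement(click_log, old_fn, new_fn):
    """Count how often the refined algorithm moves the clicked result up or down."""
    moved_up = moved_down = 0
    for query, clicked_doc, candidate_docs in click_log:
        old_pos = rank_position(old_fn, query, clicked_doc, candidate_docs)
        new_pos = rank_position(new_fn, query, clicked_doc, candidate_docs)
        if new_pos < old_pos:
            moved_up += 1
        elif new_pos > old_pos:
            moved_down += 1
    return moved_up, moved_down

# A search engine with years of logged queries and clicks can replay them against a
# candidate algorithm before shipping it; a brand-new entrant has no log to replay.
```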
(Apologies if I’m stepping in half way through an argument, “like a child who walks in half way through a movie”…)
Scott
“Cloud” hosting (as in EC2, GoGrid, 3Tera) isn’t going to radically alter what’s possible with hosting. (Truly fluid mobile agent systems could, but they haven’t left academe :-( ) It’s about stripping off surplus h/ware, rationalising storage into SAN pools, and hiding low-level complexity from the user. It’s a consolidation drive enabled by virtualisation. Firms like SoftLayer let you manage your physical servers over an API, so would you even notice if they were all quietly folded into a big mainframe running Xen?
There are areas where “clouds” can make previously high-end methods accessible to “normal” businesses: multi-site redundancy, transparent online backups, etc. They’ve also made it incredibly easy to bundle s/ware into appliances and/or fold it into the hosting bill. But I don’t think anything radically *new* will come of it, the way it did with electricity. It’s more that the Fortune 50 level of kit will be available to Joe the Plumber.
Google’s clickstream data would be harvestable by ISPs, so I’m sure you can buy it somewhere [Hitwise?]. I doubt that the first 10,000 Google searches are any more informative than the next 1m. In non-Western (e.g. most of the world) markets, Google has quite stiff opposition. I remember Altavista; the mighty are forever falling… ;-)
To me, the interesting ideas aren’t about the benefits of having more & more users gathered in one place. It’s about why & who you’d want to bring together. For an auction site, having 100 or 1,000 iPods on sale doesn’t make a difference to the buyer. I have a LinkedIn and a Facebook account. I use them separately, so I have a business & a personal life. But people in London media-land tend to just use Facebook, as “everyone I deal with is a friend who I love.” AngelSoft have a VC fundraising platform, but it’s one that restricts entrepreneurs’ access to VCs, so that their time isn’t wasted. A lot of “linking” has a negative value: it’s spam. That’s why Google no longer use the original PageRank. I doubt that any business grows more valuable simply by having more people use it. I suspect many gain more by excluding at least a portion. (Look at dating sites.)
@Greg – You’re wrong. The grade of h/ware and support provided by “clouds” is almost always going to be better for a given budget. A better analogy than public transport is the skyscraper: separate offices, same plumbing, same power supply, sewage, etc.
(I know I’m taking a narrow definition of “cloud.” But if you just define it vaguely as everything getting cheaper and requiring less thought, there’s not much interesting to say about it. I prefer the term Utility Computing…)
Ronald Coase 2.0 ? :-D
Wow, the discussion on the device layer was enlightening.
I think your comments and views are right on. A couple of minor points:
1. Tim often refers to Christensen’s brilliant “law of conservation of attractive profits.” Christensen says that adjacent elements of the stack go through cycles of commoditization, as various elements of a ‘stack’ disintegrate and re-integrate (i.e., re-aggregate) over time, with value shifting back and forth between those elements. I would argue that the iPod is a great example of that: IBM’s PC disintegrated the computing-device stack and pushed value to some of the individual elements of the stack, benefiting Intel and Microsoft among others. In turn, the iPod (and to some extent the Mac and iPhone) has reintegrated the stack, shifting value back to the “hardware” or “device” coupled with software and services as an aggregated whole.
So the “device layer,” as you call it, is more of a re-aggregation of existing layers in new forms than a new layer. But no doubt, as stacks shift, new layers also become much more apparent and independent. Web services and social-graph services could be described as new layers on the internet, just as the new electrical devices created a new layer that sat on top of the new electric grid.
2. One characteristic of web apps is that they make it very easy to incorporate customer feedback into the product. (Paul Graham even talks about this in his essays on building Viaweb with LISP and incorporating customer feedback quickly. That was definitely web 1.0!) Because of this rapid feedback loop, the web allows services to gain economies of scale in a whole different way. Many of the effects Tim (and others in the comments) refer to seem to be just “economies of scale” from customer feedback – the “learning by doing” Varian points to. So I would add “incorporation of customer feedback” to your list under “2. Scale Advantages” (from your previous post).
Of late, Tim O’Reilly is trying to hammer all technological tidings so they fit into his “think box”. The world unfortunately is not very malleable, so instead, his web 2.0 paradigm is growing to encompass everything and the kitchen sink.
The commenter “some random nerd” is correct. The more people use Google, the more training data Google has, based on clickthrough patterns, to improve its algorithms. With improved algorithms come better results the next time I do a search. The network effect is in effect.