Monthly Archives: January 2006

I find it hard to believe, but it’s almost three years since the Harvard Business Review published my article “IT Doesn’t Matter.” Every time I think the debate about the piece is dying down, it flares up again. The latest example comes in the new issue of Fortune Small Business, which features dueling interviews between me and Dell CEO Kevin Rollins.
What’s Microsoft afraid of?
When open-standards advocate Peter Quinn resigned as CIO of Massachusetts last month, amid intense and ugly political pressure, it didn’t look good for the state’s plan to adopt OpenDocument as its file-format standard for Office documents. That plan has been fiercely opposed by Microsoft, which fears the abandonment of the proprietary document formats that have long provided a bulwark against competition for its ubiquitous and highly profitable Office suite.
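It’s worth remembering what’s actually at issue here. An OpenDocument file is nothing exotic: it’s a zip archive of openly specified XML, which is exactly why it can’t serve as a competitive moat. Here’s a purely illustrative sketch (mine, not anything from the state’s policy documents) showing that a few lines of Python’s standard library are enough to pull the text out of an .odt document, no particular vendor’s software required; the file name “memo.odt” is made up.

```python
# A minimal sketch of why an open format resists lock-in: an OpenDocument
# .odt file is just a zip archive of openly specified XML, so the standard
# library alone can read it. "memo.odt" is a hypothetical file name.
import zipfile
import xml.etree.ElementTree as ET

def extract_odt_text(path):
    """Return the plain text of an OpenDocument .odt file."""
    with zipfile.ZipFile(path) as odt:
        content = odt.read("content.xml")  # the document body, per the ODF spec
    root = ET.fromstring(content)
    # itertext() walks every text node, regardless of namespaces or styling
    return "".join(root.itertext())

if __name__ == "__main__":
    print(extract_odt_text("memo.odt"))
```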
But it doesn’t look like the state is backing down. When Microsoft’s national technology officer, Stuart McKee, recently told a reporter that Massachusetts’s interim CIO Bethann Pepoli “is crafting an additional policy,” implying that the state was backing away from its plan, Pepoli quickly shot back a denial, saying that Massachusetts “is not ‘crafting an additional policy’ in regard to the OpenDocument initiative … We are proceeding with implementation of the OpenDocument Format standard.”
Yesterday, the administration of Massachusetts Governor Mitt Romney, in a clear sign of continued support for the OpenDocument policy, announced the appointment of Louis Gutierrez as the state’s new CIO. The second sentence of the press release on the appointment states: “Gutierrez will be responsible for overseeing the final stages of implementation of the state’s new OpenDocument format proposal, to go into effect in January 2007.” The release also notes that Gutierrez:
led the development and implementation of the state’s Virtual Gateway, an online portal that integrated the web presence of 16 agencies into a user-friendly format that improved service delivery and reduced costs. “The Virtual Gateway is an example of how state government computing can be transformed through the application of open standards that interoperate with many kinds of technology and vendors,” said Gutierrez. “As technology continues to evolve there remain substantial opportunities to transform services and a need to plan for the long-term future of technology-infused operations.”
As Andy Updegrove writes, “Clearly, this press release is being used to express the determination of the Romney administration to push through its implementation of ODF. This is doubly significant in a political sense, given that Romney has made no effort to deny that he has forgone running for reelection in favor of nurturing his chances to make a run for the United States presidency. By underlining his commitment to ODF, Romney may be using the ODF issue to draw a line in the sand, thereby demonstrating that he will neither kowtow to special interests (in this case, Microsoft), nor will he ‘flip flop’ on a policy, once he has committed to it.”
This fight, which has many political as well as technological angles, is far from over, but it’s nice to see that, so far, the state administration refuses to be bullied. It will be interesting to watch how Microsoft responds to Gutierrez’s appointment. At this point, you have to wonder why the company is so intent on stopping the state’s experiment with ODF. After all, if the move proves a disaster, as Microsoft claims it will, won’t that confirm the superiority of Microsoft’s formats and stop other government agencies from following in Massachusetts’s footsteps?
Could it be that what Microsoft really fears is that the adoption of ODF won’t be a failure but a success?
Open source’s other advantage
If you compare the way software is developed under an open-source approach and a traditional proprietary approach, you might reasonably conclude that both approaches have their strengths and weaknesses – and that those strengths and weaknesses spring from the very different organizational models they use. An open-source project has the advantage of mobilizing a whole lot of eyeballs: getting many different programmers with different perspectives and backgrounds – many of whom are also actual users of the software – to contribute to solving problems, adding features, and fine-tuning the code. But it has the disadvantage of relatively weak coordination – the programmers are usually at a distance from one another, which impedes collaboration. A proprietary project, by contrast, has the advantage of rich collaboration – the programmers typically work for the same company, have the same bosses, go to the same meetings, and are even in the same building. On the other hand, the team is smaller and more homogeneous and probably not as tightly connected to the users of the software – there are a lot fewer eyeballs.
But an interesting research paper by three Harvard scholars, Alan MacCormack, John Rusnak, and Carliss Baldwin, reveals that the apparent weakness of the open-source organizational model – the constraints on close collaboration among programmers – may actually be a hidden strength. The researchers compared the architecture, or design, of one open-source software program (Linux) with that of one proprietary program (the last version of the closed-source Netscape Navigator, which became the first version of the open-source Mozilla). They wanted to test a hypothesis: that the organizational structure of a software development effort will directly manifest itself in the design of the product. In particular, they believed that the tight organization of a proprietary project would result in a tightly integrated program, with a lot of interdependencies, while the loose organization of an open-source project would lead to a more modular design, with fewer interdependencies.
Their analysis of the source code of Linux and the original Mozilla validated their hypothesis. They found many more interdependencies among source files, or code groups, in Mozilla than in Linux. “Specifically,” they write, “a change to one element in Mozilla is likely to impact three times as many other elements as a similar change in Linux. We conclude that the first version of Mozilla was much less modular than a comparable version of Linux.” They also found that Mozilla, after its release as open source, was rapidly and successfully redesigned to become much more modular – at least as modular as Linux, in fact. That shows that there isn’t anything inherently less modular about a browser application than an operating system. The differences in code appear to result from differences in organization.
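To see what a measurement like that involves, here’s a rough sketch in Python of the general idea – not the researchers’ actual method or code. Given a map of which source files depend on which others, it counts how many files a change to each one could ripple out to touch. The tiny “tight” and “modular” graphs are invented examples, not data from Mozilla or Linux.

```python
# A rough sketch of this kind of dependency measurement, not the
# researchers' actual code. Given a map of which source files depend on
# which others, it counts how many files a change to each file could
# ripple out to touch, directly or indirectly. The "tight" and "modular"
# graphs below are invented examples, not data from Mozilla or Linux.
from collections import deque

def change_impact(deps):
    """For each file, count the other files that might be affected if it
    changes, by following depends-on edges in reverse."""
    # Invert the graph: if A depends on B, a change to B can propagate to A.
    rev = {f: set() for f in deps}
    for f, targets in deps.items():
        for t in targets:
            rev.setdefault(t, set()).add(f)

    impact = {}
    for start in rev:
        seen, queue = set(), deque([start])
        while queue:
            node = queue.popleft()
            for dependent in rev.get(node, ()):
                if dependent not in seen:
                    seen.add(dependent)
                    queue.append(dependent)
        seen.discard(start)  # don't count a file as impacting itself
        impact[start] = len(seen)
    return impact

# Tightly coupled: nearly every file depends on every other file.
tight = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b"}, "d": {"a"}}
# Modular: dependencies flow through a single narrow interface file.
modular = {"a": {"iface"}, "b": {"iface"}, "c": {"iface"}, "iface": set()}

print(change_impact(tight))    # changes to a, b, or c each touch 3 other files
print(change_impact(modular))  # only changes to "iface" propagate (to 3 files)
```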
On the surface, this isn’t surprising. As a proprietary commercial product, Netscape Navigator was designed to optimize its performance. Modularity was neither a goal nor a necessity. Having a lot of interdependencies in the code was, it seems likely, the easiest way for the close-knit team to enhance the performance of their product. No doubt, the team saw their ability to collaborate closely on the software design as a great strength. Linux, on the other hand, had no choice but to have a more modular design – it was a necessity given the loose, informal organization of Linux’s far-flung volunteer workforce.
What’s interesting is that, as we move from an age of isolated software, in which the performance of each individual program was judged separately, to an age of plug-and-play software, in which performance will be judged on the ease with which different programs can be assembled and disassembled, the barrier to close collaboration in the open-source model may turn out to be not a weakness but a strength. If true, this would have broad consequences for the future of the software industry. It would suggest that the traditional model of software development – even if used to produce free software – will be at a significant disadvantage to the open-source model.
There’s also a bit of an irony here. The research implies that open source’s advantage doesn’t stem from the strength of the programmer community. It stems from the weakness of that community.
The trust scale
Here’s Google CEO Eric Schmidt, at Davos, defending the company’s decision to censor search results in China: “We concluded that although we weren’t wild about the restrictions, it was even worse to not try to serve those users at all. We actually did an evil scale and decided not to serve at all was worse evil.” Others have also put Google’s decision on the “evil scale” and found that the good outweighs the bad. Here’s David Weinberger: “It’s a tough world. Most of what we do is morally mixed … If forced to choose – as Google has been – I’d probably do what Google is doing. It sucks, it stinks, but how would an information embargo help? It wouldn’t apply pressure on the Chinese government. Chinese citizens would not be any more likely to rise up against the government because they don’t have access to Google. Staying out of China would not lead to a more free China.”
Once you jettison Google’s “don’t be evil” absolutism, in other words, it becomes possible to make a relativist argument that Google did the right thing. The argument hinges on a historically dubious assumption – that you have a better chance to change a government’s behavior through appeasement than through confrontation – but that doesn’t necessarily make it wrong in this particular case.
That’s not the whole story, though. As John Gapper explains in an op-ed in today’s Financial Times, there is a practical case to be made that Google, like Yahoo and Microsoft before it, made the wrong choice in collaborating with the Chinese government. “Internet companies,” he writes, “cannot simply shrug off their opportunism in China.” Unlike other companies that choose to do business in China on the government’s terms, “internet giants should be held – and should hold themselves – to higher standards.”
Why? Gapper points to two reasons. First:
the internet industry is regulated far more lightly than many others. The internet is treated as an open space for the public, a creative commons not subject to the restrictions that bind other areas of communications … We know how fragile the self-restraint of regulators and governments becomes when people believe companies are abusing their freedoms … Internet companies enjoy unique privileges at the moment and could easily find them taken away if their reputations deteriorate.
Gapper notes that Google has fought hard to maintain the internet’s neutrality – to prevent telephone and cable companies from giving preference to certain content running over the network. But Google’s case weakens substantially when it begins to compromise the network’s neutrality itself. If Google can work with the Chinese government to give some speech preference over other speech – to draw distinctions between different forms of content on purely political grounds – it becomes much harder for it to argue against other companies drawing distinctions between different kinds of content on purely commercial grounds.
Gapper’s second reason:
the internet acts for most of us as an instrument of liberation … It brings a means of communication through e-mail or instant messaging that crosses national borders. Through search engines such as Google, it also allows us to find and peruse a huge amount of information … That is a double-edged sword. A search engine is wonderful if you are the searcher but what about when a government searches for you? That happened to Shi Tao, a dissident Chinese journalist sentenced to 10 years in prison last year for “revealing state secrets” after Yahoo submitted to the government’s demand to know who was using a Yahoo e-mail address.
The internet’s special status hinges on trust, in other words. It’s hard to sustain that trust if you agree to collaborate with repressive regimes.
“In these circumstances,” Gapper concludes, “the internet giants ought to tread very carefully. The benefits of an open internet, free from clumsy regulation and inquisitive authorities, have been huge. But they need not last and will be curtailed if the public loses faith in Google and others. China is a vast market but what does it profit an internet company if it gains the whole world and loses its soul?”
“We must fight the net”
The BBC has posted a pdf of a declassified 2003 Department of Defense report that, according to a cover note from Donald Rumsfeld, “provides the Department with a plan to advance the goal of information operations as a core military competency.” In its executive summary, the report lists “three matters of key importance that require immediate attention”:
1. We Must Fight the Net.
2. We Must Improve PSYOP.
3. We Must Improve Network and Electro-Magnetic Attack Capability.
Fight the Net? This phrase, which crops up more than once in the heavily redacted document, appears to refer to the possibility of the internet and other networks being used as weapons of attack. The BBC writes:
[The report] seems to see the internet as being equivalent to an enemy weapons system. “Strategy should be based on the premise that the Department [of Defense] will ‘fight the net’ as it would an enemy weapons system,” it reads. The slogan “fight the net” appears several times throughout the roadmap. The authors warn that US networks are very vulnerable to attack by hackers, enemies seeking to disable them, or spies looking for intelligence. “Networks are growing faster than we can defend them… Attack sophistication is increasing… Number of events is increasing.”
In its PSYOPS section, the report notes that the internet is allowing propaganda targeted at foreign audiences to circle back to U.S. audiences. As the BBC summarizes:
Perhaps the most startling aspect of the roadmap is its acknowledgement that information put out as part of the military’s psychological operations, or Psyops, is finding its way onto the computer and television screens of ordinary Americans. “Information intended for foreign audiences, including public diplomacy and Psyops, is increasingly consumed by our domestic audience,” it reads.
It’s hardly a surprise that such a document exists. There is every reason to believe that information networks will be battlefields of one sort or another in the future, and a national defense strategy needs to take that fact into account. Still, it’s fascinating, and a little scary, to read about plans to “disrupt or destroy the full spectrum of globally emerging communications systems, sensors, and weapons systems dependent on the electromagnetic spectrum.” Put your head between your knees, and kiss your data goodbye.
The supply-side boom
In “The New Boom,” a short article in the new edition of Wired, Chris Anderson makes a cogent case for why the current resurgence in internet entrepreneurship is “a healthy boom, not a fragile bubble.” His case boils down to three points. First, the market for internet content and services is more mature than it was in the 90s – there are real rather than merely theoretical customers out there. Second, the drop in prices for bandwidth and computing means that you don’t need anywhere near as much capital to launch an internet business as you used to. Third, and related, the lower capital requirements mean that entrepreneurs require less venture funding, which in turn means less pressure to make a quick and lucrative exit.
I think Anderson’s right to say that the current boom is not a bubble. Other than some isolated examples of frothiness – most notably, if also arguably, Google’s stock price – there’s little evidence of any kind of broad speculative fever. And I think Anderson accurately sums up why the entrepreneurial environment is very different today than in the late 90s.
But there’s a flaw, or at least a missing element, in the analysis. Bubbles are simply a matter of supply and demand – too much demand (investor cash) chasing too little supply (investment opportunities). Anderson’s article focuses entirely on the supply side. The lower costs required to launch an internet business today mean lower barriers to entry and thus a robust supply of startups. At this point, there’s relatively little interest (compared to six or seven years ago) in these startups among the broader investor community – i.e., the individual investors who ultimately control the bulk of investment capital. But that could change at any moment. An investor stampede would render Anderson’s analysis moot. And all those seemingly disciplined entrepreneurs who today profess their allegiance to “steady, organic growth” would turn into frothing-at-the-mouth IPO hounds faster than you can say “sock puppet.” Bubbles are born on the demand side, not the supply side.
Anderson, in other words, is making a rational analysis of an irrational phenomenon. The people who turn healthy booms into fragile bubbles are investors, not entrepreneurs. And so far, the exuberance about Web 2.0, or whatever you want to call it, remains much more pronounced on the supply side than on the demand side. As long as that remains true, there won’t be a bubble.
Down the drain
Following in the footsteps of McDonald’s, Ford, the FBI, and many others, Lloyd’s of London has pulled the plug on a big, expensive, and now embarrassing software project. Launched five years ago, the effort was aimed at modernizing the venerable insurance group’s brokering processes and making them “paperless.” But users never embraced the new system – the cure turned out to be worse than the disease. Lloyd’s ended up wasting 70 million pounds – about $125 million – on the project before chief executive Michael Dawson concluded this week that “the platform was not optimal in ensuring more efficient business processes for the Lloyd’s and London market and as a result it will close.”
There are three lessons here. The first is that the bigger the software project, the more likely it is to collapse under its own weight; $100 million seems to be the line beyond which failure is almost assured. The second is that you should always create software to solve the day-to-day problems faced by the actual users, not to meet big, abstract organizational challenges. Solve enough little problems, and the big ones take care of themselves. Fail to make users’ lives easier, and they’ll simply bypass the system (and never trust anything you do again). The third is that you should never give a software project a catchy codename. For Ford, it was Project Everest; for McDonald’s, it was Project Innovate; for Lloyd’s, it was Project Kinnect. If you’re about to launch an IT initiative big enough to warrant its own name, you should probably make sure you have a really good golden parachute.