Thanks to a reference in Chris Koch’s IT strategy blog, I just read a long, boring, but important article on business computing in the McKinsey Quarterly. Titled Managing Next-Generation IT Infrastructure, the piece argues that we’re now ready for an “industrial revolution” in the way big companies assemble their IT infrastructures – all the servers, storage devices, operating systems and other basic hardware and software components used to run their business applications. Rather than custom-building new chunks of infrastructure to fit new applications – the traditional practice that has created such complexity and inefficiency in business computing – companies can build a single standardized set of computing modules, or, as the authors poetically put it, “productized, reusable services,” that can be allocated to new applications as needed. You manage your infrastructure like a modern factory, in other words, rather than an ancient craft guild.
Such a model requires much greater centralized control over a business’s entire infrastructure – to put it in my own terms, it requires the creation of an internal IT utility – but it brings much higher capacity utilization, fewer headaches from integrating dozens of custom-built systems, and a much clearer view of where an organization’s IT spending really goes. The most valuable part of the article is a sidebar describing Deutsche Telekom’s experience in moving to this new model of infrastructure management.