How microservers could change the data center


Find out why some experts think microservers are going to give the boot to virtualization in many large data centers.

There’s a growing wind in the sails of microservers, a new type of data center computer that is extremely energy efficient and tailor-made for the cloud- and Internet-style workloads that are increasingly common at agencies and other big enterprises.

Dell recently joined some smaller players by introducing its initial line of microservers, Intel has begun shipping the first of several processors designed specifically for microservers, and Facebook officials say they have big plans for the diminutive computers.

In a recent story, we covered the reasons microservers are expected to make a huge splash even though they buck one of the hottest trends in enterprise computing: the use of virtualization software to consolidate data processing chores onto fewer, more powerful servers. If you are interested in learning more about microservers, read on for links to key news items, case studies, analysis and technical discussions.

Microservers put a low-power microprocessor, like those designed for smartphones and tablets, on a single circuit board, then pack dozens or hundreds of those cards into one server cabinet that provides shared power supplies, cooling fans and network connections.
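
To make that density concrete, here is a back-of-the-envelope sketch in Python; the wattages, card counts and node counts are illustrative assumptions, not vendor specifications.

```python
# Back-of-the-envelope comparison of a microserver chassis vs. a rack of
# traditional 1U servers. All numbers are illustrative assumptions.

MICRO_CARD_WATTS = 15         # assumed draw of one low-power server card
CARDS_PER_CHASSIS = 256       # assumed cards sharing one chassis
CHASSIS_OVERHEAD_WATTS = 800  # assumed shared power supplies, fans, fabric

TRADITIONAL_WATTS = 350       # assumed draw of one conventional 1U server
TRADITIONAL_NODES = 40        # 1U servers that fit in a comparable rack

micro_total = MICRO_CARD_WATTS * CARDS_PER_CHASSIS + CHASSIS_OVERHEAD_WATTS
trad_total = TRADITIONAL_WATTS * TRADITIONAL_NODES

print(f"microserver chassis: {CARDS_PER_CHASSIS} nodes at {micro_total} W "
      f"({micro_total / CARDS_PER_CHASSIS:.1f} W per node)")
print(f"traditional rack:    {TRADITIONAL_NODES} nodes at {trad_total} W "
      f"({trad_total / TRADITIONAL_NODES:.1f} W per node)")
```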

There are many opportunities for using microservers to drastically reduce data center operating costs, including:

* Web applications with high volumes of small, discrete transactions, such as user logins, searches, e-mail checks and simple Web page views (a minimal sketch of this kind of workload follows this list).
* Hosting services that provide a dedicated physical machine for each application or user.
* Grids or clusters in which multiple server nodes work in parallel on a specific task.
* Environments that need to reduce the energy consumption and physical footprint of their data center servers.
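
As a rough illustration of the first item, here is a minimal Python sketch of the kind of small, discrete transaction that maps well to a one-app-per-node microserver; the endpoint behavior and port are hypothetical, not drawn from any of the articles above.

```python
# A minimal sketch of a small, discrete Web transaction -- the kind of job
# that keeps a single low-power core busy without needing a big CPU.
# The handler and port are hypothetical examples.
from http.server import BaseHTTPRequestHandler, HTTPServer

class SmallTransactionHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Each request does a tiny, independent unit of work, so thousands
        # of them can be spread evenly across many lightweight nodes.
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"ok\n")

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), SmallTransactionHandler).serve_forever()
```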

Some folks think microservers will eventually dominate most cloud data centers. John Treadway, global director of cloud computing solutions at Unisys, makes a persuasive case for microservers on his personal blog, CloudBzz. He predicts that microservers will replace bigger servers running virtualization in most commercial cloud data centers by 2018, with internal enterprise data centers on the same path a few years later.

To see what a large-scale cloud data center packed to the gills with microservers looks like, watch the YouTube video from Data Center Knowledge. It was shot inside French hosting company Online.net’s facility and features early microservers built by Dell. The section showing the microservers begins about 1:20 into the video.

Facebook is one of the highest-profile players in the U.S. to endorse the microserver approach for large-scale data centers. In this article from PCWorld, Gio Coglitore, director of Facebook labs, lays out the rationale for the social networking giant’s plans to move to microservers, with reasons that include energy efficiency, avoiding virtualization vendor lock-in and increasing system resiliency.

One of the better write-ups that clearly explains the value proposition for microservers comes from one of the industry’s earliest players, SeaMicro. It’s a case study about Mozilla, the group that organizes development of the Firefox Web browser, and its use of microservers. One of the most interesting parts describes how Mozilla officials calculated, among other cost and efficiency metrics, the energy required to perform a given processing task, in this case serving an HTTP request. They concluded that microservers used one-fifth the power per HTTP request of a traditional server.
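
The arithmetic behind a per-request energy figure is straightforward: watts are joules per second, so dividing power draw by request throughput gives joules per request. The wattage and throughput numbers in this sketch are made-up illustrations chosen only to reproduce a roughly one-fifth ratio; they are not Mozilla’s measurements.

```python
# Energy per HTTP request = power draw / request throughput.
# All figures below are illustrative assumptions, not measured data.

def joules_per_request(watts, requests_per_second):
    """Watts are joules per second, so dividing by req/s gives J/request."""
    return watts / requests_per_second

micro = joules_per_request(watts=1750, requests_per_second=10000)
trad  = joules_per_request(watts=3500, requests_per_second=4000)

print(f"microserver chassis: {micro:.3f} J/request")
print(f"traditional servers: {trad:.3f} J/request")
print(f"ratio: {micro / trad:.2f}x")  # roughly one-fifth, as in the case study
```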

Correlating power consumption with the work output of an IT system is a more advanced and meaningful way to calculate data center energy efficiency than the metrics CIOs now use most often. Last year I wrote a story about efforts to increase the use of these more sophisticated metrics.

If you really want to get into the weeds about the relative performance of different processor approaches and their suitability for varying types of workloads, there are a couple of good papers that size up those debates.

One paper from a group of researchers at Carnegie Mellon University evaluates clusters of low-power chips, like those in microservers, deployed in what the authors call a Fast Array of Wimpy Nodes (FAWN). The FAWN approach can be a much more energy-efficient option for many types of workloads, but not all. The researchers note that in large data centers, power costs can account for up to half of a server’s three-year total cost of ownership.
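
To see why that claim matters, here is a rough three-year cost-of-ownership sketch; the hardware price, power draw, overhead multiplier and electricity rate are all assumptions for illustration, not figures from the FAWN paper.

```python
# Rough three-year total cost of ownership for one server, showing how
# power can rival the hardware price. All figures are illustrative.

HOURS_PER_YEAR = 24 * 365
YEARS = 3

hardware_cost = 3000      # assumed purchase price, dollars
server_watts = 350        # assumed average draw
cooling_overhead = 2.0    # assumed PUE-style multiplier for cooling, etc.
dollars_per_kwh = 0.15    # assumed electricity rate

energy_kwh = server_watts * cooling_overhead * HOURS_PER_YEAR * YEARS / 1000
power_cost = energy_kwh * dollars_per_kwh

print(f"hardware: ${hardware_cost:,.0f}")
print(f"power:    ${power_cost:,.0f} over {YEARS} years")
# With these assumptions, power is about 48% of TCO -- near the
# "up to half" upper bound the researchers cite.
print(f"power share of TCO: {power_cost / (hardware_cost + power_cost):.0%}")
```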

On the other hand, Google released a paper from one of its researchers that details the drawbacks of arrays of wimpy chips in certain situations. Wimpy-core systems can require applications to be specially written to run well across many slow cores, and the extra development cost can take a big bite out of the energy savings.
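
One way to see the problem is Amdahl’s law: if part of a job cannot be spread across cores, many slow cores stretch that serial part and can lose to fewer fast ones. Here is a quick sketch of that arithmetic, with an illustrative serial fraction and core speeds:

```python
# Amdahl's-law sketch: when a fraction of a job is inherently serial,
# slower "wimpy" cores stretch that serial part and cap the speedup.
# The serial fraction and relative core speeds are illustrative assumptions.

def completion_time(serial_fraction, core_speed, num_cores):
    """Relative job time: serial part runs on one core, the rest in parallel."""
    serial = serial_fraction / core_speed
    parallel = (1 - serial_fraction) / (core_speed * num_cores)
    return serial + parallel

# One brawny core at full speed vs. eight wimpy cores at a quarter speed.
brawny = completion_time(serial_fraction=0.2, core_speed=1.0, num_cores=1)
wimpy  = completion_time(serial_fraction=0.2, core_speed=0.25, num_cores=8)

print(f"brawny core: {brawny:.2f} (relative time)")
print(f"wimpy array: {wimpy:.2f} -- slower despite eight cores")
```

With a 20 percent serial fraction, the eight-node wimpy array finishes later than the single fast core, which is why rewriting software to shrink that serial fraction becomes a real development cost.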