Tech Briefing

By John Zyskowski


How microservers could change the data center

There’s a growing wind in the sails of microservers, a new type of data center computer that is extremely energy efficient and tailor-made for the cloud- and Internet-style workloads that are increasingly common at agencies and other big enterprises.

Dell recently joined some smaller players by introducing its first line of microservers, Intel has begun shipping the first of several processors designed specifically for microservers, and Facebook officials say they have big plans for the diminutive computers.

In a recent story we covered the reasons why microservers are expected to make a huge splash even though they buck one of the hottest trends in enterprise computing: the use of virtualization software to consolidate data processing chores onto fewer, more powerful servers. If you are interested in learning more about microservers, read on for links to key news items, case studies, analysis and technical discussions.

Microservers put a low-power microprocessor, like those designed for smartphones and tablets, on a single circuit board, then pack dozens or hundreds of those cards into one server cabinet that provides centralized power supply, cooling fans and network connections.
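The density argument behind that design can be sketched with some quick arithmetic. The node counts and wattages below are illustrative assumptions, not vendor specifications:

```python
# Back-of-the-envelope comparison of rack-level density and power draw.
# All figures are illustrative assumptions, not measured vendor specs.

def rack_summary(nodes_per_rack, watts_per_node, shared_overhead_watts):
    """Total node count and power draw for one rack, including the
    shared power-supply, fan and network overhead of the chassis."""
    total_watts = nodes_per_rack * watts_per_node + shared_overhead_watts
    return nodes_per_rack, total_watts

# Hypothetical traditional 1U servers: 40 per rack at ~300 W each
trad_nodes, trad_watts = rack_summary(40, 300, 500)

# Hypothetical microserver cards: 400 per rack at ~15 W each,
# sharing centralized power supplies and cooling
micro_nodes, micro_watts = rack_summary(400, 15, 1000)

print(trad_nodes, trad_watts)    # 40 nodes, 12500 W
print(micro_nodes, micro_watts)  # 400 nodes, 7000 W
```

Under these assumed numbers, the microserver rack hosts ten times the nodes on roughly half the power, which is the trade the approach is betting on.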

There are many opportunities for using microservers to drastically reduce data center operating costs, including:

* Web applications with high volumes of small discrete transactions, like user logins, searches, checking e-mail and simple Web page views.
* Running hosting services that provide a dedicated physical machine for each application or user.
* Creating grids or clusters in which multiple server nodes work in parallel on a specific task.
* Environments that need to reduce the energy consumption and physical footprint of their data center servers.

Some folks think microservers will eventually dominate most cloud data centers. John Treadway, global director of cloud computing solutions at Unisys, makes a persuasive case for microservers on his personal blog, CloudBzz. He predicts that microservers will replace bigger servers running virtualization in most commercial cloud data centers by 2018, with internal enterprise data centers on the same path, though a few years behind.

To see what a large-scale cloud data center packed to the gills with microservers looks like, watch the YouTube video from Data Center Knowledge taken inside French hosting company Online.net’s facility, which features early microservers built by Dell. The section showing the microservers begins about 1:20 into the video.

Facebook is one of the highest-profile players in the U.S. to endorse the microserver approach for large-scale data centers. In this article from PCWorld, Gio Coglitore, director of Facebook labs, lays out the rationale for the social networking giant’s plans to move to microservers, citing reasons that include energy efficiency, avoiding virtualization vendor lock-in and increasing system resiliency.

One of the better write-ups that clearly explains the value proposition for microservers comes from one of the industry’s earliest players, SeaMicro. It’s a case study about Mozilla, the group that organizes the development of the Firefox Web browser, and its use of microservers. One of the most interesting parts of the article describes how Mozilla officials calculated, among other cost and efficiency metrics, the energy required to perform a certain processing task, in this case an Internet HTTP request. They concluded that microservers used one-fifth the power per HTTP request of a traditional server.
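The energy-per-request arithmetic is easy to reproduce. The wattage and throughput figures below are hypothetical placeholders chosen only to illustrate a one-fifth ratio, not Mozilla’s actual measurements:

```python
# Energy per unit of work: joules per HTTP request = watts / (requests per second).
# Figures are hypothetical, chosen to illustrate a 5x efficiency gap.

def joules_per_request(power_watts, requests_per_second):
    """Energy consumed per request, in joules (watt-seconds)."""
    return power_watts / requests_per_second

traditional = joules_per_request(250, 500)  # 0.5 J per request
microserver = joules_per_request(50, 500)   # 0.1 J per request

print(traditional / microserver)  # 5.0 -> one-fifth the power per request
```

The point of the metric is that it ties power draw to useful output, so two systems with very different wattages can be compared on the work they actually deliver.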

Correlating power consumption with the work output of an IT system is a more advanced and meaningful way to calculate data center energy efficiency than the metrics CIOs now use most often. Last year I wrote a story about efforts to increase the use of these more sophisticated metrics.

If you really want to get into the weeds about the relative performance of different processor approaches and their suitability for varying types of workloads, there are a couple of good papers that size up those debates.

One paper from a group of researchers at Carnegie Mellon University evaluates the use of clusters of low-power chips, like those in microservers, deployed in what is called a Fast Array of Wimpy Nodes. The FAWN approach can be a much more energy-efficient option for many types of workloads, but not all. The researchers note that in large data centers, power costs account for up to half of the three-year total cost of owning a computer.
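A minimal cost model shows how electricity can climb toward that share of a three-year total cost of ownership. The hardware price, average wattage and electricity rate below are assumptions for illustration only:

```python
# Three-year TCO sketch: hardware purchase price vs. electricity.
# All inputs are illustrative assumptions, not survey data.

def energy_cost(avg_watts, years, dollars_per_kwh):
    """Electricity cost over the given period for a constant average draw."""
    hours = years * 365 * 24
    return (avg_watts / 1000.0) * hours * dollars_per_kwh

hardware_cost = 2000.0                  # hypothetical server purchase price
power_bill = energy_cost(400, 3, 0.10)  # ~400 W average, incl. cooling share

tco = hardware_cost + power_bill
print(round(power_bill), round(power_bill / tco, 2))  # 1051, 0.34
```

At these assumed rates electricity is already about a third of TCO; with higher power prices or cooling overheads the share approaches the half the researchers cite, which is why a chip that cuts watts per unit of work matters so much at scale.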

On the other hand, Google released a paper from one of its researchers that details the drawbacks of arrays of wimpy chips in certain situations. Wimpy-core systems can require software applications to be specially written to run on them, and the extra development cost can take a big bite out of the energy savings.

Posted by John Zyskowski on May 27, 2011 at 7:27 PM

