The next-generation data center: What's inside?
The latest data center designs are less about flash and more about substance.
The Internet is dotted with stories about cool, futuristic data center designs. There are photos of thatch-covered roofs and walls that reduce cooling costs, diagrams of modular enclosures that make it easy to add space to a data center (and take it away), and aerial photos of data centers covered with solar panels. However, these design features, while exciting and innovative, are outliers. The average data center isn’t going to be adding any of the above anytime soon. That’s not to say there aren’t great things coming soon to a data center near you. Below, experts provide a look into what’s innovative and mainstream in data centers today.
1. Flash storage. Memory and network interconnect capacity have followed Moore’s Law, doubling every 18 to 24 months, but storage, although gaining in capacity, is still shuffling along, constrained in many cases by input/output (I/O).
“The heads can only pick up so much information at one time,” said Arun Taneja, founder and consulting analyst at research firm Taneja Group. “Capacity keeps growing and giving us more density, but disks can only rotate so fast.”
Enter solid-state drives as a way to improve speed and reduce I/O bottlenecks. Some of the newer SSD arrays feature lower read/write latency and more than five times the read/write throughput of previous drives. In addition, one drive can replace multiple traditional drives in a data center, saving rack space, money and energy.
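As a rough back-of-the-envelope illustration of that consolidation math (the IOPS figures below are assumed, round numbers for the sketch, not vendor specs), a 15,000 RPM disk tops out at a couple of hundred random I/O operations per second because of head movement and rotation speed, while even a conservative enterprise SSD delivers tens of thousands:

```python
# Illustrative, round-number figures (assumptions, not vendor specs):
HDD_RANDOM_IOPS = 200      # roughly what one 15K RPM spindle can sustain
SSD_RANDOM_IOPS = 40_000   # a conservative enterprise SSD

# For an I/O-bound workload, one SSD stands in for this many spindles:
spindles_replaced = SSD_RANDOM_IOPS // HDD_RANDOM_IOPS
print(f"One SSD matches the random IOPS of roughly {spindles_replaced} disks")
```

The capacity picture differs, of course; the point is that when the bottleneck is I/O rather than raw space, a handful of SSDs can replace racks of spindles, which is where the rack-space and energy savings come from.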
2. Virtualized servers. Although CIOs have embraced virtualization, they need to use more of it in the data center, Taneja said. “Today, only 40 to 50 percent of data center workloads have been virtualized, and in my opinion, the most advanced data centers should be at 60 percent virtualized,” he said.
Virtualization helps IT reduce its overall data center footprint, increase efficiencies and streamline management. Most important, when configured correctly, data center virtualization allows for better uptime with some implementations reaching the mythical five nines (99.999 percent) without adding cost or physical redundancies.
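To put those nines in concrete terms, availability percentages translate directly into a budget of allowed downtime per year. A quick sketch (plain arithmetic, assuming a 365-day year):

```python
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_minutes_per_year(availability_pct: float) -> float:
    """Minutes of downtime per year permitted at a given availability."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

for pct in (99.9, 99.99, 99.999):
    print(f"{pct}% uptime allows {downtime_minutes_per_year(pct):6.2f} min/year of downtime")
```

Five nines leaves barely five minutes a year, which is why it normally demands redundant hardware; virtualization’s contribution is letting workloads migrate off failing or maintenance-bound hosts without taking an outage.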
3. Wider use of standardized racks. Data center and IT managers have been looking for and using wider, taller and deeper racks for some time now. The trend has been driven by need: Managers need to squeeze more servers into a finite piece of real estate. As a result, many racks seem to be taking the path of Jack’s beanstalk, growing straight up to heights of as much as 9 feet. One recent report from IMS Research says shipments of 48U racks will grow an average of 15 percent annually over the next five years; sales of standard-size racks are expected to grow only about 5 percent a year.
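Those annual rates compound. A one-line check of what the IMS Research projections imply over the five-year window:

```python
def growth_multiple(annual_rate: float, years: int) -> float:
    """Cumulative growth multiple from compounding an annual rate."""
    return (1 + annual_rate) ** years

print(f"48U rack shipments:      {growth_multiple(0.15, 5):.2f}x over five years")
print(f"Standard rack shipments: {growth_multiple(0.05, 5):.2f}x over five years")
```

At 15 percent a year, 48U shipments roughly double over the period, while standard-rack sales grow by only about a quarter.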
But there’s a problem with rogue, custom-designed racks: It can be difficult to move them into a new space. Standards from Facebook’s Open Compute Project might eliminate this problem, though. The organization detailed a plan in May 2012 called Open Rack, which will set a new standard for so-called hyperscale data center environments. In addition to a larger form factor — 21 inches as opposed to 19 within a 24-inch rack frame design — the new standard will feature busbars that supply 12-volt power to servers, eliminating the need for individual server power supplies.
4. Flywheel technology. Batteries are standard inside a data center. Used as backup after power fails and before generators take over, they require a dedicated connection to the electric grid, take up expensive floor space and generate plenty of heat, even those that are highly efficient. Flywheel technology can eliminate those batteries by storing energy kinetically in a spinning rotor and converting it back to electricity the moment utility power fails.
“It’s very cool and very green,” said Darin Stahl, a lead analyst at Info-Tech Research Group. “It’s like Fred Flintstone technology: big static wheels that generate enough energy to hold you until the generators come online.” And because they’re significantly smaller than traditional UPS technology, they take up less space and require less electricity and cooling, he said. “They let you get rid of banks and banks of batteries.”
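The physics behind those “big static wheels” is the kinetic energy of a spinning rotor, E = ½Iω². A minimal sketch with hypothetical rotor parameters (the mass, radius, speed and load below are round-number assumptions for illustration, not any vendor’s specs):

```python
import math

def flywheel_energy_joules(mass_kg: float, radius_m: float, rpm: float) -> float:
    """Kinetic energy of a solid-disk rotor: E = 1/2 * I * w^2, with I = 1/2 * m * r^2."""
    inertia = 0.5 * mass_kg * radius_m ** 2   # solid-disk moment of inertia
    omega = rpm * 2 * math.pi / 60            # angular velocity in rad/s
    return 0.5 * inertia * omega ** 2

# Hypothetical 600 kg, 0.5 m radius rotor spinning at 10,000 RPM:
energy_j = flywheel_energy_joules(600, 0.5, 10_000)
print(f"Stored energy: {energy_j / 1e6:.1f} MJ")

# How long that bridges an assumed 100 kW critical load (ignoring losses):
print(f"Ride-through at 100 kW: {energy_j / 100_000:.0f} s")
```

Even a modest rotor stores tens of megajoules, and since standby generators typically need only seconds to come online, seconds-to-minutes of ride-through is ample, with no battery banks to house, cool or replace.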
From a capital expense perspective, the technology might cost more initially, but from an operating expense standpoint, there are real savings to be had.