Agencies Gain Efficiency Using Storage Management

SPECIAL REPORT: Storage Management


By Barbara DePompa, 1105 Government Information Group Custom Media

Within the federal government, most agencies and departments are quite accustomed to dealing with enormous amounts of information that must be efficiently managed and maintained, no matter its function or form (structured or unstructured, classified or open for public consumption).

According to industry experts, a wide array of solutions is available to help improve efficiency, increase automation and even reduce costs.

Data compression, for example, is critical for federal IT organizations seeking greater storage efficiency. Compression can be applied to primary disk storage to improve efficiency and reduce the need to continuously purchase additional disk capacity. Agencies should consider solutions such as Storwize, whose appliance sits between servers and primary storage devices and reduces the space required to store information on top-tier storage. What makes Storwize attractive, according to Curtis Breville, senior analyst for storage management at Boulder, Colo.-based Enterprise Management Associates, is that it has no negative impact on performance.
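
To illustrate the general principle (not Storwize's proprietary method), here is a minimal Python sketch using the standard zlib library; an inline appliance performs this kind of reduction transparently between server and array. The sample data is hypothetical.

```python
import zlib

def compress_block(data: bytes, level: int = 6) -> bytes:
    """Compress a block of data before it is written to primary storage."""
    return zlib.compress(data, level)

# Hypothetical sample: highly repetitive data, as is common in logs and records.
original = b"transaction-record;" * 10_000
compressed = compress_block(original)

ratio = len(compressed) / len(original)
print(f"Original: {len(original):,} bytes")
print(f"Compressed: {len(compressed):,} bytes ({ratio:.1%} of original)")
```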

Storage resource management (SRM) software is another help. When data sits in many locations and is replicated to a remote site for disaster recovery, what happens if information once considered classified is officially declassified? How do you go about declassifying every instance of that data? Breville maintains that SRM tools allow administrators to alter a file's priority, as well as who can view it. Once the security setting is changed, the tool changes that setting at every location where the file exists, he explained. The same holds true for file deletion. SRM tools recognize where files have been replicated; when information is deleted or restrictions are changed, they ensure the change is implemented wherever the file resides, which can be an enormous time saver.
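
A minimal sketch of that idea follows; the class and method names are hypothetical, not an actual SRM product's API. The key point is that the catalog tracks every replica of a file, so a single policy change fans out to all locations.

```python
class SRMCatalog:
    """Toy storage-resource-management catalog: maps each file to every
    location where a replica exists, so one policy change is applied
    everywhere at once."""

    def __init__(self):
        self.replicas = {}   # file_id -> list of storage locations
        self.policy = {}     # (file_id, location) -> classification

    def register(self, file_id, location, classification):
        self.replicas.setdefault(file_id, []).append(location)
        self.policy[(file_id, location)] = classification

    def reclassify(self, file_id, new_classification):
        """Change the classification at every replica, e.g. on declassification."""
        for loc in self.replicas.get(file_id, []):
            self.policy[(file_id, loc)] = new_classification

    def delete_everywhere(self, file_id):
        """Remove every replica when a file is deleted."""
        for loc in self.replicas.pop(file_id, []):
            self.policy.pop((file_id, loc), None)

catalog = SRMCatalog()
catalog.register("report-42", "primary-san", "classified")
catalog.register("report-42", "dr-site", "classified")
catalog.reclassify("report-42", "declassified")  # updated at both sites at once
```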

One of the hottest trends in storage management today is the advent of massive array of idle disks (MAID) subsystems, in which drives spin down when idle and spin back up when data is accessed. This technology can reduce energy costs and is considered an important option for agencies seeking green IT solutions. Breville reported that some large organizations “have already saved hundreds of thousands of dollars per month, simply by implementing MAID disk storage solutions that don't spin all of the time.”
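
The spin-down logic can be sketched in a few lines of Python; the 300-second idle threshold is an assumed policy, and the class below is a toy model, not any vendor's firmware.

```python
SPIN_DOWN_AFTER = 300.0  # seconds of inactivity before a drive spins down (assumed policy)

class MaidDrive:
    """Toy model of one drive in a MAID array: it spins down when idle
    to save power and spins back up on the next access."""

    def __init__(self):
        self.spinning = False
        self.last_access = 0.0

    def read(self, now: float) -> bytes:
        if not self.spinning:
            self.spinning = True   # spin-up happens here, at a latency cost
        self.last_access = now
        return b"data"

    def tick(self, now: float) -> None:
        """Called periodically; stop the platters once the drive has been idle."""
        if self.spinning and now - self.last_access > SPIN_DOWN_AFTER:
            self.spinning = False  # a stopped drive draws far less power

drive = MaidDrive()
drive.read(now=0.0)    # an access spins the drive up
drive.tick(now=400.0)  # 400s idle exceeds the 300s threshold
print(drive.spinning)  # False: the drive has spun down
```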

Another important technological trend is thin provisioning, which makes use of disk storage capacity that would otherwise sit idle, waiting for data. In federal organizations with thousands of databases and hundreds of terabytes (or more) of storage capacity, there are many situations in which 80% of a database's allocated storage sits idle. With thin provisioning, as storage requirements hit predetermined capacity levels, additional storage is added in small increments (5%, for example) to fulfill a database's requirement; in standard disk provisioning today, storage is added in increments of 50% or more. Thin provisioning can reduce the number of physical disk storage subsystems required and allow agencies to avoid the cost of additional devices.
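
As a rough sketch of the arithmetic, the Python function below grows capacity in 5% increments whenever usage crosses a 90% threshold; both numbers are illustrative assumptions, not a product default.

```python
def provisioned_capacity(used_tb: float, provisioned_tb: float,
                         threshold: float = 0.9, increment: float = 0.05) -> float:
    """Thin provisioning in miniature: when usage crosses the threshold
    (here 90% of provisioned capacity), grow capacity by a small increment
    (here 5%) instead of the 50%+ jumps of traditional provisioning."""
    while used_tb > provisioned_tb * threshold:
        provisioned_tb *= (1 + increment)
    return provisioned_tb

# A database using 10 TB: thin provisioning keeps the allocation close to
# actual use (about 11.3 TB here) rather than pre-allocating 50% extra.
print(round(provisioned_capacity(used_tb=10.0, provisioned_tb=8.0), 2))
```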

An increasing number of organizations are incorporating information lifecycle management (ILM) to help determine the storage tiers required for specific data. ILM differs from the other technological trends in that it's largely a process-based alternative: IT staff must query individual operating units about high-priority applications and charge back according to the amount of information stored on expensive disk storage platforms. Currently, Breville sees ILM as more buzzword than actual cure, but when properly implemented, ILM should reduce costs, ease management and ensure data is stored in the right place for the performance each application requires. “This solution takes legwork, and may require agencies to invest in outside consulting up front, which can eat into the cost benefits,” he explained.
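
The tiering-and-chargeback process can be made concrete with a short sketch. The per-terabyte rates and tier names below are invented for illustration; real chargeback schedules vary by agency.

```python
# Assumed per-terabyte monthly rates for three storage tiers (illustrative only).
TIER_RATES = {"tier1-fc": 90.0, "tier2-sata": 30.0, "tier3-archive": 8.0}

def assign_tier(priority: str) -> str:
    """Map an application's stated priority to a storage tier."""
    return {"high": "tier1-fc", "medium": "tier2-sata"}.get(priority, "tier3-archive")

def chargeback(app_usage: dict[str, tuple[str, float]]) -> dict[str, float]:
    """Bill each operating unit for the tier its data actually occupies."""
    return {app: round(TIER_RATES[assign_tier(priority)] * tb, 2)
            for app, (priority, tb) in app_usage.items()}

bills = chargeback({"payroll": ("high", 12.0), "web-logs": ("low", 40.0)})
print(bills)  # {'payroll': 1080.0, 'web-logs': 320.0}
```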

SATA disk storage solutions offer a low-cost alternative, but this type of storage is considered less reliable and not easily secured. The primary advantage of SATA disk storage is the ability to store a large amount of data, up to a petabyte of capacity in a single rack, requiring less space and costing less than a networked storage solution. SATA disk storage is currently recommended for low-priority applications, disk-to-disk backup or archival storage environments. “The performance difference between Fibre Channel and SATA solutions is negligible, and SATA costs less in terms of floor space requirements, heating and maintenance, but a key caveat for most federal audiences is the dearth of security protections available,” Breville explained.

To reduce hardware in data centers, storage virtualization is also an option. With SANs using InfiniBand for high-speed applications and iSCSI or Fibre Channel for most other requirements, each storage type requires its own network, including the switches, bridges and devices needed to run the SAN. Breville recommends a solution from Mellanox, which offers an appliance that takes information from the server and distributes data across various types of SAN storage, eliminating the 20 or 30 devices typically required for separate SANs. NetApp, meanwhile, has a front-end storage controller that lets administrators place EMC, Hitachi, NetApp and other storage components behind it, so one set of commands can manage different brands of storage devices.
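
The sketch below illustrates the single-interface, multiple-backends idea in the abstract. The routing table and function are hypothetical and reflect neither Mellanox's nor NetApp's actual products.

```python
# Hypothetical routing table for a virtualization layer that presents one
# interface while dispatching volumes to different back-end SAN transports.
BACKENDS = {
    "high-speed": "infiniband-san",
    "general":    "fibre-channel-san",
    "low-cost":   "iscsi-san",
}

def route_volume(volume: str, workload_class: str) -> str:
    """Pick a back-end SAN for a volume based on its workload class."""
    backend = BACKENDS.get(workload_class, "iscsi-san")  # default to cheapest
    return f"volume '{volume}' placed on {backend}"

print(route_volume("db01", "high-speed"))    # volume 'db01' placed on infiniband-san
print(route_volume("archive7", "low-cost"))  # volume 'archive7' placed on iscsi-san
```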

Data deduplication is an important technological trend as well: it replaces duplicate data with references to a single shared copy to save storage space, which matters given federal data protection and archiving demands. Deduplication technologies improve resource utilization beyond other consolidation efforts and also yield energy conservation advantages. Virtualization and deduplication, while at different stages of adoption, go hand in hand: deduplication can drive consolidation and footprint reduction in virtualized environments, and organizations can use it to reduce the capacity required to back up virtual machine images.
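
A minimal sketch of block-level deduplication, using content hashing, shows why backups of near-identical virtual machine images shrink so dramatically. The class and the sample data are illustrative, not any vendor's implementation.

```python
import hashlib

class DedupStore:
    """Toy block-level deduplication: each unique block is stored once,
    keyed by its content hash; duplicates become references to that copy."""

    def __init__(self, block_size: int = 4096):
        self.block_size = block_size
        self.blocks = {}  # sha256 digest -> block bytes (one shared copy)

    def write(self, data: bytes) -> list[str]:
        """Store data, returning the list of block references."""
        refs = []
        for i in range(0, len(data), self.block_size):
            block = data[i:i + self.block_size]
            digest = hashlib.sha256(block).hexdigest()
            self.blocks.setdefault(digest, block)  # keep only one copy per digest
            refs.append(digest)
        return refs

store = DedupStore()
# Two nearly identical VM images: the shared blocks are stored only once.
refs_a = store.write(b"A" * 8192 + b"unique-to-a")
refs_b = store.write(b"A" * 8192 + b"unique-to-b")
print(len(store.blocks))  # 3 unique blocks stored, though 6 were written
```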

E-mail archives, meanwhile, could prove less of a headache than forcing systems that were not designed for e-discovery to handle this increasingly important function. Options include systems from EMC, Hewlett-Packard, Mimosa Systems and Symantec, in addition to hosted services from Dell, Google's Postini and Proofpoint's Fortiva.

Solid state disk solutions may also yield savings. IBM has invested heavily in a nanotechnology drive that fits inside an iPod and holds a terabyte of information; the device, expected later this year, may be priced in the $200 range. Because solid state disk technology has traditionally been expensive, few organizations have implemented it widely so far, though IBM, EMC, NetApp and Sun each offer solid state technology inside disk storage arrays. For organizations seeking high-capacity storage and fast performance, however, solid state disk may be worth closer examination. The energy and maintenance savings of implementing 20 terabytes of storage in a single rack-mounted system, eliminating the space taken by 200-plus disk-based storage systems today, may be cost effective for customers who can gain superfast access in a smaller footprint, Breville explained. And because solid state technologies use no moving parts and generate no heat when not in use, Breville contends that “in a few years, people may wonder why anyone would purchase a spinning disk solution.”

Cloud storage is still considered a tricky option, as most federal organizations will likely have problems allowing external private organizations to manage public sector data, especially highly classified or sensitive files, according to Breville. While a number of service providers, including Amazon, Google and Microsoft, host storage for customers, federal auditors typically want to know where data resides and that it's fully protected, even if a disk drive fails.

In general, this means cloud providers must destroy non-working drives immediately at the storage location to ensure no data leaves the building.

“There's an enormous challenge involved in ensuring all policies, privacy, and security protections of data are the same as they would be in a federal data center,” Breville asserted.