5 cloud gotchas and how to avoid them
For systems integrators to bring a successful cloud solution to their customers, they must address critical issues around security, storage and systems architectures. Do you know how to avoid these five pitfalls?
Regardless of where federal agencies stand today on cloud computing, most systems integrators (SIs) who serve them agree that some form of cloud deployment on the federal infrastructure can deliver cost-efficiency and agility.
Key considerations for cloud computing include application appropriateness and organizational readiness, which SIs must explore with the agency. Other considerations revolve around sourcing: a public cloud infrastructure, where applications and storage are available to the general public; a private cloud infrastructure, which is single-tenant, used solely by the owner and offers hosted services; or a hybrid cloud infrastructure, which mixes both by providing and managing some resources in-house while others are provided externally.
Additionally, several key challenges remain in security, storage and system architectures for the cloud, and each creates opportunities for contractors. Vendors supplying the cloud infrastructure should provide answers to these issues so SIs serving the government can better manage IT operations in virtualized and cloud environments.
Challenges in virtualization support
Integrators often report great difficulty in deploying virtualized environments, and the challenge only escalates in cloud deployments. Specifically, consolidating many applications onto a single server increases the demand on memory utilization and performance, creating a bottleneck that limits the number of virtual machines (VMs) that can be provisioned on a single physical server. Additionally, running multiple VMs on each server creates a single point of failure: should the server go down, the outage affects more applications and users. Teams need to consider load-balancing strategies to ensure that the right combination of storage and server resources is selected for each application, as the sketch below illustrates. The same is true of the underlying infrastructure, which should provide mixed-workload support, modularity and high-availability features to ensure that data is accessible and distributed appropriately through dynamic balancing.
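To make the placement trade-off concrete, here is a minimal sketch of a memory-aware, first-fit-decreasing placement heuristic. The host names, VM sizes, headroom fraction and per-host VM cap are all hypothetical parameters invented for illustration; they simply show how consolidation density can be traded against the single-point-of-failure risk described above.

```python
# Hypothetical sketch: pack VMs onto hosts by memory demand, largest first,
# while reserving headroom for spikes and capping VMs per host to limit the
# blast radius of a host failure. All names and numbers are illustrative.
from dataclasses import dataclass, field

@dataclass
class Host:
    name: str
    mem_gb: float            # total physical memory
    headroom: float = 0.2    # fraction held back for spikes/failover
    max_vms: int = 15        # cap to limit single-point-of-failure impact
    vms: list = field(default_factory=list)
    used_gb: float = 0.0

    def fits(self, vm_gb: float) -> bool:
        usable = self.mem_gb * (1 - self.headroom)
        return len(self.vms) < self.max_vms and self.used_gb + vm_gb <= usable

    def place(self, vm_name: str, vm_gb: float) -> None:
        self.vms.append(vm_name)
        self.used_gb += vm_gb

def place_vms(vms: dict, hosts: list) -> dict:
    """Assign each VM to the first host with room, largest VMs first."""
    unplaced = {}
    for name, mem in sorted(vms.items(), key=lambda kv: -kv[1]):
        target = next((h for h in hosts if h.fits(mem)), None)
        if target:
            target.place(name, mem)
        else:
            unplaced[name] = mem   # signals a genuine capacity bottleneck
    return unplaced

hosts = [Host("esx-01", 256), Host("esx-02", 256)]
vms = {"siem-db": 96, "app-01": 48, "app-02": 48, "web-01": 16, "web-02": 16}
leftover = place_vms(vms, hosts)
for h in hosts:
    print(h.name, f"{h.used_gb}/{h.mem_gb} GB", h.vms)
print("unplaced:", leftover)
```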
On a practical note, the ease and speed of deploying a virtualized application is also important and can make or break a contractor's ability to win awards. Players competing for proof-of-concept (POC) projects perform much better when their deployments are up and running quickly.
Storage architectures that claim to be 'cloud-ready'
The challenge here is to avoid compromising storage utilization and performance with the inflexible, unresponsive architectures characteristic of older, legacy storage technologies. For example, one approach that works in typical IT infrastructures but not so well in cloud deployments is “short stroking,” which uses only a small amount of the capacity on a disk drive in order to increase input/output (I/O) responsiveness. Because this leaves so much space unused, short stroking forces a federal agency to buy many more drives than it needs and should be avoided. Another flawed yet common practice is combining relatively low-cost but low-performance Serial Advanced Technology Attachment (SATA) disks with massive caching; under sustained load, the cache might never empty and thus becomes a bottleneck itself.
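The capacity penalty of short stroking is easy to quantify. The back-of-the-envelope sketch below uses hypothetical figures (a 100 TB target on 2 TB drives, short-stroked to a quarter of each drive) to show how quickly the drive count, and the hardware spend, inflates:

```python
# Illustrative arithmetic only: how short stroking inflates drive counts.
# The capacity target and the fraction of each drive actually used are
# hypothetical inputs, not measurements from any agency deployment.
import math

def drives_needed(capacity_tb: float, drive_tb: float, used_fraction: float) -> int:
    """Drives required when only `used_fraction` of each drive is addressable."""
    effective_tb = drive_tb * used_fraction
    return math.ceil(capacity_tb / effective_tb)

target_tb, drive_tb = 100, 2.0
full = drives_needed(target_tb, drive_tb, 1.0)     # normal provisioning
short = drives_needed(target_tb, drive_tb, 0.25)   # short-stroked to outer 25%
print(f"full-capacity drives: {full}")    # 50
print(f"short-stroked drives: {short}")   # 200 -- four times the hardware
```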
SIs should look for storage architectures that provide multi-tenancy capabilities. Think of it like an apartment building: each individual residence has its own locked door and its own provisioned electricity and heat. Multi-tenancy, along with thin provisioning, clustering and autonomic self-management, optimizes the cost-efficiency and scalability of the cloud without the performance limitations. Multi-tenancy capabilities, ideal for cloud environments, split up a storage array in a highly manageable and secure way.
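The sketch below shows the thin-provisioning idea behind that apartment-building analogy. Each tenant sees its own provisioned volume (the locked apartment), but physical capacity is drawn from a shared pool only as data is actually written. The class and method names are invented for illustration, not taken from any vendor's product.

```python
# Minimal thin-provisioning sketch for a multi-tenant pool. Hypothetical
# names and sizes; real arrays add reclamation, alerts and QoS on top.
class ThinPool:
    def __init__(self, physical_gb: int):
        self.physical_gb = physical_gb
        self.allocated_gb = 0           # physical blocks actually consumed
        self.volumes = {}               # tenant -> provisioned/written sizes

    def provision(self, tenant: str, logical_gb: int) -> None:
        # Logical provisioning may oversubscribe the pool; no physical
        # space is consumed until the tenant writes data.
        self.volumes[tenant] = {"provisioned": logical_gb, "written": 0}

    def write(self, tenant: str, gb: int) -> None:
        vol = self.volumes[tenant]
        if vol["written"] + gb > vol["provisioned"]:
            raise ValueError(f"{tenant}: exceeds provisioned volume")
        if self.allocated_gb + gb > self.physical_gb:
            raise RuntimeError("pool exhausted -- time to add capacity")
        vol["written"] += gb
        self.allocated_gb += gb

pool = ThinPool(physical_gb=1000)
pool.provision("agency-a", 800)   # each tenant isolated in its own volume
pool.provision("agency-b", 800)   # 1,600 GB promised against 1,000 GB physical
pool.write("agency-a", 200)
pool.write("agency-b", 150)
print(f"physical capacity in use: {pool.allocated_gb}/{pool.physical_gb} GB")
```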
Costs and procurement policies that don’t leverage private/hybrid cloud scaling
Vendors whose answer to growth in cloud environments is simply to buy more boxes are missing the point, and the benefits, of a cloud infrastructure. They attempt to deliver cloud solutions using this “bolt-on” logic but are working with legacy, non-scalable architectures that lack the efficiencies needed to avoid future storage acquisitions. A true cloud system should be scalable, able to grow as needed, and able to tier storage autonomically without disruption. Effectively configured private or hybrid cloud deployments can seamlessly scale to encompass two, 10 or even 20 times the initial number of users without the corresponding 2x/10x/20x increase in equipment and human capital, as the rough comparison below illustrates. Short-sighted procurement policies, which force SIs to make decisions based solely on near-term costs, drive the wrong kind of purchasing behavior. Instead of implementing a converged cloud infrastructure, government customers end up creating disparate “islands of storage” that don’t scale or integrate well with the rest of the environment.
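For a sense of why near-term pricing misleads, consider the toy comparison below. The unit costs are entirely hypothetical, chosen only to show the shape of the curves: bolt-on spend grows linearly with users, while a converged, scale-out design pays more upfront but grows sublinearly per user.

```python
# Back-of-the-envelope comparison with entirely hypothetical unit costs.
# Not vendor pricing; it only illustrates linear vs. sublinear growth.
def bolt_on_cost(users: int, cost_per_user: float = 100.0) -> float:
    # Each growth step buys another standalone island of storage.
    return users * cost_per_user

def converged_cost(users: int, base: float = 50_000.0,
                   cost_per_user: float = 25.0) -> float:
    # Larger upfront converged platform, cheaper incremental growth.
    return base + users * cost_per_user

for users in (1_000, 10_000, 20_000):
    print(f"{users:>6} users  bolt-on ${bolt_on_cost(users):>12,.0f}"
          f"  converged ${converged_cost(users):>12,.0f}")
```

At small scale the bolt-on approach looks cheaper, which is exactly the signal a near-term-only procurement policy rewards; the gap reverses well before users grow 10x.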
Equipment not proven in cloud applications
Not every vendor has delivered a functional federal cloud storage infrastructure that is specifically designed to meet an agency’s needs, and installations can often take a year or more to fully prove their effectiveness. Nor is all equipment and technology purpose-built for the cloud. Instead, many vendors take the “bolt-on” approach that simply stacks storage technologies one on top of another. That method, however, is at odds with a converged infrastructure that seamlessly integrates storage with server and networking resources. Implementations that rely on well-established best practices and track records, deep federal pilot experience and the requisite federal certifications allow SIs to achieve the results demanded by government customers. By delivering an efficient converged infrastructure that eliminates silos and integrates technologies into shared pools of interoperable resources, SIs can enable agencies to simplify, integrate and automate their IT environments to meet changing business needs.
Hidden failures in security systems
FedRAMP certification and other government regulations that certify a vendor’s compliance are the tip of the iceberg when it comes to the vital information that must be stored securely. Most agencies have security information and event management (SIEM) software that maintains the repository of event information for review and historical analysis, ensuring accountability and consistency. SIEM systems must handle large volumes of data generated by the thousands of IT devices in use throughout every government agency. Additionally, the speed at which that data is created demands very high input/output operations per second (IOPS) from the underlying infrastructure. Integrators must therefore deliver a solution that performs a balancing act, providing both high capacity and high performance, enabling fast analysis of structured data and distributing it appropriately to maximize storage space and alleviate potential bottlenecks. SIs must also make sure the auditing infrastructure needed for vital documentation of agency activity doesn’t constrain storage performance by placing security demands on the system that it cannot support.
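A rough sizing exercise shows why both capacity and IOPS have to be planned together. The event rate, event size and I/O size below are illustrative assumptions, not figures from any agency SIEM deployment:

```python
# Rough SIEM sizing sketch: given a hypothetical event rate and event
# size, estimate sustained write IOPS and daily raw capacity so the
# performance and capacity tiers can be planned as one decision.
def siem_sizing(events_per_sec: int, event_bytes: int, io_size_bytes: int = 4096):
    bytes_per_sec = events_per_sec * event_bytes
    write_iops = bytes_per_sec / io_size_bytes    # sustained ingest IOPS
    tb_per_day = bytes_per_sec * 86_400 / 1e12    # raw daily capacity
    return write_iops, tb_per_day

# e.g., 50,000 devices each averaging one 500-byte event per second
iops, tb_day = siem_sizing(events_per_sec=50_000, event_bytes=500)
print(f"sustained write IOPS: {iops:,.0f}")   # ~6,100
print(f"daily ingest: {tb_day:.2f} TB")       # ~2.16 TB/day
```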
By being mindful of these possible cloud challenges, and addressing them as they arise throughout the deployment process, SIs can implement an agile cloud infrastructure for the agencies they serve. Throughout deployment, they should make certain that costs and procurement policies leverage private/hybrid cloud scaling, that the storage architecture provides multi-tenancy capabilities and that all security systems are working properly. With an enterprise-class converged infrastructure, government customers should be able to grow and manage their environments easily with only a handful of IT staff and a small but efficient IT budget.