Tech Briefing

By John Zyskowski

5 tech shockwaves still felt today

In an upcoming special anniversary issue of FCW, we will take a closer look at five information technologies or technology trends that have had a significant impact on government in the last twenty-five years – that would be back to 1987 for the mathematically challenged among us.

Impact can be measured by the degree to which a technology transformed how government works, how government serves citizens, or how agencies acquire and manage the IT resources needed to perform their missions. Moreover, we want to single out those technologies whose impact is still felt today in one or more of these areas.

From the following list of technologies, tell us which five you think had the greatest impact as defined above. And do you think we missed any?

Internet – There really isn’t an argument here, is there?

E-mail – Electronic messaging is such an integral yet glamour-less part of our current work lives, it’s hard to remember a time when we actually had to talk to each other or commit ink to paper to communicate. How much of the worker productivity gains in the past twenty-five years can be tied to e-mail?

Geographic information systems – Being able to answer the “where” in ways that are instantly understandable to the human eye (and thus brain) has opened the door to countless public policy and governance applications, from health to law enforcement and well beyond.

Global positioning system – So much of the U.S.’s modern military superiority, and all that has meant for the world, rests on this one technology. If that doesn’t work for you, just think of all the lost drivers there would be on the roads, contributing to global warming. 

Desktop computing – The democratization of computing began with the personal computer. Who needs those pompous twits in the IT department when you can just gin up something with the spreadsheet software purchased using petty cash? Desktop computers also got the great pendulum swinging between centralized and distributed computing, a dynamic that serves as a never-ending job stimulus plan for those previously mentioned hordes back in IT.

E-commerce – It started humbly with electronic data interchange (quick, X12 or EDIFACT?), got sexied up big time when the Internet took off, and is now just the way business gets done. 

Commoditization of computing – Originally designed for personal computers, commodity chips from Intel and others climbed the food chain and helped break the grip of traditional big system vendors in the datacenter. Enterprise computing has never been the same.

Mobile computing devices – This one is still very much a work in progress, but it’s nearly off the charts in potential impact on all three scores: how we work, how we serve citizens and how agencies manage their IT resources.

What do you think? What are your top five game changers?

Posted on Apr 05, 2012 at 7:28 PM | 1 comment


Taking stock of top 10 lists

Each October for the past five years, the consulting firm Gartner has issued its list of the top 10 strategic technologies for the coming year. Viewing those lists side by side presents an interesting perspective on the issues that presumably have occupied CIOs during a given time period.

Gartner defines a strategic technology as one with the potential for making a significant impact in the next three years. That impact might include a high potential for disruption to IT or business operations, the need for a major dollar investment or the risks associated with being late to adopt the technology.

How well do Gartner’s priorities match those at your agency? Green IT, which appeared for three years in a row, dropped off as an agenda item starting in 2011. Interestingly, security appears only once in five years — in the form of activity monitoring on the 2010 list. Has security really become that routine?

On other fronts, cloud computing first appeared on the 2009 list and has held a spot every year since. No surprise there. And of course, all things Web have also made perennial appearances, from “mashup and composite apps” in 2008 to “contextual and social user experience” in 2012.

Interestingly, the one technology with the most endurance has been analytics (earlier called business process modeling or business intelligence), with an uninterrupted run of appearances starting in 2008. The devices, hardware, software and infrastructure that CIOs manage might constantly change, but the main reason for fussing with it all remains: improving how organizations operate through the automation and analysis of information. Everything else in IT is a means to that end.

2012
Media tablets and beyond
Mobile-centric applications and interfaces
Contextual and social user experience
Internet of Things
App stores and marketplaces
Next-generation analytics
Big data
In-memory computing
Extreme low-energy servers
Cloud computing

2011
Cloud computing
Mobile applications and media tablets
Social communications and collaboration
Video
Next-generation analytics
Social analytics
Context-aware computing
Storage-class memory
Ubiquitous computing
Fabric-based infrastructure and computers

2010
Cloud computing
Advanced analytics
Client computing
IT for green
Reshaping the data center
Social computing
Security: Activity monitoring
Flash memory
Virtualization for availability
Mobile applications

2009
Virtualization
Cloud computing
Servers: Beyond blades
Web-oriented architectures
Enterprise mashups
Specialized systems
Social software and social networking
Unified communications
Business intelligence
Green IT

2008
Green IT
Unified communications
Business process modeling
Metadata management
Virtualization 2.0
Mashup and composite apps
Web platform
Computing fabric
Real-world Web
Social software
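
For readers who want to check those observations against the raw lists, here is a quick sketch in Python that tallies how often each recurring theme appears across the five years. The keyword-to-theme groupings (treating business process modeling, business intelligence and the various flavors of analytics as one analytics theme, for example) are my own shorthand for illustration, not Gartner's categories.

```python
from collections import Counter

# Gartner's top 10 strategic technologies, 2008-2012, as listed above.
gartner = {
    2008: ["Green IT", "Unified communications", "Business process modeling",
           "Metadata management", "Virtualization 2.0", "Mashup and composite apps",
           "Web platform", "Computing fabric", "Real-world Web", "Social software"],
    2009: ["Virtualization", "Cloud computing", "Servers: Beyond blades",
           "Web-oriented architectures", "Enterprise mashups", "Specialized systems",
           "Social software and social networking", "Unified communications",
           "Business intelligence", "Green IT"],
    2010: ["Cloud computing", "Advanced analytics", "Client computing", "IT for green",
           "Reshaping the data center", "Social computing",
           "Security: Activity monitoring", "Flash memory",
           "Virtualization for availability", "Mobile applications"],
    2011: ["Cloud computing", "Mobile applications and media tablets",
           "Social communications and collaboration", "Video",
           "Next-generation analytics", "Social analytics", "Context-aware computing",
           "Storage-class memory", "Ubiquitous computing",
           "Fabric-based infrastructure and computers"],
    2012: ["Media tablets and beyond", "Mobile-centric applications and interfaces",
           "Contextual and social user experience", "Internet of Things",
           "App stores and marketplaces", "Next-generation analytics", "Big data",
           "In-memory computing", "Extreme low-energy servers", "Cloud computing"],
}

# Rough keyword-to-theme groupings, chosen for this illustration only.
themes = {
    "analytic": "analytics", "business intelligence": "analytics",
    "business process modeling": "analytics",
    "cloud": "cloud", "mobile": "mobile", "tablet": "mobile",
    "social": "social", "green": "green IT", "virtualization": "virtualization",
}

counts = Counter()
for year, entries in gartner.items():
    seen = set()
    for entry in entries:
        for keyword, theme in themes.items():
            if keyword in entry.lower():
                seen.add(theme)
    counts.update(seen)  # count each theme at most once per year

for theme, years in counts.most_common():
    print(f"{theme}: {years} of 5 years")
```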

Do you think the gurus at Gartner have missed anything over the years? Or perhaps made too much of a technology that wasn’t really as big a deal as they thought?

Posted on Feb 21, 2012 at 7:27 PM | 0 comments


Disaster recovery goes virtual

As federal technology executives gain experience using virtualization technology to reduce the number of physical servers eating up space and power in their datacenters, many are starting to discover that virtualization can also offer similar efficiency and cost-cutting benefits for their business continuity capabilities.

Mike Rosier, senior systems administrator at Fermi National Accelerator Laboratory, explains how he and his colleagues are using virtualization to create a more resilient IT infrastructure for the lab for a fraction of the cost of traditional business continuity options.

Federal Computer Week: Can you provide an overview of the general use of server virtualization at your organization?

Mike Rosier: At Fermilab, we've been using modern server virtualization technologies for over 5 years. In fact, I'm sure we were utilizing earlier implementations back in our mainframe days.

Some of the early reasons we decided to invest in virtualization were to address power and cooling issues in our computer rooms. This was at a time when we were trying to keep up with the growing demand for development and test systems. The procurement costs for dedicated physical servers were also eating into our yearly budgets.

At this point in time, we've migrated between 60 and 70 percent of the physical systems that we originally identified as being good candidates for virtualization. As virtualization technologies continue to improve, we'll look to identify even more systems that may have originally been excluded from our list.

We’re supporting a wide variety of systems as virtual machines, including those used for test, development, integration, and production environments. Some of these include web servers, file servers, custom application servers, data acquisition systems, email servers, monitoring systems, authentication systems, terminal servers, and print servers.

In recent years, we’ve significantly reduced the number of new physical system purchases and now consider virtualization as a first option. Although it might not be a perfect match every time, we’re seeing fewer systems that do not make sense to set up as virtual machines.

FCW: Was using virtualization to support business continuity objectives part of the original impetus and business case for server virtualization, or was it a second stage objective?

Rosier: For the most part, using virtualization to support business continuity objectives has been a second-stage objective until recently. As our virtual infrastructure continues to grow and mature, we see more and more customers looking to virtualization as a way to avoid costly clustering solutions while still getting quick restoration of service after a hardware failure or data loss. As service providers have become more aware of the capabilities virtualization can provide, we’ve spent just as much time discussing backup/replication options and failover strategies as we spend discussing virtual machine sizing and application-specific requirements.

FCW: Can you describe with a little more technical detail how virtualization supports business continuity objectives?

Rosier: In order to describe how virtualization supports business continuity objectives, it helps to understand just what those objectives are in your environment. Some customers require their systems to be available 24x7, while others are satisfied with 8x5. Not only do systems need to be available, but they also need to perform adequately.

Virtualization can help you meet those objectives by letting you focus more of your effort on configuring a relatively small number of redundant systems capable of providing enough failover capacity to weather various types of outages. In our environment, we’re using technologies such as [network interface controller] teaming, redundant storage adapters and paths, live virtual machine migration, full virtual machine image and file-level backups, and cloning/replication. We’re also utilizing multiple data centers with separate power/cooling feeds to meet our business continuity objectives.
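
Rosier doesn't name a specific hypervisor platform, but to make the live-migration piece concrete, here is a minimal sketch using the open-source libvirt Python bindings to move a running guest between two KVM hosts. The host and guest names are hypothetical, and the example assumes shared storage, so only memory and device state travel over the wire.

```python
import libvirt  # requires the libvirt-python bindings and a KVM/QEMU environment

SOURCE_URI = "qemu+ssh://host-a.example.gov/system"  # hypothetical source host
DEST_URI = "qemu+ssh://host-b.example.gov/system"    # hypothetical destination host
GUEST_NAME = "web-server-01"                         # hypothetical guest

# Open connections to both hypervisors.
src = libvirt.open(SOURCE_URI)
dst = libvirt.open(DEST_URI)

try:
    dom = src.lookupByName(GUEST_NAME)

    # VIR_MIGRATE_LIVE keeps the guest running while its memory is copied
    # to the destination host; with shared storage, the disks never move.
    dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE)
    print(f"{GUEST_NAME} is now running on {DEST_URI}")
finally:
    src.close()
    dst.close()
```

Commercial platforms expose the same capability through their own tools (vMotion in VMware's case), but the idea is identical: the guest keeps running while its state is copied to another physical host.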

Virtualization has given us the ability to migrate workloads from one building to another without impacting production operations.

FCW: How does using virtualization for business continuity compare technically and cost-wise to prior approaches for achieving availability objectives?

Rosier: When you compare the use of virtualization technologies to prior approaches for achieving availability objectives, you'll quickly notice how simple it can be to achieve server, storage and network redundancy. With today's technology, you can also easily achieve data center redundancy using fairly low cost solutions compared to what was available in the past.

Some of the earlier solutions providing business continuity required costly clustering software or hardware, and specific knowledge of how each of those solutions functioned in order to quickly recover a workload onto a different system. With the advancement of virtualization technologies, it becomes easier to provide the ability to recover from a hardware failure or to separate certain virtual machines from each other onto different physical servers. Less complexity generally translates into greater reliability.

Virtualization allows you to achieve high availability for a greater number of systems for a fraction of the cost if you consider what it might take to provide “like” hardware for each of your systems. Since the physical server hardware is often abstracted from the guest operating systems, most virtualization platforms make it easy to automatically restart or keep a guest running after a full system failure.

In some cases, we've been able to take a bit of a hybrid approach to providing business continuity for key applications. Mixing virtual machines and dedicated physical servers into a single cluster can be an option that not only saves on hardware costs, but also gives you a foot in each door if vendor certification is a concern. In many cases, devices such as load balancers can certainly function for both virtual and physical systems belonging to the same cluster.

FCW: What pitfalls would you warn others to avoid when using virtualization to support continuity objectives?

Rosier: You should always be prepared to discuss your strengths and weaknesses. If you find shortcomings in your environment, be sure to investigate, identify, and prioritize cost-effective solutions that can be easily integrated into your virtual infrastructure. Virtualization technologies continue to evolve rapidly, so make sure you keep up with industry trends before making any large, strategic purchases.

Make sure your customers and management chain have a clear understanding of what your capabilities are and what you're protected against when failures occur. For example, high availability means different things to different people. Sometimes it means no downtime and sometimes it means minimal downtime. Make sure this is clear up front. You should simulate failures and test your resiliency from time to time. You certainly don't want to find out that you're not protected against something you invested in heavily to avoid. It could reflect poorly on you and, by extension, your organization.

Depending on the size and structure of your organization, you may need to engage members of other groups to help you meet your business continuity objectives. Don't miss this step! Unless you have a clear understanding of all the resources your virtual infrastructure is dependent on or interacts with, there's a good chance you'll make assumptions that can cost you time and money in the future.

For example, if you purchased iSCSI storage arrays as a cost-effective way to provide storage to a new data center, you might soon learn that there are network switches in your path that do not support or are not configured to support jumbo frames, which can be a requirement for certain workloads. Or, maybe you discover that your fibre channel switches and servers might support 8Gb connections, but your fiber [cables] support less than half of that in a reliable manner.
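
Rosier's jumbo-frame example is easy to verify before a cutover. The sketch below (mine, not Fermilab's) sends pings with the don't-fragment bit set and a jumbo-sized payload toward a hypothetical storage array address; if any switch in the path is limited to a standard 1,500-byte MTU, the probes fail instead of silently fragmenting. It assumes a Linux host with the iputils version of ping.

```python
import subprocess

ARRAY_IP = "192.0.2.50"  # hypothetical iSCSI array address (documentation range)
MTU = 9000               # desired jumbo-frame MTU
PAYLOAD = MTU - 28       # subtract 20 bytes of IP header and 8 bytes of ICMP header

# On Linux, "-M do" sets the don't-fragment bit and "-s" sets the payload size,
# so the probe only succeeds if every hop can carry a full jumbo frame.
result = subprocess.run(
    ["ping", "-c", "3", "-M", "do", "-s", str(PAYLOAD), ARRAY_IP],
    capture_output=True, text=True,
)

if result.returncode == 0:
    print(f"End-to-end path to {ARRAY_IP} carries {MTU}-byte frames.")
else:
    print("Jumbo frames are NOT supported end to end:")
    print(result.stdout or result.stderr)
```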

As they say, “The devil is in the details,” so try to learn those details by communicating with the experts in each area before you purchase and deploy a technology intended to address business continuity objectives.

Posted on Jan 20, 2012 at 7:27 PM | 1 comment


How microservers could change the data center

There’s a growing wind in the sails of microservers, a new type of datacenter computer that is extremely energy efficient and tailor-made for the cloud- and Internet-style workloads that are increasingly common at agencies and other big enterprises.

Dell has joined some smaller players by introducing its initial line of microservers, Intel has begun shipping the first of several processors designed specifically for use in microservers, and Facebook officials say they have big plans for the diminutive computers.

In a recent story, we covered the reasons why microservers are expected to make a huge splash even though they buck one of the hottest trends in enterprise computing: the use of virtualization software to consolidate data processing chores onto fewer, more powerful servers. If you are interested in learning more about microservers, read on for some links to key news items, case studies, analysis and technical discussions.

Microservers put a low-power microprocessor, like those designed for smartphones and tablets, on a single circuit board, then pack dozens or hundreds of those cards into one server cabinet that provides a centralized power supply, cooling fans and network connections.

There are many opportunities for using microservers to drastically reduce datacenter operating costs, including:

* Web applications with high volumes of small discrete transactions, like user logins, searches, checking e-mail and simple Web page views.
* Running hosting services that provide a dedicated physical machine for each application or user.
* Creating grids or clusters in which multiple server nodes work in parallel on a specific task.
* Environments that need to reduce the energy consumption and physical footprint of their data center servers.

Some folks think microservers will eventually dominate most cloud datacenters. John Treadway, global director of cloud computing solutions at Unisys, makes a persuasive case for microservers on his personal blog, CloudBzz. He predicts that microservers will replace bigger servers running virtualization in most commercial cloud datacenters by 2018, with internal enterprise datacenters on the same path, though a few years later.

To see what a large-scale cloud datacenter packed to the gills with microservers looks like, click on the YouTube video available on this webpage from Data Center Knowledge. It’s taken inside French hosting company Online.net’s facility and features early microservers built by Dell. The section showing the microservers begins about 1:20 into the video.

Facebook is one of the highest-profile players in the U.S. to endorse the microserver approach for large-scale data centers. In this article from PCWorld, Gio Coglitore, director of Facebook labs, lays out the rationale for the social networking giant’s plans to move to microservers, with reasons that include energy efficiency, avoiding virtualization vendor lock-in and increasing system resiliency.

One of the better write-ups that clearly explains the value proposition for microservers is available from one of the industry’s earliest players, SeaMicro. It’s a case study about Mozilla, the group that organizes the development of the Firefox Web browser, and its use of microservers. One of the most interesting parts of the article describes how Mozilla officials calculated, among other cost and efficiency metrics, the energy required to perform a certain processing task, in this case an Internet HTTP request. They concluded that microservers used one-fifth as much power per HTTP request as a traditional server.
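
The arithmetic behind a power-per-request comparison is simple to reproduce. Here is a sketch with made-up figures (they are not Mozilla's or SeaMicro's published numbers) showing how server wattage and request throughput combine into joules per request.

```python
# Hypothetical figures for illustration only.
traditional = {"watts": 400.0, "requests_per_sec": 5_000.0}      # one conventional server
microserver = {"watts": 1_600.0, "requests_per_sec": 100_000.0}  # one loaded chassis

def joules_per_request(server):
    # Power in watts is joules per second, so dividing by requests per second
    # gives the energy spent on each request.
    return server["watts"] / server["requests_per_sec"]

trad = joules_per_request(traditional)
micro = joules_per_request(microserver)
print(f"traditional server:  {trad:.4f} J/request")
print(f"microserver chassis: {micro:.4f} J/request")
print(f"microserver energy per request: {micro / trad:.1%} of the traditional server's")
```

With these illustrative inputs, the microserver chassis comes out at 20 percent of the energy per request, the same one-fifth ratio the case study reports.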

Correlating power consumption with the work output of an IT system is a more advanced and meaningful way to calculate datacenter energy efficiency than the metrics CIOs now use most often. Last year I wrote a story about efforts to increase the use of these more sophisticated metrics.

If you really want to get into the weeds about the relative performance of different processor approaches and their suitability for varying types of workloads, there are a couple of good papers that size up those debates.

One paper from a group of researchers at Carnegie Mellon University evaluates the use of clusters of low-power chips, like those in microservers, deployed in what is called a Fast Array of Wimpy Nodes. The FAWN approach can be a much more energy-efficient option for many types of workloads, but not all. The researchers note that the power costs of large datacenters can account for up to half of the three-year total cost of owning a computer.
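
To see why that figure is plausible, here is a back-of-the-envelope three-year cost split for a single always-on server, using hypothetical numbers that are not taken from the FAWN paper.

```python
# Hypothetical per-server figures, for illustration only. A PUE of 2.0 means
# every watt consumed at the server costs another watt in cooling and power
# distribution overhead.
server_price = 3_000.0      # purchase price ($)
server_draw_kw = 0.4        # average draw at the server (kW)
pue = 2.0                   # power usage effectiveness of the facility
electricity_per_kwh = 0.12  # utility rate ($/kWh)
hours = 24 * 365 * 3        # three years of continuous operation

energy_cost = server_draw_kw * pue * electricity_per_kwh * hours
total_cost = server_price + energy_cost

print(f"3-year energy cost: ${energy_cost:,.0f}")
print(f"3-year total cost:  ${total_cost:,.0f}")
print(f"energy share of 3-year TCO: {energy_cost / total_cost:.0%}")
```

With those inputs, energy lands at roughly 46 percent of the three-year total, which is why shaving watts per unit of work is worth so much engineering effort.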

On the other hand, Google released a paper from one of its researchers that details the drawbacks in certain situations of arrays of wimpy chips. What happens is that wimpy-core systems can require software applications to be specially written to run on them, resulting in extra development costs that can take a big bite out of the energy savings.

Posted on May 27, 2011 at 7:27 PM | 0 comments


Looking for help from veterans (again)

We’re working on a story for FCW about the Department of Veterans Affairs’ “Blue Button” Web application, which allows veterans to download their personal health information from the department’s MyHealtheVet site.

We’re looking for veterans who have used the VA’s Blue Button to share their opinions about the application’s usefulness with the FCW community. If you’ve given this application a try, you can use the Comment button below to tell us about your experience.

One veteran who checked out the application and wrote about it on a blog last fall reported being distinctly underwhelmed. “Here’s the cream that floats to the top, the icing on this cake, the best of the best; If you download and install the Blue Button to your personal computer, you will be able to securely access and download and print and share all the data that you yourself put in to the system,” wrote Jim Strickland, a veterans’ advocate.

Since then, the Centers for Medicare and Medicaid Services launched its own Blue Button application on its MyMedicare.gov website. That feature lets 47 million Medicare beneficiaries view, download and print their medical records.

Posted on Apr 08, 2011 at 7:27 PM | 0 comments


Can we avoid telework train wrecks?

Finally, thanks in no small part to the recent Telework Enhancement Act, it looks like a lot more government offices will be giving telework a try. Previously resistant managers are coming on board (for the moment, anyway), identifying positions for telework eligibility, dealing with equipment needs, and developing agreements about employee performance and expectations.

Of course, most telework programs start with the best of intentions, but not all march on to meet great success. Employees who abuse the telework privilege with lackluster performance hurt productivity and can poison office morale. They can also jeopardize management support for the telework program. Sometimes managers have to work with poorly conceived telework policies, so they lack tools that could help them bring wayward employees in line.

So, tell us, what are some of the mistakes that employees or their managers can make with telework? And what can they do to avoid falling into the same traps?

Readers who have commented on past FCW stories about telework have mentioned some of the problems that can arise.

One mistake mentioned is the teleworker who isn’t responsive to communications from managers and co-workers. “I have been constantly frustrated and so have others in my division when you try to contact (phone/email) a teleworker for immediate answers/assistance and they do not respond quickly,” wrote one reader.

And what can managers do for their part to make telework a success? One reader said managers should give telework a chance but should also be ready to rein it in if it’s not working. “A manager must immediately send a clear signal and not give so much as 1/64th of an inch but yank the privilege upon the slightest infraction. When employees know that they will not be allowed to take advantage of a given situation, they quickly fall in line.”

But some managers don’t have this kind of power. They talk about being hamstrung by policies that don’t allow them to revoke telework privileges, even when some employees are clearly abusing them. “As a manager I should have the authority to approve or deny telework should there be an upcoming holiday,” writes one reader. “Employees go through the calendar and always telework prior to and following a holiday — [that] should not be allowed.”

So what do you think? What are the problems that can undermine telework programs, and how can they be avoided? What kinds of policies do managers need to make telework successful? Please share your comments below.

Posted on Mar 11, 2011 at 7:27 PM | 2 comments

