How to better build T&E into the acquisition process
- By Michael Wright
- Aug 22, 2014
At a July conference hosted by the National Defense Industrial Association, Principal Deputy Assistant Secretary of Defense Darlene Costello addressed the fact that conducting testing and defining project requirements earlier in the acquisition cycle were top priorities in recent guidance provided to industry by the Department of Defense (DoD).
The guidance suggests an increased focus on software testing and evaluation during the government acquisition process, and a recognition that late or insufficient testing can undermine contractors' ability to meet agency expectations and to operate efficiently and cost-effectively.
One doesn’t have to dig deep to uncover the consequences of software and websites not being properly tested until it is too late. In a hearing before the House Energy and Commerce Committee, executives with the primary IT contractors for the Healthcare.gov website indicated that comprehensive testing of the insurance exchange began just a couple of weeks – rather than months – before the public launch, and that testing overall, in their opinion, was insufficient.
While the trials and tribulations of Healthcare.gov were more public than most government IT launches, its experience reflects a broader pattern: software testing and evaluation are often relegated to an afterthought until after program implementation has begun and there is something to deliver. Delayed or insufficient software testing is damaging for the contractor, the agency and, ultimately, the citizen. Given the renewed focus by the DoD and civilian agencies on software testing and evaluation, there are several strategies government contractors and Federal IT providers should consider in order to build testing and evaluation into the acquisition process.
Build a repeatable testing and evaluation process
Truth be told, any software application built for and delivered to a government agency is a candidate for testing. There are several different types of testing (unit, functional, integration, performance, etc.) but at a minimum, a custom software application needs to be tested for both functionality and performance, as does a website or mobile app.
The key to a repeatable testing process starts with precise requirements. As requirements change during the lifecycle of a project, controlling the changes and communicating them effectively to the test teams ensures that testers stay in line with the most up-to-date requirements that need to be validated.
Precision comes from understanding the user and system requirements early in the lifecycle. Without that early understanding, you inevitably introduce rework costs. When requirements and other lifecycle artifacts (e.g., change requests, code, deliverables) must change, controlling those changes across release cycles with defined workflows helps secure stakeholder acceptance. Once you reach a testable state, validating requirements means that tests have been executed and the user and system requirements have been verified.
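The traceability described above can be sketched in a few lines of code. This is a minimal, illustrative model, not any specific requirements-management tool's API: each test is linked to a requirement and to the requirement version it was written against, so a report can flag requirements that are untested, validated by stale tests, or failing.

```python
# Illustrative sketch of requirements-to-test traceability.
# All class and field names here are assumptions for demonstration.
from dataclasses import dataclass

@dataclass
class Requirement:
    req_id: str
    description: str
    version: int = 1  # bumped whenever the requirement changes

@dataclass
class TestCase:
    test_id: str
    covers: str       # requirement ID this test validates
    req_version: int  # requirement version the test was written against
    passed: bool = False

def coverage_report(requirements, tests):
    """Flag each requirement as untested, stale, failing, or validated."""
    report = {}
    for req in requirements:
        linked = [t for t in tests if t.covers == req.req_id]
        if not linked:
            report[req.req_id] = "NOT TESTED"
        elif any(t.req_version < req.version for t in linked):
            report[req.req_id] = "STALE TEST (requirement changed)"
        elif all(t.passed for t in linked):
            report[req.req_id] = "VALIDATED"
        else:
            report[req.req_id] = "FAILING"
    return report

reqs = [Requirement("REQ-1", "User can log in", version=2),
        Requirement("REQ-2", "Report export to CSV", version=2)]
tests = [TestCase("TC-1", covers="REQ-1", req_version=2, passed=True),
         TestCase("TC-3", covers="REQ-2", req_version=1, passed=True)]
print(coverage_report(reqs, tests))
```

A report like this makes the "stale test" case visible: REQ-2 changed after its test was written, so a passing result no longer counts as validation until the test is updated.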
Improve understanding of which requirements to test
In September 2013, when Undersecretary of Defense for Acquisition, Technology and Logistics Frank Kendall outlined Better Buying Power 2.0 – the DoD initiative to drive continuous improvement in defense acquisition programs – he spoke to the need to achieve affordable programs and control costs throughout product lifecycles.
Although application reliability is important, an application does not need to be fully retested every time a change is made. For testing and evaluation to play a beneficial role in controlling product lifecycle costs, contractors must gain an understanding of which requirements to test. Program time and cost overruns are often caused by insufficient testing and a poor understanding of which requirements to test, and contractors have no choice but to drive greater efficiency with program deliveries.
In a perfect world, you would have the time and resources to test even the minutest update on every device and carrier network, but with today’s short application lifecycle, you shouldn’t even try. If your application is transactional, has high traffic or acts as your external touch point, then you’ll need to make sure that it works all the time and on all popular devices. But, if you are fixing a bug with a non-critical component of an application, you can reduce the testing for every implemented change.
One way to determine what components of an application need to be tested is to implement site analytics. This will provide informed insight into real-life usage and help evolve your testing over time. What was important at the onset could change as other problems arise, so this will allow you to amend your test plan as needed.
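The analytics-driven approach above can be illustrated with a short sketch. The data and function names are hypothetical: given usage counts per application component and a fixed testing budget, the highest-traffic components get tested first, and the ranking shifts as the analytics shift.

```python
# Hedged sketch of risk-based test selection driven by usage analytics.
# The usage figures and test names are made up for illustration.
def prioritize_tests(usage_counts, test_suite, budget):
    """Select up to `budget` tests, highest-traffic components first."""
    ranked = sorted(test_suite,
                    key=lambda t: usage_counts.get(t["component"], 0),
                    reverse=True)
    return [t["name"] for t in ranked[:budget]]

usage = {"checkout": 90_000, "search": 40_000, "help_pages": 1_200}
suite = [{"name": "test_checkout_flow", "component": "checkout"},
         {"name": "test_search_results", "component": "search"},
         {"name": "test_help_rendering", "component": "help_pages"}]

print(prioritize_tests(usage, suite, budget=2))
# -> ['test_checkout_flow', 'test_search_results']
```

Rerunning the same selection against next month's analytics is what lets the test plan evolve with real-life usage rather than with the original assumptions.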
Leverage cloud infrastructure and automation to deliver software testing
Tight agency budgets can provide a convenient yet ill-advised excuse to skip testing, based on the assumption that these processes are time consuming and costly. The irony, of course, is that effective testing can save agencies millions of dollars by correcting an issue before it becomes a program failure.
Building time into the application lifecycle to test an application increases the chance of catching and correcting bugs before they negatively impact the user. This prevents problems and saves time in the long run. Another cost-saving measure is to reuse or automate tests as you continue to build your application. This allows a contractor to record a test once and replay it many times, increasing coverage without increasing the required working hours. Automating manual steps in the testing process reduces manpower costs and frees up cycles for higher-quality testing.
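The record-once, replay-many idea can be sketched as follows. This is a stub, not a real automation tool: a recorded script is just a list of steps, and the same recording is replayed unchanged across several hypothetical target environments, so each new environment adds coverage without adding manual test-writing hours.

```python
# Sketch of "record once, replay many" test automation.
# Steps and environment names are illustrative assumptions.
recorded_steps = [
    ("open", "/login"),
    ("type", "username"),
    ("type", "password"),
    ("click", "submit"),
    ("assert_text", "Welcome"),
]

def replay(steps, environment):
    """Replay one recorded script against one target environment (stubbed:
    a real runner would drive a browser or device here)."""
    return [f"[{environment}] {action} {target}" for action, target in steps]

# One recording, many replays: each environment reuses the same steps.
for env in ["Chrome/desktop", "Safari/iOS", "Firefox/Android"]:
    log = replay(recorded_steps, env)
    print(f"{env}: {len(log)} steps replayed")
```

In practice the replay function would be backed by a browser- or device-automation framework; the point of the sketch is that coverage scales with environments, not with authoring effort.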
Finally, cloud infrastructure can be used to host requirements and testing solutions if FedRAMP requirements are met. This isn't always viable, but contractors that have successfully gone through FedRAMP have this option at their disposal.
Avoid common software testing/evaluation mistakes
Placing a higher priority on software testing can reduce errors, but it does not by default eliminate them. Contractors should be mindful of common software testing and evaluation mistakes. Historically, the testing process breaks down in a few recurring ways: assuming requirements will be set in stone throughout a program, failing to design tests early against defined requirements, and leaving the program team's requirements and test platforms disconnected.
As software testing and evaluation gain greater focus from agency decision makers, the onus falls to contractors and Federal IT providers to build these capabilities into the contracting process earlier, and in a more substantive fashion, to achieve program and mission success.
Michael Wright is Federal Field Director for Micro Focus, an international firm specializing in application modernization and management. He can be reached at firstname.lastname@example.org.