NetCents 2 suffers four major strikes
The Air Force piled on mistake after mistake as it evaluated the bids for the $980 million NetCents 2 Application Services contract, leaving the GAO little choice but to side with the losing bidders and ask for new evaluations and award decisions.
Oh, Air Force NetCents 2, where did you go wrong? Let me count the ways.
It’s taken a couple weeks, but the Government Accountability Office has released its decision sustaining the protests of Computer Sciences Corp., HP Enterprise Services, Harris IT Services and Booz Allen Hamilton.
The document paints an ugly picture of how the Air Force made its source selection decisions for the $980 million NetCents 2 Application Services contract.
In a nutshell, the Air Force failed to follow much of the criteria it laid out in its request for proposals.
By my count, four major categories of mistakes were made, and any one of those alone likely would have led to the Air Force losing the protest. With four such stumbles, the service never really stood a chance of prevailing against the protesters.
The mistakes were made in the areas of:
- Cost realism analysis
- Technical evaluations
- Past performance evaluations
- Trade-off decisions in the best-value award criteria
There is a lot of legal language in the decision – it was written by lawyers, after all – but the criticism of the Air Force is unmistakable.
In the conclusion of the section explaining the failings of the Air Force’s cost realism analysis, the GAO wrote:
“The evaluation record is simply devoid of any independent assessment of whether the offerors’ proposed labor hours, skill mix, and labor mix were sufficient to successfully perform the requirements of the cost reimbursable sample task order. … the agency failed to conduct a reasonable cost realism analysis as required by the RFP and the FAR.”
Deficiencies in assessments and analysis come up repeatedly in the 25-page decision.
In the technical evaluation section, GAO describes how the Air Force evaluators only verified that the proposed labor hours, descriptions and qualifications in the technical proposals matched what was in the cost proposals.
“This mere confirmation that the offerors provided the required information in their proposals, and that the technical and cost proposals were consistent, is no substitute for an evaluation of whether the information provided demonstrated an understanding of the work to be performed,” GAO wrote.
When it looked at how the Air Force evaluated past performance, GAO found inconsistencies between how it scored the past performance of winning bidders and losing bidders.
For example, CSC had three of its references deemed “less recent” while an unnamed winning bidder had references deemed “more recent.” The big difference: CSC’s reference work ended in May 2010 and the winning bidder’s work ended in June 2010.
One month separated less recent from more recent in the eyes of the Air Force evaluator.
This is an important distinction because the Air Force also gave more weight to “more recent” past performance than to other criteria such as the quality and relevance of the work. CSC’s references actually were better but weren’t given the same weight as the lesser-quality references of the winning bidder.
GAO didn’t like that one bit. “We see no reasonable explanation” for the differences in how CSC and the winning bidder were treated, GAO wrote.
Because of the mistakes with the technical evaluations, past performance and cost realism analysis, “the agency’s tradeoff decision cannot stand,” GAO wrote.
GAO also agreed with HP’s allegation that the Air Force didn’t follow the tradeoff criteria described in the solicitation. Instead, the Air Force emphasized price and getting enough contractors to make sure that together they could cover 27 subcriteria. I’m not 100 percent sure what the subcriteria are, but they have something to do with the level of past performance and the confidence it gives the Air Force that the contractors can do the job.
The RFP said the selection process would be “based on an integrated assessment of past performance in support of a tradeoff considering performance confidence and price.”
The one part of the protests where GAO sided with the Air Force was the protesters’ contention that discussions were unequal because of an exchange between the Air Force and Raytheon. But that exchange only clarified information already in Raytheon’s proposal; no changes were made to the proposal.
In light of its findings, GAO has told the Air Force to conduct a proper cost realism analysis, document its technical evaluation and conduct a new performance confidence assessment.
Once the Air Force has done all that, it can make a new source selection decision.
It’s almost unimaginable to me that the Air Force is in this position four years after the release of the RFP and after scores of protests involving this program. It seems every portion of NetCents 2 has run into major issues.
I can’t really see the light at the end of the tunnel, either. Perhaps the best thing for the Air Force to do is skip NetCents 2 and start with a clean slate on NetCents 3.