ANALYSIS

Why big procurements struggle and what can be done about it

There is a wealth of constantly evolving IT products that can be leveraged to address some of the most difficult missions in government.

How has government attempted to harness an incredibly fast-moving market while adhering to the regulations that govern the notoriously slow process of federal procurement?

Six different government entities took similar approaches to that business problem: each created a large, long-term contract by vetting an initial group of government-savvy manufacturers and resellers. The process gave each agency a contract on which those companies compete daily for government IT commodity requirements by offering the latest technology at the best prices.

Today, these high-volume, billion-dollar contracts include the Air Force’s NETCENTS-2 NetCentric Products, the Department of Homeland Security’s (DHS) First Source II, the Department of Veterans Affairs’ (VA) Commodities Enterprise Contract (CEC), the Army’s ITES-3H, NASA’s SEWP V, and the National Institutes of Health’s (NIH) CIO-CS.

These task order contracts have long been a successful and useful tool.

Many of these contracts are in their second or third iteration. All of them were opened again for competition this decade, offering an opportunity for new technology resellers and manufacturers to earn a spot and forcing existing awardees to compete again to maintain their positions. Despite the concept being well established, something started to go very wrong for all of these contracts. It’s a story best told in dates and numbers.

The NETCENTS II solicitation was issued Feb. 2, 2010, and was finally awarded and functional by Nov. 6, 2013. That’s 1,373 days. During that time, the Air Force attempted to make awards twice, but the decisions were protested 27 times. Eventually, the solicitation resulted in awards to 25 bidders, well over the original goal of nine awardees.

First Source II took 433 days to award after solicitation release and endured several protests.

CEC went 495 days before the dust settled and performance began.

ITES-3H’s RFP came out on Sept. 25, 2012, and has still not been awarded. It has been protested in each phase of the procurement, with 17 protests in the first phase and 19 in the second.

The Army hopes to make awards in the second quarter of 2015, nearly three years after the solicitation release.

SEWP V was released on Aug. 16, 2013. NASA successfully resisted several pre-award protests and made awards on Oct. 1 of this year, but then received 25 more protests. Rather than fight them, as First Source II and CEC had successfully done, NASA rescinded its initial awards to dismiss the protests and will now reevaluate the hundreds of proposals it received.

NASA says it will make awards in 2015. NIH issued CIO-CS on May 7, 2014, and it is already battling pre-award protests.

While it’s not unusual for billion-dollar procurements to take up to a year to award from RFP release, these contracts are taking well over a year, and in some cases several years. They are also loading the Government Accountability Office’s bid protest docket, with many of the protests resulting in corrective action on the government’s part.

So, what happened?

These procurements were conducted by six different contracting offices, working with completely different program offices, over the course of four years. Some of the solicitations contained traditional evaluation factors such as past performance and management approach; some, like NETCENTS, discarded these factors entirely.

Some used a multi-phased approach that rejected proposals in stages and others attempted to do all their evaluations at once. Some of these procurements were made up of smaller contracts that separated businesses based on their size and socio-economic profiles, while others lumped all contractors together.

However, there are a few commonalities that all of these procurements share.

Each procurement required bidders to propose products that fit a profile of specifications given by the government. In each solicitation, the products covered a wide spectrum of technology, from basic personal computers to enterprise-class equipment.

Many of the procurements had a long list of products, sometimes hundreds of items long. The flexible nature of these contracts means the government is not obligated to buy any item from that original list, and it is widely known that proposing items against this list is done primarily for proposal evaluation purposes, and has little to do with post-award activities.

That’s because each of these contracts has mechanisms known as “technology refresh” and “technology insertion,” which allow bidders to update the list of products they offer after award. These mechanisms are necessary to keep pace with a rapidly changing technology market.

The companies that bid on these contracts will tell you that these contract features, while central to the success and identity of these contracts, combine in such a way that makes the initial procurement process ripe for delays and protests.

It initially appears logical for the government to request a broad list of technology products from bidders vying for seats on billion-dollar contracts. Successfully providing one proves a contractor can properly vet products that meet not only the customer’s technical requirements but also the government-wide regulations that cover these purchases, such as the Trade Agreements Act and Energy Star.

Essentially, each procurement asked bidders to do exactly what they would do post-award: provide compliant technology at their best price, only on a much larger scale than the average delivery order. That seems fair enough.

In practice, it has proven to be a nightmare.

In every case, the government used this initial list of products as a pass or fail measure for the technical evaluation factor. That means a bidder had to provide a combination of technology products that met every single specification the government marked as mandatory to be eligible for an award.

For example, when the government asked for a blade server in the evaluated product list, it asked for a blade server with a specific number and type of processors, a specific number and type of memory, storage capacity, type of warranty, redundant power supplies, and so on. The list of requirements attached to a single product could sometimes be 20 or 30 specifications long.

When you multiply this list by the dozens or sometimes hundreds of products, you end up with individual pass/fail elements that can go well into the thousands.
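The mechanics of this pass/fail evaluation can be sketched in a few lines of code (a hypothetical illustration; the spec names, thresholds, and products below are invented, not drawn from any actual solicitation):

```python
# Hypothetical sketch of a pass/fail technical evaluation. Every mandatory
# specification must be met; a single deficient item fails the entire bid.

MANDATORY_SPECS = {  # invented requirements for one notional blade server
    "processor_count": lambda v: v >= 2,
    "memory_gb":       lambda v: v >= 128,
    "storage_tb":      lambda v: v >= 4,
    "redundant_power": lambda v: v is True,
}

def product_passes(product: dict) -> bool:
    """A product passes only if it satisfies every mandatory specification."""
    return all(check(product.get(name, 0)) for name, check in MANDATORY_SPECS.items())

def bid_passes(products: list) -> bool:
    """One non-compliant product invalidates the whole bid."""
    return all(product_passes(p) for p in products)

bid = [
    {"processor_count": 2, "memory_gb": 256, "storage_tb": 8, "redundant_power": True},
    {"processor_count": 4, "memory_gb": 64,  "storage_tb": 8, "redundant_power": True},
]
print(bid_passes(bid))  # False: the second server's memory falls short
```

With hundreds of products and 20 or 30 specifications apiece, a table like the one above swells into the thousands of individual pass/fail elements described here, each one a potential evaluation error and a potential protest ground.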

The vast number of requirements caused significant delays during the proposal phase and much longer delays during the government’s evaluation and award periods. In many cases, the technical requirements for a product were initially unclear, impossible to fulfill, or specific to a single manufacturer.

Because these were pass/fail requirements that could invalidate an entire bid with one deficient response, bidders asked questions to clarify requirements. Many offerors also used the question and answer period to do what they do best: compete. They tried to steer the specifications to their preferred manufacturers, thereby eliminating part of their competition.

And their competitors, of course, did the same. That, coupled with the fact that these procurements attracted a very large number of bidders (NASA has stated that there were 233 initial proposals for SEWP V) resulted in some of the longest lists of questions and answers and subsequent amendments seen in all of federal contracting.

SEWP V exceeded 1,000 questions and the Q&A period went on for months, delaying the submission date several times and no doubt clogging internal NASA resources. CEC was delayed 62 days, much of it stemming from its lengthy and intricate list of specifications for enterprise equipment.

The delays and frustrations during the proposal period were nothing compared to what actually happened when the government began evaluating proposals, which resulted in mutual aggravation between industry and government, often ending in litigation.

Anyone who has spent time developing proposals for federal contracts will tell you that much of the evaluation process is subjective, except when it comes to pass/fail evaluation factors. Those should, by definition, be black or white.

When the government created these long lists of initial products and specifications, I imagine they felt like they were making their procurements more substantial, and perhaps they might even scare away a few of the more marginal bidders. They may not have considered the workload and liability they created for themselves. They took on the responsibility for correctly verifying the compliance of hundreds of products and all individual requirements for each and every bid.

In a perfect world, with plentiful and skilled resources and no gray areas, it would take an evaluation team several months to properly evaluate and document the number of proposals received.

In an imperfect world, the one we live in, resources are scarce and there is plenty of gray area. In the imperfect world, people make mistakes and omissions, particularly when large volumes are involved with limited resources.

This occurred both with government and industry.

The product evaluation decisions for NETCENTS-2 and ITES-3H, the two contracts that have experienced the longest delays, were thoroughly protested by industry. And while the government successfully deflected some of the protests, many of them pointed out valid evaluation errors, causing the agencies to take corrective action and revisit what we’ve established is a tedious, lengthy, and dangerous evaluation process.

Most of the First Source II awards were delayed by a single bidder that went all the way to the Court of Federal Claims to combat their dismissal for a single omission on their products list.

CEC, which likely had the smallest evaluation volume because it was a highly complex small business set-aside, went through a careful process to clarify requirements and allow offerors to fix their proposals after submission. CEC’s awards were still protested, but the protests were not based on the products list, and they were dismissed. The nature of the most recent SEWP V and CIO-CS protests is unclear.

At best, the products-list evaluation method is a time-consuming endeavor; at worst, it’s a litigation liability. But some undertakings that are hard and risky are worth it. So what value does this onerous evaluation process offer the government? When it comes to pricing methodologies, I believe the most knowledgeable and honest in industry would tell you that it offers none.

It is important to remember that the government is not buying any initial products with the creation of these contracts. They simply serve to initiate the vehicles that will later fulfill billions of dollars of competed delivery orders after award. We’ve established that contractors will also refresh, replace, or insert entirely new products into their offerings after contract award to stay current with technology on these multi-year contracts. This creates an environment where the initial pricing submission has little basis in reality.

Most often, evaluators compare the prices of bids by looking at the entire price of a bid, known as a total evaluated price (TEP). The government isn’t precluded from looking at individual cost elements, but it becomes increasingly difficult to look at individual cost elements when there are hundreds of cost elements and the volume of bids is more than an evaluation team can handle.

So evaluators tend to rely quite a bit on the TEP. In the contracts we’ve discussed, the majority of the TEP is calculated by multiplying the unit price given for each individual item in the bidder’s product sheet by a given quantity. The bidder is aware, or can figure out, these quantities.

Because some products cost significantly less than others (an LCD monitor versus an enterprise-class storage system, for example), certain products, based on their price and quantity, end up contributing much more to the TEP than others.

In industry, we call this “weighting,” meaning that one product “weighs” more than others in its impact on pricing. Extreme weighting gives rise to a pricing methodology known as gaming. All of these procurements had weighting issues.

Gaming can be easily explained through an example. Let’s say one of these procurements was vastly simplified and asked for 100 keyboards and 10,000 laptops for the initial submission. As a bidder, you know a compliant keyboard will cost you $9 a unit and a laptop will cost you $900 a unit. You decide to propose the keyboard at $10 a unit and the laptop at $1,000 a unit.

When you look at the TEP in this scenario, the laptop dwarfs the keyboard in weighting, making up 99.99 percent of the price. Would you make the smart business decision to propose the laptop at a much lower profit margin to win the overall contract? Sure. Would you consider selling it with no profit? Why not? It’s a small portion of what you could make on the entire contract. But would you sell it at a loss?
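Running the arithmetic on the keyboard/laptop scenario above confirms how lopsided the weighting is (a sketch using the article’s illustrative figures, not real contract data):

```python
# TEP and weighting for the simplified keyboard/laptop scenario above.
line_items = {
    "keyboard": {"unit_price": 10.0,   "qty": 100},
    "laptop":   {"unit_price": 1000.0, "qty": 10_000},
}

# Total evaluated price: sum of unit price times evaluated quantity.
tep = sum(i["unit_price"] * i["qty"] for i in line_items.values())
print(f"TEP: ${tep:,.2f}")  # TEP: $10,001,000.00

# Each item's "weight" is its share of the TEP.
for name, i in line_items.items():
    share = i["unit_price"] * i["qty"] / tep
    print(f"{name}: {share:.2%} of TEP")
# keyboard: 0.01% of TEP
# laptop: 99.99% of TEP
```

Shaving even a few percent off the laptop’s unit price moves the TEP far more than any discount on the keyboard ever could, which is exactly the incentive gaming exploits.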

Remember, neither of these products is likely to be sold with the awarded contract. By the time a contract is actually awarded, much of an original product list has reached end-of-life and is no longer available as new on the market.

Even for the products that are still available on the market, because these are multiple-award contracts, an awardee is not necessarily obligated to bid on a large post-award delivery order for serious quantities. Everything above a low dollar threshold has to be competed on these contracts, and an offeror can simply choose not to bid on an RFQ for an item it proposed below cost.

Because these contracts allow you to regularly change your catalog post-award, you can propose a product with the initial submission that is compliant and cheap, but you know to be inferior and undesirable, with the intention of replacing it or putting another item on the contract post-award.

With these options, each bidder faces much lower risk when understating prices. Bidders know to understate the price on the most heavily weighted items; it’s grade-school math.

This gaming approach has become the standard rather than the exception on these large procurements.

The government is fully aware that gaming happens. It’s not a practice that was created by the technology commodity industry and it has been practiced for some time. There is an entire acquisition regulation dedicated to gaming, which the government addresses as “unbalanced pricing.”

FAR 15.404-1 defines it: “Unbalanced pricing exists when, despite an acceptable total evaluated price, the price of one or more contract line items is significantly over or understated as indicated by the application of cost or price analysis techniques.” The government has the right to reject offers for unbalanced pricing.

The government can find unbalanced pricing by comparing the prices of units among offerors or using other information.

But what happens when gaming becomes a widespread practice? It becomes very hard for evaluators to find and prove that pricing is unbalanced.
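One simple way an evaluator might screen for unbalanced pricing is to compare each offeror’s unit price for a line item against the field’s median (a hedged sketch; the prices and the 50 percent threshold are invented for illustration and do not represent a prescribed FAR technique):

```python
from statistics import median

# Invented unit prices for one line item across several offerors.
offeror_prices = {
    "offeror_a": 950.0,
    "offeror_b": 1020.0,
    "offeror_c": 980.0,
    "offeror_d": 1.0,   # a suspiciously understated price
}

def flag_unbalanced(prices: dict, threshold: float = 0.5) -> list:
    """Flag offerors whose price deviates from the field median by more than
    the threshold fraction. Note: if most offerors game the same item, the
    median itself shifts and the screen loses its footing."""
    m = median(prices.values())
    return [name for name, p in prices.items() if abs(p - m) / m > threshold]

print(flag_unbalanced(offeror_prices))  # ['offeror_d']
```

This is why widespread gaming is so corrosive: the benchmark the screen relies on is itself composed of gamed prices, and the outlier quietly becomes the norm.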

With each of these procurements, industry has experienced a consistent lowering of the bar for what is considered competitive, but reasonable, among companies for product pricing. With each debrief after an award decision, bidders were often shocked to find that their supposedly rock-bottom price was actually relatively high when compared to the field of bidders.

This has created a downward spiral with each bid, as prices move lower and lower, becoming less and less realistic while testing business ethics.

In summary, the common evaluation strategy undertaken by government on large IT commodity contracts is an incredibly arduous activity for both industry and government that causes massive delays and litigation. It results in contract awards with an initial list of products that industry will almost never sell and government will likely never buy.

Once the government makes it over the solicitation hump and has an actual contract, these contracts can be greatly beneficial. They are dynamic and responsive to technology needs and can often be updated much faster than a GSA schedule. The competitive aspects of the contract drive down prices on products while also offering technology solutions the government might not have previously considered.

So, there’s no need to throw the baby out with the bathwater. But we do need to change the bathwater.

I believe government and industry would both favor a change for the next round of competitions. There is a middle ground that allows the government to truly evaluate a company in a reasonable amount of time while deriving actual value from the initial procurement. The key is to make the product evaluation portion simple.

The government should rely more heavily on subjective evaluation factors like corporate experience, past performance, and written technical and management approaches. These elements provide the government with valuable information to evaluate a company while not presenting a protest minefield.

Detailed corporate experience and past performance information are reliable sources for evaluation in this industry, as they are not easily fabricated. This information should come only from the prime contractor; subcontractors come and go in this industry. Technology resellers and manufacturers that participate in the federal space keep copious records on each transaction, so they should be able to provide extensive information on their previous contracts and performance.

If a bidder is unable to provide this detailed information, it can be an indication that they lack business intelligence on their own operations.

Technical and management approaches are a bit trickier. Bidders can hire outside technical consultants to generate quality technical solutions or write a management approach that makes a fly-by-night operation look like an orderly and precise enterprise. However, the government can ask bidders to disclose their current (not future) business practices by requesting information on their supply chain or manufacturing practices, reseller certifications, lines of credit and financial practices, and internal business systems and processes.

If the government finds out that a bidder uses a spreadsheet to track all of its quotes and orders today, it will know the bidder presents an unacceptable performance risk on a billion-dollar contract tomorrow. The more the government focuses on specific bona fides rather than future promises, the more valuable and realistic the information it will receive in a proposal.

But how do we handle the part that has derailed the largest IT commodity procurements?

The proposed list of products should be vastly simplified, in both number of products and specifications. If the likelihood of a contractor having to deliver on its initial list of products is very low, then the list should be reduced to a simple exercise for both industry and government. Using a much shorter list of products that are widely available and compliant with government standards, like laptops, monitors, and servers, will ease several pre-award problem areas. At a minimum, it will reduce the amount of data the government has to evaluate, which should significantly speed up these procurements. It would be a change welcomed by both industry and government.

Reader Comments

Thu, Dec 18, 2014 Rich McLean, VA

This is one of the better articles I've ever read on Washington Technology. Bravo, Mr. Shafer. I agree with the other commenter that the recommendations are a little generic, but (as a participant in several of the contract competitions noted) I agree that these types of product evaluation are unnecessarily arduous and voluminous. I would like to see the government move to more of a sample delivery order model for these types of contracts. The RFP should provide a few hypothetical buying scenarios, with some basic background information, and ask contractors to propose BOMs on their own. DHS CDM did this well, IMHO. Forcing every contractor to bid to the same BOM promotes a race to the bottom. The government would be better suited to let the contractors be creative and propose their own solutions. This would show better technical prowess, understanding of industry trends, and understanding of value propositions.

Thu, Dec 18, 2014 LackLusterBuster

As a diagnostic, this is a tour de force. The recs, however, are only good--plausible and adequately sensible, if unoriginal. Overall, tho, great thinking. The threshold problems are trust and competence. When it comes to corp experience, one cannot expect truth-telling from the companies, or for them to surface their problems. The govt cannot sense these flaws, and, the customer involved, will want to cover them up as much as the contractor. This is why contractor performance rating and performance eval data bases are so troubled, controversial, and unreliable. When it comes to the government, competence is a perilous threshold issue. Those who come up with reqts often do not understand them or are subject to vendor manipulation--that explains why so many Section Cs and other RFP sections are hosed or obviously defective or reflect institutional COI. The same people evaluate proposals. Further, many govt emps have COI individually, as in looking for their next job, for example. We cannot expect sufficient integrity. Accountability, administrative discipline, or the necessary criminal investigations are never conducted. And the miscreants stay in place, haplessly condemning the end-customers and the companies to live in an untrustworthy, flawed environment where the taxpayers' dollars are often wasted or misused. Solutions include: buying less, buying more OTS commercial, using truly independent evaluation people. Note, this would clearly exclude the FFRDCs, which are particularly conflicted due to their intimate, decades long relationships with their sponsors and their shelter from all competition. Again, great analysis, good conclusions, recs need work. Thanks, Mr. Shafer
