WT Business Beat

By Nick Wakeman


How bad are RFPs? Plenty

It is no secret that solicitations can be hard to read. The language is technical and often convoluted, which leads to confusion and misunderstandings, not to mention delays and inefficiencies.

But just how bad is it? Well, maybe even worse than many of us thought, according to a report that was released this summer.

VisibleThread, a company with software products that use algorithms to analyze the language of documents and websites for changes and clarity, turned one of its tools loose on five solicitations for contracts worth a total of $7 billion.

The company’s primary business is helping contractors track changes in solicitations. Its algorithms don’t just identify changes in wording; they also highlight changes in context and substance.

A second product crawls websites and analyzes the language used on the website for clarity.

It was this website tool that VisibleThread used to analyze the solicitations for GSA’s Human Capital and Training Solutions (HCaTS), HHS United Program Integrity Contractor (UPIC), HHS Research, Measurement, Assessment, Design and Analysis (RMADA), the Air Force Joint Range Technical Services II (J-Tech II), and Navy Fielded Training Systems Support IV (FTSS-IV).

Company CEO Fergal McGovern said that the RFPs were evaluated for readability, passive language, long sentences and word complexity density.

The scores were then compared using the Flesch-Kincaid tests, which the Navy developed in the 1970s to improve training manuals and other technical documents.

Each of the four parameters had a target for what would be considered good (a rough sketch of how such a scan works follows the list):

  • Readability – a score of 50, which is about an 8th grade reading level.
  • Passive language – 4 percent or less of the sentences have passive construction.
  • Long sentences – 5 percent or less have 25 or more words.
  • Word complexity density – This scan looks for complex words and phrases based on the plain language guidelines the federal government has established. A score of 100 is the ideal.
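For a sense of what such a scan involves, here is a minimal sketch in Python. It is purely illustrative and not VisibleThread’s algorithm: the scan() and count_syllables() helpers are hypothetical names, the syllable counter and the passive-voice pattern are crude heuristics, and the word complexity scan is omitted because it depends on the tool’s proprietary plain-language word list.

```python
import re

def count_syllables(word: str) -> int:
    # Crude heuristic: count groups of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def scan(text: str) -> dict:
    # Split into sentences and words; both patterns are rough approximations.
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)

    # Flesch Reading Ease, the readability test the Navy adopted:
    # 206.835 - 1.015 * (words/sentence) - 84.6 * (syllables/word).
    # The report treats 50 or better as roughly an 8th grade reading level.
    readability = (206.835
                   - 1.015 * len(words) / len(sentences)
                   - 84.6 * syllables / len(words))

    # Passive voice flagged by a simple "to be + past participle" pattern.
    passive = sum(bool(re.search(
        r"\b(is|are|was|were|be|been|being)\s+\w+(ed|en)\b", s, re.IGNORECASE))
        for s in sentences)

    # Long sentences: 25 words or more, per the report's threshold.
    long_sents = sum(len(re.findall(r"[A-Za-z']+", s)) >= 25 for s in sentences)

    return {
        "readability": round(readability, 1),                              # target: 50+
        "passive_pct": round(100 * passive / len(sentences), 1),           # target: <= 4
        "long_sentence_pct": round(100 * long_sents / len(sentences), 1),  # target: <= 5
    }
```

Running a statement of work through scan() would yield scores in the same shape as the report’s findings, though the exact numbers from rough heuristics like these would differ from a commercial tool’s.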

VisibleThread didn’t look at entire solicitations but focused its analysis on the statement of work, Section L (the instructions) and Section M (the evaluation criteria).

These areas are often the ones that cause the most confusion for contractors and result in plenty of back and forth between agency and bidders, McGovern said. After award, these sections are also often cited in bid protests and lead to delivery issues.

The result of the analysis shows plenty of room for improvement:

  • Readability scored 32.9, four grade levels higher than recommended for clear writing.
  • Passive voice was present in 14 percent of the sentences, more than three times higher than recommended.
  • Twenty percent of sentences exceeded recommended length.
  • The average complexity score was 3.67, suggesting opportunities to simplify word choices across the documents, according to the report.

Of the three parts of the RFPs that VisibleThread evaluated, Section M (the evaluation criteria) scored the worst. The statements of work were poor performers as well.

One of the key takeaways, according to VisibleThread’s report, is that the quality of solicitations can vary widely from one section to the next, which McGovern said points to different people writing the different sections with no clear authority to make them more consistent.

The analysis wasn’t just an intellectual exercise for McGovern and VisibleThread. He said he’s trying to call attention to poorly written solicitations by giving concrete data on where improvement is needed.

If a solicitation is difficult to understand, it increases costs for both bidders and the government during the procurement process, McGovern said.

Confusion during the bidding process causes delays, which drive up costs. And then there is the issue of a contractor bidding, winning, and then realizing that what the government needs is different from what was in the solicitation. Again, there are more delays and cost overruns, McGovern said.

Improved writing will pay dividends for the government, he said. “The better the RFP, the likelihood of success gets better down the road.”

A lot of the findings in the report point to good basic writing and communications practices:

  • Active voice is better because it is clearer and more direct.
  • Shorter sentences are clearer and easier to understand.
  • Word choice can improve readability and clarity.

The government has tried to address these issues, most recently in 2010 when the Plain Writing Act was passed, but the law has no real teeth, McGovern said.

He compared the current practice of developing RFPs to writing and releasing code and not testing it.

“It should be a very simple step,” he said.
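Taking that analogy literally, the “test” could be a gate that fails a draft solicitation when it misses the report’s targets, the same way a failing unit test blocks a release. A hypothetical sketch, reusing the illustrative scan() function from above (lint_rfp() and the threshold values are assumptions drawn from the report’s targets, not any real tool’s interface):

```python
import sys

# Targets drawn from the report: readability of 50 or better, passive
# voice at or below 4 percent, long sentences at or below 5 percent.
TARGETS = {"readability": 50.0, "passive_pct": 4.0, "long_sentence_pct": 5.0}

def lint_rfp(path: str) -> int:
    # Score the document with the illustrative scan() sketched earlier.
    with open(path, encoding="utf-8") as f:
        scores = scan(f.read())

    failures = []
    if scores["readability"] < TARGETS["readability"]:
        failures.append(f"readability {scores['readability']} is below 50")
    if scores["passive_pct"] > TARGETS["passive_pct"]:
        failures.append(f"passive voice at {scores['passive_pct']}% exceeds 4%")
    if scores["long_sentence_pct"] > TARGETS["long_sentence_pct"]:
        failures.append(f"long sentences at {scores['long_sentence_pct']}% exceed 5%")

    for msg in failures:
        print("FAIL:", msg)
    # A nonzero exit blocks release of the draft, like a failing test.
    return 1 if failures else 0

if __name__ == "__main__":
    sys.exit(lint_rfp(sys.argv[1]))
```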

Just think of the pain, heartache and wasted resources that could be avoided.

Posted by Nick Wakeman on Aug 25, 2016 at 6:49 AM


Reader Comments

Wed, Aug 31, 2016

They seem to be getting worse. Some this year are just awful and ripe for protest. The worst is the Q&A. Often there are conflicting answers or cases where the government answers something unrelated to the question asked.

Mon, Aug 29, 2016 JC WV

The criteria used are pedantic at best. Really? Passive vs. active? That really depends on context. Long sentences are a problem? Really? Complex thoughts and intricate problems often require complex sentences. Writing above an 8th grade level is troubling? Government contractors are receiving billions in business. Let's hope their degrees are not from paper mills and that they can understand college-level writing.

Mon, Aug 29, 2016 Troy

Poorly written RFPs create poorly executed projects.

Sun, Aug 28, 2016

I am sure RFPs could be better, and that is not a surprising conclusion, though many of the specs or SOWs are technical in nature and the test would be whether they are understood by contractors in the specific industry. The SOWs are often drafted by contractors, though the Govt is responsible for the final product that goes in the solicitation. Sections L&M are written by the contract specialist and CO and should not be the product of a contractor. Most of the remaining portions of an RFP consist of standard language and FAR provisions and clauses.

Of all the sections, the SOW and Sections L&M are the most important. Before you spend time and money preparing a proposal, these need to be understood, and if they are not clear, then ask questions. Pre-solicitation notices may allow comment on draft solicitations. After the solicitation is issued, COs usually allow for Q&As, and the CO may amend the solicitation based on them. This process will not address issues such as active versus passive voice or create a dumbed-down solicitation, but it will allow contractors to ask questions and participate.

Standardized language may help. Sections L&M are created for the requirement, but I do not see why they need to be as individualized as a SOW. There are only so many choices for evaluation factors and the basis of award, but across the board there is variance. This creates risk of ambiguities, unclear language, and evaluation criteria that are not intended. After proposals are submitted, the criteria are hard to change. The biggest problem with unintended evaluation criteria is that the Govt may not follow them in the source selection and may not realize it because they were not intended. Examples of unintended evaluation criteria include price realism, definitive responsibility criteria, restrictive qualifications, heightened past performance or past experience, and a low price or LPTA basis of award when the Govt does not necessarily desire to award to the lowest priced offeror. These situations create risk of delays and protests.

Sections L&M are usually written by the contract specialist. COs should take ownership and do a better job of reviewing the solicitation. A warranted CO with a college degree and requisite experience has the capability to ensure the solicitations are better and to prevent mistakes. I would suggest more training, but that is the half-baked solution to all the Govt's problems. Standardization and automation do solve a lot because solicitation instructions and evaluation criteria are susceptible to a cookie-cutter approach. All the solicitations are not as special and unique as the customers may believe. This still poses the risk of selecting the proper language for the RFP, which is why the CO must take ownership.

Fri, Aug 26, 2016

The most critical document in any solicitation is the requirement and how clearly it is captured. I have seen little to no training provided for the folks who have the challenging job of writing a requirement, often for the first time. This is one of the key reasons requirements often reflect a staffing plan rather than defined tasks and performance standards. I have seen the ARRT tool provide the structured thought process that helps requirements generators start defining performance tasks rather than position descriptions.
