I recently met with the latest group of China Future Leaders university students coming through Boston. There were 140 of them this time -- reflecting the growth of Chinese tourism to the US -- and I had to divide up my appearance into two meetings, because the room we had couldn't fit more than 100.
As I always do (faithful blog readers may recall earlier posts on these student visits), I asked them a bunch of questions at the beginning of each session. I started with a version of a question I had asked in the past: first, what was the best thing about the US, and what was the worst? Then I asked the same question about China -- its best and worst things.
What was interesting was that, for both the US and China, there was very strong agreement about the best thing about each country, and much less about the worst. For the US, by far the most common answer the students shouted out was "freedom," with "education system" a fairly distant second. (Critics of US education take note, though -- the Chinese were probably thinking more about universities than about elementary or secondary schools.)
For China, two answers dominated as the best thing -- Chinese history and Chinese culture, with Chinese food a distant third. Anybody who looks at the popularity of historical dramas about long-ago dynasties on Chinese television would not be surprised by this answer. The students were relatively quiet -- maybe out of politeness? -- about the worst thing about the US, with no real consensus, though individual answers included arrogance and high crime. For China, again there wasn't as much shouting out about the worst thing, but the most common answer was "too many people," with pollution in second place.
Then, as I have frequently asked them before, I asked them whether they thought the US government was on the whole friendly or unfriendly to China, and then whether the Chinese government was on the whole friendly or unfriendly to the US.
The response was the same as it has been every time before: Most students thought the US government was unfriendly to China, but that the Chinese government was friendly to the US. This time I asked the majority why they answered the way they did. On the US being unfriendly, the good news was that the students pointed to very concrete issues, such as the value of the Chinese currency, trade relations, and Taiwan. Nobody suggested anything broader. Why did they think the Chinese government was friendly to the US? One answer dominated: "They buy US government debt."
At the end -- in the context of inviting interested students to "friend" me on Facebook -- I asked them how many of them were on Facebook, which is of course blocked in China and can be accessed only by using special software to "jump the wall" (the so-called "Great Firewall of China"). To my surprise, about a quarter to a third of the students raised their hands -- though this number probably isn't a good guide to how many people in China have managed to get and use this software, since some of these students are studying in Hong Kong, where the Internet isn't blocked.
I then asked them, "If Facebook were allowed in China, would you participate?" Essentially every student in the audience raised a hand. This is interesting, since just about all of them are already on Facebook's Chinese knockoff Renren, so this suggests they want to be able to communicate with foreigners. These students aren't necessarily typical of all Chinese students -- they come from families rich enough to afford this trip, and they have chosen to come to visit the US -- but their responses suggest a continuing desire by young Chinese to cultivate contacts with us.
Posted on Jan 30, 2012 at 7:27 PM
Good news travels slowly, so perhaps it is not surprising that I only recently found out about an October 2011 GAO report called Critical Factors Underlying Successful Major Acquisitions, which examines seven recent government IT systems acquisitions -- ranging in dollar value from $35 million to $2 billion -- that have met their schedule, cost, and performance targets. (I don't recall seeing anything even in FCW about this report when it came out, and a search of the fcw.com website with the keywords GAO and the title of the report came up dry. FCW, tell me it isn't so!)
The projects ranged from a logistics support system fielded by the Defense Information Systems Agency to a Department of Homeland Security system for high-volume Mexican and Canadian border crossings that uses license plate identification and other technologies to check background information on visitors without slowing down border crossings too much. Agencies were asked to identify projects, and GAO vetted project performance. GAO staff then interviewed people involved in each program, asking open-ended questions about what factors, in their view, contributed the most to program success. GAO coded and tabulated the replies, and the report presents the most common success factors the interviews revealed.
The single most common success factor -- mentioned in all seven of the programs -- was that "program officials were actively engaged with stakeholders." GAO noted that the stakeholders were internal (including top management) and external (such as oversight bodies and non-government customers). Internal stakeholders were involved in ongoing meetings with the contractors, assessing progress and issues, and in reviewing contractor deliverables. In all but two of the cases, end users and other stakeholders were involved in the development of requirements and in informal testing prior to formal acceptance testing.
The next most common factors, each present in six of the seven programs, were that "program staff had the necessary knowledge and skills" and that "senior department and agency executives supported the programs."
In the successful projects, program managers were knowledgeable about procurement, contract management, organizational change, and/or earned value management. As for senior leadership, they were credited with interventions that would have been more difficult at the program manager level, such as reaching agreements with senior leaders in other departments about necessary cooperation or assuring end user involvement.
Perhaps surprisingly, sufficient funding was named as a success factor in only three of the cases.
It was also interesting that features of the technology approach used generally didn't make it on the list at all, with the exception of agile development, which some successful projects mentioned. Nobody mentioned commercial off-the-shelf products or cloud, or much else about the technology itself as success factors.
The message seems to be: it’s the management, stupid.
Researchers (including me) are often skeptical of research that tries to draw conclusions only from successful cases: even if you see that all the successes did x, you don't know whether the failures did x as well unless you compare successes and failures. In this report, however, I feel more confident in the results, because past studies of failures have developed long lists of mistakes made by IT programs that run into problems, and this report draws from that list to identify the mistakes the successful projects avoided.
All in all, good job, GAO.
And aside from the helpfulness of the lessons it reports, this study deserves wide distribution, if only to give GAO credit for writing a report that tells us what we might learn from success instead of just dumping on failure. A deafening silence will hardly encourage more such reports from GAO.
Posted on Jan 26, 2012 at 7:27 PM
In my blog earlier this week -- the one about research on cooperation among colleagues -- I promised to write about another interesting presentation we heard from a candidate for an assistant professor job in management: Eileen Chou of the Kellogg School of Management at Northwestern.
Chou's presentation -- with the cutesy title "The devil is in the details" -- presents some experiments challenging the common view that contracts with more specific terms are better than those with less specific ones. Some of the experiments were done in the lab, and others involved recruiting people for a job through Amazon's Mechanical Turk (mTurk) website. (The site is usually used to recruit real people for jobs they do at home, and it has become an increasingly popular tool for academics who wish to study psychological phenomena.)
Chou looks at the effect the specificity of contract terms has on the performance of the person who has signed the contract. The contracts she considers are employment contracts, and she looks at them from the perspective of the employee. In the experiments, she manipulates some of the terms of the contracts to make them more or less specific.
So, for example, in one experiment the two versions are "we will check the responses of 25 percent of the employees" vs. "we might check some employee responses." In another, the alternatives are a group that meets “two days a week (on weekdays)" vs. "a bi-weekly focus group." In a third, participants will speak "for two to three minutes" vs. "for a couple of minutes."
What she found is that the less specific contracts increased employee persistence, creativity and organizational commitment compared with the more specific ones. Chou argues that this occurred because the less specific contracts increased employee intrinsic motivation, which research has shown is related to a person's feelings of competence and autonomy. She suggests that lack of detail may be interpreted as a vote of confidence in the employee's competence, and that fewer constraints, along with incentives that may not be well specified, promote a feeling of autonomy.
These experiments touch on age-old issues in government contracting and in how to promote good contractor performance. In some sense, these results are more consistent with the "trust/cooperation" view of how to organize contracting than with the "adversarial/monitoring" view.
Of course, it's a real stretch to go from these results to definite conclusions about government contracting. First, it should be noted that all the specific terms involved either inputs or monitoring -- the experiments did not include performance demands ("do your best" vs. "keep the system up and running 99.5 percent of the time"). Second, the experiments looked at effects of specificity on the behavior of individual employees in one-on-one contracts, and it is unclear whether or to what extent these findings would apply to employees if the contract is between a government agency and a service provider.
Nonetheless, it is an interesting contribution to debates on government contracting from an unlikely source.
Posted on Jan 20, 2012 at 7:27 PM