Smarter machines mean tougher choices
Early last year, Bob Work, deputy secretary of Defense, gave a speech on the Third Offset Strategy, a plan to use technology to gain an advantage over adversaries with more people and equipment.
It is the heart of the U.S. strategy for confronting the broad set of challenges we face, from nation-states such as China and Russia to terrorist organizations such as ISIL to unforeseen threats such as the Ebola outbreak.
“The United States has relied on a technological edge ever since, well even in World War II,” Work told the attendees at a Center for a New American Security event. “Now we still believe we have a margin but the margin is steadily eroding and it’s making us uncomfortable.”
Work described how the U.S. is making significant investments in what he called the nuclear enterprise as well as new space capabilities, advanced sensors, communications, missile defense, and cyber. Promising technologies include unmanned undersea vehicles, advanced sea mines, high-speed strike weapons, electromagnetic rail guns and high-energy lasers.
The idea is power projection. Work didn’t mention this example, but consider how China is exerting influence in the South China Sea, from building islands to launching its own aircraft carrier. The U.S. needs smart weapons that can project power into that region from a greater distance.
It is the use of smart weapons that brings me to the main point of this blog: a New York Times article about arms control groups advocating laws and international treaties to restrict the use of autonomous weapons, or weapons that rely on artificial intelligence.
According to the New York Times article, the U.S. plans to spend $3 billion on “human machine combat teaming.” The example the newspaper used was a missile or drone that launches an attack without direct human control.
Human rights organizations say that this kind of weapon raises many moral and ethical questions. Supporters say that machines may actually be better at conforming to the rules of war than people.
Current U.S. policy doesn’t ban autonomous weapons but requires that high-level government officials approve of their use, according to the Times.
The distinction appears to be between targets chosen by a person and targets that the weapon chooses on its own.
New weapons that come online in the next several years will make the distinction even finer because the weapons are getting smarter.
While the moral questions are readily apparent when you are talking about weapons, the implications of this debate go well beyond military applications.
We are entering a new era in which computers will make more decisions without our direct input. The prime example is the driverless car, but there are countless others: smarter smartphones, virtual assistants, smart homes, etc.
Machine learning, the Internet of Things, and other advances are making computers smarter and more intuitive. They can do more for us, and often without being asked.
It’s a brave new world, and there are great benefits to be reaped, but also risks. We can’t run from the benefits because the risks scare us, and we can’t blindly embrace the benefits without seriously addressing the risks.
I think it’ll be important to see how the Defense Department thinks through the moral and ethical issues of autonomous weapons. Perhaps this will provide a framework for looking at the issue as we move forward in other areas.
I think smarter machines will only make our lives better, but an important question needs to be answered: Where is the line that says we’ve gone too far? Just because we can do something, does that mean we should?
Government contractors who rightly see this area as an important growth market won’t answer these questions per se, but they will have to live with the results.
Posted by Nick Wakeman on Apr 14, 2016 at 9:27 AM