Artificial intelligence experts shook up the tech world this month when they called for the United Nations to regulate and even consider banning autonomous weapons.
Attention quickly gravitated to the biggest celebrity in the group, Elon Musk, who set the Internet ablaze when he tweeted: “If you’re not concerned about AI safety, you should be. Vastly more risk than North Korea.”
The group of 116 AI experts warned in an open letter to the UN Convention on Certain Conventional Weapons that “lethal autonomous weapons threaten to become the third revolution in warfare.” Speaking on behalf of companies that make artificial intelligence and robotic systems that may be repurposed to develop autonomous weapons, they wrote, “We feel especially responsible in raising this alarm.”
The blunt talk by leaders of the AI world has raised eyebrows. Musk has placed AI in the category of existential threat and is demanding decisive, immediate regulation. But even some signatories of the letter now say Musk took the fearmongering too far.
What this means for the Pentagon and its massive effort to integrate intelligent machines into weapon systems remains unclear. The military envisions a future of high-tech weapon systems powered by artificial intelligence, with autonomous weapons ubiquitous in the air, at sea, on the ground, and in cyberspace.
The United Nations has scheduled a November meeting to discuss the implications of autonomous weapons. It has created a group of governmental experts on “lethal autonomous weapon systems.” The letter asked the group to “work hard at finding means to prevent an arms race in these weapons, to protect civilians from their misuse, and to avoid the destabilizing effects of these technologies.”