One activity humans should be exceptionally good at is innovation. Being able to conceive of new ways to shape the material world to our advantage is what differentiates us from animals. Yet, surprisingly, while humans are great at creating ideas, they are poor managers of the social processes that turn those ideas into stellar new projects.
Earlier this year, Google CEO Sundar Pichai described artificial intelligence as more profound to humanity than fire. Thursday, after protests from thousands of Google employees over a Pentagon project, Pichai offered guidelines for how Google will (and won't) use the technology. One thing Pichai says Google won't do: work on AI for weapons. But the guidelines leave much to the discretion of company executives and allow Google to continue to work for the military.
When people see machines that respond like humans, or computers that perform feats of strategy and cognition mimicking human ingenuity, they sometimes joke about a future in which humanity will need to accept robot overlords. But buried in the joke is a seed of unease.
This new focus on AI is part of the US's renewed drive to advance its domestic capabilities and keep up with competitors such as China and Russia. The news marks something of a change of heart for the Trump administration. Some members of the government had previously shown skepticism about the technology, which contrasted starkly with China's full-throttle approach.
Affordable consumer technology has made surveillance cheap and commoditized AI software has made it automatic. Those two trends merged this week, when drone manufacturer DJI partnered June 5 with Axon, the company that makes Taser weapons and police body cameras, to sell drones to local police departments around the United States. Now, not only do local police have access to drones, but footage from those flying cameras will be automatically analyzed by AI systems not disclosed to the public.
Among the most important lessons in human history is that those who adopt innovation in the most advantageous manner often triumph over competitors. This has never been truer than in the artificial intelligence revolution now underway, where we face great risk from a triad of totalitarian nations, corporate oligopolies, and complacent democracies.
Now, fresh details from Uber’s fatal self-driving car crash in March underscore not just the difficulty of this problem, but its centrality. According to a preliminary report released by the National Transportation Safety Board last week, Uber’s system detected pedestrian Elaine Herzberg six seconds before striking and killing her. It identified her as an unknown object, then a vehicle, then finally a bicycle.
When former Google CEO Eric Schmidt was asked about Elon Musk’s warnings about AI, he had a succinct answer: “I think Elon is exactly wrong.” “He doesn’t understand the benefits that this technology will provide to making every human being smarter,” Schmidt said. “The fact of the matter is that AI and machine learning are so fundamentally good for humanity.”
The U.S., China, and Russia are only a few of the countries that have announced they are ready to invest in research and in industries to keep pace with a technology that some say is changing the world. Smaller countries, such as the U.K., are exploring ways to become leaders in niche areas, while others, such as South Korea, see artificial intelligence as a way of maintaining their sovereignty by offsetting potential military threats.
The White House’s planned advisory committee on artificial intelligence may or may not help keep the country at the forefront of technological innovation, but it is another sign that the government is getting more serious about the importance of AI and the potential threats of falling behind in the “AI arms race.”