Drone technology is moving forward, whether we like it or not. MQ-9 Reapers manufactured by General Atomics are sold to the U.S. Air Force and fitted with Hellfire missiles supplied by Lockheed Martin. The military-industrial complex is ticking, unmanned aerial vehicles are soaring, and all is not quiet on the Western front. Few places are quiet on the Eastern half of the world, either. Drone strikes pepper Pakistan, Yemen, Somalia, Libya, and Afghanistan as the world becomes an all-access battlefield where, for the first time in human history, remote-controlled homicide can be carried out with minimal effort.
Things are changing. Warfare has been altered forever. Machines are learning...how to learn. Humans are doing less of the hunting and killing, delegating those duties to tougher, colder customers. The purpose of this series is to examine the players, characters, and ideologies that are deeply influencing the way our future is shaping up, in both negative and positive ways. While one drone strike kills an innocent child in a foreign village, another drone is used for ocean exploration and hurricane detection. We will enter the eye of the storm of controversial issues and attempt to chart a course through territory that pits the right to due process against the rich vein of untapped A.I. (artificial intelligence) technology - territory that kicks up dirt on greedy politicians, lobbyists, and arms dealers who would rather push a button than fight a war themselves. If you think the United States is winning... I'll only tell you this once: the new drone order is only just beginning, and all is buzzing on the geopolitical front.
Editor’s Note: BFP welcomes Erik Moshe to its team. Future articles in Erik’s new series will be available only to BFP activist members.
The New Drone Order: Part I - A.I. Entities, Our Future Friends or Enemies?
Steve Omohundro is a scientist, entrepreneur, and jack of many trades. He has degrees in Physics and Mathematics from Stanford and a Ph.D. in Physics from U.C. Berkeley, and he can be seen online contributing to a wide variety of podcasts, discussions, conferences, and foundations. One of his goals is to ensure a smooth transition of autonomous robots into our lives without mucking up our own livelihoods in the process. His company, Self-Aware Systems, started out helping search engines search better, but over time he and his team built a system for reading lips and a system for controlling robots. If he ever owns a cyborg in the near future and is able to program it himself, it will not be cold-hearted. I'm confident it would be a warm, hospitable homemaker with culinary and family-therapy skills to boot.
"The particular angle that I started working on is systems that write their own programs. Human programmers today are a disaster. Microsoft Windows for instance crashes all the time. Computers should be much better at that task, and so we develop systems that I call self-improving artificial intelligence, so that's AI systems that understand their own operation, watch themselves work, and envision what changes to themselves might be improvements and then change themselves," Steve says.
In addition to his scientific work, Steve is passionate about human growth and transformation. He holds the vision that new technologies can help humanity create a more compassionate, peaceful, and life-serving world. He is one of the men and women behind the scenes doing their very best to ensure that killer robots never reach an operable level - or at least not before we're ready to handle them as a species. His "Safe-AI Scaffolding Strategy" is one of his main proposed solutions, and a positive way forward.
You can call him an expert in the field of FAI, or friendly artificial intelligence, which is "a hypothetical artificial general intelligence that would have a positive rather than negative effect on humanity.” The term was coined by Eliezer Yudkowsky to discuss superintelligent artificial agents that reliably implement human values.
Getting an entity with artificial intelligence to do what you want is a task that researchers at the Machine Intelligence Research Institute (MIRI) in Berkeley, California, are taking on. The program’s aim is to make advanced intelligent machines behave as humans intend even in the absence of immediate supervision. In other words, “take initiative, but be like us.”
Yudkowsky realized that the more important challenge was figuring out how to do that safely, by getting AIs to incorporate our values into their decision making. "It caused me to realize, with some dismay, that it was actually going to be technically very hard," Yudkowsky says. “Even if an AI tries to exterminate humanity,” it is “outright silly” to believe that it will “make self-justifying speeches about how humans had their time, but now, like the dinosaur, have become obsolete. Only evil Hollywood AIs do that.”
Adam Keiper and Ari N. Schulman, editors of the technology journal The New Atlantis, say that it will be impossible to ever guarantee "friendly" behavior in AIs because problems of ethical complexity will not yield to software advances or increases in computing power.
Steve differs in that he is wholly optimistic about the subject. He thinks intelligent robotics will eliminate much human drudgery and dramatically improve manufacturing and wealth creation. Intelligent biological and medical systems will improve human health and longevity, and educational systems will enhance our ability to learn and think (pop quizzes won’t stand a chance). Intelligent financial models will improve financial stability, and legal models will improve the design and enforcement of laws for the greater good. He feels it's a great time to be alive and involved with technology. With the safety measures he has developed, Steve hopes to merge machine intelligence with positive psychology - a field that's only a few decades old but has already given us many insights into human happiness.
Cautious attitudes in an evolving drone age
In an article on Vice’s Motherboard entitled "This Drone Has Artificial Intelligence Modelled on Honey Bee Brains", we can see firsthand how bizarre science can get, and how fast we are progressing with machine intelligence.
Launched in 2012, the Green Brain Project aims to create the first accurate computer model of a honey bee brain, and transplant that onto a UAV.
Researchers from the Green Brain Project—which recalls IBM’s Blue Brain Project to build a virtual human brain—hope that a UAV equipped with elements of a honey bee’s super-sight and smell will have applications in everything from disaster zone search and rescue missions to agriculture.
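The Green Brain researchers are modeling bee neurology in far more detail, but one well-documented bee flight strategy gives a flavor of why engineers care: honey bees regulate flight speed by holding perceived optic flow roughly constant, slowing down as their surroundings close in. Here is a toy sketch of that single idea - the numbers and function are invented for illustration, not taken from the project:

```python
# Toy sketch of one bee-inspired control law (not the Green Brain
# Project's actual model): hold perceived optic flow roughly constant,
# which makes the vehicle slow down as surroundings get closer.
TARGET_FLOW = 2.0  # desired optic flow in rad/s (hypothetical value)

def forward_speed(distance_to_surroundings_m: float) -> float:
    """Optic flow ~ speed / distance, so speed = target_flow * distance."""
    return TARGET_FLOW * distance_to_surroundings_m

for d in (5.0, 2.0, 0.5):
    print(f"surroundings at {d} m -> fly at {forward_speed(d):.1f} m/s")
```

A UAV guided this way would automatically creep through cluttered disaster rubble and speed up over open fields, with no explicit speed planning at all.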
Experts, from physicist Stephen Hawking to software architect Bill Joy, warn that if artificial intelligence technology continues to be developed, it may spiral out of human control. Tesla founder Elon Musk calls artificial-intelligence development simply “summoning the demon.”
British inventor Clive Sinclair told the BBC: "Once you start to make machines that are rivaling and surpassing humans with intelligence, it's going to be very difficult for us to survive. It's just an inevitability."
"I am in the camp that is concerned about super intelligence," Bill Gates wrote. "First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that, though, the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don't understand why some people are not concerned."
Are we jumping the gun with all of this talk of sentient robots triggering an apocalypse? Rodney Brooks, an Australian roboticist and co-founder of iRobot, thinks so. He views artificial intelligence as a tool, not a threat. In a blog post, he said:
Worrying about AI that will be intentionally evil to us is pure fear mongering. And an immense waste of time.
In order for there to be a successful volitional AI, especially one that could be successfully malevolent, it would need a direct understanding of the world, it would need to have the dexterous hands and/or other tools that could out manipulate people, and to have a deep understanding of humans in order to outwit them. Each of these requires much harder innovations than a winged vehicle landing on a tree branch. It is going to take a lot of deep thought and hard work from thousands of scientists and engineers. And, most likely, centuries.
In an interview with The Futurist, Steve talked about the best and worst case scenarios for a fully powerful AI. He said:
I think the worst case would be an AI that takes off on its own, its own momentum, on some very narrow task and works to basically convert the world economy and whatever matter it controls to focus on that very narrow task, that it, in the process, squeezes out much of what we care most about as humans. Love, compassion, art, peace, the grand visions of humanity could be lost in that bad scenario. In the best scenario, many of the problems that we have today, like hunger, diseases, the fact that people have to work at jobs that aren't necessarily fulfilling, all of those could be taken care of by machine, ushering in a new age in which people could do what people do best, and the best of human values could flourish and be embodied in this technology.
Autonomous technology for the greater human good
Steve’s primary concern has been to incorporate human values into new technologies to ensure that they have a beneficial effect. In his paper, “Autonomous Technology and the Greater Human Good”, the most downloaded academic article ever in the Journal of Experimental and Theoretical Artificial Intelligence, Steve summarized the possible consequences of a drone culture that’s moving too swiftly for its own good:
Military and economic pressures for rapid decision-making are driving the development of a wide variety of autonomous systems. The military wants systems which are more powerful than an adversary's and wants to deploy them before the adversary does. This can lead to ‘arms races’ in which systems are developed on a more rapid time schedule than might otherwise be desired.
A 2011 US Defense Department report with a roadmap for unmanned ground systems states that ‘There is an ongoing push to increase unmanned ground vehicle autonomy, with a current goal of supervised autonomy, but with an ultimate goal of full autonomy’.
Military drones have grown dramatically in importance over the past few years both for surveillance and offensive attacks. From 2004 to 2012, US drone strikes in Pakistan may have caused 3176 deaths. US law currently requires that a human be in the decision loop when a drone fires on a person, but the laws of other countries do not. There is a growing realization that drone technology is inexpensive and widely available, so we should expect escalating arms races of offensive and defensive drones. This will put pressure on designers to make the drones more autonomous so they can make decisions more rapidly.
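To make "a human in the decision loop" concrete in software terms, here is a minimal, entirely hypothetical sketch - all names and structure are invented for illustration - of the difference between the supervised autonomy U.S. law currently requires and the full autonomy the roadmap describes:

```python
from dataclasses import dataclass

@dataclass
class Target:
    label: str
    confidence: float  # the system's own classification confidence

def human_approves(target: Target) -> bool:
    """Supervised autonomy: a human operator must explicitly confirm."""
    answer = input(f"Authorize engagement of {target.label} "
                   f"(confidence {target.confidence:.0%})? [y/N] ")
    return answer.strip().lower() == "y"

def decide(target: Target, fully_autonomous: bool = False) -> str:
    # Full autonomy deletes the human gate entirely: the one-line
    # change that arms-race pressure pushes designers toward.
    if fully_autonomous or human_approves(target):
        return f"engage {target.label}"
    return f"hold fire on {target.label}"

print(decide(Target("test-target", 0.87)))
```

The unsettling point of the sketch is how small the software difference is: removing the human gate is a single flag, while the pressure to remove it, as Steve's paper notes, comes from the speed of the adversary's systems.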
Thoughts on Transhumanism
In an interview featured on Bullet Proof Exec, Steve briefly expressed his views on transhumanism, the cultural and intellectual movement that holds we can, and should, improve the human condition through the use of advanced technologies:
My worry is that we change too rapidly. I guess the question is, how do we determine what changes are like, “Yeah, this is a great improvement that’s making us better.” What are changes like, let’s say, you have the capacity or the ability to turn off conscience and to be a good CEO, well, you turn off your conscience so you could make those hard decisions. That could send humanity down into a terrible direction. How do we make those choices?
Interview with Dr. Steve Omohundro
I had the privilege of speaking with Steve, and here's what he had to say.
BFP: Thanks for taking the time to speak with us today. You have an interesting last name. If I may ask, where does it come from?
Steve: We don't know! My great grandfather wrote a huge genealogy in which he traced the name back to 1670 in Westmoreland County, Virginia. The first Omohundro came over on a ship and had dealings with Englishmen but we don't know where he came from or the origins of the name.
BFP: How have drones changed our world?
Steve: I think it's still very early days. The military uses of drones, both for surveillance and for attack, have already had a big effect. Here's an article stating that 23 countries have developed or are developing armed drones and that within 10 years they will be available to every country.
On the civilian side, agricultural applications like inspecting crops have the greatest economic value currently. Drones are also being used for innovative shots in movies and commercials and for surveillance. They can deliver life-saving medicine more rapidly than an ambulance can. They can rapidly bring a life preserver to a drowning ocean swimmer. They are being used to monitor endangered species and to watch for forest fires. I'm skeptical that they will be economical to use for delivery in situations which aren't time-critical, however.
BFP: Do you think artificial intelligence is possible in our lifetime?
Steve: I define an "artificial intelligence" as a computer program that can take actions to achieve desired goals. By that definition, lots of artificially intelligent systems already exist and are rapidly becoming integrated into society. Siri's speech recognition, self-driving cars, and high-frequency trading all have a level of intelligence that existed only in research systems a decade ago. These systems still don't have a human-level general understanding of the world, however. Researchers differ on when that might occur. A few believe it will be impossible, but most predict it will happen sometime in the next 5 to 100 years. Beyond the ability to solve problems are human characteristics like consciousness, qualia, creativity, aesthetic sense, etc. We don't yet know exactly what these are, and some people believe they cannot be automated. I think we will learn a lot about these qualities and about ourselves as we begin to interact with more intelligent computer systems.
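By that definition, even a few lines of code can count as a rudimentary AI. As a minimal, invented illustration - the grid world and goal below are hypothetical, not from any of Steve's systems - the following program "takes actions to achieve a desired goal" by searching for a route to a target:

```python
from collections import deque

# A goal-directed agent in Steve's sense: it takes actions (moves on
# a small grid) to achieve a desired goal (reaching the exit, G).
GRID = ["S..#",
        ".#.#",
        "...G"]

def plan(grid):
    rows, cols = len(grid), len(grid[0])
    start = next((r, c) for r in range(rows) for c in range(cols)
                 if grid[r][c] == "S")
    frontier = deque([(start, [])])  # (position, actions taken so far)
    seen = {start}
    while frontier:
        (r, c), actions = frontier.popleft()
        if grid[r][c] == "G":  # desired goal achieved
            return actions
        for dr, dc, move in ((1, 0, "down"), (-1, 0, "up"),
                             (0, 1, "right"), (0, -1, "left")):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != "#" and (nr, nc) not in seen):
                seen.add((nr, nc))
                frontier.append(((nr, nc), actions + [move]))

print(plan(GRID))  # -> ['down', 'down', 'right', 'right', 'right']
```

The gap Steve points to is between this kind of narrow goal-seeking, which is routine today, and a general understanding of the world, which remains an open research question.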
BFP: According to a report published in March by the Association for Unmanned Vehicle Systems International, drones could create as many as 70,000 jobs and have an overall economic impact of more than $13.6 billion within three years. That means, the report says, that each day U.S. commercial drones remain grounded represents a $28-million lost opportunity. If these economic projections prove accurate, do you see a prosperous industry on the horizon as well?
Steve: I believe they could have that impact, but $13.6 billion is a small percentage of GDP. The societal issues they raise around surveillance, accidents, terrorism, etc. are much larger than that, though. For a prosperous industry to emerge, those social issues need to be carefully thought through and solved.
BFP: Do you think that autonomous robot usage will spin out of control without implementation of the Safe-AI Scaffolding Strategy that you and your colleagues formulated?
Steve: Autonomous robots have the potential to be very powerful. They may be used for many beneficial applications but also could create great harm. I'm glad that many people are beginning to think carefully about their impact. I believe we should create engineering guidelines to ensure that they are safe and have a positive impact. The "Safe-AI Scaffolding Strategy" is an approach we have put forth for this but other groups have proposed alternative approaches as well. I'm hopeful that we will develop a clear understanding of how to develop these systems safely by the time that we need it.
BFP: Drones have landed on the White House lawn and in front of Angela Merkel. Where they might land next is unpredictable, but this uncertainty is a reminder that governments around the world are still trying to find their balance with an emerging technology of this scale and breadth of application. In what positive ways do you think drones can affect the world, or the work that you are involved in?
Steve: Flying drones are just one of many new technologies that have both positive and harmful uses. Others include drone boats, self-driving vehicles, underwater drones, 3-D printing, biotechnology, nanotechnology, etc. Human society needs to develop a framework for managing these powerful technologies safely. Nuclear technology is also dual-use and has been used both to provide power and to create weapons. Fortunately, so far there hasn't been an unintended detonation of a nuclear bomb. But the recent book "Command and Control: Nuclear Weapons, the Damascus Accident, and the Illusion of Safety" tells a terrifying cautionary tale. Among many other accidents, in 1961 the U.S. Air Force inadvertently dropped two hydrogen bombs over North Carolina, and three of the four safety switches on one of them failed.
If we can develop design-rules that ensure safety, drones and other autonomous technologies have a huge potential to improve human lives. Drones could provide a rapid response to disaster situations. Self-driving vehicles could eliminate human drudgery and prevent many accidents. Construction robots could increase productivity and dramatically lower the cost of homes and manufactured goods.
BFP: Have you read any science fiction books that expanded your perspective on A.I.? In general, what would you say got you into it?
Steve: I haven't read a lot of science fiction. Marshall Brain's "Manna: Two Views of Humanity's Future" is an insightful exploration of some of the possible impacts of these technologies. I got interested in robots as a child because my Mom thought it would be great to have a robot to do the dishes for her, and I thought that might be something I could build! I got interested in AI as a part of investigating general questions about the nature of thought and intelligence.
BFP: You recently showed me a video of a supercharged drone with advanced piloting tech that can reach speeds of 90 miles per hour and costs about $600. Could you see yourself going out and buying a quadcopter like that, maybe having a swarm of drones spell out "Drones for the Greater Good" in the sky? Or would you rather keep your distance from the "Tasmanian devil" drone?
Steve: I haven't been drawn to experimenting with drones myself, but I have friends who have been using them to create aerial light shows and other artistic displays. The supercharged 90 mph drone is both fascinating and terrifying. Watching the video, you clearly get the sense that controlling the use of those will be a lot more challenging than many people currently realize.
BFP: I've also seen a quadrotor with a machine gun.
Steve: Wow, that one is also quite scary. What's especially disturbing is that it doesn't appear to require huge amounts of engineering expertise to build this kind of system and yet it could obviously cause great harm. These kinds of systems will likely pose a challenge to our current social mechanisms for regulating technology.
# # # #
*Watch Steve's TEDx video from May 2012: Smart Technology for the Greater Good
Erik Moshe is a BFP investigative journalist and analyst. He is an independent writer from Hollywood, Florida, and has worked as an editor of the alternative news blog Media Monarchy and as an intern journalist with the History News Network. He served in the U.S. Air Force from 2009 to 2013. You can visit his site here.