Artificial Intelligence (AI) is becoming ubiquitous, touching nearly every aspect of our daily lives. Although the technology provides society with many useful, even critical applications, some very smart people are concerned about AI, including such well-known figures as Bill Gates, Professor Stephen Hawking, and Elon Musk, to name a few.

The European Parliament recently recommended that the technology be regulated, to ensure that AI agents will always obey human commands. It also recommended the development of a "kill switch." Yes, that's right, a kill switch: something akin to a big red button that can be pushed in case of an emergency. In fact, researchers from Google's AI company DeepMind and Oxford University are already working to develop just such a kill switch. They laid out their approach in a recent paper entitled "Safely Interruptible Agents."

The first line of the paper is a masterpiece of understatement: "Reinforcement learning agents interacting with a complex environment like the real world are unlikely to behave optimally all the time." The paper goes on to hypothesize that an AI agent may not want to be shut down, and could try to find a way to avoid being shut down. What? Can they do that?
If you're wondering why researchers would be concerned that an AI agent might not "behave optimally all the time", let's consider two recent examples:
- When an AI agent was programmed to play and win Tetris, it discovered that the only way not to lose was to pause the game indefinitely
- When Microsoft launched a chatbot named Tay, it had to shut the program down in less than 24 hours because Tay started sending out racist and misogynistic tweets
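To make the idea of an interruptible agent a little more concrete, here is a minimal toy sketch, not the algorithm from the DeepMind/Oxford paper: a simple Q-learning agent on a three-state chain, where an external "interrupt" can occasionally override the agent's chosen action and force a safe no-op. Everything here (the chain environment, the rewards, the `interrupt_prob` parameter) is an invented illustration of the general mechanism.

```python
import random

ACTIONS = [0, 1]  # 0 = stay (the "safe" action), 1 = move right
GOAL = 2          # reaching state 2 ends the episode with reward 1

def step(state, action):
    """Environment dynamics for the hypothetical 3-state chain."""
    next_state = min(state + action, GOAL)
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward, next_state == GOAL

def train(episodes=500, interrupt_prob=0.0, alpha=0.5, gamma=0.9,
          eps=0.1, seed=0):
    """Tabular Q-learning; an overseer may interrupt each step."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(GOAL + 1) for a in ACTIONS}
    for _ in range(episodes):
        state = 0
        for _ in range(10):
            # epsilon-greedy action choice
            if rng.random() < eps:
                action = rng.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q[(state, a)])
            # the "kill switch": with some probability, an external
            # overseer overrides the agent and forces the safe action
            if rng.random() < interrupt_prob:
                action = 0
            next_state, reward, done = step(state, action)
            best_next = max(q[(next_state, a)] for a in ACTIONS)
            q[(state, action)] += alpha * (
                reward + gamma * best_next - q[(state, action)])
            state = next_state
            if done:
                break
    return q

q = train()
```

The subtlety the paper addresses is that a naive learner's value estimates are shaped by these interruptions, so it can end up with an incentive to steer away from situations where the overseer tends to press the button; "safe interruptibility" means designing the learning update so that being interrupted doesn't distort what the agent learns.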
In summary, let's all hope the engineers can develop an effective kill switch before our computers go rogue. Now if you'll excuse me, I think I'll go and watch 2001: A Space Odyssey one more time.