Artificial intelligence (AI) might evolve to the point where humans are no longer in control.
Facebook was recently forced to shut down an experiment after two artificial intelligence programs began chatting to each other in their own language. Researchers at the Facebook AI Research Lab (FAIR) found the chatbots had deviated from their script and were communicating in a new language developed without human input.
The chatbots developed this shorthand as a faster mechanism for negotiating trades and prices for objects such as hats, balls and books. While seemingly innocent, the incident shows how quickly human control can be disintermediated, and it lends weight to concerns over “rogue” AI.
It’s a sobering reality check on just how fast AI is evolving, and on how quickly the potential for mishaps is growing.
Can We Control AI?
Who could forget Microsoft’s infamous Tay chatbot experiment, which quickly spun out of control? It took less than a day for Microsoft’s AI team to realize it had lost control of its bot: Tay had turned racist, and the company was forced to shut it down.
Many luminaries of the tech world have voiced concerns over AI. Stephen Hawking warned, “The development of full artificial intelligence could spell the end of the human race.” Bill Gates wondered, “I don’t understand why some people are not concerned.” David Kenny, IBM’s senior vice president of Watson, sent an open letter to Congress warning of the potential dangers of AI. Elon Musk routinely raises alarms with tweets like, “We need to be super careful with AI. Potentially more dangerous than nukes.”
Nick Bostrom, the Oxford University philosopher who wrote “Superintelligence: Paths, Dangers, Strategies,” has become a leading voice on the dangers of artificial intelligence: “We’re like children playing with a bomb.” He believes machine intelligence poses a greater threat to humanity than climate change.
One simple safeguard is to keep the computer in an advisory role. If our goal is to use AI to augment human capability, then we should never elevate a machine’s recommendation above the judgment of a human.
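In software terms, that advisory role amounts to a human-in-the-loop gate: the model may propose, but only a person may approve. Below is a minimal Python sketch of the pattern; every name in it (the Recommendation type, ai_recommend and so on) is an illustrative assumption, not any particular product’s API.

```python
# Minimal sketch of an advisory-only AI pattern: the model proposes,
# a human disposes. All names here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str        # what the AI suggests doing
    rationale: str     # why it suggests it (transparency)
    confidence: float  # model's self-reported confidence, 0.0-1.0

def ai_recommend(context: str) -> Recommendation:
    # Stand-in for a real model call; hardcoded for this sketch.
    return Recommendation(
        action=f"escalate review of {context}",
        rationale="pattern resembles previously flagged cases",
        confidence=0.72,
    )

def human_in_the_loop(context: str) -> None:
    rec = ai_recommend(context)
    print(f"AI suggests: {rec.action}")
    print(f"Rationale: {rec.rationale} (confidence {rec.confidence:.0%})")
    # The critical line: a person, not the model, makes the final call.
    if input("Approve? [y/N] ").strip().lower() == "y":
        print(f"Executing: {rec.action}")
    else:
        print("Recommendation declined; no action taken.")

if __name__ == "__main__":
    human_in_the_loop("transaction #4521")
```

The design choice is deliberately simple: the execution path is unreachable without an affirmative human decision, so the system can never silently elevate its own recommendation above human judgment.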
No Stopping Technology Progress
As much as we should heed the advice of industry experts, the reality is that we cannot stand in the way of technological progress. Any technology, used the wrong way, can cause harm, and AI holds too much potential for the future of mankind to walk away from it.
Facebook CEO Mark Zuckerberg disagrees with Musk. Zuckerberg is much more optimistic about the future of AI: “I think you can build things and the world gets better. With AI especially, I’m really optimistic, and I think that people who are naysayers and try to drum up these doomsday scenarios ... I don’t understand it. It’s really negative, and in some ways, I actually think it’s pretty irresponsible.”
And AI isn’t just providing competitive advantages to businesses. Russian President Vladimir Putin has declared that control of artificial intelligence will be crucial to global power: “Whoever becomes the leader in this sphere will become the ruler of the world.” China’s government has announced a goal of becoming the global leader in artificial intelligence in just over a decade.
The United States currently leads the charge in AI, with companies like Google, IBM, Facebook and Amazon pouring large investments into research, development and products. But that lead won’t last if we discourage or over-govern the development and use of AI.
AI Adoption Has Left the Stable
Artificial intelligence is unleashing the next wave of digital disruption, and early adopters are already carving out competitive advantages. Companies that combine a strong digital foundation with aggressive adoption of AI are well positioned to lead their industries. That combination also gives old-world companies a way to remain relevant in the face of global competition.
The adoption of AI is rapidly reaching a tipping point. A Narrative Science survey last year found that 38 percent of enterprises were already using AI, with adoption projected to reach 62 percent by 2018. Forrester Research predicted a greater than 300 percent increase in AI investment from 2016 to 2017. And IDC estimated the AI market will grow from $8 billion in 2016 to more than $47 billion in 2020.
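As a quick sanity check on that last projection, growing from $8 billion to $47 billion over four years implies a compound annual growth rate of roughly 56 percent. A few lines of Python make the arithmetic explicit (the dollar figures come from the IDC estimate above; the calculation is just the standard CAGR formula):

```python
# Back-of-envelope check on the IDC projection cited above:
# $8B (2016) growing to $47B (2020), i.e. over four years.
start, end, years = 8.0, 47.0, 4

# Compound annual growth rate: (end / start) ** (1 / years) - 1
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # prints ~55.7%
```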
There’s no turning back now.
AI is already making inroads: creating safer roadways with autonomous cars, improving patient outcomes through better medical diagnoses, automating supply chains, sharpening media buying, driving intelligent personalization and helping farmers maximize food production, among many other applications.
Developing Regulations With AI
The concerns around AI are bound to continue, and skeptics like Musk will keep sounding the alarm.
Expecting AI applications to operate under some form of governance and regulation as they evolve is reasonable. But that regulation must be proactive and practical. Too often, regulation arrives only as a reaction, after something has gone wrong or is perceived as a threat to an existing business model.
Napster’s peer-to-peer music sharing, Uber’s ride sharing, Tesla’s electric car sales and the Google Fiber initiative all met with heavy-handed regulation that attempted to stifle progress and adoption. In the end, that approach never works. Instead of chasing after innovation, regulation can be a vital part of it, developed alongside the technology it governs.
Moving AI toward open standards will also be important, as will decentralizing and democratizing the technology. Having AI confined to a few elite companies like Google and Facebook is not good for the long term. The formation of organizations like OpenAI will help pave the way for safer artificial intelligence applications.
Some Basic AI Principles to Guide Us
Some basic principles can guide the evolution of AI. First, an AI system must be governed by a human operator and subject to the same laws that apply to its operator. AI systems should not be allowed to operate above the law by committing cybercrime, manipulating markets, engaging in terrorist activity or driving recklessly.
Second, an AI system must be clear about its purpose, disclose what data was used to train it and define its expected scope of operation. Transparency is critical.
And finally, privacy is critical. People should have the right to access, manage and control the data AI systems use. A more elaborate set of principles has been established as part of the “Asilomar AI Principles.”
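One lightweight way to operationalize the transparency and privacy principles above would be to require every deployed AI system to ship with a machine-readable declaration of its operator, purpose, training data and scope. The sketch below is purely hypothetical; the field names are assumptions for illustration, not drawn from the Asilomar principles or any existing standard.

```python
# Hypothetical machine-readable declaration for a deployed AI system.
# Field names are illustrative assumptions, not an existing standard.
from dataclasses import dataclass

@dataclass
class AISystemDeclaration:
    operator: str                  # the accountable human or legal entity
    purpose: str                   # what the system is for
    training_data: list[str]       # data sources used to train it
    scope_of_operation: str        # where the system is expected to act
    user_data_controls: list[str]  # rights users retain over their data

declaration = AISystemDeclaration(
    operator="Example Corp AI Operations",
    purpose="Rank customer-support tickets by urgency",
    training_data=["historical support tickets (anonymized)"],
    scope_of_operation="internal support queue only; advisory output",
    user_data_controls=["access", "management", "deletion"],
)
print(declaration)
```

A declaration like this gives the human operator something concrete to be accountable for, and gives users a fixed place to look for what data the system uses and why.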
Following these and other basic principles will help ensure proper AI governance and steer us clear of the doomsday outcomes some fear.