Tao Sheng
November 17, 2021
“And mark my words, AI is far more dangerous than nukes. Far. So why do we have no regulatory oversight?”[1] Elon Musk posed this question about artificial intelligence (AI) in 2018. The late theoretical physicist Stephen Hawking shared these concerns, warning that “the development of full artificial intelligence could spell the end of the human race.”[2]
We, as consumers, encounter AI in the videos YouTube recommends on our home pages, in chatbots on certain websites, and in Apple’s personal assistant, Siri. To many, these are simply convenient innovations with little monetary cost. Such everyday applications fall into the category of “narrow,” sometimes called “weak,” AI.[3] The same category covers surveillance devices, weapons development, and self-driving vehicles, as well as advances in medical imaging, automated financial investing, and medical diagnosis. However, there is a completely different side to AI: artificial general intelligence (AGI, referred to simply as AI below unless otherwise stated), which is likely what Musk and Hawking had in mind. These technologies, also termed “strong AI,” raise the prospect of a machine’s intellect surpassing that of humans.
Reactions to Musk’s concerns were mixed. One of the most respected voices on the opposing side is Harvard professor Steven Pinker.[4] In a podcast, Pinker claimed that Musk was overreacting and pointed to Musk’s own Tesla autonomous cars as evidence that AI is not the threat Musk makes it out to be, noting that if Musk were serious about the danger, “he’d stop building those self-driving cars, which are the first kind of advanced AI that we’re going to see.” Pinker has also written about the dangers of dwelling on AI doomsday scenarios, arguing that “false alarms to catastrophic risk can themselves be catastrophic” and that humanity simply cannot worry about everything. He believes AI should be a lower priority and, most importantly, that the psychological toll of believing our way of life faces an inevitable end is itself dangerous.[5]
While there is debate over whether strong AI can truly be achieved, I argue that the mere development of these technologies, unregulated, poses a risk to all nations. Musk argues, “[People] say, ‘What harm can a deep intelligence in the network do?’ Well, it can start a war by doing fake news and spoofing email accounts and doing fake press releases and by manipulating information.”[4] And for those who doubt that machines can surpass human intelligence, in narrow domains they already have: DeepMind’s AlphaZero, just by playing thousands of games against itself, reached superhuman Elo ratings in chess and comparable strength in Go within days. As it incrementally learns to play better, one can imagine that growth compounding almost exponentially.
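As an illustrative aside, not drawn from the cited sources: the Elo rating mentioned above is simply a statistical measure of relative playing strength, in which a rating rises when a player wins more often than the rating gap predicts. A minimal sketch of the standard formulas, assuming the conventional 400-point scale and a rating adjustment factor K:

$$E_A = \frac{1}{1 + 10^{(R_B - R_A)/400}}, \qquad R_A' = R_A + K\,(S_A - E_A)$$

Here \(R_A\) and \(R_B\) are the two players’ ratings, \(E_A\) is player A’s expected score, and \(S_A\) is the actual result (1 for a win, 0.5 for a draw, 0 for a loss). Self-play systems like AlphaZero are benchmarked on this same scale, which is how “superhuman” strength is quantified.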
While both sides make valid points about the future of AI, it is clear that the technology is not only coming but already here. An AI doomsday is unlikely to be a mere few years away, but we can be certain that if we are ill-equipped to deal with the societal changes these technologies bring, there will be unforeseen consequences. AI can drive job automation, privacy and security concerns, algorithmic bias (systematic and repeated errors that produce unfair outcomes across demographic lines such as socioeconomic status), and weapons development and arms races.
Musk argues that while government regulation and heavy oversight are not what he typically advocates (they carry downsides such as high cost, heavy resource demands, and the potential to hinder fair competition), curbing the risk AI poses makes them worthwhile. I also believe regulation should be at the forefront of the conversation about how to deal with AI, but it should target not the research or innovation itself but its application, as Martin Ford puts it.[6] A common argument against regulating American AI is that the US would lose its position as the leader in AI to China. With Ford’s style of regulation, the US could retain that position while still maintaining oversight.
Of course, all of this would be an enormous undertaking requiring deep domain knowledge from subject matter experts across the field. For this reason, I believe AI researchers, like physicians during the COVID-19 pandemic, have a responsibility to communicate their views clearly to the American people and to lawmakers of both parties. It is also why I believe the United States Congress is unfit for this task: acquiring the domain knowledge required to make informed policy decisions on cutting-edge technology would be daunting and unproductive for legislators. If Congress were to convene subject matter experts in AI (and in other sectors such as law, medicine, and sociology) to judge the validity of each AI development or deployment case by case, the effort required would be monumental. Nor should Congress be trusted to keep policy moving at the necessary pace, as social media’s continued run of near-total non-regulation demonstrates.
Indeed, Rob Toews, in his opinion piece on how the US should regulate AI, is right that the most productive change should come from a federal agency (e.g., the FDA, EPA, or SEC).[7] Historically, such agencies have been created only after major disasters or a sharp deterioration in some area. What we can push for now is a break from that tradition: creating an effective federal agency before the disaster.
Regulation, in my eyes, would center on transparent communication from the developers of the technology. More specifically, I subscribe to the ideas put forward by Oren Etzioni: developers ought to disclose their development decisions, clearly state whenever a non-human AI is operating, and clearly define whether a technology qualifies as strong AI so that it can be held to the same laws as its developers.[8] Thoroughly disclosing development decisions is currently the most effective way to confirm that an AI technology is being built for justifiable reasons and can be deployed without harm. In the same spirit of transparency, AI systems should announce themselves as non-human, since chatbots are becoming increasingly human-like. Lastly, it would be illogical to give highly capable AI a free pass from the law; self-driving cars, for example, still have a responsibility to obey traffic laws. Holding the technology to the law gives companies an incentive to test it thoroughly, because the consequences fall on them and their developers. It would also push companies to maintain rigorous quality assurance programs for any strong AI, much as clothing is inspected before sale. Liability suits against companies accused of “unfair implementation” of strong AI could also be worth exploring.
A governing federal agency working in collaboration with other divisions and with subject matter experts from all fields, regulating mostly the implementation of AI rather than its innovation, seems like the best compromise. While AI or AGI may not be the end of humanity as Musk suggests, left unregulated it will certainly alter the relationship between technology and humanity as we know it. Change is not intrinsically harmful; we should be prepared and equipped to adapt, and to use this novel technology as a chance to better our lives and improve conditions for the next generation.
[1] Clifford, Catherine. “Elon Musk: ‘Mark My Words — A.I. Is Far More Dangerous than Nukes.’” CNBC, March 13, 2018. https://www.cnbc.com/2018/03/13/elon-musk-at-sxsw-a-i-is-more-dangerous-than-nuclear-weapons.html.
[2] “Stephen Hawking Warns Artificial Intelligence Could End Mankind.” BBC News, December 2, 2014, sec. Technology. https://www.bbc.com/news/technology-30290540.
[3] DeepAI. “Narrow AI,” May 17, 2019. https://deepai.org/machine-learning-glossary-and-terms/narrow-ai.
[4] Clifford, Catherine. “Elon Musk Responds to Harvard Professor Steven Pinker’s Comments on A.I.: ‘Humanity Is in Deep Trouble.’” CNBC, March 1, 2018. https://www.cnbc.com/2018/03/01/elon-musk-responds-to-harvard-professor-steven-pinkers-a-i-comments.html.
[5] “Doomsday Is (Not) Coming: The Dangers of Worrying about the Apocalypse.” The Globe and Mail. Accessed October 26, 2021. https://www.theglobeandmail.com/opinion/the-dangers-of-worrying-about-doomsday/article38062215/?utm_medium=Referrer:+Social+Network+/+Media&utm_campaign=Shared+Web+Article+Links.
[6] “7 Risks of Artificial Intelligence You Should Know.” Built In. Accessed October 26, 2021. https://builtin.com/artificial-intelligence/risks-of-artificial-intelligence.
[7] Toews, Rob. “Here Is How The United States Should Regulate Artificial Intelligence.” Forbes. Accessed October 26, 2021. https://www.forbes.com/sites/robtoews/2020/06/28/here-is-how-the-united-states-should-regulate-artificial-intelligence/.
[8] Etzioni, Oren. “Opinion | How to Regulate Artificial Intelligence.” The New York Times, September 2, 2017, sec. Opinion. https://www.nytimes.com/2017/09/01/opinion/artificial-intelligence-regulations-rules.html.
Photo by Tara Winstead on Pexels.com