Treat AI like a biological weapon, not a nuclear one

Humans today are developing perhaps the most powerful technology in our history: artificial intelligence. The societal harms of AI, including discrimination, threats to democracy and concentration of influence, are already well documented. Yet major AI companies are engaged in an arms race to build increasingly powerful AI systems that will amplify these risks at a pace we have never seen before.

As our leaders grapple with how to contain and control the development of AI and associated risks, they should consider how regulations and standards have enabled humanity to capitalize on innovations in the past. Regulation and innovation can coexist and, especially when lives are at stake, it is imperative that they do.

Nuclear technology provides a cautionary tale. Although nuclear power is more than 600 times safer than oil in terms of human mortality and capable of huge accomplishments, few countries will touch it because the public has met the wrong family member first.

We were introduced to nuclear technology in the form of atomic and hydrogen bombs. These weapons, which represent the first time in human history that we developed a technology capable of ending human civilization, were the product of an arms race that favored speed and innovation over safety and control. Subsequent failures of proper safety engineering and risk management, most famously in the nuclear disasters at Chernobyl and Fukushima, destroyed any chance of widespread acceptance of nuclear power.

Despite the overall risk assessment of nuclear power remaining very favorable and decades of effort to convince the world of its feasibility, the word nuclear remains tainted. When a technology causes harm in its nascent stages, societal perception and regulatory overreaction can permanently reduce the potential benefits of that technology. Due to a handful of initial missteps with nuclear power, we have been unable to capitalize on its clean and safe energy, and carbon neutrality and energy stability remain a pipe dream.

But in some areas we have succeeded. Biotechnology is a field with incentives to move quickly: patients suffer and die every day from diseases that lack cure or treatment. Yet the ethos of this research is not to move fast and break things, but to innovate as fast and as safely as possible. The speed limit of innovation in this field is determined by a system of prohibitions, regulations, ethics and norms that guarantee the well-being of society and individuals. It also protects the industry from being crippled by the backlash of a catastrophe.

In banning biological weapons at the Biological Weapons Convention during the Cold War, the opposing superpowers were able to come together and agree that the creation of these weapons was in no one’s best interest. Leaders saw that these uncontrollable but highly accessible technologies should not be treated as a mechanism for winning an arms race, but as a threat to humanity itself.

This lull in the biological weapons arms race has allowed research to develop at a responsible pace, and scientists and regulators have been able to implement rigorous standards for any new innovation capable of causing human harm. These regulations have not come at the expense of innovation. Instead, the scientific community has established a bioeconomy, with applications ranging from clean energy to agriculture. During the COVID-19 pandemic, biologists translated a new type of technology, mRNA, into a safe and effective vaccine at a rate unprecedented in human history. When significant harm to individuals and society is at stake, regulation does not impede progress; it enables it.

The stakes here are high: in a recent survey of artificial intelligence researchers, 36% said they believe AI could cause a nuclear-level catastrophe. Despite this, the government response and movement towards regulation has been slow at best. This pace is no match for the rapid adoption of the technology, with ChatGPT now exceeding 100 million users.

This rapidly escalating AI risk landscape has led 1,800 CEOs and 1,500 professors to sign a recent letter calling for a six-month pause on developing even more powerful AI and for an urgent start to regulation and risk mitigation. Such a pause would give the global community time to reduce the damage already caused by AI and to avert potentially catastrophic and irreversible impacts on our society.

As we work towards a risk assessment of the potential harms of AI, the loss of positive potential should be included in the calculation. If we take steps now to develop AI responsibly, we could reap tremendous benefits from the technology.

For example, we have already seen glimpses of AI transforming drug discovery and development, improving the quality and cost of healthcare, and increasing access to doctors and medical care. Google’s DeepMind has demonstrated that AI can solve fundamental problems in biology that have long eluded human minds. And research has shown that artificial intelligence could accelerate the achievement of each of the United Nations Sustainable Development Goals, moving humanity towards a future of better health, equity, prosperity and peace.

This is a moment for the global community to come together, just as we did fifty years ago at the Biological Weapons Convention, to ensure the safe and responsible development of AI. If we don’t act soon, we risk dooming a bright future with AI, and with it our current society.


Emilia Javorsky, MD, MPH, is a physician-scientist and director of Multistakeholder Engagements at the Future of Life Institute, which has published recent open letters warning that AI poses an extinction risk to humanity and arguing for a six-month pause on AI development.
