New York — Ambassador Anwarul K. Chowdhury is a former Under-Secretary-General and High Representative of the United Nations and the Founder of the Global Movement for The Culture of Peace.
Recently, when I was asked to offer my thoughts on the phenomenal advances of artificial intelligence (AI) and whether the United Nations could play a role in its global governance, I was reminded of the Three Laws of Robotics, the set of rules devised by science fiction author Isaac Asimov and introduced in his 1942 short story.
I told myself that sci-fi has now met real life. The first law lays down the most fundamental principle: “A robot may not injure a human being or, through inaction, allow a human being to come to harm.” That 80-year-old norm would serve the present-day world of AI well.
AI in control:
AI is exciting and, at the same time, frightening. The implications and potential evolution of AI are enormous, to say the least. We have reached a turning point in human history: even at this point in time, AI is in many respects smarter than humans.
Already, even “primitive” AI controls many aspects and activities of our daily lives, irrespective of where on this planet we live. Our global connectivity at the personal level – emails, calendars, ride-hailing services such as Uber, GPS navigation, shopping and many other activities – is now run by AI.
Then think of social media and how it influences our thinking and our interactions; it has injected an obvious and dangerous uncertainty that has already caused considerable problems for social order and much mental stress.
AI dependent humanity:
Humankind is now almost fully AI-dependent in one way or another. Think how helpless we would be without an AI-driven smartphone in our hands. AI is the fastest-growing tech sector and is expected to add USD 15 trillion to the world economy in the next five to seven years.
Even at their current stage of development, the various AI chatbots launched by OpenAI, Google and others in recent months have alarmed well-meaning experts. When asked about the future of AI, the experts came out with the honest answer: “We do not know”.
They are of the opinion that at this point one can envisage developments only for the next five years; beyond that, nothing can be predicted. People talk about GPT-4 as the upcoming next level of AI, but it may already be here.
AI’s limitless, unregulated potential:
AI’s potential is so limitless that it has been compared to the arms race, in which nations are engaged in an endless quest for security and power by acquiring ever more numerous and effective armaments.
For AI, however, the main actors are the tech giants, which command enormous resources and are not ethically driven. They are in this AI race for profit – only profit – and, as a corollary, for unaccountable power to dominate human activities.
Shockingly, there are no rules, no regulations and no laws governing the AI sector. It is a free-for-all, comparable to the “Wild West”.
Nukes and AI:
Experts have compared AI with the advent of nuclear technology, which could be put to good use for humanity’s benefit or used for its annihilation. They have even gone to the extent of calling AI a more potent weapon of mass destruction than nuclear weapons. Nukes cannot produce more powerful nukes, but AI can generate more powerful AI – it is self-empowering, so to say.
The worry is that, as AI becomes more powerful on its own, it cannot be controlled; rather, it would have the capability to control humans. As with nuclear technology, we cannot “uninvent” AI. So the not-yet-fully-known risk from these cutting-edge technologies continues.
While recognizing the many possible beneficial uses of AI – in medicine, weather prediction, mitigating the impacts of climate change and many other areas – experts are sounding the alarm that AI superintelligence would be an “existential threat”, possibly far more catastrophic and more imminent than the ongoing, ever-challenging climate crisis.
The main worry is that, in the absence of global governance and regulatory arrangements, bad actors can engage AI for motives other than what is good for society, for individuals and for our planet in general. As we know, the tech giants are not driven by these positive objectives.
AI could have seriously disruptive effects. This May, for the first time in history, the US unemployment figures cited AI as a reason for job losses.
Bad actors without guardrails:
Bad actors without any guardrails can abuse the power of AI to generate an avalanche of misinformation that negatively influences the opinions of large segments of humanity, thereby disrupting, say, electoral processes and destroying democracy and democratic institutions. Without a regulatory system, AI technology – say, in the area of chemical knowledge – can be used to make chemical weapons.
We need to realize that AI is remarkably good at constructing convincing narratives on any subject, and anybody can be fooled by them. As humans are not always rational, their use of AI cannot always be rational and positive. Bad actors have to be controlled so that AI does not pose a threat to humanity.
United Nations to lead AI global governance:
All these points weigh very much in favour of a global governance. If I am asked who should take the lead on this, my emphatic reply would be “the United Nations, of course!”
The UN’s expertise, credibility and universality as a global norm-setting organization give it an obvious role in regulatory norm-setting for AI and its evolution.
Moral and ethical issues, as well as fundamental global principles, need to be protected from the onslaught of AI: human rights (particularly the third generation of human rights), the culture of peace, peacebuilding, conflict resolution, good governance, democratic institutions, free and fair elections and many more.
It is equally important to examine and address the implications of the global use of AI for national governments, as it affects the sovereignty of nations. It would be worth exploring whether AI can influence intergovernmental negotiating processes, now or in the future.
UN agencies and implications of their AI-related activities:
Two UN agencies recently announced AI-related activities. UNESCO informed that it had hosted a ministerial-level virtual meeting at the end of May with selected participants, while sharing the statistic that less than 10 percent of educational institutions were using AI. UNESCO described the software tool ChatGPT as “wildly popular”. A UN entity should not have made such an endorsement of a tech giant’s product.
Calling itself the “UN tech agency”, the International Telecommunication Union (ITU) announced that it was convening an “AI for Good Global Summit” in early July to “showcase AI and robot technology as part of a global dialogue on how artificial intelligence and robotics can serve as forces for good”.
The so-called UN tech agency took credit for hosting “the UN’s first robot press conference”, alongside “events with industry executives, government officials, and thought leaders on AI and tech.”
There is a need for a UN system-wide alert providing guidelines for interactions with the tech giants and for entering into collaborative arrangements with them. AI technology is developing so fast that there has to be awareness of possible missteps by one UN entity or another.
Even at its current level of development, AI has moved far beyond ChatGPT and robotics in advancing the profit motivations of the tech giants, and that is a huge worry for all well-meaning people.
These UN entities have overlooked, or even ignored, the part of the Declaration on the commemoration of the seventy-fifth anniversary of the United Nations, adopted as resolution 75/1 by the UN General Assembly on 21 September 2020, which warned that “…When improperly or maliciously used, they can fuel divisions within and between countries, increase insecurity, undermine human rights and exacerbate inequality.” These words of warning should be heeded by all, in all seriousness.
UN Secretary-General’s Our Common Agenda (OCA) refers to AI:
The UN Secretary-General, in his report titled Our Common Agenda (OCA), issued in September 2021, promises “to work with Member States to establish an Emergency Platform to respond to complex global crises. The platform would not be a new permanent or standing body or institution. It would be triggered automatically in crises of sufficient scale and magnitude, regardless of the type or nature of the crisis involved.”
AI is undoubtedly one such “complex global crisis”, and it is high time for the Secretary-General to formally share his thinking on how he plans to address the challenge.
Waiting for the Summit of the Future, convened by the Secretary-General for September 2024, to discuss a global regulatory regime for AI under UN authority will be too late. By that time, AI technology would have manifested itself in a way that makes global governance impossible.
AI genie is out of the bottle:
The AI genie is already out of the bottle – the UN needs to ensure that it serves the best interests of humankind and our planet.
AI’s impact is so widespread and so comprehensive that it is relevant and pertinent to all areas covered in OCA. The issue presses so much upon us that the Secretary-General should come out with his own recommendations as to what should be done, without waiting for next year’s Summit of the Future.
Our future, as impacted by AI, needs to be addressed NOW. AI is advancing at an inconceivable speed and scale. The Secretary-General, as the global leader heading the United Nations, should not downplay the seriousness of the challenge. He needs to set the ball rolling without waiting for a negotiated consensus among Member States.
UN to regulate AI and ensure its effective and efficient global governance:
Among the key proposals OCA identifies across its 12 commitments is to “promote regulation of artificial intelligence” so as to “ensure that this is aligned with shared global values.”
In OCA, the Secretary-General has asserted that “Our success in finding solutions to the interlinked problems we face hinges on our ability to anticipate, prevent and prepare for major risks to come.
This puts a revitalized, comprehensive, and overarching prevention agenda front and centre in all that we do…. Where global public goods are not provided, we have their opposite: global public “bads” in the form of serious risks and threats to human welfare.
These risks are now increasingly global and have greater potential impact. Some are even existential …. Being prepared to prevent and respond to these risks is an essential counterpoint to better managing the global commons and global public goods.”
The global community should be comforted knowing that the leadership of the United Nations already knows well what steps are to be taken at this juncture.