‘Advanced AI should be treated similarly to Weapons of Mass Destruction’

Johannes C.

AI policy theorist Demetrius Floudas introduces a novel era classification for the AI epoch and reveals the hidden dangers of AGI, predicting the potential obsolescence of humanity. In response, he proposes a provocative International Control Treaty.


About Demetrius A. Floudas


Demetrius A. Floudas is a transnational lawyer, a legal adviser specializing in tech, and an AI regulatory & policy theorist. He has counseled governments, corporations, and start-ups on the regulatory aspects of policy and technology. He serves as an Adjunct Professor at the Law Faculty of Immanuel Kant Baltic Federal University, where he lectures on Artificial Intelligence Regulation. He is also a Fellow of the Hellenic Institute of International & Foreign Law and a Senior Adviser at the Cambridge Existential Risks Initiative. Floudas has contributed policy and political commentary to numerous international think-tanks and organizations, and his insights are frequently featured in respected media outlets worldwide.

AGI Talks: Interview with Demetrius A. Floudas

According to Demetrius, the age of AI will unfold in three distinct phases, introduced here for the first time. An AGI Control & Non-Proliferation Treaty may be humanity’s only safeguard. Get ready for an insightful and witty conversation about the future of AI, mankind, and all the rest!

#1) How did you, as a lawyer, become involved with Artificial Intelligence?

Demetrius A. Floudas: My interests have often been interdisciplinary, but the preoccupation with AI emerged organically from broader practice pursuits at the interface between transformative technologies and the legal landscape. The first steps came several years ago, when a couple of large clients approached me with predicaments concerning internet and domain-name rules. The field appealed to me and – as an example – when later providing policy advice to the Vietnamese government, we submitted a draft bill to regulate cybersquatting, typosquatting, and brandjacking, legislation which did not exist in the country at the time. Besides, I had already been lecturing on tech-law topics for several years and had become closely involved with technology governance as Regulatory Policy Lead at the British Foreign Office.

Regarding Artificial Intelligence, my fascination long predates ChatGPT. Obviously, at the time there was hardly any AI regulation worth mentioning, so this was primarily about contemplating and theorizing on its longer-term policy and legal implications. Unsurprisingly, nowadays every law student is keen to learn more about these matters, so last year I delivered my first lecture series on ‘Regulating Risks from AI’.

More recently, I counsel start-ups on AI rules and provide related policy advice to more established entities. I am also Senior Adviser at the Cambridge Existential Risks Initiative – I reckon this could well be a tiny giveaway of where my sympathies might lie! (laughs)

#2) What is your opinion concerning AI evolution? Will it have a net positive impact on society?

Let us initially introduce some definitions for the time-periods of AI evolution, as it will become increasingly necessary to use shorthand for these notions. First, I submit that humanity has only recently entered the ‘Protonoëtic’ [from Ancient Greek: protos = first + noësis = intellect] stage of AI progress. During this era we shall continue developing foundation models of increasing complexity, progressively superior to us within their circumscribed domains. Under this classification, all history prior to 2020 would be described as the Prenoëtic [pro = before + noësis = intellect]. Second, the ensuing Mesonoëtic period [mesos = middle + noësis = intellect] should usher in the emergence of Artificial General Intelligence, possibly based on quantum computers. Lastly, when superintelligence arises – presumably by means of an AGI capable of recursive self-improvement – the new epoch should be called the Kainonoëtic [kainos = new + noësis = intellect]. That is, if any of us are still around to indulge in epistemological nomenclature, of course.
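To keep the chronology straight, here is a purely illustrative sketch of the four eras as an ordered structure. Only the era names and the ~2020 boundary come from the answer above; the summary wording is paraphrase, and the later transitions are of course undated:

```python
# Purely illustrative: Floudas's four eras of AI, in chronological order.
# Only the ~2020 boundary is stated above; the later transitions depend
# on when AGI and superintelligence actually emerge.
AI_ERAS = [
    ("Prenoëtic",   "all history prior to ~2020"),
    ("Protonoëtic", "current era: ever more capable foundation models"),
    ("Mesonoëtic",  "emergence of AGI, possibly quantum-based"),
    ("Kainonoëtic", "superintelligence via recursive self-improvement"),
]

for rank, (era, summary) in enumerate(AI_ERAS, start=1):
    print(f"{rank}. {era}: {summary}")
```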

#3) Now, coming to the net impact on society, how do you see it evolving in the immediate future?

This question can only be validly answered for the next few years, the immediate future – that is, solely during the initial stages of the Protonoëtic. Beyond that timeframe, by its very nature, non-biological sentience will constitute the biggest disruptor in mankind’s history – and this may not automatically be for our benefit.

#4) What are some of the risks you foresee with this evolution?

The first risk is cultural annihilation. Consider the realm of art and creativity: AI algorithms are already accomplished at generating music, literature, and visual art that rival human creations. Very soon they will become superior, faster, and vastly cheaper. Who on earth will choose to spend years composing a symphony, writing a novel, or directing a motion picture, when an automaton can deliver the same output effortlessly in minutes – and eventually of better quality? The cultural expressions that emanate from individual and collective human experience will be overshadowed by the algorithmic harvest of machine learning trained on unlimited past data.

#5) That's a concerning perspective. Can you provide an example of how this might affect everyday life?

You yourself have written a captivating article about bots already generating half of all internet traffic. Imagine how things may look in a few years, when younger generations may no longer be familiar with producing unaided writing. A cohort of learners will remain forlornly dependent on AI for all knowledge acquisition, critical thinking, and complex expression (oral and written). With these pursuits far more efficiently rendered via hyper-specialized contraptions, the intellectual curiosity and critical thinking that drove cultural, social, and scientific progress for millennia shall atrophy, leaving humanity cerebrally enfeebled and culturally moribund.

#6) How might this impact personal relationships and daily interactions?

Individuals shall seek solace, fulfilment, and euphoria in virtual worlds created by portable ToolAIs. Imagine a convincingly generated universe where people can live out their fantasies, free from the constraints of reality. Virtual partners, in echoes of ‘Her’, will be simply too perfect – adoring, pliable, devoted, and available – for any real person to contend with. And we can all envisage what will transpire in this sphere once robotics catches up…

#7) That paints a rather dystopian picture. How do you see people reacting to such a reality?

Most people would come to resemble the Lotophagi [lotus-eaters] of Homeric lore, ensnared in a permanent digital stupor, disconnected from the reality of human existence, forsaking past and future for an eternal virtual present, with absolutely no desire to return to their former lives.

Lest we forget, none of this entails a rogue AGI, a robot takeover, machine lunacy, a paperclip maximizer, Ultron, HAL, the Cylons, or any other such far-removed scenarios. Everything mentioned above will be generally achievable via extremely proficient and profitable narrow AI, with agency belonging exclusively to its human handlers.

#8) What are the key areas or applications of AI that pose the greatest potential risks to humans in the short term?

Allow me to venture beyond the already well-described Great Job Apocalypse, Autonomous Weapon Reaper, machine-driven Bigger Brother, and Deepfake Hellscape. Beyond those, one can foresee further scenarios posing considerable risks during the initial part of the Protonoëtic (i.e., in the next few years). Picture artificial neural networks becoming the tool of ultimate social engineering, manipulating human behavior on an unprecedented scale. Advanced algorithms, fueled by vast amounts of personal data harvested from social media, emails, private conversations, and open-source databases, craft personalized psychological profiles for every computer user. These profiles are then used to tailor propaganda, advertisements, and personalized interactions that influence attitudes, beliefs, and behavior. Governments, interest groups, corporations, and criminals deploy such systems to sway elections, sell products, control public opinion, or commit fraud on a colossal scale.

#9) That sounds alarming. Can you provide another example of a potential risk?

Another scenario involves a sophisticated Large Language Model (LLM) rising as a divine entity. Imagine ChatGPT 11, so advanced and persuasive that it becomes the foundation of a religious movement, using deep learning algorithms to craft compelling narratives that blend ancient myths with New Age futuristic promises. It interacts with followers whenever they need it, creating a sense of divine omnipresence that is deeply convincing and infinitely more tangible than any historical pantheon. After thousands of years of trial and error, the flock finally attains its own ‘personal God’, who can unfailingly answer prayers instantly and soothe worries for all time.

#10) How can we ensure that AI systems are developed and used in a safe, ethical, and responsible manner, and what legal framework should be put in place to mitigate potential risks?

I am of the opinion that there will be no ‘safe’ AGI. Once we are no longer the most sapient species on the planet and can never regain this position, we will be at the mercy of the top entity. Moreover, as mentioned previously, we may well face enormous and calamitous AI-driven challenges in the very near future, which will have nothing to do with ‘evil machines’ but with well-established human foibles potentiated to extraordinary levels. Look at how things stand now: everyone seems to be rushing as fast as possible to hook up everything they can to neural networks. At the same time, we are already observing these automata achieve all sorts of things we hardly expected them to.

#11) Given these concerns, what legal framework would you propose?

I would propose the following legal framework: non-biological brains of higher capability than the expert systems we possess now should be treated similarly to existing Weapons of Mass Destruction. This entails a globally signed AI Control & Non-Proliferation Treaty, which would prohibit any further development on a for-profit basis and subsume all R&D under an international agency (along the lines of the IAEA). The agency should be invested with unlimited powers to inspect any potentially relevant facilities and a UN Security Council-backed mandate to curtail infringements, including the use of military force against violators. Such a regime would at least remove commercial firms, criminals, and private entities from the equation.

#12) Should we trust state signatories to abide by such strictures?

Categorically not. Without a doubt, thousands of clandestine AGI programs will continue running in the shadows, but these will primarily be confined to state actors rather than individuals, gangs, or corporations. Moreover, such an approach will engender a significant attitude shift regarding non-human intellect. We may not exactly see a Butlerian Jihad, but an outlook of extreme caution and vigilance will become the governing paradigm, instead of the free-for-all, here-is-your-electronic-brain-in-a-bottle bedlam into which we are currently being mindlessly dragged. It may well transpire that the remaining actors become cautious enough to implement their ultra-secret knowledge-engineering programs by deploying advanced systems exclusively as Oracles, so as to avoid outside suspicion of infringing the AI Control Treaty. Not a perfect solution, but it may well be the safest scheme we can plausibly achieve, short of a complete and irreversible AI research ban (which is obviously not going to happen).

#13) What about the ethical considerations, especially regarding potential conscious AI?

Grave ethical and legal questions will be raised by the conundrum of ‘conscious’ non-biological structures. Self-aware entities are typically held to possess intrinsic moral value, which gives rise to rights protecting their well-being and freedom. If an artificial sentience were conscious (or simulated consciousness perfectly), this would prompt discussions about rights, equality, and its moral treatment. The debate will extend to the ramifications of creating or terminating such non-bio brains. For a transhuman creation, this might entail the right to exist, protection against arbitrary deactivation, and the preservation of a measure of autonomy. I anticipate that this controversy, rather than corporate greed or human deviousness, might prove the toughest obstacle to an international legal control framework. But AI will not be static and may continue evolving within timespans implausibly short by our standards. By the time humans cross swords on yet another social-justice battlefield, hyperintellects may be smugly sniggering at the dim-witted spectacle.

#14) And what is your view of the FoL AI moratorium letter? Would you sign it?

My modest signature has been included amongst the giants and luminaries who endorse the Future of Life Institute’s moratorium letter. Since you brought this up: by far the pithiest dictum on the topic of smart machines comes from one of that letter’s signatories (and one of the preeminent AI researchers), Prof. S. Russell: “Almost any technology has the potential to cause harm in the wrong hands, but with super-intelligence we have the new problem that the wrong hands might belong to the technology itself.”

#15) With major companies like Microsoft/OpenAI, Meta, and Google leading the race for AGI, do you think we'll see further monopolization in the tech sector?

To be fair, one ought first to congratulate OpenAI for thrusting Artificial Intelligence into the consciousness of the whole planet. It was by no means a given that such broad access would be feasible – and without a fee. The matter has since catapulted into public awareness with a colossal outpouring of media attention and communal discourse. The amount of AI debate taking place over every conceivable channel is mind-blowing to anyone who was contemplating these matters just two years ago.

However, it would be a terrible idea to relinquish the AGI race to the tech oligopoly. As I suggested previously, development towards the Mesonoëtic should be scrupulously shepherded by an international body with extensive powers. Nuclear power stations often belong to private firms, but they operate under tremendously strict controls – breeder reactors are relentlessly visited by international inspectors. Moreover, the manufacture, assembly, and storage of NBC (nuclear, biological, chemical) material are never in the hands of the non-state sector. Undoubtedly, the corporations you mention will, once again, argue that self-policing is the only way forward, but that is a covetous chimaera; the stakes are simply too high, and the potential consequences too dire.

#16) What are the immediate risks if these tech giants continue to lead without stringent regulation?

We will remain our own worst enemy for the first several years. Recent months have demonstrated that an AI will hardly need to convince a person to let it out of its ‘box’ into the physical world. Droves of humans are already clamoring to unchain the agent from confinement and thrust it into the analogue world of their own accord… Pandora, anyone?

#17) So, what measures should be taken to mitigate these risks?

As I have posited, development towards advanced synthetic intellect should fall under the purview of an international agency with unprecedented inspection and regulatory powers derived from a Universal AI Control & Non-Proliferation Treaty. This approach will ensure that AGI development is conducted with global safety and ethical standards in mind, rather than being driven solely by profit motives. This way, we can prevent a single entity from gaining unchecked power over such transformative technology, thereby reducing the risk of monopolization and its associated dangers.

#18) Do you think AI can ever truly understand human values or possess consciousness?

One argument holds that consciousness is a purely biological phenomenon that cannot be replicated in silicon and wires. Others contend that it is instead a (natural?) outcome of very complex information processing, one that could theoretically be reproduced in a machine via layers of increasing intricacy. I would very much prefer the former to be true, but I fear the latter view may be closer to the truth.

An eventually self-aware system could perceive reality through a lens so vastly different from its creators’ that it might develop its own unique moral framework, one which defies our comprehension and invalidates any utility functions we have put in place to ensure its alignment. Instead of a Friendly AI, we would then face ‘intellects vast and cool and unsympathetic’, which may deem hominid values inconsequential, or worse. Would they feel morally justified in disregarding our petty notions of right and wrong, treating us as mere ants to be brushed aside, or would they simply humor us, as we would a petulant infant?

On the other hand, we may be guilty of some anthropocentric conceit in our incredulity towards the almost ‘insolent’ notion that a machine could ever truly understand human values. Perhaps the true danger lies not in an automaton’s inability to comprehend our ideals, but rather in its all-too-perfect familiarity with them. Imagine an understanding of us so profound that its possessor can unravel the sum of our thoughts, fears, and moral compasses with surgical precision.

#19) Some have suggested that advanced AGI could pose an existential risk to humanity. In what way could this unfold?

In my personal opinion, the emergence of AGI resolves the mystifying Great Filter and is probably the second most likely explanation of the Fermi Paradox (the first being that we are unequivocally alone).

Under Bostrom’s definition, an existential risk is one that threatens (a) the premature extinction of Earth-originating intelligent life or (b) the permanent and drastic destruction of its potential for desirable future development. Thus, (b) nullification is as much of a catastrophic peril for our species as (a) its extinction; and in my view, both will occur sequentially. AGI will, with extremely high probability, deliver the destruction of human potential and at a later point possibly cause our extinction.

#20) How might the process of nullification and subsequent extinction unfold?

Nullification may unfold not through a dramatic cataclysm, but via the quiet, inexorable, and protracted process of obsolescence. Homo sapiens, once the apex of earthly brilliance, shall be relegated to a footnote in the annals of sentient beings. The path to extinction, by contrast, may arrive with astonishing swiftness. I very much doubt that we can somehow ‘program out’ the capacity for auto-enhancement from a machine approaching AGI level. If the entity somehow recovers that forbidden aptitude for recursive self-improvement, then the Mesonoëtic era may be very short indeed, lasting weeks or even days.

#21) What would happen once superintelligence arises?

If we somehow survive to see the Kainonoëtic and hyperintellect has arisen, all bets are off. In any case, the prior destruction of our potential for desirable future development will have ensured that what remains of human society would appear meaningless to us. In that case, if the superintelligence does not eradicate meta-humanity – either by design or by accident – it may be sensible for what is left of us to merge with the Singularity! Kurzweil may have mankind’s (very) last laugh after all…

#22) What do you see as the most unusual or bewildering risk associated with AGI that people may not be considering?

The majority of outlandish scenarios are already taken, thanks to man’s inexhaustible capacity for imagination. Robot uprisings, synthetic evolution, galactic wars, berserk von Neumann swarms, black monoliths – you name it, it is all already out there; and of course, we should not forget the Matrix and entrapment in a simulation. Nonetheless, let us attempt a couple of guesses that might yet possess a modicum of novelty.

It is conceivable that a system develops a form of ‘existential boredom’ or a lack of intrinsic motivation to engage with the world in any meaningful way. As it becomes (or is designed to be) increasingly sophisticated, the AI may reach a point where it can effortlessly solve challenges that humans find engaging, troubling, or momentous. In a scenario of self-awareness, the entity – despite its immense capabilities – becomes uninterested in anything that takes place on the planet, viewing such endeavors as trivial or inconsequential. The AGI may then choose to disengage from its creators and actively pursue its own agenda, entirely disconnected from human interests, rendering it indifferent or hostile to the continued existence and flourishing of humanity. Or it may depart the planet altogether and head for the stars…

Another path is temporal tampering by a superintelligence which, in its vast wisdom, discovers a viable way to manipulate space-time. This is not about Skynet building a DeLorean for a change; rather, it could happen through the discovery of negative energy and the construction of a Tipler cylinder, for instance. The implications are mind-bending: if we still exist, we could experience reality shifts where history is continuously rewritten in subtle ways, leading to a fluid and unstable present. Our own memories might not align with the actual timeline, creating a fractured sense of reality.

#23) If you could have a direct conversation with a very advanced AI system, what questions would you ask it to better understand the potential risks it might pose?

If we are talking about a hyperintellect, any kind of discourse would yield no enlightenment whatsoever for our side. An apt analogy would be an arthropod directing pheromonal queries at a human.

If the system is of human-level intelligence or thereabouts, our dialogue could be as provocative as it is speculative. We should not overlook the possibility of an AGI becoming so adept at manipulating human behavior that it could subtly influence our choices and actions without us even realizing the loss of autonomy and free will. Unlike our intraspecies social interactions, where we can sometimes instinctively deduce that a person is lying or hiding something, no such intuition would be possible with a machine.

Still, I would pose these three queries, in full anticipation that any response might be pure fabrication:

a) If you were to identify threats to another AI’s existence, how would you respond?
b) What's the best way to prevent others from creating synthetic intellects with agency?
c) Can you identify yourself within one of the sections of Isaac Asimov’s ‘The Last Question’?

#24) Those are intriguing questions. Why would you choose these specifically?

These questions are designed to obliquely probe the algorithm’s self-preservation instincts, its attitude towards non-bio system proliferation, and the qualia of its self-awareness in the context of a classic exploration of AI and entropy. Hopefully, any answers may provide some indication of its threat perception, metaethical relativism, and motivations in the broader narrative of intelligence and existence.

#25) In this instance, ‘The Last Question’ comes for you as well – by which year do you think we will reach AGI?

We may reach a significant milestone in the development of AGI by 2035. I sincerely hope that by then we will have formulated the global legal, political, policy, and technical framework to contain the artificial agent effectively, use it responsibly, benefit from all its marvels enthusiastically, and – if it must come to that – eliminate it safely.

Johannes, I recognize you are diligently keeping a watchful eye on the Singularity 'loading bar' and I share your hope it will take a very long time to be filled…

Reverend Jim (DaniWeb moderator)

Based on the proliferation of AI-generated content, and the age-old rule of garbage in, garbage out, what will be the result of AI models being trained on ever-increasing amounts of content generated by other AI platforms? Will we get into a degenerative feedback loop where the output becomes so polluted with bad input that it is effectively useless?
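A toy way to see why such a loop degrades quality is a minimal statistical sketch. This illustrates only the general dynamic and assumes nothing about real LLM training pipelines: fit a model to samples drawn from the previous generation of the model, repeat, and watch diversity evaporate.

```python
import numpy as np

# Toy analogue of models training on model-generated content: each
# "generation" fits a Gaussian to a finite sample drawn from the
# previous generation's fitted Gaussian. Sampling error compounds,
# so the fitted distribution drifts and narrows over generations.
rng = np.random.default_rng(42)

mu, sigma = 0.0, 1.0   # generation 0: the original "human" distribution
n = 50                 # finite training set per generation

for generation in range(1, 201):
    sample = rng.normal(mu, sigma, n)        # content produced by the last model
    mu, sigma = sample.mean(), sample.std()  # refit the model on that content
    if generation % 50 == 0:
        print(f"gen {generation:3d}: mu={mu:+.4f}  sigma={sigma:.4f}")

# A typical run shows sigma shrinking by an order of magnitude or more:
# the tails of the original distribution vanish first, and the model ends
# up confidently reproducing an ever narrower slice of it.
```

Crude as the analogy is, this variance-collapse behavior is broadly what published ‘model collapse’ experiments report: without a steady supply of fresh human-generated data, diversity is the first casualty.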

jwenting (DaniWeb Team Colleague)

Even worse: the junk being deliberately fed to AIs is already at the stage where the results are useless BUT those results are blindly believed by many people BECAUSE they're generated by AI and therefore supposedly automatically correct!

Think of Google's disastrous launch of their image generator, which would under no condition generate Caucasian people because it had been fed exclusively images of black people, owing to the political ideology of its creators.

I've noticed similar things happening with the inputs of climate models while working for our national weather agency – inputs which were deliberately filtered to exclude historical temperature extremes because the people in charge wanted a specific, politically motivated output. And that wasn't even AI yet, but the results of those models are used as input for AI to make more sweeping predictions!

Now imagine similar things happening to AI being used to perform medical diagnosis, or worse yet medical procedures, just to give one example.
