Questions on AI regulation, by Edgar Alán Arroyo Cisneros

“Science fiction is an immense metaphor.” (Ursula K. Le Guin)

A headline caught the public's attention on March 22nd: a group of technology leaders headed by Elon Musk (chief executive of Tesla and SpaceX, and owner of Twitter), along with figures such as Steve Wozniak (co-founder of Apple), signed a letter published on the website of the Future of Life Institute calling for a pause in the rapid development of artificial intelligence (AI).

The letter arrived at a particular juncture, marked by the launch of the most powerful version of ChatGPT to date, GPT-4, a tool that may well transform numerous aspects of everyday life, with everything that implies. A new era of freedoms and rights, but also of possible restrictions on them, lies ahead of us.

Musk and company's letter, if we speculate and give in to suspicion, might at first sound like a protest over falling behind in the technological race among AI chatbots. Yet it is also signed by renowned professors from some of the world's most prestigious universities, notably Yuval Noah Harari, one of the most important intellectuals of our time, who has devoted a significant part of his work to analyzing the potential and risks of AI. This suggests it is not a mere publicity or marketing stratagem, but a document that deserves a proper place, and proper weighing, in the digital public debate, taking into account the views of its various participants.

According to the signatories, AI in its most advanced state, even more powerful than what ChatGPT currently offers, lacks safety systems robust beyond any reasonable doubt. Management, planning, and the careful stewardship of resources are three priorities that should precede the continued rollout of AI platforms like GPT-4, since numerous questions also remain unanswered on matters such as the reliability of information and consumer privacy, to cite just two examples.

In this scenario, what is the role of the law as a stabilizing instrument of control and social order? Can global governance of AI be established beyond ideological and other divides? Can global public policy on AI even be conceived apart from the ongoing technological races, such as the one being fought between China and the United States?

These questions are anything but simple. The possible regulation of artificial intelligence is a normative challenge that seems to have no parallel in recent times; and there is no better example of the ambiguities, gaps, and vagueness such an undertaking entails than the continuing lack of adequate regulation of the Internet.

To this day, and ever since the Internet reached household use in the late twentieth century, its adequate regulation remains elusive. What makes us think that in the case of AI, where a great number of political, economic, military, armament-related, and even terrorism-related interests are at stake, the prospects are any better? Only time will tell, but one thing is certain: AI is, and will continue to be, one of the major issues we will have to deal with, whether we like it or not.

constitutionandhumanrights@gmail.com