The Inter-American Court of Human Rights (IACtHR) handed down a ruling in the Case of Tzompaxtle Tecpile et al., determining that the Mexican State is responsible for violating the human rights of the accused by depriving them of their liberty through the precautionary measures of arraigo (pre-charge detention) and preventive detention. The Court therefore urged the country to eliminate the figure of pre-procedural arraigo from its legislation and to amend its domestic rules on preventive detention. The Court verified that these measures violated human rights in the case, specifically the rights to personal integrity and personal liberty and the guarantees of due process and a hearing, during the defendants' trial between 2006 and 2008. In 2006, the victims were held incommunicado and confined for three months when their arraigo was decreed. After the criminal proceedings were opened, the victims remained in preventive detention for approximately two and a half years, being released in October 2008 when they were acquitted of the crime of terrorism. The Court analyzed both measures. Arraigo is contrary to the American Convention on Human Rights because it violates, among others, the rights to due process, personal liberty, and the presumption of innocence. Preventive detention, as regulated, is likewise contrary to the Convention in itself, since the law states neither its purposes nor the procedural risks it seeks to prevent, and does not require an analysis of whether the measure is necessary in light of less harmful alternatives to deprivation of liberty.
The Court ordered the Mexican State: 1) to annul pre-procedural arraigo in its legislation; 2) to adapt its legislation on preventive detention; 3) to publish the Judgment and its official summary; 4) to acknowledge its international responsibility in a public act; 5) to provide medical, psychological, and psychiatric treatment to the victims; and 6) to pay the amounts established in the Judgment for costs and expenses. For the first two points, it granted a period of six months from January 27, 2023, and set a period of one year for supervising compliance with the sentence. To date, the first two points have not been fulfilled.
It can be stated, in general terms, that the patrimonial responsibility of the State arising from jurisdictional activity, identified in so-called judicial errors, emanates from the harm that those in charge of the administration of justice can cause through an improper resolution: the conduct of a criminal process against a person whose innocence is subsequently proven and who is ultimately acquitted by final judgment.
In Mexico, the lack of specific regulation on this matter produces uncertainty for individuals who, after being subjected to the criminal justice system and ultimately obtaining a resolution declaring their innocence, have no legal recourse to claim compensation for the damages and losses caused by the judicial error. Because no legislation establishes a procedure for determining the corresponding compensation, the victims of such mistakes are left in a complete state of defenselessness and in flagrant violation of their human rights.
The Mexican State is responsible for guaranteeing all human rights enshrined in its General Constitution and in the international human rights treaties it has signed and ratified. Failing to guarantee these rights implies their potential violation, which is why they must be regulated.
It is urgent that Mexico identify, through comparative law, the foundations needed to adopt a coherent domestic proposal on this institution. It should analyze the legislation of those countries whose legal systems provide for the patrimonial responsibility of the State for judicial error, as well as the discussions held on this subject in congresses and conventions worldwide. This would allow it first to act correctly toward its own citizens and then toward the other nations of the international community. It must be borne in mind that without justice there is no peace, and without peace there is no freedom.
Migration consists of people moving from one territory to another, crossing the geographical boundaries of countries or regions. Changes of residence from one geographic location to another can occur for various reasons: personal, family, political, security, economic, climatic, and others, which influence in different ways the willingness to migrate. Geographic distance and the conditions of the territories of origin and destination, among other factors, also play a role.
Migration links countries, cities, and communities. It sustains enduring mobility patterns or generates new ones as political, social, and economic scenarios change. Migration creates shared histories, reveals economic needs, and fosters cultural ties. It poses challenges and offers opportunities for both migrants and societies.
It should be borne in mind that migration is, in general, safe, regular, and orderly, and that it is not only unavoidable but also beneficial, since it has improved the lives of many people. Energy should not be wasted on containing migration, but rather on creating conditions in which migrating is an option and not a necessity, channeled through regular pathways so that it operates as a stimulus to development.
Migration is inevitable: roughly one in eight people in the world is a migrant, which is why it must be seen as an integral part of the solution. Properly managed, it is essential for sustainable development and progress. It unquestionably boosts productivity, spurs innovation, and creates more diverse societies, among other benefits. For those who decide to migrate, it opens new opportunities, but they also face risks and enormous challenges, such as discrimination, uncertainty, and the difficulty of integration.
The Court of Arbitration for Sport has been in the headlines of Mexican sports journalism in recent weeks. This is because it recently deducted points from the first-division professional soccer team Xolos de Tijuana and awarded them to the Puebla squad. The reason behind this decision was an alleged improper lineup, contrary to the regulations.
Puebla was penalized for not registering one of its coaching staff members in the match roster against Tijuana in the seventh week of the Apertura 2023 tournament, resulting in the loss of three points. After the sanction, Puebla appealed the decision of the Mexican Football Federation, but the appeal was denied. Consequently, the team decided to turn to the Court of Arbitration for Sport, obtaining a favorable resolution.
In light of this, professional football in Mexico is entering a new stage where there is a likelihood of an increasing reliance on the jurisdiction of the Court of Arbitration for Sport. This jurisdiction is based in Lausanne, Switzerland, with additional courts in New York and Sydney. Cases related to doping, player transfers, among others, have been brought before it. This trend reflects a need and even a demand for the acts and decisions occurring in the decision-making bodies of professional leagues in major sports worldwide to undergo review by independent and autonomous third-party bodies.
This could potentially bring the rule of law closer to sports, especially in the so-called “spectacle sports,” benefiting society as a whole and the millions of followers of these leagues around the world. These leagues generate a significant amount of money daily.
The spectacle must align with justice and the guiding principles of Sports Law. Therefore, the involvement of the Court of Arbitration for Sport is good news amid speculations, whether isolated or not, of match-fixing, corruption, improper betting, and other issues that tarnish the most popular sport in Mexico and the world. Hopefully, this tribunal will have an even more significant presence in the future.
Although international peace and security have been established as the two central objectives of the United Nations since its creation after World War II, armed conflicts continue to arise everywhere, as demonstrated by the historic tension between Israel and Hamas. The international community has not done much to unequivocally prohibit war and uphold the idea of peace, as Luigi Ferrajoli has argued.
War, according to the distinguished Italian professor, is the negation of law and, therefore, also of constitutional rights. As long as armed conflicts and significant losses of human life persist, globalization and global governance are failing as major human constructs. Even more, their designers, operators, and implementers are failing.
The globalization of law and human rights should be the most eloquent dimension of global integration. The process of globalization has historically been characterized as a linear, one-directional process that privileges its economic and financial aspects over its legal, political, cultural, and environmental nature.
International pacts, treaties, and agreements on human rights should be not only documentary but practical epicenters of a global institutionality reflected in authentic guarantee systems for the effective protection of rights and freedoms. War injures humanity as a whole and demonstrates that solidarity is beyond the cultural reach necessary for social progress.
Now more than ever, cooperative governance must be pursued as a mechanism for ensuring that the exercise of power takes into account the opinion of civil society and that actions attacking peace and the social system are severely punished. Otherwise, war will continue to reproduce itself, rendering constitutional rights mere ornaments in the discourse of politicians rather than a reality.
“Artificial intelligence is almost a discipline of the humanities.” (Sebastian Thrun)
I
In a landmark development concerning the eventual regulation of artificial intelligence (AI), the European Union embarked on this complex endeavor in June of this year. The European Parliament, responsible for examining, discussing, and adopting European legislation, approved an AI law that deserves various considerations.
This legal framework is set to be fully effective from 2024, making it the first of its kind to have such significance on this critical topic in the global public agenda. However, there are still several steps to be taken for this to happen. If all goes as planned and is realized, it could serve as a model legislation for other regions and countries approaching AI regulation. There has already been pressure within the United Nations (UN), with its Security Council recently convening to establish certain foundations on the topic at hand.
We have previously discussed in these pages that AI regulation is not an option but a genuine necessity in a time when the potential consequences of advanced AI models encompass legal, ethical, political, and even cultural dimensions that are far-reaching. That is why it is natural, as we have also noted and commented on before, that world leaders from various fields, such as Elon Musk and Yuval Noah Harari, have spoken out in recent months about the possible dangers of AI.
At this initial stage, it is necessary to address the intention to ban the highest-risk AI, which requires starting from the groundwork of definitions. Europe aims to distinguish among AI technologies and sort them into four categories based on the risk each would pose. From systems that may merely undergo reviews, audits, or certain disclosure requirements, to those that would be strictly prohibited for exploiting the vulnerability of groups, scoring the reputation and integrity of individuals, or manipulating behavior, the issue of risk is fundamental as a starting point.
With this in mind, and based on a previous document prepared by the European Commission, the old continent presents a proposal whose central premise is the aforementioned issue of risk, for which, as mentioned earlier, there are four levels: unacceptable risk, high risk, limited risk, and minimal risk. Categorizing each AI product or service into these categories already appears, from this moment, as one of the central challenges that the eventual regulation will face. It should not be forgotten that the sophisticated corporations developing AI software and hardware are not just any kind of business consortium but real power players with lobbyists capable of influencing parliaments and government decision-making bodies worldwide. Their name alone carries significant weight in contemporary markets.
It seems that the coming months will be marked by intense debates in the European public opinion, but even more importantly, around the world, including major powers like the United States, economic giants in the Asia-Pacific region, and emerging countries like Mexico, Brazil, or India. It is crucial that reason and awareness prevail in the deliberation of a future that has already arrived.
II
In addition to the focus on risk, the European Union places emphasis on other specific aspects in the potential and upcoming regulation of AI. Firstly, it directly states that such regulation seeks two essential matters: the protection of fundamental rights and the safety of users. This duality carries an underlying objective: to establish complete trust when it comes to the development and adoption of AI.
The protection of fundamental rights is of the utmost importance when discussing AI regulation, both concerning traditional, what we could call “analog” rights, as well as those that can be identified as “digital” in the same argumentative line. This is not to mention “neuro-rights” as rights of a future that is already beginning to affect us. It’s a positive development that these are prioritized as a precondition for AI. However, it’s even more important that this is not just rhetoric but a clear reality.
The significant issue of big data and everything it represents in terms of personal data protection in the age of AI is critical to ensuring the full safeguarding of fundamental rights. Data is one of the most valuable assets we have in these times where we primarily operate in digital realms and spaces. If AI is to have any level of invasiveness into this data, it is essential to build containment barriers to prevent erosion of our prerogatives and public freedoms due to the actions of malicious vested interests and the wild forces of the digital market, both those with presumed legitimacy and those clearly lacking it.
Meanwhile, the safety of users is also emphasized, considering that AI operates on networks, hardware, and software susceptible to multiple attacks by hackers and cybercriminals who pursue their own murky interests to the detriment of the common good as a fundamental collective aspiration.
It’s no coincidence that the topic of cybersecurity has been gaining increasing prominence and unprecedented market value, becoming one of the sectors with the highest demand, growth, and transformation possibilities in the face of the emergence of new information and communication technologies in general and AI in particular. This is why its significance for Europe is not surprising.
Furthermore, trust in terms of the development and adoption of AI is associated with the idea of certainty that must exist when dealing with such advanced technological developments as those in question. And that certainty, at the end of the day, can only be provided by Law, starting with Constitutions and international treaties. Only a robust legal system can provide the confidence needed when interacting with such innovative but potentially dangerous tools as AI. It’s important to remember that in addition to providing order and social control, the law provides certainty to all practices carried out in a given society, at least in an ideal sense.
III
Another aspect that Europe emphasizes when designing regulation on AI is that of territoriality. There is no doubt that the application of laws across time and space remains one of the unfinished and unfulfilled challenges when it comes to information and communication technologies, especially in the context of cyberspace, modern virtual arenas, social networks, and other areas. If this is extended to the realm of AI, the level of complexity definitely increases because this type of innovation involves emerging knowledge that can be equally applicable in different spaces and levels.
In this regard, the legal framework envisioned for the coming months is intended for both public and private actors, both within the European Union and beyond if the AI system in question operates in the European market or its use impacts the sphere of someone located in Europe. If in the “analog” dimension of collective life, so to speak, there are blurry lines between the public, private, and intimate spheres, differentiation between these areas becomes even more complicated when it comes to digital life and digital rights. Therefore, great care must be taken when establishing categories that are not always as clear-cut as one might think at first.
What currently happens with a topic that can serve as a parallel, such as cybercrimes? There are several challenges due to the very nature of the offenses that occur. Consider an issue like electronic fraud, identity theft, or illegal access to networks and devices, where the unlawful act can begin in Shanghai, but the hackers, cybercriminals, and pirates may have servers in Prague, and the potential victims could be in the San Francisco Bay Area, in the United States, home to several major technology companies today. In this example, three continents would be involved, although there have been cases where all five continents are somehow implicated.
Territorial jurisdiction conflicts would quickly become evident due to the lack of an international convention and multilateral cooperation treaties in this matter. This can lead to a situation where one country criminalizes certain behaviors while another does not, hindering the investigation, prosecution, and punishment of these actions. This is not a minor issue.
This serves to illustrate that when it comes to AI, the problem arises automatically. That’s why Europe reached a consensus that when AI affects providers or users of high-risk systems, the regulatory system must be applied. When put into practice, it will be interesting to observe the dynamics by which potentially criminal behavior by agents outside the European Union is investigated.
Non-professional private uses are excluded from the regulation, although this may raise questions about the potentially harmful modus operandi of individuals or entities that may fall into this category. If we speculate, given that AI is a double-edged sword, there may be justified concerns about this exclusion.
IV
Europe’s aim in regulating AI is to maximize the principle of legal certainty, which is of utmost importance when discussing information and communication technologies in general, considering their widespread use and penetration into countless households, businesses, governments, and both public and private offices on a global scale.
While it is true that companies and economic actors may have the most significant interest in effectively implementing this, it’s also necessary to emphasize that legal certainty as a principle benefits society as a whole, including the millions of domestic AI users.
Indeed, certainty is key to instilling confidence in the highly sophisticated advancements that have occurred in AI in recent years, which will undoubtedly continue to multiply rapidly. Without certainty and trust, there can be no reciprocity when government agents impose certain duties or obligations in this field because leading by example is essential.
Furthermore, compliance with regulations is largely shaped in the realm of public policies that each nation will have to develop internally. Stating these principles is not enough; they must be accompanied by political will and a firm commitment to ensure that ethical codes are respected to the fullest extent and not reduced to mere decorative elements.
Both the legal and ethical aspects must be treated with equal objectivity if positive results are to be achieved. The proper application of legal frameworks and a conscientious consideration of both rights and obligations in an ethical context are crucial to ensuring that developments remain compliant with the rule of law, which is more necessary than ever in this ambiguous era of AI.
In this context, at the community level, the European Committee on Artificial Intelligence will emerge as a body responsible for overseeing and facilitating the implementation of the legal framework concerning AI. It aims for fluidity, effectiveness, and harmonization. Within its scope of authority, it can recommend and assess high-risk systems, becoming a center of expertise that EU members can consult when necessary.
If we aspire to establish a digital rule of law, it is crucial that legal certainty, compliance with regulations, and the proper anchoring of institutional guidelines are carried out authentically, not just on paper. AI requires clear, precise, transparent, understandable, and minimally interpretable regulation; anything less would lead to ambiguity, vagueness, and suspicions that benefit no one. This digital rule of law must always be robust and resilient against the threats posed by wild powers from the digital realm while not neglecting the analog world in which we continue to operate. It’s a monumental challenge, but one that must be confronted with determination.
V
Institutional financing to address the regulation of AI is another key aspect that cannot be overlooked in the face of the new horizons ahead. Without sufficient resources, any effort may simply remain on paper. This is why there is a provision to invest one billion euros annually in AI, which represents a good starting point given the complexity and innovation that this type of technology brings.
Another significant issue is that of machines themselves, which, thanks to science fiction, Hollywood, and dystopian and futuristic TV series, are a source of fear for many people, as they may eventually take over the world. Beyond these speculative concerns, the fact is that controlling machines not only through buttons but also through laws becomes imperative. User and consumer safety, as well as the promotion of innovation, will be top priorities in machine regulations, considering that these machines have a wide range of applications, including professional and consumer products, robots, construction machines, 3D printers, industrial production lines, lawnmowers, to name a few specific cases.
Another crucial aspect is the prohibition of biases based on gender or race, which highlights the dimensions of the fundamental right to non-discrimination. Equitable and equal treatment emerges as one of the fundamental principles in AI regulation, reflecting a commitment to rights and freedoms in general.
From another perspective, the detailed evaluation of AI systems before they enter the market is essential. This includes elements such as human oversight, transparency, robustness, accuracy, traceability, documentation, and data quality. The application of quality systems and risk management is not optional but mandatory.
Biometrics is also a topic that appears in European AI projects, and this is understandable given the advanced technology it represents, which can also be vulnerable to cybercrime. Real-time biometric identification, for example, is one of the aspects that draw attention in order to protect dignity, privacy, and personal data.
As Anders Sörman-Nilsson has stated, combining human intelligence and AI can make the world a better and more empathetic place. The steps that the European Union has taken to regulate AI comprehensively, objectively, and optimally seem to be on the right path, although it’s true that they are not perfect and may require various forms of supplementation. It will be important to see how the United Nations (UN) also takes action and plays a role in the matter, so that its efforts lead to a comprehensive international convention characterized by consensus, rationality, and a forward-looking approach. Of course, each national state must assume firm commitments from both a political and budgetary perspective, and that’s where we can assess progress in the short and medium term in relation to a topic that has already reached us.
It took some time – for some sectors, much longer than necessary – but finally, the United Nations Security Council (UNSC) held a formal session on July 18th regarding the risks, threats, scientific, and social benefits of artificial intelligence (AI). The UN headquarters in New York was the stage where a considerable number of diplomats, entrepreneurs, and experts in the field expressed their views on AI, its potential regulation, and the ethical principles that would in some way have to govern its development, functionalities, and specific frameworks.
An important step has already been taken by António Guterres, the Secretary-General of the UN, who proposed creating a new United Nations body in the same logic and dynamics as entities like the International Atomic Energy Agency and the Intergovernmental Panel on Climate Change, which could potentially take charge of AI governance by 2026. Guterres himself posted the following on the social network still known as Twitter and currently in the process of transitioning to X: ‘Today, I urged the Security Council to address Artificial Intelligence with a sense of urgency, a global perspective, and a learning mindset. We must work together to adopt common measures of transparency, accountability, and oversight of AI systems.’
The UN’s reaction comes a few weeks after the European Parliament approved a set of more or less comprehensive regulations on AI. However late this response may be, it undoubtedly marks a milestone in a landscape that was beginning to darken: social responsibility, ethics, and scruples of all kinds were not taking root in AI development, even as the future began to catch up with us. We remain uncertain about the kind of impact AI can have, not only on international peace and security but also on a range of our fundamental rights and public freedoms, which have been eroding for quite some time.
It is clear that there are still many unanswered questions when it comes to a possible agenda for regulating, controlling, and optimizing AI for the benefit of society, and not the other way around, as many experts have predicted in their forecasts about the technological, scientific, and public future, where innovation has reached unprecedented heights.
Essentially, we need to bring to the debate table a much-needed convention that leads to an international treaty on AI, in which rights are recognized but also obligations, duties, and specific sanctions are defined, provided with a robust institutional framework. Soft law or soft rights cannot be an alternative in this regard.
A particular case shows that no matter how many multilateral meetings, legal norms, and treaties there are, without political will, we cannot make any progress: that of climate change and global warming, where we find multiple reluctances from the United States and other powers to comply with international law. For our own good, it is to be hoped that with AI, everything will be guided by order rather than pretexts.
AI carries risks and legal challenges, considering that around it orbit technological, economic, and power struggles in general, waged not only by major superpowers but also by wild forces (starting with criminality), real sources of power or de facto powers throughout the world.
A letter signed on March 22 by various technology, academic, and other international leaders sought to halt the rapidly accelerating and revolutionary developments that AI has undergone in recent years, before it is too late.
Well, another public communication, in May, filled the headlines of the leading media outlets around the globe: over 350 executives, engineers, workers, and researchers working at some of the most renowned AI corporations warn the following in a concise statement: “mitigating the risk of extinction from AI should be a global priority alongside other societal scale risks, such as pandemics and nuclear war.” Among the signatories are CEOs of companies like OpenAI, Google DeepMind, and Anthropic, which are currently some of the most relevant in the field at hand.
This accumulation of letters to the global public opinion must be taken seriously enough by those who make decisions, especially in developed countries, as it is in these countries that AI has evolved in an astonishing way very recently. And just like in any scientific and technological advancement, there are aspects that have two sides, just like a coin: one completely positive that contributes to social progress, but another full of clouds and gray areas that can not only be negative but, as the mentioned leaders warn, could jeopardize the survival of the human species.
Two aspects of the late May declaration stand out: first, the fact that the word “extinction” is used with all that it implies. Apologists of disaster may find in this a powerful breeding ground to structure all kinds of conspiracy theories, but the truth is that glimpsing an anticipated end of civilization speaks to the level of concern that AI carries.
Secondly, it is included in the list of planetary emergencies that we must deal with in the short term, very much in the terminology of some of the most renowned philosophers and thinkers in the world, such as Luigi Ferrajoli; indeed, according to the emeritus professor of the University of Roma Tre, at present we are experiencing emergencies and catastrophes in ecological, nuclear, armament, labor, and migratory fields, to which AI must be added.
In conclusion, if the main developers of AI advocate directly or indirectly for a broad deliberation on its regulation, we would have to get to work. The future is today; the future of homo sapiens could be hanging in the balance if we ignore it.
“The only limit for artificial intelligence is human imagination.” (Chris Duffey)
In our previous article, we brought up the public letter that various leaders in technology, such as Elon Musk, and even esteemed intellectuals like Yuval Noah Harari, signed on March 22nd of this year. Through this letter, they emphasized the need to halt the rapid advancements of artificial intelligence (AI) due to the level of threat these developments pose in the context of our civilization and its current trajectory.
We were also discussing how the regulation of AI is an extremely complex issue that needs to be brought to the table for discussion nowadays. With this as a starting point, we want to draw the kind reader’s attention to the fact that regulating any recent technological phenomenon is a multifactorial task that begins—or rather, should begin—with a prerequisite condition: concise and clear definitions.
Indeed, before regulating AI, we must build a consensus on what it is, its scope, its alternatives, and its unique circumstances. Those making political decisions worldwide, apart from the technological giants that dominate the corporate sphere and are the main developers of AI, hardly have a sufficiently clear picture of it. The political class, as is common with many matters of a public nature, tends to resort to demagoguery, populism, and nonsense. Hence an essential imperative: to convene a global summit in which the widest social sectors participate, led by the governmental, technological, scientific, and innovation sectors, without neglecting others such as education or human rights. From that summit, strategies would unfold to articulate a common international policy on AI. That would be the genesis of any possible regulation, because otherwise each nation-state, under the influence of the aforementioned technological giants, but also of wild powers such as organized crime and terrorist groups, would conceive its own model, producing a fragmented landscape ill-suited to a reality that demands cooperation, integration, and continuous dialogue.
AI represents a crossroads for humanity: either we harness it to reach the next level, or we allow its use for harmful purposes that jeopardize our future on the planet.
As Tim Cook, CEO of Apple, has said, “What we all have to do is make sure we are using AI in a way that is beneficial to humanity, not detrimental.” It is clear that criminal organizations, hackers, cybercriminals, and terrorists care little about what the international community does or does not do to regulate AI properly. However, that does not mean decision-makers around the world should abandon the immense range of obligations of solidarity they bear. Regulating AI is undoubtedly essential, but it is even more important that the first steps be taken with certainty, clear objectives, and transparent processes.
“Science fiction is an immense metaphor.” – Ursula K. Le Guin
A headline caught the public's attention on March 22nd: various technology leaders, led by Elon Musk (CEO of Tesla and SpaceX and owner of Twitter), together with other figures like Steve Wozniak (co-founder of Apple), signed a letter published on the website of the Future of Life Institute calling for a pause in the rapid development of artificial intelligence (AI).
The letter was issued at a particular juncture: the launch of the most powerful version of ChatGPT – GPT-4 – a tool that may be destined to transform numerous aspects of everyday life from now on, with all that this implies. A new era of freedoms and rights, but also of possible restrictions on them, lies ahead of us.
Musk and company’s epistle, if we speculate and succumb to suspicion, may at first sound like a protest over lagging behind in the technological race of AI chatbots. However, it was also signed by renowned professors from some of the world’s most prestigious universities, notably Yuval Noah Harari – one of the most important intellectuals of our time, who has devoted a significant part of his work to analyzing the potential and risks of AI. This suggests it is not a mere publicity or marketing stratagem, but a document that must be placed and properly weighed on the table of digital public discussion, taking into account the opinions of its various participants.
According to the signatories, AI in its most advanced state – even more powerful than what ChatGPT currently offers – lacks security systems strong enough to be trusted beyond any reasonable doubt. Governance, planning, and careful stewardship of resources should take priority before further AI platforms like GPT-4 appear, since numerous questions remain unanswered about reliable information and consumer privacy, to cite just two examples.
In this scenario, what is the role of the Law as a stabilizing instrument of control and social order? Can global governance of AI be established beyond ideological and other divides? Can global public policies on AI even be contemplated apart from the various ongoing technological races, such as the one being fought between China and the United States?
These questions are not simple. The possible regulation of artificial intelligence is a normative challenge that seems to have no parallel in recent times; yet there is no better example of the ambiguities, gaps, and vagueness such an undertaking entails than the continuing lack of regulation of the Internet.
To this day, and since it entered homes in the late 20th century, adequate regulation of the Internet remains elusive. What makes us think that in the case of AI – where, of course, a great number of political, economic, military, and even terrorism-related interests are at stake – the prospects are any better? Only time will tell, but one thing is certain: AI is and will continue to be one of the major topics we will have to deal with, whether we like it or not.