The UN Agrees on Global AI Rules

The United Nations General Assembly recently adopted a resolution aimed at promoting safe, secure, and trustworthy AI. It is the first time the General Assembly has adopted a resolution on the governance of AI.

The resolution, put forth by the United States and supported by over 120 member nations, including key players like South Korea, the UK and Germany, highlights the importance of developing AI systems that prioritise safety, security, and trustworthiness. Central to the resolution is the commitment to safeguarding human rights both offline and online, emphasising the need for AI systems that do not pose undue risks to individuals' fundamental freedoms.

At its core, the resolution advocates for equitable access to AI technologies, recognising the potential of AI to drive digital transformation while ensuring that its benefits are accessible to all. The U.N. also calls for increased funding for AI research aligned with its Sustainable Development Goals, so that AI can help address global challenges and contribute to sustainable development.

The U.N.'s resolution aligns with broader global efforts to regulate AI, including the recent enactment of the EU's AI Act, which sets strict rules to protect citizens' rights. While approaches to AI regulation vary across regions, the overarching goal remains consistent: to establish governance frameworks that promote the responsible use of AI and mitigate potential risks.

As we debate the opportunities and challenges posed by AI, initiatives like the U.N.'s resolution and the EU's AI Act represent crucial steps towards fostering a more ethical, inclusive, and sustainable AI ecosystem. By working collaboratively across borders and sectors, stakeholders can ensure that AI technologies serve as tools for progress while supporting the values of humanity and dignity.

Why Is It Important to Regulate AI?

While AI holds huge potential to address societal challenges and drive innovation, it also presents unique risks that must be addressed to ensure the trust and safety of individuals and communities. 

One of the key concerns surrounding AI is the opacity of decision-making within AI systems. Unlike traditional systems, where decisions are transparent and traceable, AI algorithms often operate as black boxes, making it difficult to understand the rationale behind their decisions or predictions. This lack of transparency can lead to individuals being unfairly disadvantaged, for example in hiring decisions or access to public benefit schemes. Regulation is needed to address these concerns and establish mechanisms for accountability and transparency in AI decision-making.

Existing legislation provides some level of protection, but it is insufficient to address the specific challenges posed by AI systems. The proposed rules outlined in the EU’s AI Act aim to fill this gap by specifically targeting risks associated with AI applications. By prohibiting AI practices that pose unacceptable risks and establishing clear requirements and obligations for high-risk AI systems, the regulations seek to mitigate potential harms and ensure the responsible development and use of AI technologies.

The EU’s AI Act introduces measures to enforce compliance with regulations, including conformity assessments before AI systems are put into service or placed on the market. This ensures that AI systems undergo thorough evaluation to assess their safety, reliability, and adherence to regulatory standards. Additionally, the establishment of a governance structure at both European and national levels provides oversight and accountability, facilitating the effective implementation and enforcement of AI regulations.

In summary, regulation is essential to address the unique risks posed by AI technologies. By setting clear rules, enforcing compliance, and requiring transparency about how AI is used, regulation helps to ensure that AI is deployed responsibly and that the public can trust it.

AI and Human Rights

The rapid advancement of artificial intelligence (AI) undoubtedly presents numerous opportunities for innovation and progress. However, it is essential not to turn a blind eye to the potential risks that AI tools pose when misused or applied in ways that infringe on human rights. Despite the promises of efficiency and accuracy, AI systems often perpetuate societal injustices and deepen existing inequalities.

One of the most pressing concerns surrounding AI is its potential for biased outcomes, stemming from the training data used to develop these systems. AI algorithms are often trained on vast amounts of data that reflect underlying societal biases and prejudices. As a result, AI tools, such as predictive policing systems or automated decision-making algorithms in the public sector, can inadvertently reinforce discrimination against certain communities. For instance, these tools may disproportionately target certain demographic groups or entrench systemic inequalities, further marginalising already vulnerable populations.

AI applications in areas such as fraud detection have been found to disproportionately impact ethnic minorities, leading to devastating financial consequences for individuals who are already disadvantaged. The lack of transparency and accountability in the development and deployment of AI systems exacerbates these issues, making it difficult to address and rectify instances of discrimination or harm caused by AI tools.

Furthermore, the use of AI for mass surveillance and societal control poses a threat to human rights. From monitoring the movements of migrants and refugees to tracking individuals' online activities, AI-enabled surveillance technologies infringe on privacy rights and erode civil liberties. Deployed in this way, AI surveillance also raises concerns that those in power could misuse these capabilities against the very people they are meant to serve.

While AI holds immense potential for societal advancement, it is imperative to recognise and address the inherent risks it poses to human rights. Efforts to develop and deploy AI must be accompanied by robust safeguards, transparency measures, and ethical guidelines to ensure that AI technologies promote fairness, equality, and respect for human rights.

How Can Gerrish Legal Help?

Gerrish Legal is a dynamic digital law firm. We pride ourselves on giving high-quality and expert legal advice to our valued clients. We specialise in many aspects of digital law such as GDPR, data privacy, digital and technology law, commercial law, and intellectual property. 

We give companies the support they need to run their businesses successfully and confidently while complying with legal regulations, without the burden of keeping up with ever-changing digital requirements.

We are here to help. Get in contact with us today for more information.
