Is There Legislation Against AI?
While no jurisdiction has banned AI outright, many are introducing comprehensive frameworks to ensure its responsible use. The EU’s AI Act, the first comprehensive AI regulation globally, is a leading example. It applies a risk-based approach, classifying AI systems into four tiers: unacceptable risk (prohibited outright), high risk, limited risk, and minimal risk. High-risk applications, such as biometric identification and credit scoring, must meet strict requirements for transparency, risk assessment, human oversight, and data governance. Providers must also conduct conformity assessments and ongoing post-market monitoring to ensure their systems remain safe and trustworthy.
Peru has introduced its AI Regulation (Law No. 31814 and Supreme Decree No. 115-2025-PCM), which applies a phased, risk-based framework that encourages AI adoption for social and economic development while protecting fundamental rights. Certain uses, including biometric identification, employment decisions, credit scoring, social program eligibility, and critical infrastructure, are classified as high-risk. The regulation prohibits manipulative and intrusive practices, such as subliminal behavioural techniques and unlawful mass surveillance. It also requires transparency, human oversight, and standardised AI management systems, with a national authority overseeing compliance and a public AI sandbox supporting innovation.
Other countries, including the UK, US, China, Brazil, and Canada, are developing similar frameworks that focus on transparency, fairness, and ethical use. International standards such as ISO/IEC 42001 are increasingly shaping expectations for AI governance, auditing, and risk management, creating greater consistency across borders.