What Can We Learn From South Korea's New AI Act?
On 22nd January 2026, South Korea implemented its AI Basic Act, formally known as the Act on the Development of Artificial Intelligence and Establishment of Trust. The law joins the European Union AI Act among the world's most comprehensive AI regulatory frameworks. For companies using or providing AI, domestically or internationally, the legislation signals a shift toward stricter accountability, transparency, and safety standards in AI operations.
For businesses operating in South Korea or planning to enter its market, understanding the Act is essential. It applies broadly to AI developers (those creating AI systems) and AI users (those embedding AI in products or services), and it has extraterritorial reach, meaning foreign companies providing AI to South Korean users fall under its scope.
Key Requirements for AI Operators
The AI Basic Act introduces obligations that vary depending on the type and impact of AI systems, including the following:
Transparency
Operators of generative AI systems producing text, images, audio, or video that mimic human outputs must clearly label AI-generated content. Both generative and high-impact AI systems require prior notification to users that the product or service incorporates AI. The goal is to ensure that users can distinguish between human-created and machine-generated outputs.
High-Impact AI
High-impact AI systems are defined as those with significant consequences for human life, safety, or fundamental rights, including applications in healthcare, energy, transportation, hiring, or biometric analysis. Operators of such systems must assess high-impact status before deployment and consult South Korea’s Ministry of Science and ICT (MSIT) if needed.
In addition, operators of high-impact systems must provide explainability by offering meaningful descriptions of outcomes, key decision-making criteria, and training data summaries. They must also develop user protection plans, maintain mechanisms for human oversight, and document their risk mitigation measures. Where applicable, they must perform impact assessments on fundamental rights.
High-Performance AI
Systems exceeding a certain computational threshold, specifically those trained with over 10²⁶ floating-point operations, are considered high-performance AI. Operators must implement life-cycle risk management plans, establish user protection measures, and report compliance outcomes to MSIT. Additional technical requirements are expected as the Ministry finalises enforcement regulations.
Non-compliance with the AI Basic Act carries real consequences. The legislation allows fines of up to KRW 30 million (~$20,870) and potential imprisonment for serious violations. While detailed enforcement procedures are still being developed, companies can expect regulatory oversight and audits, particularly for high-impact or high-performance AI systems.
Key Takeaways for Businesses
South Korea’s AI Basic Act suggests that stricter AI oversight is becoming a global standard, and businesses using AI must take proactive steps to stay ahead. Companies should begin by evaluating their existing and planned AI systems to determine which may fall under the high-impact or high-performance categories, and then carry out thorough assessments of potential risks, including safety, fairness, and data privacy concerns.
Transparency is key: organisations need to ensure that AI-generated content is clearly identified and that users are provided with meaningful explanations of how AI outputs are produced. Implementing ongoing risk management procedures across the AI lifecycle, alongside careful documentation of safety measures and mitigation efforts, will help support regulatory compliance.
Oversight and governance structures are equally important: assigning responsibility internally, appointing local representatives where necessary, and establishing ethics committees can all guide responsible AI use. Finally, companies should stay informed about updates from South Korea's Ministry of Science and ICT and leverage governance tools and solutions to ensure their AI practices remain compliant as regulations evolve.
How Can Gerrish Legal Help?
Gerrish Legal is a dynamic digital law firm. We pride ourselves on giving high-quality and expert legal advice to our valued clients. We specialise in many aspects of digital law such as GDPR, data privacy, digital and technology law, commercial law, and intellectual property.
We give companies the support they need to successfully and confidently run their businesses whilst complying with legal regulations without the burdens of keeping up with ever-changing digital requirements.
We are here to help you. Get in contact with us today for more information.