New US AI Policy Framework Prioritises Online Safety
The U.S. federal government has recently outlined a comprehensive approach to regulating artificial intelligence, focused on balancing innovation, public safety, and national competitiveness. On 20th March 2026, the White House released a set of legislative recommendations intended to guide Congress in establishing a uniform national AI framework. While not yet law, these recommendations signal the direction federal policymakers are likely to take and offer businesses early insight into potential regulatory expectations.
An Approach Focused on Children and Public Safety
At the heart of the recommendations is a strong emphasis on protecting children. The administration highlights the need for AI platforms to implement robust measures safeguarding minors from harm, while also empowering parents with tools to manage privacy settings, screen time, content exposure, and account controls. Proposals include age verification and commercially reasonable parental attestation mechanisms for AI platforms likely to be accessed by children, as well as features to reduce risks of sexual exploitation and self-harm. The framework also reinforces that existing child privacy protections apply to AI systems, including limits on data collection for model training and targeted advertising.
These recommendations demonstrate that businesses offering AI services to minors will need to implement protective features and privacy safeguards from the outset. Even though enforcement specifics are not yet defined, companies should begin evaluating their platforms, parental control functionalities, and data-handling practices to ensure compliance once legislation or sectoral guidance is issued.
Preemption and a Light-Touch Federal Approach
The White House recommendations propose a uniform federal standard for AI, including preemption of state laws that could impose conflicting requirements on developers. However, the recommendations carefully preserve states’ “traditional police powers,” allowing them to enforce laws against child abuse, fraud, consumer protection, and zoning matters. This approach aims to prevent a patchwork of state regulations while avoiding unnecessary burdens on AI development.
For businesses, this indicates that while federal rules may streamline national compliance, state-level obligations protecting public safety will remain relevant. Companies should monitor developments to identify which standards will be mandatory and how compliance with both federal and state safeguards will be coordinated.
Promoting Innovation Through Sandboxes and Infrastructure Support
A key feature of the federal recommendations is the encouragement of AI “regulatory sandboxes.” These programs would allow companies to test and deploy AI applications under reduced regulatory constraints, supporting innovation while the broader regulatory framework is finalised. This is complemented by measures to enhance AI infrastructure, including permitting for new data centres and access to federal datasets in AI-ready formats. Federal support for small businesses and technical assistance programs is also emphasised, ensuring that AI development benefits a broad range of stakeholders.
Businesses engaged in AI research and deployment should consider leveraging sandbox opportunities to test products and demonstrate responsible innovation. Early participation could also position companies favourably as sector-specific regulations evolve.
Intellectual Property, Free Speech, and Workforce Considerations
The recommendations also address intellectual property, emphasising that creators and innovators must be protected against unauthorised AI-generated content while fair use and freedom of expression are maintained. In parallel, the recommendations affirm that AI should not be used to censor lawful political expression and that the government must respect First Amendment protections.
Workforce development is another priority. Congress is encouraged to integrate AI training into existing education programs, apprenticeships, and workforce support initiatives. Businesses can anticipate collaboration with federal and educational institutions to ensure their employees are AI-literate and prepared for the evolving economy.
Implications for Businesses
While these recommendations do not impose immediate legal obligations, they outline the regulatory priorities that are likely to shape U.S. AI law in the near future. Companies should start preparing by reviewing data collection practices, parental and user controls, risk management measures, and compliance frameworks for AI systems. Engagement with industry standards bodies and monitoring federal guidance on sandbox programs and sector-specific regulations will also be crucial.
How Can Gerrish Legal Help?
Gerrish Legal is a dynamic digital law firm. We pride ourselves on giving high-quality and expert legal advice to our valued clients. We specialise in many aspects of digital law such as GDPR, data privacy, digital and technology law, commercial law, and intellectual property.
We give companies the support they need to successfully and confidently run their businesses whilst complying with legal regulations, without the burden of keeping up with ever-changing digital requirements.
We are here to help you. Get in contact with us today for more information.