AI Safety Out; AI Innovation In

Re-branding the AI Safety Institute signals a shift toward speed, innovation, and a greater tolerance for risk.

Introduction

In March 2025, the U.S. Department of Commerce rebranded the Artificial Intelligence (AI) Safety Institute as the Center for AI Standards and Innovation (CAISI). This change signaled a strategic shift from a broad focus on AI safety to a more targeted emphasis on national security risks and reducing international regulatory constraints. The rebranding aligns with the Trump administration’s broader agenda to bolster American AI leadership and minimize regulatory barriers.

AI Safety Out; Innovation In

Originally established in 2023 under President Biden, the AI Safety Institute aimed to develop safety protocols in collaboration with major AI firms, addressing risks such as biological weapon creation and harmful online content. 

The new CAISI will concentrate on tangible threats like cybersecurity, biosecurity, chemical weapons, and foreign influence from AI systems like China’s DeepSeek. This policy shift follows the revocation of a Biden-era executive order on AI safety and a push for increased AI adoption and less regulation, including a proposed 10-year moratorium on state-level AI rules.

Debate

This reorientation has sparked debate among policymakers, industry leaders, and civil society organizations. Proponents argue that the focus on national security will enhance the US’s competitive edge in AI development. 

Critics, however, express concern that deprioritizing general AI safety could lead to insufficient oversight of AI technologies’ broader societal impacts. The shift underscores the ongoing tension between fostering innovation and ensuring responsible AI governance.

Small and Medium-Sized Enterprises

For small and medium-sized enterprises (SMEs), this development highlights the importance of proactively establishing internal AI governance frameworks. With the US federal focus shifting towards national security and less regulation, SMEs must take greater responsibility for ensuring their AI systems are implemented ethically and in compliance with applicable laws if they plan to engage in non-US markets, where regulations are more mature.

That said, American SMEs focused on the US domestic market may find it easier to “move fast and break things.” American SMEs that develop AI solutions for the national security sector may also find eager customers in Europe, where the US is pressuring NATO Allies to spend more on defense and where the EU AI Act explicitly carves out exemptions for defense applications. With the US and EU racing to counter Chinese threats in cyberspace and in AI, SMEs may find fertile ground to grow.

Conclusion

As AI governance standards evolve and regulatory landscapes shift, small and mid-sized businesses cannot afford to fall behind. Whether you’re just starting to implement AI or looking to align with new data protection laws and regulations, 1 Global Data Protection Advisors (1GDPA) is here to help. Our services are tailored for companies without large legal or IT teams, but who still need world-class compliance, strategy, and peace of mind.

Contact Us

If you want to learn more about how 1 Global Data Protection Advisors can help your business, please reach out for a free consultation. 1GDPA helps public, private, and non-profit organizations to leverage their data and AI systems in a responsible and legally compliant manner. 1GDPA stands ready to help you create, update, and mature your data protection, privacy, and AI governance, risk, and compliance programs.

