
When AI Governance Becomes a Zero-Sum Game

This blog is a Part II follow-up to the previous blog on AI governance, featured in Technical.ly.

At the personal level, sloppy AI governance shows up in deepfakes, revenge porn, and algorithmic discrimination. Even at the interpersonal level, the unregulated capabilities of products like Grok can damage reputations, spread misinformation, and, critically, harm children. At the global level, regulatory discrepancies and hasty decisions influence AI sovereignty tactics, trade leverage, and power concentration. Writing for Brookings, Cameron F. Kerry warns, “An all-or-nothing sales pitch from the U.S. will make it hard to find this balance and heighten concerns about dependence on U.S. technology.”

Back to domestic regulation: between 2019 and 2024, U.S. states rushed to regulate election-related deepfakes. Picture a game of whack-a-mole, which is what makes this so challenging when satire and politics are as ubiquitous as salt and pepper. Current politics point toward even less regulation and more of a surrender mentality, all while AI grows more sophisticated. On the positive side, though, the U.S. is getting more serious at the national level about criminalizing non-consensual deepfake pornography: in 2025, Congress passed the Take It Down Act.

The bill succeeded in part because it avoided Section 230, preserving platform protections. As Time reported, “One of the key reasons tech companies supported the bill was because it did not involve Section 230…With anything involving Section 230, there's a worry on the tech company side that you are slowly going to chip away at their protections…” How litigation interacts with this legislation will be interesting to watch, because the gap between policy and practice is one of the oldest dilemmas. Grok’s so-called “spicy mode” has simply been placed behind a paywall, which means the tool can very much still harm people.

Legislation that protects individuals is more likely to advance when it does not threaten the foundational liability shields of technology companies, which partially explains what we are seeing. Yet there are also positive regulatory developments that are as innovative as the AI they are tasked with controlling. Similar to what Representative Summer Lee is proposing in the U.S., Denmark gained global attention in June 2025 when it announced the concept of digital autonomy enforced by law. According to The Good Lobby, this is “one of the first attempts in the world to treat identity rights as copyright-protected assets,” giving citizens rights that include removal, compensation for damage, and platform liability.

Back to the global aspect of AI regulation: adding AI sovereignty to the governance conversation raises the stakes from simple market competition to a zero-sum game. Achieving technological independence is one thing, but all-or-nothing export packages, especially when technological development and resources are not uniform globally, shift the entire governance conversation. As it currently stands, the U.S. and EU have vastly different AI governance models. How do import/export tech relationships and the current global marketplace function when value systems and policy differ so extremely, and the conversation itself has become a “zero-sum game”?


Whether we are talking about a teenager targeted by a deepfake, a professional harmed by revenge porn, or a nation navigating high-stakes AI trade and sovereignty negotiations, the throughline is the same: the consequences of weak governance are compounding, fast. What once felt like “edge cases” now constitutes systemic risk, and what used to be reputational damage can now mean legal exposure, financial loss, and long-term psychological harm. This is the moment for institutions, companies, and policymakers to move beyond reactive compliance and toward intentional stewardship.


That means stress-testing policies against real-world misuse, investing in governance capacity alongside product development, centering human impact in technical decisions, and treating digital rights as seriously as physical ones. The question is no longer whether AI governance matters; it is whether we are willing to recalibrate our systems, incentives, and leadership models quickly enough to meet the scale of what is unfolding. In an era where one model update can ripple from private lives to geopolitical power, responsible governance is no longer optional. It is infrastructure.


Photo credit: freepik.com 

 
 
 
