For the past two years, AI governance practitioners have been playing a multi-board chess game. Track Colorado. Watch California. Stay current on Texas. Monitor the EU AI Act enforcement calendar. Build programs that satisfy ISO 42001 while aligning to the NIST AI RMF. It has been complicated, expensive, and exhausting, and it is about to get harder.
Senator Blackburn’s newly introduced discussion draft, formally the “Republic Unifying Meritocratic Performance Advancing Machine Intelligence by Eliminating Regulatory Interstate Chaos Across American Industry Act” (yes, that spells RUMPELMAERICAA), is the clearest signal yet that the federal vacuum is closing. The bill is sprawling, imperfect, and politically uncertain. But for governance practitioners, this is not a wait-and-see moment. It reveals where this is going.
Here is what it says, how it compares to the frameworks you are already managing, and what it means for the tools and capabilities your program will need.
What the Draft Actually Does and Doesn’t Do
The Blackburn draft is not an AI governance framework in the way the EU AI Act or ISO 42001 are. It is an omnibus bill that sweeps several long-stalled pieces of legislation—the Kids Online Safety Act, Filter Bubble Transparency Act, No Fakes Act, and others—into a single vehicle, alongside new AI-specific provisions.
What it notably does not do: it does not create mandatory risk classification tiers for enterprise AI systems, does not require impact assessments before deployment, and does not impose the kind of systematic documentation obligations the EU AI Act places on high-risk systems.
What Actually Changes for Governance Practitioners
The instinct when a new bill drops is to map its requirements against your current program and identify gaps. That is the right instinct. But the more important question the Blackburn draft forces is structural: the obligations coming from this direction require capabilities that most governance programs were not built to provide. Here is where the gaps are widest.
- Workforce impact tracking is now a governance obligation
Title II’s quarterly DOL disclosure requirement is unlike anything currently in your program. It requires your organization to attribute specific hiring, termination, and backfill decisions to AI causation. Most organizations currently track this anecdotally, if at all. Building the cross-functional infrastructure to do this accurately means governance teams must be involved in operational AI decisions earlier than most programs currently are.
- Synthetic content provenance is a new technical requirement
Title XIV pushes organizations toward C2PA-compliant watermarking and provenance metadata for AI-generated content. This is an engineering requirement. Any organization deploying generative AI for customer-facing output will need to implement content provenance infrastructure. Governance teams that do not have direct lines into the engineering organizations building and deploying AI will not be able to meet this obligation through policy alone.
- The shift from attestation to continuous evidence
Annual ethics reports and periodic risk assessments were adequate when AI governance was primarily a policy and reputation function. The emerging regulatory environment demands quarterly DOL disclosures, real-time bias monitoring under Colorado’s framework, ongoing EU AI Act compliance, and continuous audit trails for NIST RMF safe harbors. Governance needs to be instrumented, not documented after the fact: the systems producing AI outputs need to generate the evidence your program needs, automatically. Manual processes do not scale to this cadence.
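"Instrumented, not documented" can be as simple as wrapping every model-serving call so it emits an audit record as a side effect. The Python sketch below shows the pattern; the record fields, model names, and in-memory log are all hypothetical stand-ins (a real system would write to an append-only store).

```python
import functools
import hashlib
import time

AUDIT_LOG = []  # stand-in for an append-only audit store

def audited(model_id: str, model_version: str):
    """Decorator: every call to the wrapped model function appends an
    audit record automatically. Fields are illustrative, not a schema."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            result = fn(*args, **kwargs)
            AUDIT_LOG.append({
                "ts": time.time(),
                "model_id": model_id,
                "model_version": model_version,
                # Hash inputs/outputs rather than storing them raw.
                "input_sha256": hashlib.sha256(
                    repr((args, kwargs)).encode()).hexdigest(),
                "output_sha256": hashlib.sha256(
                    repr(result).encode()).hexdigest(),
            })
            return result
        return wrapper
    return decorator

@audited(model_id="credit-scorer", model_version="2.3.1")  # hypothetical system
def score(applicant: dict) -> float:
    return 0.5  # placeholder for a real model call
```

Because the evidence is produced by the serving path itself, disclosure generation becomes a query over the log rather than a quarterly scramble.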
- Vendor contracts need to reflect the developer/deployer split
Title VII establishes separate liability for developers and deployers. This mirrors the EU AI Act’s provider/deployer structure and Colorado’s framework. Your AI vendor contracts almost certainly do not yet reflect this allocation clearly. Governance teams need to work with legal to build contract frameworks that specify intended uses, representations about testing and safeguards, indemnification provisions, and the obligations each party bears under relevant regulations. Every AI vendor relationship will need to be revisited.
What To Do Now
The Blackburn draft is not yet law. It may not pass in its current form. But the direction it signals tells governance practitioners what they need to build. Five priorities are clear:
- Get your AI use-case inventory current and classified. You cannot map regulatory triggers to AI systems you do not know you have. Inventory comes first—before any other governance work.
- Stand up cross-functional governance structures. The workforce disclosure requirement in Title II alone requires HR, legal, finance, and AI governance to be working from the same data. Governance cannot sit in a silo and produce these outputs.
- Revise your AI vendor contract templates. Start by mapping which of your AI vendor relationships lack clear developer/deployer liability allocation, intended use representations, and regulatory compliance obligations. Prioritize the highest-risk systems first.
- Move from periodic documentation to continuous monitoring. If your governance program runs on annual reviews and quarterly snapshots, you are not architected for the cadence these regulations require. Model observability, real-time audit trails, and automated disclosure generation are infrastructure requirements, not nice-to-haves.
- Build to the EU AI Act standard and adapt down. It is significantly easier to satisfy a less demanding framework when you have already built for a more demanding one than the reverse. The EU’s requirements for high-risk systems—technical documentation, conformity assessments, human oversight mechanisms—represent the highest bar currently in force. Use that as the ceiling and configure for other jurisdictions from there.
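The first two priorities above connect directly: once the inventory is classified, mapping each system to the regimes it triggers can be mechanical. The sketch below is a minimal illustration; the classification flags and trigger rules are simplified assumptions, not legal determinations, and the system IDs are invented.

```python
# Hypothetical inventory records: classify each AI use case once, then
# derive which regulatory regimes it plausibly triggers. The rules below
# are simplified illustrations, not legal advice.
INVENTORY = [
    {"id": "resume-screener", "generative": False,
     "consequential_decision": True, "eu_high_risk": True},
    {"id": "marketing-copy-gen", "generative": True,
     "consequential_decision": False, "eu_high_risk": False},
]

def triggered_regimes(system: dict) -> list[str]:
    """Map one inventory record to the obligations it likely triggers."""
    regimes = []
    if system["eu_high_risk"]:
        regimes.append("EU AI Act high-risk obligations")
    if system["consequential_decision"]:
        regimes.append("Colorado AI Act deployer duties")
    if system["generative"]:
        regimes.append("Content provenance / watermarking")
    return regimes
```

This is also what "build to the EU AI Act and adapt down" looks like in practice: the EU classification drives the richest obligation set, and other jurisdictions become additional rules over the same inventory.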
The era of AI governance as a voluntary, principles-based function is over. The era of AI governance as an operationally embedded, continuously monitored, multi-jurisdictional compliance function is beginning. The Blackburn draft, whatever its eventual fate in Congress, makes that transition timeline shorter—not longer.