Mar 23, 2026

The White House’s National AI Legislative Framework: What Employers and Business Leaders Need to Know

On March 20, 2026, the White House released a National AI Legislative Framework, urging Congress to establish a uniform federal standard for artificial intelligence and preempt a growing patchwork of state AI laws. The framework is the latest step in a coordinated executive branch strategy that began with President Trump’s December 2025 Executive Order and has been building momentum ever since.

Meanwhile, Senator Marsha Blackburn of Tennessee recently introduced a parallel legislative vehicle, the “TRUMP AMERICA AI Act,” a 291-page discussion draft that goes significantly further than the White House framework in ways that matter directly to employers. Her bill proposes annual bias audits for AI used in employment decisions, quarterly workforce displacement reporting to the Department of Labor, a new federal liability regime for AI developers and deployers, and a legislative resolution of the copyright training data question that would cut sharply against the AI industry.

The White House framework and the Blackburn bill together represent the most serious federal push to regulate AI in American history. The fact that they conflict on core questions, including copyright, employer obligations, and preemption, reveals how far Congress and the executive branch still are from a unified position. For employers, the details of both documents carry real compliance consequences that cannot be deferred until the legislative dust settles.

How We Got Here

In December 2025, President Trump signed an executive order, “Ensuring a National Policy Framework for Artificial Intelligence,” directing federal agencies to challenge state AI laws that burden innovation and fragment the national regulatory landscape. The order created an AI Litigation Task Force within the Department of Justice to actively challenge state AI laws in court, instructed the Department of Commerce to identify and publicly evaluate burdensome state statutes, and authorized federal agencies to use discretionary funding as leverage against states with aggressive AI regulations.

The December order did not invalidate any existing state law or create new federal compliance requirements. But it put businesses on notice that the executive branch intended to disrupt the state-by-state regulatory environment that has been developing rapidly since 2023.

A growing number of states, including California, Colorado, Texas, and Utah, have already enacted AI-specific laws covering the private sector. Many more are actively legislating. The White House’s March framework is its effort to replace that patchwork with a single federal standard. But as Senator Blackburn’s proposed TRUMP AMERICA AI Act reveals, the details of that standard are unsettled.

What the White House Framework Proposes

The White House framework is built around six policy objectives. While each carries implications for employers, three deserve particular attention.

Federal Preemption of State AI Laws

The framework’s most consequential proposal is a call for Congress to preempt state AI laws that impose undue burdens on innovation or conflict with federal policy. The administration’s position is that AI development is inherently an interstate activity with national security implications, and that states should not be permitted to regulate it.

The preemption, however, is not absolute. The framework explicitly preserves state authority to enforce generally applicable consumer protection, fraud, and child safety laws against AI developers and users. States would also retain control over where data centers are located and how state governments procure and use AI tools in areas like law enforcement and public education.

Copyright and AI Training Data

The administration takes the position that training AI models on copyrighted material does not violate copyright law but acknowledges that the question is contested and calls for letting the courts resolve it. The framework urges Congress not to take any legislative action that would interfere with the judiciary’s resolution of fair use questions.

For businesses that develop or procure AI tools, this means the copyright liability question remains alive and will be resolved through litigation, not legislation, for the foreseeable future. Vendor contracts should address this exposure directly.

Workforce Development

The framework addresses AI’s impact on workers through voluntary upskilling and training programs, not mandates. There are no employer reporting requirements and no obligations tied to AI-driven job replacement. This is a deliberate choice, reflecting the administration’s broader philosophy: innovation comes first, and market forces, not government mandates, should govern how businesses manage the workforce consequences of AI adoption. Whether that philosophy survives contact with Congress is another question.

Senator Blackburn’s Bill: A More Sweeping Parallel Track

Congress, however, is not simply implementing the White House framework. Members are responding to constituent concerns that vary widely, and the distance between those pressures and the White House’s preferred approach is already visible in Senator Blackburn’s proposed bill, which the Senator calls a “discussion draft.” For employers, five of its provisions demand close attention.

  1. Employer Reporting on AI-Driven Job Displacement

The bill would require publicly traded companies, federal agencies, and potentially large private companies to report to the Department of Labor within 30 days after each quarter on any AI-related job effects, including:

  • The number of employees laid off or contractor agreements terminated due, in whole or in part, to AI automation or replacement;
  • The number of positions the company decided not to fill because of AI;
  • The number of employees or contractors retained due to AI; and
  • New hires or contracts entered into because of AI.

Violations carry civil penalties of up to $1,000,000 per violation, and private parties, as well as the Secretary of Labor, can bring civil enforcement actions. Prevailing plaintiffs are entitled to attorneys’ fees. The bill also directs the Department of Labor to publish quarterly reports to Congress and the public summarizing the data.

This is a significant employer compliance obligation. The definition of “AI-related job effect” is broad, covering any workforce change “attributable to” AI. Companies that use AI to improve efficiency, not just to eliminate positions outright, need to assess whether those productivity gains are reportable when they affect headcount decisions.

  2. Annual Bias Audits for High-Risk AI Systems

The bill would require annual independent, third-party audits of any “high-risk” AI system. A high-risk system is defined as one that poses a significant risk to health, safety, rights, or economic security, with explicit examples including AI used in education, employment, law enforcement, and critical infrastructure.

For employers, this provision captures a wide range of AI tools, including systems used for hiring, performance management, scheduling, promotion decisions, and workforce monitoring. The audit must assess viewpoint discrimination and discrimination based on political affiliation. Results must be reported to the FTC within 180 days of each audit’s completion.

It is worth noting what the bill does not do on this front. As written, the bias audit requirement is narrow: audits are required specifically to detect viewpoint discrimination and discrimination based on political affiliation, not discrimination based on race, sex, national origin, or other categories protected under Title VII. Political affiliation and viewpoint discrimination are not protected characteristics under Title VII, and Blackburn’s bill does not make them so. This bias audit provision reflects the White House and congressional Republicans’ shared priority of guarding against ideological bias in AI systems. Enforcement runs exclusively through the FTC, with no role for the EEOC and no amendments to existing employment discrimination laws. Employers should understand, however, that this creates no shelter from Title VII exposure. AI used in hiring, promotion, and other employment decisions carries independent discrimination risk under existing law regardless of what the Blackburn bill requires or does not require.

  3. Developer and Deployer Liability

The bill creates a federal cause of action against both AI developers and deployers for harm caused by covered AI products. Deployers can be held liable as developers if they substantially modify the product or intentionally misuse it.

Critically, the bill prohibits developers from including contract language or terms of service provisions that waive, limit, or restrict liability in ways that are unreasonable. This directly affects how AI vendors structure their agreements with business customers. Employers that have signed vendor agreements with broad AI liability disclaimers should treat those provisions as potentially unenforceable under this framework.

  4. Copyright and Training Data: A Direct Conflict with the White House

Here the Blackburn bill and the White House framework diverge sharply. While the White House framework would leave fair use questions to the courts, the Blackburn bill would legislatively resolve the question, and it does so in favor of copyright holders.

Under the bill, the unauthorized reproduction or computational processing of copyrighted works for the purpose of training, fine-tuning, developing, or creating AI would not constitute fair use. AI-generated derivative works produced without authorization would be deemed infringing and ineligible for copyright protection.

The bill also adds a new subpoena mechanism to the Copyright Act, allowing copyright holders to compel AI developers to disclose training data records. Failure to comply would create a rebuttable presumption that the developer copied the copyrighted work.

  5. Repeal of Section 230

The bill proposes to fully repeal Section 230 of the Communications Act, which currently shields online platforms from liability for third-party content. The repeal would take effect two years after enactment.

For employers that operate online platforms, use AI to moderate or generate content, or deploy customer-facing chatbots and AI assistants, the removal of Section 230 immunity would dramatically increase legal exposure. AI-generated content that a platform hosts or distributes could become the platform’s legal liability.

Preemption in the Blackburn Bill: A Narrower Scope

On preemption, the Blackburn bill takes a more limited approach than the White House framework. Section 1701 provides that nothing in the act preempts any generally applicable law, including state common law and sector-specific regulatory schemes that may address AI. This is a meaningful limitation. State employment discrimination laws, biometric privacy statutes, and consumer protection frameworks would remain intact.

Practical Steps for Employers and Business Leaders

The legislative path is uncertain and the timeline is unclear, but the direction is not. Employers that wait for a final answer will be reacting from behind. Here is where to focus now:

  1. Map your AI use across the organization. Know which tools are being used, in which functions, and what decisions they influence. Pay particular attention to AI that affects recruiting, hiring, promotion, performance management, scheduling, and workforce monitoring. The Blackburn bill’s quarterly DOL reporting obligation covers any workforce change attributable to AI, including headcount decisions influenced by AI-driven efficiency. If you cannot currently produce that data, the gap needs to close.
  2. Evaluate governance for high-risk AI. The Blackburn bill defines high-risk AI to include systems used in employment decisions, education, and critical infrastructure. Annual third-party bias audits are proposed for those systems. Regardless of whether that provision becomes law, regulators and courts across jurisdictions are converging on similar expectations. Employers that deploy AI in employment contexts should document their systems, assess bias risks, and build meaningful human review into consequential decisions now.
  3. Review AI vendor contracts. The Blackburn bill would render unenforceable any contract language that unreasonably limits an AI developer’s liability to deployers. Agreements should be reviewed with this in mind. They should also address testing and documentation standards, audit rights, incident response obligations, and the allocation of copyright and compliance liability. The training data and fair use question is legally contested, and that risk is real today regardless of legislative outcome.
  4. Assess your Section 230 exposure. If your business operates a platform, deploys a customer-facing AI chatbot, or uses AI to moderate or generate content, model the impact of Section 230’s potential repeal. A two-year effective date, if enacted, provides a transition window, but planning needs to begin now.
  5. Audit your AI communications for accuracy. Internal and external statements about how AI tools work, their fairness, their accuracy, and their limitations should be verifiable and consistent with actual system performance. Both the FTC and state consumer protection agencies have signaled interest in AI-related misrepresentations, and the Blackburn bill’s bias audit and FTC reporting requirements would increase scrutiny in this area.
  6. Monitor developments without waiting for final answers. The DOJ’s AI Litigation Task Force is in operation. Commerce is evaluating state laws. Courts are deciding fair use cases. Blackburn’s bill is moving through committee. Any of these could shift the landscape materially before Congress acts. Organizations that track developments in real time will adapt faster than those waiting for settled law.
  7. Begin preparing for employer reporting obligations. The quarterly DOL reporting requirement in the Blackburn bill may not survive in its current form, but the concept has traction, and similar requirements have appeared in multiple legislative proposals across both parties. Employers should assess what data their current systems capture and where gaps exist, both to manage compliance risks and to inform internal decision-making about AI governance.

The Bottom Line

Employers that treat AI governance as a problem to be solved when the law settles are already behind. Those that build durable governance frameworks now, grounded in accountability, transparency, and documented human oversight, will be better positioned regardless of how the legislative picture resolves.

If you have questions about the White House AI framework, the Blackburn bill, how they affect your organization, or how to build an AI governance program suited for today’s regulatory environment, please contact Sam Mitchell at Smith, Gambrell & Russell, LLP.