Google has responded to the U.S. government’s request for a national “AI Action Plan” with a policy proposal that calls for looser copyright restrictions on AI training and balanced export controls. The tech giant emphasizes the importance of safeguarding national security while ensuring that U.S. businesses remain competitive in the global AI landscape.
In its document, Google argues that AI policymaking has long been overly focused on risks, often overlooking how excessive regulations could stifle innovation, weaken national competitiveness, and slow scientific progress. However, the company notes that the current administration is beginning to shift toward a more balanced approach.
Google Defends Use of Copyright Data in AI Training
A key aspect of Google’s proposal is its stance on intellectual property rights in AI development. The company asserts that “fair use and text-and-data mining exceptions” are crucial for advancing AI innovation. Like OpenAI, Google seeks legal recognition of its ability—and that of other AI developers—to train models using publicly available data, including copyrighted material, with minimal restrictions.
“These exceptions enable the use of copyrighted, publicly available material for AI training without significantly impacting rightsholders,” Google wrote. The company argues that this approach helps avoid complex and time-consuming negotiations with data holders, which can hinder AI research and development.
However, this position has sparked controversy. Google is currently facing lawsuits from content owners who claim the company used copyrighted materials without permission or compensation. U.S. courts have yet to determine whether the fair use doctrine protects AI developers in such cases.
Google also raises concerns about certain export restrictions imposed under the Biden administration. The company warns that these measures could hinder economic growth by placing unnecessary burdens on U.S. cloud service providers. This stance sets Google apart from competitors like Microsoft, which has expressed confidence in its ability to comply with the new rules.
These export controls, aimed at limiting access to advanced AI chips in certain countries, include exemptions for trusted businesses requiring large-scale AI infrastructure. Google, however, remains critical of policies it believes could restrict U.S. AI companies’ global reach.
Calls for Increased AI Investment and Federal Legislation
In its proposal, Google urges the U.S. government to commit to “long-term, sustained” investments in domestic AI research and development. The company pushes back against recent efforts to reduce federal AI funding and cut research grants.
Google also recommends that the government:
- Release public datasets to aid AI model training.
- Invest in early-stage AI research and development.
- Ensure that AI computing resources are accessible to researchers and institutions.
The company further highlights the growing complexity of AI regulation in the U.S., citing the proliferation of state-level AI laws. With 781 AI-related bills currently pending across the country, Google calls for federal legislation to establish a unified privacy and security framework.
Debating AI Developer Liability and Transparency Rules
Google opposes regulations that would hold AI developers legally responsible for how their models are used, arguing that developers have “little to no visibility or control” over third-party applications of their technology. The company believes that liability should primarily fall on those deploying AI systems rather than those creating them.
Google previously opposed California's failed SB 1047 bill, which sought to spell out precautions AI developers should take before releasing models and to establish liability for AI-caused harms. The company contends that AI deployers, rather than developers, are better positioned to manage risks and monitor how AI systems are used.
On transparency, Google takes issue with disclosure requirements being considered by the EU and certain U.S. states. The company warns that broad transparency rules could expose trade secrets, allow competitors to replicate AI models, and create security risks.
For example, California’s AB 2013 mandates that AI developers disclose high-level summaries of their training datasets. Meanwhile, the EU’s AI Act will require developers to provide detailed documentation about their models’ operation, risks, and limitations. Google argues that such regulations could unintentionally give adversaries insights into how to bypass AI safeguards.
Google’s AI policy proposal underscores its push for a regulatory environment that supports AI innovation while limiting restrictions on data usage, model transparency, and developer liability. As the global AI landscape continues to evolve, the debate over AI governance, copyright laws, and ethical safeguards is set to intensify.