SIEPR’s Daniel Ho testifies on Capitol Hill, gives input to lawmakers on AI policy

The SIEPR senior fellow testifies before federal lawmakers and joins colleagues in proposing steps to develop AI policies that foster innovation while managing risks.

Stanford law professor Daniel Ho recently testified before a House subcommittee and co-wrote two letters to the Office of Management and Budget (OMB) on how the government can best strengthen artificial intelligence governance, further innovation, and manage risk.

Ho, the William Benjamin Scott and Luna M. Scott Professor of Law and director of the Stanford Regulation, Evaluation, and Governance Lab (RegLab) at Stanford Law School, is also a senior fellow at the Stanford Institute for Economic Policy Research (SIEPR). On Dec. 6, he testified before the U.S. House Subcommittee on Cybersecurity, Information Technology, and Government Innovation on President Biden’s recent executive order on AI and the OMB’s related draft policy, which directs federal agencies on how to strengthen AI governance, innovation, and risk management.

Photo caption: SIEPR Senior Fellow Daniel Ho, the William Benjamin Scott and Luna M. Scott Professor of Law at Stanford Law School, testifies before the U.S. House Subcommittee on Cybersecurity, Information Technology, and Government Innovation on Dec. 6.

In his testimony, Ho recommended six actions Congress should take to achieve a robust government AI policy that “protects Americans from bad actors and leverages AI to make lives better.”

Among his recommendations: Congress must support policies that give agencies’ Chief AI Officers the flexibility and resources to “not just put out fires, but craft long term strategic plans.” Additionally, Ho said, the government must enact policies, including public-private partnerships, that allow it to attract, train, and retain AI talent and provide pathways into public service for people with advanced degrees in AI.


Ho, who is also a senior fellow at the Stanford Institute for Human-Centered Artificial Intelligence (HAI), serves on the National Artificial Intelligence Advisory Committee (NAIAC). He and others at RegLab have worked extensively with government agencies on technology and data science projects.

Ho also testified earlier this year, in May, before the Senate Committee on Homeland Security and Governmental Affairs on the use of AI in government.

His latest recommendations come less than two months after President Biden signed the executive order “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” which sets new standards for AI safety and security and aims to position the United States as a leader in the responsible use and development of AI in the federal government. In response, on Nov. 1, the OMB issued a call for comment on a draft policy titled “Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence.”

Ho and other prominent law and tech leaders — including other Stanford scholars — wrote two letters to the OMB, noting how critical the moment is for getting technology policy right and commending the OMB for its “thoughtful approach to balancing the benefits of AI innovation with responsible safeguards.” 

The first letter, sent to the OMB on Nov. 30, applauds the proposed guidance to create Chief AI Officer roles to lead AI efforts in federal agencies, increase technical hiring, conduct real-world AI testing, and allocate resources via the budget process. The letter also outlines why some of the draft policy’s one-size-fits-all “minimum” procedures and practices, applied to all “government benefits or services” programs, may have negative unintended consequences.

“Without further clarification from OMB and a clear mandate to tailor procedures to risks, agencies could find themselves tied up in red tape when trying to take advantage of non-controversial and increasingly commodity uses of AI, further widening the gap between public and private sector capabilities,” the letter authors wrote.

The second letter to the OMB, sent on Dec. 4, focuses specifically on government policies relating to open source, a type of software whose source code is publicly available for anyone to view, use, modify, and distribute.

Citing “long-recognized benefits to open-source approaches,” the letter authors urged the OMB to make clear that government agencies should default to open source when developing or acquiring code.

In the meantime, Ho and his colleagues at HAI and RegLab are tracking the implementation of Biden’s executive order.

For more details on the two letters and Ho’s co-authors, read the full story originally published Dec. 7 by Stanford Law School.