Publication

Digilova, Goryunov and Choksi in Corporate Compliance Insights: New White House, New AI Rules

With Biden-era AI guidelines gone, the SEC and FTC have made it clear they’re watching (for now) how public companies use and talk about their AI capabilities. Haynes Boone Partners Alla Digilova, Eugene Goryunov and Alok Choksi authored an article for Corporate Compliance Insights explaining how companies can stay compliant in this shifting landscape.

Read an excerpt below.

The past few years have seen a significant rise in the popularity and influence of AI technologies. Many public companies in the US have either already implemented or are actively exploring the adoption of AI in their business.

AI tools are rapidly changing the market landscape, promising significant technological progress. Most recently, the proliferation of generative AI (GenAI) tools, such as ChatGPT, has further heightened interest in the technology among companies and the general public.

Adoption of GenAI tools is not without risks: irresponsible use has been tied to fraud, discrimination and disinformation, and it can pose risks to national security. The risks of misinformation and fraud are especially pertinent for public companies, which should take care to ensure safe adoption of AI in their business practices.

Regulatory overview
AI policies and priorities have recently become a highly contested regulatory topic in the US, with President Donald Trump’s series of newly issued executive orders indicating upcoming policy changes. On his first day in office, Trump revoked former President Joe Biden’s prior AI directives and policies, issued in 2023 and 2025.

Two days after his inauguration, on Jan. 23, Trump issued a new executive order titled “Removing Barriers to American Leadership in Artificial Intelligence,” which sets forth a policy goal “to sustain and enhance America’s global AI dominance in order to promote human flourishing, economic competitiveness, and national security” and calls for members of his administration to develop a new AI plan within 180 days.

Guided by a set of principles, the 2023 Biden order had instructed federal agencies and the National Institute of Standards and Technology (NIST) to develop guidelines and best practices that would govern how the US government uses AI. Consistent with the order’s directive, NIST published its “Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile” on July 26, 2024. The profile was designed to help organizations integrate trustworthiness considerations into the design, development, use and evaluation of GenAI systems. It also outlined the risks that are unique to GenAI, suggested corresponding actions to manage these risks and summarized operational considerations for effective risk management.

The 2025 order instructed the federal government, in collaboration with the private sector, to develop AI infrastructure within the US, with the goal of enabling the US government to continue harnessing AI in service of national-security missions while preventing the US from becoming dependent on other countries’ infrastructure to develop and operate powerful AI tools.

While White House moves play a major role in how AI develops in the US, other federal bodies have also shaped the landscape, though leadership shakeups could signal major changes in how these agencies approach enforcement of corporate AI use.

To read the full article from Corporate Compliance Insights, click here.
