
BOARD MEMO 2022

Board Consideration of AI and Other Autonomous Computer Systems

Mena Kaplan
Partner, New York

Doru Gavril
Partner, Silicon Valley

This past year has shown that delegating decision-making to computer systems can result in unintended consequences. Courts, legislatures and government agencies are now focused on these issues. The board of a prominent aircraft manufacturer drew criticism in the Delaware Court of Chancery for failing to monitor the safety implications of an “intelligent” flight control system that was designed to aid pilots but ultimately compromised aircraft safety. During well-publicized hearings, Congress delved into how algorithms employed by tech companies can prioritize certain results due to learning biases rather than intentional programming. And a recent enforcement action by the SEC showed that trading software may be using material nonpublic information without the humans nominally in charge realizing it.

Increasing autonomy and complexity of systems raises questions for boards

As the autonomy and complexity of such systems increase, directors may find it desirable to evaluate the implications for corporate liability, internal controls and board oversight. That is particularly true in light of the evolution of Delaware law on board oversight responsibilities over the last few years. Recent Delaware rulings have emphasized that boards must establish dedicated internal controls for “mission-critical” areas of the business and ensure board-level monitoring of those controls. This trend places greater emphasis on active board involvement, in what is arguably a departure from the traditional, procedure-driven Caremark (1996) standard.

With this in mind, it is important to acknowledge that the law will be in flux for decades to come. As it evolves, however, one question worth framing is under what circumstances the actions of AI and autonomous systems may be imputed to their corporate owner. The simplest algorithms, which cannot function without direct human intervention, will be the most likely to lead to corporate liability, just as any tangible product would. But as AI and autonomous systems become more advanced and prevalent, the law will have to address questions of causation and state of mind. What if a system that is capable of learning and adopting new behaviors acts in a way that was not intended, or foreseeable, but is perfectly consistent with its programming? Would that qualify as a willful corporate action? What happens if restricted data (say, material nonpublic information) is used to train a system (say, trading software) that is then allowed to operate using public data?

Key takeaways for boards

While it is premature to predict how courts will rule on these matters, companies and their boards should expect hard-fought battles over defining this new legal frontier. Officers and directors of companies that use, or anticipate using, AI and autonomous systems in the near future can begin taking the following concrete steps to enhance preparedness and minimize liability.

  • Create a bespoke risk assessment of the various systems being considered or already in use.

  • If the systems will be “mission critical” for the company, build internal controls over them and ensure the controls have a clear path to the board.

  • Ensure the board actively monitors the implementation and functioning of the new systems and has the technical literacy needed to evaluate the information it receives.

  • Refresh this assessment periodically, with the assistance of counsel, to ensure the protocols in place are up to date.

As management and boards deal with the practicalities of operating such systems in real time, here are some questions they might consider.

  • What is the range of possible outcomes if a new system is deployed? Which outcomes are most likely?

  • Is the system capable of learning? What is it learning from? How are biases minimized or eliminated in the learning process?

  • What monitoring features are built into the system? Who is responsible for addressing the alerts it generates?

  • Are human overrides available? If so, are such overrides desirable?

  • Do protocols exist for periodically evaluating the compliance of these systems with existing laws and norms? Should new controls be instituted?