It is not clear what boards of directors should be doing about AI. (The answer applies most directly to public companies, given anticipated SEC and possible stock exchange promulgations and mandatory disclosures, but much the same conundrum will confront managers of private companies, which face the same business risks if not the same level of disclosure risk.)
The obvious answers: protect systems from hacking and intrusion; restrict use of AI on company platforms; alert those evaluating ERM (enterprise risk management), such as the Audit Committee or the Risk Committee, and make sure all parts of the company are included so that no risk is missed in a functional corporate “silo”; obtain outside consulting support, perhaps hire experts directly, and include a knowledgeable member on the board of directors; and, as their efficacy improves, employ systems that can identify received input as machine-generated (current technology is spotty; ask any university trying to analyze term papers).
The fundamental problem for companies and the persons responsible for running them is that the risk is new, powerful and sourced from outside, yet at the same time subtle and based in part on the quality of judgment applied to unreliable input. The arguable answers for a board of directors are: i) don’t develop AI (too late); ii) don’t use it (largely too late and in any event uncontrollable); iii) make sure it is not corrupt at inception or corrupted in transition (good luck with that); iv) rely on government regulation (whose government, when, and with what bias?); v) ?
Solutions seem to lie beyond board governance actions, yet actions must and will be taken.
What will insurance against AI disruption or fraud look like, and at what price point?
Can a board dare to pass on the clear business benefits of AI in speed, efficiency and the ability to eliminate some human overhead? What will the shareholders say?