Let’s consider two hypothetical scenarios.
First, an artist creates a new work. Copyright law protects the artist’s intellectual property, allowing the artist to release the work for public enjoyment. Copyright law also permits others to make derivative works based on the original piece. Society is enriched by the existence of both the original and the derivative works.
But what if another law held the artist legally liable for any offense or harm caused by any derivative of the original? This would surely have a chilling effect on the creation of new works. Few would be released for public enjoyment. Creators would retain tight contractual control over who could view or use their works, and derivatives would become the exception rather than the norm. Society would be much the poorer culturally as a result. Consequently, such derivative liability laws are extremely uncommon.
Second, a manufacturer makes a good intended for one specific purpose, say, a motor vehicle for personal transportation. The manufacturer is required to take due care to ensure that the good is fit and safe for its intended purpose.
But what if a law required the manufacturer also to anticipate and prevent the use of the good for any number of purposes other than the original one? The manufacturer could, for example, be held legally liable for failing to anticipate or prevent a vehicle made at its plant being used to carry a suicide bomber and a bomb into a crowded marketplace. Under such a law, a manufacturer would release items for use only if it could maintain strict contractual control over them once they left the production line. Creative beneficial uses, as well as harmful ones, would be forgone; once again, society would be the loser. So, such laws don’t usually exist.
Yet precisely such provisions characterize actual and proposed regulations constraining the development and deployment of AI applications. If they remain in place, they will have exactly these chilling effects on the AI sector and will deprive society of significant benefits, an outcome anticipated by opponents of California SB 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act.
Most AI regulatory regimes (e.g., the EU Artificial Intelligence Act) and voluntary standards (such as the National Institute of Standards and Technology AI Risk Management Framework) require AI developers and deployers to anticipate potential risks arising from their applications, actively monitor their use, and, in the California case, intervene and shut down the applications if a sufficiently adverse event occurs. To meet this requirement satisfactorily, an application can never leave the original developer’s direct control. The developer cannot make the application publicly available in an open-source market for further innovation, for fear of being held liable for the consequences of another’s subsequent actions.
Nor can the producer confer a “free ownership stake” (in Demsetz’s property rights sense), in which the new owner is free to exercise control choices, as occurs in the sale of a manufactured vehicle, for fear of the application being used for a purpose not already considered in the risk management framework. Such regulations can succeed only if these applications remain within tightly closed communities, with limited entry and few threats to control: hardly a vibrant environment for continual innovation. Perhaps this is why some AI developers are not averse to increased regulation in their sector.
However, the AI application ecosystem has already moved far beyond a simple structural model of producing and selling manufactured goods that can be controlled through risk management-based safety regulation. Many AI developers use open-source elements as inputs to their applications, and they supply their outputs back into open-source software communities. They do so in full anticipation that the models will be adapted and improved to generate new variants that benefit society, much as the creators of copyrighted works anticipate that their works can be adapted to create new and different benefits. They do not control all elements of their input supply chain. Nor can they control all downstream uses and users, so it is futile to endeavor to hold them to such account.
Regulators and legislators must recognize that if AI is to deliver on its innovative and wealth-enhancing promise, developers alone cannot be held responsible for all consequences. If they are, then the chilling effects will not simply be hypothetical; they will be real.
Learn more: Revisiting Australia’s Facebook News Contracts Three Years On | Harm, Safety, Effective AI Risk Management, and Regulation | Competition in AI Regulation: Essential for an Emerging Industry | AI Regulation Increases Certainty—but for Whom?