
Innovating AI for a Global Market Is Hard to Do

AEIdeas

May 13, 2024

Artificial Intelligence (AI) technologies pose risks to individuals and society. These vary from the mundane to the catastrophic. Will we be exposed to distasteful content on social media platforms? Will we be hit by out-of-control autonomous vehicles? Will AI classification models deny people access to essential goods and services on discriminatory grounds? Will large language models decimate white-collar workers’ jobs? And the big one: Will AI lead to the end of humanity and society as we know them?

The good news is that the vast majority of AI applications will not cause catastrophic harms. Rather, they will bring great benefits to humanity and society, although there is still much to learn about individual applications. Harms will be minimized because it is in AI developers’ interests to be careful and rigorous in their development and testing processes. Almost always, what is good for end users will be good financially for the developers, so their risk management processes will be well aligned with their customers’ interests. What’s more, regulation is intended to ensure these technologies are developed and implemented with high priority given to risk management, safety, and trustworthiness.

Even with the best will in the world, though, “accidents” (unexpected, unpredicted, or outright unpredictable outcomes) will happen. AI models are being deployed in extremely complex environments, all the more so because many of the new applications—the “general purpose” ones—will be deployed in a world of hugely diverse individuals and organizations.

Therefore, spare a thought for AI firms developing new technologies and bringing them to market. Complex applications are being deployed in a complex world, where unexpected outcomes are to be expected: it is impossible to anticipate and test every permutation and combination of application and user before release.

Furthermore, developers are releasing these applications into an extremely complex legal environment. Every jurisdiction into which they release the technology will potentially have its own formal laws and rules, not to mention the “soft” constraints imposed by societal norms, conventions, and mores. What is legal and “safe” in one jurisdiction might not be in another. Moreover, formal and informal rules are in constant flux, as societies themselves are dynamic. What is legal or acceptable today might not be tomorrow.

AI developers face these real risks when designing and deploying their technologies. Take, for example, any social media platform and the expectation (nay, obligation) that it monitor content and prevent users from being harmed by what they see. What is “harmful” to one consumer might be of no concern to another (presuming the content is legal, just selectively harmful—e.g., adult video content shown to children or food-related content shown to users with an eating disorder). How should content moderation be managed? Removing content protects one consumer but harms another by denying access to legal content that would otherwise have been beneficial. And that is before we get to the matter of varying legality.

The developer is caught. Knowing that AI will be used worldwide, what standards should be applied and tested against when determining the requisite level of “safety” or “legality”? 

Classical economics would suggest standards based on some sort of “average” or “representative” consumer. But how can such a consumer be identified in a world of populations as diverse as 1.4 billion Chinese, 1.4 billion Indians, 750 million Europeans, and 333 million Americans? And what is the “average” legal situation? Sam Savage’s “flaw of averages,” by contrast, indicates that what matters is the spread of the distribution, not its average.
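Savage’s point can be made concrete with a toy simulation. The sketch below is purely illustrative: it assumes a hypothetical population of consumers whose content-sensitivity thresholds follow an invented beta distribution (no real data is involved), and it compares a single moderation cutoff tuned to the “average” consumer with a risk-averse cutoff tuned to the most sensitive one.

```python
import random

# Hypothetical illustration of the "flaw of averages" in content moderation.
# Each consumer tolerates content scored up to their threshold in [0, 1];
# the distribution below is invented purely for illustration.
random.seed(0)
N = 100_000
thresholds = [random.betavariate(2, 5) for _ in range(N)]

def outcomes(cutoff):
    """With one global cutoff, content scored above `cutoff` is removed.
    Consumers stricter than the cutoff still see content they find harmful;
    more permissive consumers are denied content they would have valued."""
    harmed = sum(t < cutoff for t in thresholds) / N
    denied = sum(t > cutoff for t in thresholds) / N
    return harmed, denied

avg = sum(thresholds) / N    # the "representative consumer" standard
strictest = min(thresholds)  # the risk-averse, worst-case standard

for label, cutoff in [("average consumer", avg), ("most sensitive", strictest)]:
    harmed, denied = outcomes(cutoff)
    print(f"{label:>16}: cutoff={cutoff:.2f}  harmed={harmed:.0%}  denied={denied:.0%}")
```

Under these invented numbers, the “average” cutoff leaves roughly half the population exposed to content they find harmful while denying the other half content they wanted; the risk-averse cutoff harms almost no one but denies nearly everyone. That is exactly the trade-off the next paragraph describes.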

Providers (especially the large ones likely to be pursued in the courts for any offense committed) are therefore incentivized to cater to the extremes. A risk-averse approach, catering to the most sensitive consumers and the most restrictive jurisdictions, becomes the safest.

However, this does not lead to the greatest overall benefit for consumers. Almost all consumers would prefer a version customized to their personal tastes and local laws, but that would overwhelm providers. They cannot please everyone all the time; at best, they aim to displease fewer people most of the time. And all of this happens in a world where, even as new applications are developed, developers are still learning what those very different tastes and laws are.

So it behooves consumers and commentators to be aware of their own role in making the world so complex for these developers. Developers are doing their best. They do not set out to cause harm—indeed, mostly quite the opposite. There will be some unexpected outcomes; the apps might not work perfectly in every country around the globe or for every consumer demographic known to humankind.

That’s just the price we all pay for the progress these apps will ultimately deliver. 

Learn more: Can AI Regulation Really Make Us Safe(r)? | The AI ecosystem is complex and dynamic: Its regulation should acknowledge that | Who Will Monitor the AI Monitors? And What Should They Watch? | European AI Regulations: Real Risk Reduction or Regulatory Theater?