The pioneering work of psychologists Daniel Kahneman and Amos Tversky, alongside behavioral economists Richard Thaler and Cass Sunstein and practitioners such as Lord Mervyn King, leaves little doubt that individuals—when making decisions in the face of uncertainty—act a little weirdly. Specifically, they exhibit a host of cognitive biases that lead to poor decisions. These biases include:
- loss aversion, whereby losses are felt more sharply than gains of the same size;
- a propensity to overestimate both the probabilities and costs of harm from low-probability, high-cost events;
- a tendency to replace a complex question with a simpler one that has a known answer; and
- a propensity to act quickly rather than pause to gather more information.
Subsequently, work by Joseph Henrich and others has shown that, while cognitive biases are universal, there are important differences, especially in relation to trust, between individuals in WEIRD (Western, educated, industrialized, rich, and democratic) and non-WEIRD countries. Henrich hypothesizes that for Western countries to become educated, industrialized, rich, and democratic (the W serving as a catchall for the combined effects and geography), they had to develop institutions that enabled relative safety and security when engaging and trading with strangers. Whereas in non-WEIRD countries individuals place high trust in relationships with close contacts (friends, family, neighbors, proximate communities, etc.), WEIRD individuals place less emphasis on interpersonal trust and much more on the formal institutions that have evolved alongside (and enabled) growth in material prosperity. While it is impossible to assign causality in this coevolving environment, the apparently greater success of democratic countries in industrializing and educating their populations to create greater wealth demarcates both the countries themselves and, as experiments demonstrate, the cognitive psychological programming of their citizens.
The effects of these differences play out starkly when attitudes toward artificial intelligence (AI) are sampled. The polling firm Ipsos, commissioned by Google, conducts regular surveys across a wide range of countries on attitudes toward AI. The 2023 report sampled individuals in 31 countries representing the developed and developing world and spanning Africa (South Africa), Asia (India, Singapore, Malaysia, Indonesia, Thailand, South Korea, and Japan), Oceania (Australia and New Zealand), South America (Chile, Argentina, Colombia, and Peru), Central America (Mexico), North America (Canada and the US), Eurasia (Turkey), and Europe (both developed western and emerging eastern countries). A key finding of the survey was that trust in AI varies widely and is “generally much higher in emerging markets and among people under 40 than in high income countries and among Gen Xers and Boomers.”
Further research conducted on the Ipsos data showed that, while trust within each country was generally correlated with higher levels of education, income, and decision-making responsibility, the opposite applied at the global level. In countries with higher levels of wealth, education, and scores on The Economist’s Democracy Index, there were proportionately lower levels of trust in AI, confidence in knowledge of it, and expectations that it had changed or would change people’s lives. The most reliable predictor was the Democracy Index: the more democratic a country, the lower its trust in, confidence about, and expectations of AI. Moreover, the richer, more urbanized (as a proxy for industrialized), more democratic, and more educated the country, the more nervous its people were about AI.
This almost certainly indicates the effects of WEIRD psychology at play in the face of a new and quite uncertain technology. However, it also raises some very important questions about the role of institutions, and in particular regulation, in the genesis of these observations. Are the WEIRD countries less trusting of AI because it has not yet been regulated (there were, in 2023, no formal regulatory institutions governing it; the EU AI Act was not ratified until 2024)? Are non-WEIRD countries more trusting because they rely on their own experience (and that of close associates) when forming their views? This raises the further question of whether WEIRD societies have reached a point where confidence in using new technologies with uncertain effects is so low that regulation is needed before people will trust and use them, even before the effects are known. This suggests a very different role for regulation than in the past, when it was used to correct for or prevent demonstrated harms.
Note, then, that the 2023 White House AI Executive Order called for the safe, secure, and trustworthy development and use of AI.
Learn more: WEIRD? Institutions and Consumers’ Perceptions of Artificial Intelligence in 31 Countries | Connecting the Dots on the Chips | Practical Steps Towards Data and Software Resilience | Resilience: The New Challenge for Digital Systems Policy