A common cognitive bias, in which decision-makers unconsciously substitute a simpler, related problem for a complex one, was first described in 2002 by Daniel Kahneman and Shane Frederick. The concept of attribute substitution explains that, when faced with a complex judgment (the target attribute), some people may replace it with a more accessible, simpler judgment (the heuristic attribute) without realizing it, inadvertently responding only to the simpler problem. This process is automatic and often goes undetected by the individual, leading to systematic cognitive biases.
Similarly, policymakers’ tendency to regulate new technologies based on their experiences with related legacy systems has become evident in recent history. New broadband technologies operating across multiple infrastructures have been regulated as if they were legacy telephony systems entrenched in infrastructure monopolies. The ill-fated unbundling and access regulation rules that would have worked well in telephony have, in broadband, significantly deterred investment in rival fiber and cable infrastructures in the European Union. The United States avoided this fate only because real competition from unregulated cable operators already existed; the subsequent decision not to bind information services to the telephony regulatory legacy became the solution.
Going back a little further and considering regulations designed to keep people “safe” from the dangers of a new technology, we find the regulatory response to the emergence of an earlier general purpose technology: the locomotive (the self-propelled vehicle). The pioneering UK Locomotives Act 1865 required that a person,
while any Locomotive is in motion, shall precede such Locomotive on Foot by not less than Sixty Yards, and shall carry a Red Flag constantly displayed, and shall warn the Riders and Drivers of Horses of the Approach of such Locomotives, and shall signal the Driver thereof when it shall be necessary to stop, and shall assist Horses, and Carriages drawn by Horses, passing the same.

The effect was that the locomotive (and, subsequently, horseless carriages, or cars) could go no faster than a person could walk. The Act was repealed only in 1896, some considerable time after steam- and internal-combustion-engine-powered vehicles were on the road. As the image shows, Vermont passed a similar red flag law in 1894, but it was repealed just two years later.
Problems with these laws arose because horses and carriages had existing road-use rights that regulators could not violate, including the right to go fast. The new locomotive technologies, however, could be regulated, so regulators imposed on them the rules they would have liked to impose on horse-drawn carriages: limiting speed to mitigate the harm caused when fast-travelling horses escaped the driver’s control and injured pedestrians.
However, the red flag laws failed on three counts. First, they prevented society from benefiting from the features of the new technologies, in this case faster travel. Second, they regulated a risk that was far less likely to occur with a locomotive than with a horse and carriage; unlike a horse, which has a mind of its own, the locomotive was much less likely to escape the driver’s control. Third, the regulations shaped road users’ perceptions of, and behavior toward, the new technology. Their understanding of the locomotive developed under the tightly regulated oversight of the man with the flag. When the rules were repealed, road users had no idea that the locomotive could go fast, and they did not take evasive action quickly enough. Many more people were harmed because the laws had disincentivized necessary learning.
Likewise, a real risk exists in the rush to regulate new artificial intelligence (AI), including generative pre-trained transformers (AI GPTs) such as ChatGPT and Llama. Decision-makers are substituting their understanding of constraining Good Old-Fashioned AI (GOFAI) big data tools, which respond well to risk management aligned with advancing engineering precision in computing, for the understanding necessary to govern applications in the face of the uncertainty and near-infinite variety invoked by the AI GPTs. The complex intertwining of these AI GPTs with equally complex human commercial and social systems is unprecedented, not least because even application developers and the AIs themselves cannot explain how or why they come to certain outcomes.
Regulating AI GPTs as if they were GOFAIs invites all three risks of the red flag laws: new benefits from novel technologies will be lost, the potential harms are not necessarily the same, and people’s behavior will change in the presence of the new rules. The new world created with AI GPTs is certainly uncertain, so regulation should be guided by knowledge of its complexity and uncertainty: resisting the rush to regulate and, instead, learning through experiences of the new rather than from past fears of the old.