AI Regulation Increases Certainty—but for Whom?

By Bronwyn Howell

AEIdeas

June 27, 2024

In his November 1789 letter to French scientist Jean-Baptiste Le Roy, Benjamin Franklin delivered what is reputed to be his last great quote:

Our new Constitution is now established, everything seems to promise it will be durable, but in this world, nothing is certain except death and taxes.

One of these certainties pertains to the realities of the physical world; the other to the realm of regulation. The need to pay taxes is a near certainty; the form and quantum of the obligation, however, still leave much room for debate for the liable individual.

Nonetheless, tax laws and regulations allow individuals and firms to plan their affairs with some certainty as to the extent and nature of their obligatory contributions. Without them, and the social contracts giving rise to them, the world would surely devolve into Thomas Hobbes’ vision of a life without the Leviathan State: the “solitary, poor, nasty, brutish and short” state of nature.

The great Enlightenment philosophers, including John Locke, whose work paved the way for the American Constitution, held that citizens would willingly sacrifice some of their rights to be spared the ravages of the state of nature, which the government was better placed than the individual to manage. A compelling problem with the state of nature is that one does not, and often cannot, know all of the challenges that will threaten one’s existence in the messy, chaotic real world where individuals and nature interact. Tax laws reduce this uncertainty by endeavoring to make clear the limits to which the state itself is prepared to go in order to finance its activities.

It is apposite, therefore, to question why, last year, Sam Altman, Elon Musk, and other tech moguls called on governments worldwide to regulate artificial intelligence (AI) application development and use. There may be many reasons why the developers of these new technologies would be begging to be regulated, however bizarre this might appear at first blush. A cynic might suggest that, having achieved a first-mover advantage, these firms seek to use regulation to haul the drawbridge up behind them, making it much more expensive for their potential rivals to compete. A realist, however, might suggest that government intervention temporarily reduces the complexity and uncertainty of the environment into which these technologies are being developed and deployed.

In a previous blog, I noted that developers of AI applications with a global reach face multiple uncertainties due to global differences in consumer tastes, preferences, and legal environments. Inter-government collaboration to reduce these uncertainties, perhaps achieved in the guise of AI regulation (for example, standardization of privacy laws), would go some way toward reducing them. This would reduce the costs of bringing new applications to market, as the apps could be proved in a much smaller number of jurisdictions.

It is also worth noting that when any new technology comes to market, it creates a huge number of new uncertainties, as consumers, the firms themselves, and their rivals seldom know just how the new technology will play out. Complex, dynamic AI systems, deployed in complex, dynamic environments without any specific ex ante controls, will likely evince something approaching the chaotic commercial states of nature envisaged by Hobbes. As no one knows quite how they will evolve, it is tempting for developers to ask for protections from the Leviathan.

But in this case the Leviathan state is as blind as the developers, if not more so, as to what will occur in this excursion into the unknown. The state could first create an artificial “safe space” with defined limits in which developers can implement their technologies and consumers explore them. But this is not the “real world”; it is simply a government-engineered “model” constrained by the limits of what its designers know (or think they know) of the way the world works. And it may be impotent against the forces of the real world, which will still inevitably “break in.”

The problem with regulating before it is fully known how the real world will respond to a new technology is that it constrains developments to fit the artificial environment, not the real world that must inevitably be faced. The new technologies risk becoming vulnerable, not resilient, and reliant on continual government intervention to survive in the face of versions developed in the “real world.” We need look no further than the heavily regulated European tech market of the last 30 years to see how this pans out.

In a complex uncertain world, what is needed is a diversity of applications and environments in which they can be tested. Regulating that diversity too soon closes off options that may well be needed in the future.

Caute procedere: proceed with caution.
