Why Ethical AI Governance Must Be Multi-Stakeholder, Not AI Safety Institutes (Paris, AI, NOC)

Feb 16, 2025 | Innovation and Regulation, Open Blog

KS Park’s comment at the Network of Centers meeting on February 9, 2025, in Paris

AI Safety Institutes are government-sponsored, research-based entities that investigate the harms of AI and may eventually regulate it. Research is fine, but regulation is another matter.

Governments can regulate food, drugs, automobiles, aviation, and nuclear power, applying public expertise and resources tailored to the unique harms of each. That is why we have food and drug agencies, transportation safety agencies, aviation safety agencies, and so on. What comparably unique harm does AI pose?

AI sits at the extreme end of the spectrum of software-based automation, and we never had software safety institutes. Reasoning, computing, arguing, deliberating, and other “mental actions” are human activities that carry no unique harms; they are general activities whose benefits, if anything, outweigh their harms. We are already aware of the risks that arise when governments interfere with how we reason or make decisions. Governments should interfere only when the object being regulated carries unique harms. Only when “mental actions” are coupled with food, drugs, nuclear power, aviation, firearms, and the like do they pose unique harms, harms that surface when intelligence is applied to certain otherwise dangerous activities (e.g., “overthinking” an escape route out of a burning house).

The unique harms of AI, as far as I can deduce from the machine-learning-based nature of its current form, include monopoly on training data, algorithmic amplification of the human unfairness contained in the training data, and post-singularity control, among others. AI governance over the training data monopoly should be handled by data protection authorities. Other harms, such as electricity consumption and human labor displacement, call for environmental or economic regulation.

Unfairness in the training data, however, must be addressed through ethical debate, not government regulation, for the following reason: governments, under the pretext of AI governance, may end up interfering with how people think and reason. For instance, governments can start compiling lists of “fake news” and “false information” that they believe should be purged from training data or AI output, e.g., a Chinese AI that claims no knowledge of Tiananmen Square. Governments, under the pretext of imposing algorithmic fairness, will end up demanding an impossible neutrality in search results or content curation and presentation. Recent American examples illustrate this: the Florida and Texas laws that prohibited platforms from deplatforming political candidates or otherwise violating viewpoint neutrality. Neutrality is a mandate to be imposed not on people but on the sovereigns governing the dealings among people. Even truth cannot be a mandate imposed on people, at least not on pain of criminal penalty, as international human rights standards have shown in the case of “false news” crimes.

For that reason, AI governance on ethics must be truly multi-stakeholder, in other words, kept beyond the reach of any single government or of joint governmental control. Otherwise, we may see government cartels granting one another greater control over people’s mental actions, as may happen with the UN Cybercrime Convention. The point is not that AI should go unregulated, but that it should be regulated in a way that prevents government censorship.

What, then, remains as the regulatory role for AI Safety Institutes, if any? Post-singularity control, which I will explain another time.
