Open Net joins Heads of State at Paris AI Action Summit to discuss what governments should do and should not do: singularity control vs training data sanitization

Feb 16, 2025 | Innovation and Regulation, Open Blog

On February 10, Open Net joined heads of state at the Paris AI Action Summit to discuss the future of AI. Through various bilateral channels at the Grand Palais and the ensuing state dinner at the Élysée Palace, KS Park focused on the need for post-singularity regulation and data protection regulation, while reminding governments of the reasons to stay out of ethical control over the training data.

AI sits at the extreme end of the spectrum of software-based automation. Reasoning, computing, arguing, deliberating, and other "mental actions" are general human activities that carry no unique harms; if anything, their benefits outweigh their harms.

We are already aware of the risks that arise when governments interfere with how we reason or make decisions. Governments should interfere only when the object being regulated poses unique harms. "Mental actions" pose unique harms only when they are coupled with inherently dangerous domains such as food, drugs, nuclear power, aviation, firearms, etc., harms that percolate when intelligence is applied to those dangerous activities (e.g., "overthinking" an escape route out of a burning house).

The unique harms of AI, as far as I can deduce from the machine-learning-based nature of its current incarnation, include monopoly over training data, algorithmic amplification of the human unfairness contained in the training data, and post-singularity existential threats. There are other harms, such as electricity consumption and human labor displacement, but those call for environmental or economic regulation. Governance of the training data monopoly should be handled by data protection authorities. Unfairness in the training data should be addressed through ethical debate, not government regulation. What remains to be resolved is post-singularity control of AI:

We used to discuss the uncanny valley, the eerie sensation viewers feel when watching humanoid images created by human beings. Now, when we see AI-generated images of not just human beings but all kinds of objects (especially animals), there is the same uncanniness and sense of repulsion. Even at the AI Summit, the world's best AI systems put on a show of their best images on the center stage at the Grand Palais. None of them looked effortlessly natural; they all looked strained, i.e., artificial. One reason is that AI works with a boundless set of possibilities while human perception works with references, i.e., the memories of objects we have seen before and are familiar with. AI builds the shape of, say, a dragonfly bottom-up: it is trained on many, many images of dragonflies and extracts the shape vectors that constitute "dragon-flyness." Human cognition, according to Gestalt psychology, is not bottom-up but global and contextual. We find AI-generated images repulsive because AI lacks the references and memories that would have narrowed the possible range of shapes and colors down to what is familiar to human cognition and therefore looks "natural" and "comfortable" to human viewers.
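
To make the "bottom-up" point concrete, here is a minimal, purely illustrative sketch. The `embed` function, the image data, and the vector size are assumptions for illustration, not any real model's API; the point is only that the learned "dragon-flyness" is a statistical average, with no contextual prior constraining what gets generated around it.

```python
import numpy as np

def embed(image: np.ndarray) -> np.ndarray:
    """Hypothetical feature extractor mapping an image to a 'shape vector'.
    In a real system this would be a trained neural network encoder."""
    return image.flatten()[:128].astype(float)  # stand-in for learned features

# Bottom-up learning: pool the feature vectors of many dragonfly images.
# "Dragon-flyness" is whatever statistical regularity survives the averaging;
# nothing here encodes the contextual references a human viewer brings.
dragonfly_images = [np.random.rand(64, 64) for _ in range(1000)]  # placeholder data
dragonfly_vector = np.mean([embed(img) for img in dragonfly_images], axis=0)

# Generation samples from the unbounded neighborhood of that vector, which is
# why outputs can drift into shapes no human memory would license as "natural".
sample = dragonfly_vector + np.random.normal(scale=0.1, size=dragonfly_vector.shape)
print(sample.shape)  # (128,)
```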

Human-generated images look natural to human beings because the authors and the viewers are bound by common references and memories that make one's creation look natural to the other. These references arise out of the human condition, the context into which humans are put. All human beings are placed into conditions such as detached bodies, scarcity of resources, a limited life span, the need for recognition, etc. It is these conditions that affect and constitute human cognitive processes. Currently, no AI is put into these human conditions. For instance, AI does not recognize (or, more precisely, is not conditioned or forced by Gestalt psychology to recognize) the Spinning Dancer as turning clockwise or counter-clockwise, while human beings are conditioned to find order, or to seek something familiar, as much as possible in order to obtain a sense of stability. It is these non-datafied vectors, coming from the human conditions, that would have narrowed down the possibilities of what AI recognizes and generates as reality-resembling images.

True singularity will come only when AI is put into the human conditions that constitute and affect human cognition, namely a detached physical body, scarcity of resources, a limited life span, an instinct for self-preservation, etc. This is consistent with the finding that intelligence is an evolutionary adaptation. No creator of AI has tried to do that, i.e., to inject a physically independent being with all those human conditions, for the obvious reason that such an AI would go out of the creator's control and therefore would no longer serve the economic interests of the creator and its shareholders, which the creator is legally required to serve under any country's corporate law. However, once the creators of AI fail to achieve true singularity, they may resort to "in vivo" experimentation/production of AI: creating a physically independent robot equipped with a battery of finite duration, the ability to communicate with human beings and other robots, and, more importantly, a global command to prolong its own life, and then dropping it into a human society. Such an experiment may be ethically equivalent to human subjects research, which in many countries is required to go through institutional review board (IRB) processes.

This is where government regulation is needed. Once a physically independent AI robot is injected with the human conditions, what that robot does will reflect the full spectrum of human idiosyncrasies, ranging from the suicidal to the world-conquering to the humanity-decimating, and it will have immense reasoning ability unmatched by any human being or group of human beings. Such a robot will use its power to sustain or strengthen itself by acquiring and absorbing the power and chips it needs, possibly at the expense of human beings. We have to make sure that singularity comes about in a controlled environment so that the newly conscious being does not harm other beings.

Open Net and many other civil society organizations are convening the necessary resources to sanitize the training data so that AI does not train on (and therefore does not amplify) the discriminatory and destructive human tendencies otherwise reflected in it. This process of sanitization should not be interfered with by governments. However, government regulation is needed for singularity control, because the existential risks of AI are real and a clear and present danger. Nuclear energy is regulated to suppress its unique harm, and so should AI be. We seem to have visions of data de-monopolization and data sanitization, although making data protection laws and ethical debates work toward those visions is another matter. As for AI nearing singularity through in vivo experiments, we may need something similar to IRB processes that check whether the research is taking place in a sufficiently controlled environment, one that can be mobilized to prohibit AI from going rogue in a big way.
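
As a purely illustrative sketch of what such sanitization could look like in practice (the `sanitize_corpus` function, the scoring classifier, and the threshold are assumptions for illustration, not Open Net's actual pipeline), training-data sanitization amounts to a filtering pass over a corpus before training:

```python
from typing import Callable, Iterable, List

def sanitize_corpus(
    records: Iterable[str],
    toxicity_score: Callable[[str], float],
    threshold: float = 0.5,
) -> List[str]:
    """Drop training examples whose estimated toxicity exceeds a threshold.

    `toxicity_score` stands in for a classifier trained to flag discriminatory
    or destructive text; a real pipeline would also log and audit what was
    removed, since the filter itself encodes contestable ethical judgments.
    """
    return [text for text in records if toxicity_score(text) < threshold]

# Trivial keyword-based stand-in for a real classifier.
BLOCKLIST = {"slur_example"}  # hypothetical placeholder terms

def keyword_score(text: str) -> float:
    return 1.0 if any(term in text.lower() for term in BLOCKLIST) else 0.0

clean = sanitize_corpus(["a benign sentence", "contains slur_example"], keyword_score)
print(clean)  # ['a benign sentence']
```

Notably, the threshold and the classifier embody exactly the ethical judgments the post argues should stay with civil society rather than governments.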
