Open Net joins global internet & society conference in Bangkok: Role of Universities in AI governance

Nov 11, 2024 | Innovation and Regulation, Open Blog

The Global Network of Internet and Society Research Centers (NoC) held its annual meeting in Bangkok, Thailand on October 17-18, 2024. You can see the full agenda here.


K.S. Park, speaking on behalf of Open Net, addressed the Role of Universities in AI session as follows (the statement made at another session, Safeguarding Democracy, can be found here):

The biggest problem with AI is that it will amplify the human prejudice inevitably embedded in its training data. Universities can play a very important substantive role here, because what we need to do is educate a conscious being. We don’t want to do it by outright censorship of the results of machine learning but by censoring the training data. There are three different ways of addressing the problem: (1) censorship of the output; (2) replacing the machine with human beings; and (3) sanitizing the training data.
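
To make the three options concrete, here is a minimal Python sketch of where each intervention sits in a machine-learning pipeline. Every name in it is hypothetical and the functions are stand-ins, not a real system:

```python
# A minimal sketch (all names hypothetical) of where each of the three
# interventions sits in a machine-learning pipeline.

def sanitize(dataset, is_ethical):
    """(3) Vet the training data before the model ever sees it."""
    return [example for example in dataset if is_ethical(example)]

def train(dataset):
    """Stand-in for any training procedure; returns a 'model'."""
    return lambda prompt: f"output learned from {len(dataset)} examples"

def censor_output(model, prompt, is_acceptable):
    """(1) Filter the result of machine learning after the fact."""
    output = model(prompt)
    return output if is_acceptable(output) else "[suppressed]"

def human_override(model, prompt, human_review):
    """(2) Replace the machine's final decision with a human's."""
    return human_review(model(prompt))
```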

As to (3), we need to apply our own visions of ethics to identify the bad data points we don’t want AI to emulate. Then, what better place than a university to build a machine faithful to Kant’s categorical imperative, Hegel’s idealist dialectic, Marx’s dialectical materialism, or John Rawls’ reflective equilibrium reached from the original position behind a veil of ignorance? MIT’s Moral Machine is already building such training data by collecting people’s answers to Judith Jarvis Thomson’s trolley problems and other questions of ethics: when a car sees both an old person and a young person and is bound to hit one of them, whom should it hit?
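
A Moral Machine-style dataset could feed such answers into the vetting step sketched above. Below is a minimal, hypothetical illustration; the field names, the majority-vote rule, and the 80% threshold are all assumptions, and the threshold itself is an ethical choice that would have to be debated:

```python
# A minimal sketch, assuming a Moral Machine-style dataset where each
# candidate training example carries crowd-sourced ethical judgments.
# All field names (scenario, approval_votes, total_votes) are hypothetical.

from dataclasses import dataclass

@dataclass
class Example:
    scenario: str          # description of the behavior to be emulated
    approval_votes: int    # respondents who judged the behavior ethical
    total_votes: int       # total respondents who saw the scenario

def vet(dataset: list[Example], threshold: float = 0.8) -> list[Example]:
    """Keep only data points whose behavior a clear majority endorsed.

    The threshold is itself an ethical choice, not a technical one."""
    return [ex for ex in dataset
            if ex.total_votes > 0
            and ex.approval_votes / ex.total_votes >= threshold]

# Usage: a scenario most respondents rejected is excluded from the
# training set rather than censored at output time.
data = [
    Example("swerve to avoid the pedestrian", 95, 100),
    Example("prioritize the passenger over the pedestrian", 30, 100),
]
print(vet(data))  # keeps only the first example
```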

As to (2), the EU AI Act seems to be the leader. It categorizes human activities from low risk to high risk, and then requires human control over the high-risk activities. However, such categorization underestimates the fact that AI is an information system. It makes informed decisions for us; it does not actually execute harms or generate risks. An AI system that can tell civilians from military personnel may be high-risk or low-risk depending on whether the humans are using it to target civilians or to avoid them. In South Korea, the biggest risk posed by AI is deepfakes, an activity not categorized as high risk under the EU AI Act. The failure to appreciate that AI and its products (information) can power an infinitely broad spectrum of human behaviors leads to the same fallacy that supports network slicing: the idea that self-driving cars and remote surgeries are more important and that the related traffic therefore needs to be prioritized over general traffic. When disasters like earthquakes or storms hit, what apps do we use to share life-saving messages? Are there specialized apps for disaster response, and if so, do we use them? No, we use general-purpose apps like WhatsApp. The same is true of AI. AI produces information, not actions, and there is no high-risk AI as opposed to low-risk AI. Requiring human control over certain activities is not really regulation of AI but regulation of the underlying activities. The exercise of categorizing AI usage into low risk and high risk is none other than categorizing all human activities into low risk and high risk, and it misses the mark in terms of how we should prepare for AI.

Earlier, there was a question: can AI be punished? No, because punishment is conceivable only when we presuppose a separate being, like a balloon, that has an independent existence. We showed that it was the wrong question to ask. Similarly, the EU AI Act asks the wrong question. It does not tackle how to remove human bias from the training data. Without that, we can never trust AI to do anything, whether sending adverts to us or helping voter choices, low risk or high risk alike. The challenge is to sanitize the training data. Sanitizing and creating the training data requires ethics, because we need to ethically vet every data point of human behavior in it. We need ethics, and universities are the ideal places for discussing it.

Korean version text
