Public AI Ethics and Non-Public AI Ethics

Minseok Jung
University of Illinois at Urbana-Champaign, Department of Philosophy, Senior

This is a reply to Ethics for Sale: AI Ethics and Industrial Dimension[1] by HyeJeong Han, a doctoral candidate in Science and Technology Policy (STP) at KAIST. I recommend reading Han's paper first in order to follow this one fully.

Introduction: Artificial Intelligence and Companies

It is impossible to disregard the impact of Artificial Intelligence (AI). Since it touches everything from school curricula to national strategies, AI is becoming part and parcel of society. Along with these advancements, many ethical issues are arising (see the first paper, AI Principles and Philosophical Foundations of AI[2], for details). Leading AI institutions, including governmental bodies and companies, have already declared AI principles to address ethical concerns before development begins and have organized teams to manage ethical issues.

As HyeJeong Han pointed out in her paper, private companies are the leading group in AI ethics. According to research by the AI Ethics Lab, 40% of AI guidelines were published by companies; this is the highest share among the three types of institutions: private companies, governmental or intergovernmental agencies, and research or professional organizations.[3] Although some interpret companies' participation as a moral obligation, others see it as a deception, because companies can ignore a code of ethics after declaring it, and institutional principles can remain words without action.

But are companies making efforts toward ethical AI systems merely for their own benefit? Which ethical norm can serve as common ground on which all stakeholders can reach agreement? In this paper, I will argue that leading AI companies are working on AI ethics responsibly, and that the principle that can serve as common ground is 'fairness'. Specifically, the second and third sections contrast negative and positive views of companies' participation in AI ethics; the fourth section discusses why companies' participation is necessary. The fifth section examines why fairness receives such attention and how institutions are discussing it.

AI Ethics Should be in the Public Sphere

Some worry that AI ethics can become a 'market for principles' in which "stakeholders may be tempted to 'shop' for the most appealing ones."[4] Companies can simply choose codes of ethics as advertisement and shirk responsibility. Prof. Luciano Floridi at the University of Oxford criticized this phenomenon and named it 'bluewashing': "the malpractice of making unsubstantiated or misleading claims about (. . .) the ethical values and benefits of digital processes, products, services, or other solutions in order to appear more digitally ethical than one is."[5]

The term 'bluewashing' derives from 'greenwashing', which criticizes companies' deceptive environmental policies that serve their own advantage. Greenwashing "concentrate[s] on mere marketing, advertising, or other public relations activities (e.g. sponsoring)"[6] without regard for actual environmental impacts. For example, McDonald's advertised that it was using eco-friendly straws to give consumers a good impression. However, the straws were unrecyclable. The policy was criticized because the company pretended to be ethical and sought to benefit from that appearance while disregarding actual environmental impacts.[7]

Notably, unlike governmental institutions and NGOs, companies tend not to mention 'privacy' and 'security' in their ethical guidelines. According to Prof. Yi Zeng, an Ad Hoc Expert of UNESCO representing China, "[p]rivacy and security are sensitive issues for corporations (. . .) maybe that is why corporations would not like to mention them."[8] In the Linking Artificial Intelligence Principles (LAIP) project (see <Figure 1>), he showed that the frequency of words concerning privacy and security is notably lower in corporate documents than in those of other institutions. Since companies cannot develop a profitable AI model without user data, they avoid mentioning issues that are sensitive for their business. For example, the YouTube recommendation algorithm traces users' clicks to provide further content that users may like and click. Systems of this kind, such as autonomous recommendation, trend analysis, and search-result ranking, all use personal information.[9] This shows that personal data is used in companies' AI for their benefit; companies are willing to exploit personal data as much as possible rather than keep it secure.

<Figure 1>[10]
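The comparison behind a figure like <Figure 1> can be sketched as a simple keyword count over guideline texts. This is an illustrative toy, not the LAIP project's actual code; the excerpt texts and the `keyword_frequency` helper are hypothetical.

```python
# Illustrative sketch (NOT the LAIP codebase): count how often
# privacy/security-related keywords appear in published AI guidelines.
import re
from collections import Counter

# Hypothetical excerpts standing in for real guideline documents.
guidelines = {
    "company": "We build beneficial AI that is fair and transparent.",
    "government": "AI must protect privacy and security; data privacy is a right.",
}

KEYWORDS = {"privacy", "security"}

def keyword_frequency(text: str) -> Counter:
    """Count occurrences of tracked keywords in one document."""
    words = re.findall(r"[a-z]+", text.lower())
    return Counter(w for w in words if w in KEYWORDS)

for source, text in guidelines.items():
    print(source, dict(keyword_frequency(text)))
```

On real corpora one would compare these counts across institution types, which is the pattern Zeng et al. report: corporate documents mention privacy and security far less often.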

Elon Musk, CEO of Tesla, tweeted "All orgs developing advanced AI should be regulated, including Tesla" in February 2020.[11] This suggests that non-private (i.e. public) institutions should lead AI ethics rather than leaving the lead to companies. As the LAIP project showed, companies' AI ethics can be interpreted as 'bluewashing' that prioritizes private profit over public goods.

AI Ethics Should be Consulted with All Stakeholders

Nevertheless, some people disagree, arguing that companies, not public institutions, should lead AI ethics. They note considerable limits to public institutions' leadership. Reid Blackman, CEO of Virtue and former professor at the University of North Carolina at Chapel Hill, criticized centralized approaches to AI ethics. He argued that public guidelines do not reflect the actual technology and development processes. Specifically, the public sphere focuses on abstract concepts like justice, social goods, and human flourishing. In contrast, companies focus on business and engineering, asking questions like "[g]iven that we are going to do this, how can we do it without making ourselves vulnerable to ethical risks?" In other words, governmental AI principles are of little use in companies' actual decision-making processes. Moreover, he pointed out that a centralized guideline is inapplicable to all private sectors. Since each company's AI systems and their uses differ, ethical guidelines should be tailored. For instance, companies that use AI vision should focus on racism and discrimination, while institutions that use recommendation algorithms should focus on privacy.

Contrary to worries that companies use AI principles for ethics-washing, leading IT companies are working to uphold their own ethical principles. Google launched the Responsible AI Practices project[12] to handle the issues it faces. Its product lead developed a "[r]esponsible AI toolkit in the TensorFlow ecosystem so that developers everywhere can better integrate [Google's AI] principles."[13] IBM built a library, AI Fairness 360[14], to check for fair representation in datasets, in line with its principles. Notably, these companies acted in accordance with their stated principles. This shows that companies can not only identify the ethical issues of their business but also act to solve them.
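To make concrete the kind of dataset check such toolkits automate, here is a minimal sketch of a disparate-impact test. It is an illustration of the general technique, not AI Fairness 360's actual API, and the data and group names are hypothetical.

```python
# Illustrative disparate-impact check, the kind of metric fairness
# toolkits (e.g. IBM's AI Fairness 360) compute over labeled datasets.

def disparate_impact(labels, groups, privileged="A"):
    """Ratio of favorable-outcome rates: unprivileged / privileged.
    A value near 1.0 suggests parity; the common '80% rule' flags < 0.8."""
    def rate(group):
        outcomes = [y for y, g in zip(labels, groups) if g == group]
        return sum(outcomes) / len(outcomes)
    unprivileged = next(g for g in set(groups) if g != privileged)
    return rate(unprivileged) / rate(privileged)

# Toy data: 1 = favorable outcome (e.g. loan approved); groups A and B.
labels = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B", "B", "B"]
print(disparate_impact(labels, groups))  # well below 0.8: the dataset is skewed
```

A ratio this far below 0.8 would flag the dataset for review before a model is trained on it, which is exactly the sort of pre-development check the companies' principles call for.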

Opinion: Autonomy for Automated Systems

I think AI ethics should draw on diverse viewpoints that reflect the actual issues each agent faces, rather than being restricted to a centralized sphere. Companies are making considerable efforts to handle ethical issues and recognize their Corporate Social Responsibility (CSR). For instance, Google's responsible AI toolkit explicitly states that the company must ensure "AI [to be] deployed responsibly: preserving trust and putting each individual user's well-being first. (. . .) [In the development process,] it has always been our highest priority to build products that are inclusive, ethical, and accountable to our communities."[15] Furthermore, Microsoft launched three administrative organizations to manage the ethical issues of AI. In detail, "[Microsoft] put [their] responsible AI principles into practice through the Office of Responsible AI (ORA), the AI, Ethics, and Effects in Engineering and Research (Aether) Committee, and Responsible AI Strategy in Engineering (RAISE). The Aether Committee advises [Microsoft's] leadership on the challenges and opportunities presented by AI innovations. ORA sets [Microsoft's] rules and governance processes, working closely with teams across the company to enable the effort. RAISE is a team that enables the implementation of Microsoft responsible AI rules across engineering groups."[16]

Governmental institutions, too, are adopting a collaborative approach to ethical principles. In an executive order of February 14, 2019, the White House directed that "agencies shall explore opportunities for collaboration with non-Federal entities, including: the private sector; academia; [and] non-profit organizations (. . .) so all collaborators can benefit."[17] The Korean Ministry of Science and ICT declared that AI ethics should be not a restrictive law or guideline but a matter of moral responsibility and norms: it should respect and promote companies' autonomy and interact flexibly with social change.[18] These policies show that public institutions intend to include non-public spheres and their opinions.

A Common Ground: Fairness

But what is an ethical principle to which all institutions can consent? It might be hard to find common ground if all stakeholders' viewpoints differ. In this paper, I argue that 'fairness' is a foundation on which most stakeholders can work together. Among the many principles, such as privacy, human control, and explainability, 'fairness' is the most prominent. According to the Center for Equity, Gender & Leadership at Berkeley Haas, "[f]airness is a ubiquitous term in the artificial intelligence (AI) and machine learning (ML) space. Most principles for responsible and ethical AI include 'fairness'."[19] Indeed, research by the Berkman Klein Center at Harvard University showed, using natural-language analysis of declared AI principles, that the "[f]airness and non-discrimination theme is the most highly represented theme".[20]

'Fairness' draws attention from most stakeholders because fair treatment is easily violated by automated systems. This section takes the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) to illustrate the point. COMPAS was a system that predicted recidivism, the tendency of a convicted criminal to reoffend, based on past data; it was used in judicial decisions. In 2016, COMPAS came under fire for unfair treatment. Julia Angwin and her team at ProPublica found that it tended to label African-American defendants as likely to reoffend more often than other groups. In fact, "blacks are almost twice as likely as whites to be labeled a higher risk but not actually re-offend."[21] The value critically at issue was the Presumption of Innocence (PoI): "[o]ne of the most sacred principles in the American criminal justice system, holding that a defendant is innocent until proven guilty. In other words, the prosecution must prove, beyond a reasonable doubt, each essential element of the crime charged."[22] In short, although all people should be presumed equally innocent, COMPAS's predictions were unequal. Lucia M. Sommerer, Fellow at Yale Law School's Information Society Project, stated that COMPAS violated the PoI. In The Presumption of Innocence's Janus Head in Data-Driven Government, she wrote, "[t]he likelihoods [of COMPAS] turn into 'legal truth' for defendants when a judge at a bail hearing is presented with a high-risk classification (which generally neglects to mention the underlying statistics), and when defendants as a direct or partial consequence are then denied bail."[23] In other words, a prediction of the system impacts a legal decision and can thereby violate the PoI.
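The disparity ProPublica reported is a gap in false positive rates: among people who did not reoffend, how many were nonetheless labeled high risk in each group. A minimal sketch of that comparison, on hypothetical toy records rather than the real COMPAS data, looks like this:

```python
# Illustrative ProPublica-style analysis: compare false positive rates
# (labeled high-risk but did not re-offend) across groups. Toy data only.

def false_positive_rate(predicted_high_risk, reoffended):
    """Among people who did NOT re-offend, the share labeled high risk."""
    labels_for_negatives = [p for p, y in zip(predicted_high_risk, reoffended) if not y]
    return sum(labels_for_negatives) / len(labels_for_negatives)

# Hypothetical records: (group, predicted_high_risk, actually_reoffended)
records = [
    ("black", 1, 0), ("black", 1, 0), ("black", 0, 0), ("black", 1, 1),
    ("white", 1, 0), ("white", 0, 0), ("white", 0, 0), ("white", 1, 1),
]

for group in ("black", "white"):
    preds = [p for g, p, y in records if g == group]
    actual = [y for g, p, y in records if g == group]
    print(group, round(false_positive_rate(preds, actual), 2))
```

In this toy data the false positive rate for one group is twice the other's, mirroring the roughly two-to-one disparity ProPublica found: equal innocence, unequal treatment by the prediction.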

Similar cases are mentioned in AI Principles and Philosophical Foundations of AI.[24] One AI system was criticized for racism and another for a gender-biased decision. Both violated fair treatment and led to serious problems.


Conclusion

As noted in the introduction, this paper replied to questions raised in Ethics for Sale: AI Ethics and Industrial Dimension by Han.[25] I agreed with Han's observation that companies are leading ethical discussions of AI and that we must attend to the standpoints of non-public spheres. However, I disagreed that a cautionary approach is necessary to prevent the commercial use of ethics, because companies are prioritizing public interests over private interests. To illustrate these points, the paper contrasted two viewpoints on the participation of the private sphere in the discussion of AI ethics. Although some criticize companies' ethical policies as bluewashing, institutional ethical guidelines are necessary since they reflect the actual problems and issues each agent faces. The paper also showed that 'fairness' is one of the important principles most stakeholders commonly note; it discussed fairness's importance by taking COMPAS as an example and pointed out that an unfair systemic judicial prediction violates the Presumption of Innocence (PoI).

However, what is 'fairness' in this context? Fairness in a technical design may not be identical to fairness as we hold it in mind, yet it may be hard to say that the two are completely different. Some aspects might be the same; some might differ. What is fairness, then? How can we figure it out? What should we do in accordance with it? There is no definitive answer yet.

[1] Han, H. (2021), “Ethics for Sale: AI Ethics and Industrial Dimension”, Behind Sciences, vol. 11.

[2] Jung, M. (2021), “AI Principles and Philosophical Foundations of AI”, Behind Sciences, vol. 11.

[3] Canca, C. (2020), “Operationalizing AI Ethics Principles”, Communications of the ACM.

[4] Floridi, L & Cowls, J. (2019) “A unified framework of five principles for AI in society.” Harvard Data Science Review, no. 1, p. 2.

[5] Floridi, L. (2019), “Translating Principles into Practices of Digital Ethics: Five Risks of Being Unethical”, Philosophy & Technology 32, no. 2: 185-193. (p. 187)

[6] Ibid. p.187.

[7] BBC. (2019), “McDonald’s paper straws cannot be recycled”.

[8] Zeng, Y., Lu, E., & Huangfu, C. (2018), “Linking Artificial Intelligence Principles”, AAAI Workshop on Artificial Intelligence Safety, p.3.

[9] Jeckmans, A. J., Beye, M., Erkin, Z., Hartel, P., Lagendijk, R. L., & Tang, Q. (2013), “Privacy in Recommender Systems”, Social Media Retrieval, p.8.

[10]  Zeng, Y. et al. op. cit. p.3.



[13] Doshi, T. & Zaldivar, A. (2020), “Responsible AI with TensorFlow”.


[15] Doshi, T. & Zaldivar, A. op. cit.


[17] The White House. (2019), “Maintaining American Leadership in Artificial Intelligence”, A Presidential Document by the Executive Office of the President, Executive Order 13859.

[18] Ministry of Science and ICT (of South Korea). (2020), “인공지능(AI) 윤리기준 (Korean Government AI Principles)”.

[19] Berkeley Haas. (2020), “What does ‘fairness’ mean for machine learning systems?”.

[20] Fjeld, J., Achten, N., Hilligoss, H., Nagy, A., & Srikumar, M. (2020). “Principled artificial intelligence: Mapping consensus in ethical and rights-based approaches to principles for AI”, Berkman Klein Center Research Publication, p.47.

[21] Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016), “Machine Bias”, ProPublica.


[23] Sommerer, L. M. (2018). “The presumption of innocence’s Janus head in data-driven government”, Amsterdam University Press, p.2.

[24] Jung, M. op. cit.

[25] Han, H. op. cit.
