AI Principles and Philosophical Foundations of AI

Minseok Jung
University of Illinois at Urbana-Champaign, Department of Philosophy, Senior
minseok4@illinois.edu

About AI Ethics

Artificial Intelligence (AI) is a core technology that many institutions focus on and invest in. According to the Organization for Economic Co-operation and Development (OECD) legal instrument [OECD/LEGAL/0449], “AI has pervasive, far-reaching and global implications that are transforming societies, economic sectors and the world of work, and are likely to increasingly do so in the future.”[1] This suggests that, as with earlier innovations, appropriately managed AI can bring people considerable benefits. But there are also worries about AI: it can automate unjust decisions, spread biases, and lead to inhumane consequences. In 2019, at the Human-Centered Artificial Intelligence Symposium at Stanford University, Bill Gates said, “[t]he world hasn’t had that many technologies that are both promising and dangerous — you know, we had nuclear energy and nuclear weapons.”[2]

Indeed, AI raises ethical issues. This is because “AI also raises challenges for our societies (. . .) and human rights,”[3] and existing ethical frameworks are insufficient for dealing with AI systems. To address these issues, many leading institutions, including the Institute of Electrical and Electronics Engineers (IEEE), Google, Facebook, and the Pentagon,[4] have developed AI principles for sustainable and trustworthy AI systems. According to IEEE, “[m]easuring and honoring the potential of holistic economic prosperity should become more important than pursuing one-dimensional goals like productivity increase or GDP growth,”[5] which means that the ethical aspects of AI systems should be emphasized alongside the technical ones.

This paper analyzes why people care about AI ethics and how AI ethics rests on philosophical foundations. Specifically, the second section discusses why AI ethics matters; the third and fourth sections illustrate its importance through two cases of unethical AI. The fifth section then shows what institutions do to manage AI ethics, and the sixth and seventh sections analyze how AI principles rest on philosophical foundations by examining IEEE's ethical guidelines alongside Aristotelian philosophy.

Do AI Ethics Matter?

Yes. Institutions that develop and deploy AI need to attend to AI ethics for two reasons. First, for the public good: all stakeholders must seek morally acceptable AI, since they develop and use technology that leads and changes the world. AI actors are responsible for its consequences and need to work toward better impacts. Second, for their own benefit: a negative social perception is likely to undermine a company's credibility, so it may lose investment or pay dearly to rebuild its reputation. For example, in 2016, Tesla's stock price sank around 8% over three months after a fatal crash involving its Autopilot system.[6] According to Tesla, Autopilot is safer than human drivers: among human drivers there is roughly one fatal accident per 94 million miles in the U.S. and one per 60 million miles worldwide, whereas the first known Autopilot fatality occurred after some 130 million miles of Autopilot driving.[7] On these figures, Autopilot appeared safer per mile than the average human driver, yet the stock price dropped regardless of the technological record. The case suggests that reported unprofessionalism correlates considerably with a company's stock price. In this regard, institutions need to care not only about the technical aspects of AI but also about its ethical aspects, because they must manage risks, investment, and credibility in socio-technical contexts. As Warren Buffett said, “[i]t takes 20 years to build a reputation and five minutes to ruin it. If you think about that, you’ll do things differently.”[8] Sections 3 and 4 illustrate these points with reported cases of unethical uses of AI.
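For concreteness, the per-mile comparison above can be reproduced with a short back-of-the-envelope calculation. The following is a minimal Python sketch of the cited figures; it is illustrative arithmetic only, since a single Autopilot fatality is far too small a sample to statistically establish relative safety:

    # Back-of-the-envelope fatality rates implied by the figures above.
    # Illustrative only: one fatality cannot establish relative safety.
    human_us = 1 / 94e6      # ~1 fatal crash per 94 million miles (U.S. human drivers)
    human_world = 1 / 60e6   # ~1 fatal crash per 60 million miles (worldwide)
    autopilot = 1 / 130e6    # first reported fatality after ~130 million Autopilot miles

    for label, rate in [("U.S. human drivers", human_us),
                        ("Worldwide human drivers", human_world),
                        ("Autopilot (n = 1)", autopilot)]:
        print(f"{label}: {rate * 1e8:.2f} fatalities per 100 million miles")

On these numbers, the implied Autopilot rate (about 0.77 fatalities per 100 million miles) is lower than either human baseline, which is the comparison Tesla's statement relies on.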

Unethical AI Case 1: The First Beauty Contest Judged by AI

At the end of 2015, the first beauty contest judged by AI was launched with the support of Microsoft and NVIDIA.[9] Around 6,000 people from approximately 100 countries participated. But the results were neither humane nor accurate: although participants expected the judges to pick beautiful people regardless of race or skin color, 44 of the 50 winners were visibly white, five appeared Asian, and only one had visibly dark skin. “Although the group did not build the algorithm to treat light skin as a sign of beauty, the input data effectively led the robot judges to reach that conclusion.”[10] According to Ruha Benjamin, a professor at Princeton University and founding director of the Ida B. Wells Just Data Lab, the result rested on “socially biased notions of beauty and health, darker people are implicitly coded as unhealthy and unfit,”[11] and the team admitted that its modeling was inappropriate and inaccurate. Since the goal of the project was to advance bioinformatics, the beauty.ai team's algorithm relied on an assumed correlation between beauty and health, which implied two premises: that healthy people are more likely to be beautiful, and that people's skin darkens when they are sick.
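The mechanism described in the quotation above can be shown with a minimal sketch. The data below is entirely hypothetical and the model is a toy, not the beauty.ai system, but it demonstrates how a skewed training set yields a “preference” nobody explicitly coded:

    # Minimal sketch with hypothetical data (not the beauty.ai pipeline):
    # when the "attractive" label in the training set co-occurs with
    # lighter skin, a model learns skin tone as a predictive feature
    # even though nobody programmed that rule.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 1000
    skin_tone = rng.uniform(0, 1, n)   # 0 = lightest, 1 = darkest (proxy feature)
    symmetry = rng.normal(size=n)      # an unrelated facial feature

    # Hypothetical biased labels: light-skinned examples are far more
    # likely to be marked "attractive", mirroring a skewed training corpus.
    attractive = ((skin_tone < 0.3) | (rng.uniform(0, 1, n) < 0.05)).astype(int)

    X = np.column_stack([skin_tone, symmetry])
    model = LogisticRegression().fit(X, attractive)
    print("learned weight on skin tone:", model.coef_[0][0])  # strongly negative

The point of the sketch is that the bias enters through the labels, not the code: no line says “prefer light skin,” yet the fitted model penalizes darker skin tones.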

Although there was no legal punishment, the case shows that AI does not always behave acceptably. Prof. Benjamin criticized the contest because the project unwittingly implemented racism. Moreover, she described it as an instance of “The New Jim Code: the employment of new technologies that reflect and reproduce existing inequalities but that are promoted and perceived as more objective or progressive than the discriminatory systems of a previous era.”[12] ‘The New Jim Code’ plays on the ‘Jim Crow laws’ that segregated and classified racial groups by skin color; it names the re-emergence of unacceptable discriminatory patterns inside automated intelligence systems. She noted that institutions should work to eliminate unethical modeling in AI systems through several approaches: diversifying their teams, continuously checking datasets for bias, and analyzing the systems' impacts.

Unethical AI Case 2: Amazon's Gender-Biased AI

In 2015, a technical lead at Amazon found that an automated Human Resources (HR) system downgraded women in the hiring process. The system scored resumes, or CVs, negatively when they included female-related words: if a scanned document contained words like ‘women’ or ‘girl’, the system downgraded the applicant regardless of what she had done or how much potential she had.[13] In other words, the AI was systematically trained to prefer men by downgrading women. The root cause of the gender bias was a training database built from past resumes. According to Reuters, “computer models were trained to vet applicants by observing patterns in resumes submitted to the company over a 10-year period. Most came from men, a reflection of male dominance across the tech industry.”[14] Because past hires were predominantly male, the model inferred that male candidates were and would remain preferable; this prediction was applied to all applicants and downgraded the women among them. Amazon replied that “Amazon’s recruiters looked at the recommendations generated by the [AI] when searching for new hires, but never relied solely on those rankings”[15] and that its hiring process rests on valid methods. But note that the AI's recommendations were already grounded in unequal treatment, and applicants who received low scores may never have surfaced in the rankings at all. This means that some competitive female applicants may have been dropped because of an unethical, systematic prediction.
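A toy sketch (hypothetical resumes, not Amazon's model or data) makes the mechanism concrete: train a text classifier on past decisions that skew male, and the token ‘women’ acquires a negative weight without anyone writing a sexist rule:

    # Toy sketch, not Amazon's system: a text classifier trained on
    # historically male-dominated hiring decisions learns a negative
    # weight on female-associated words such as "women's".
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression

    resumes = [
        "chess club captain, software engineering internship",   # hired
        "hackathon winner, software engineering internship",     # hired
        "women's chess club captain, software internship",       # rejected
        "women's coding society lead, software internship",      # rejected
    ]
    hired = [1, 1, 0, 0]   # past decisions from a male-dominated pipeline

    vectorizer = CountVectorizer()        # tokenizes "women's" to "women"
    X = vectorizer.fit_transform(resumes)
    model = LogisticRegression().fit(X, hired)

    weight = model.coef_[0][vectorizer.vocabulary_["women"]]
    print("learned weight on 'women':", weight)  # negative: the word is penalized

Because the word ‘women’ appears only in rejected resumes, the model treats it as evidence against hiring, which is exactly the pattern Reuters reported at a much larger scale.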

Can we predict that people in one group (G1) are worse than those in another group (G2) because G1 was worse (or considered worse, or underrepresented) than G2 for a long time? Although this may be a valid prediction in some cases, it can be invalid depending on context. In this case, Amazon's AI was both invalid and unjust. Amazon halted the system after the issue surfaced. Many people were surprised that Amazon, a company that advertises its advanced information technology and develops AI as a key technology, had used such a flawed AI in its hiring process. By deploying unethical AI, Amazon lost considerable social credit and may have passed over qualified, professional female candidates.

Beyond these cases, many systems have violated ethical norms: Microsoft's Tay (2016), which tweeted racist remarks; Google's photo-classification AI (2015), which labeled people with dark skin as ‘gorillas’; and Northpointe's Correctional Offender Management Profiling for Alternative Sanctions (COMPAS), used in U.S. courts (2016), which disproportionately predicted that African-American defendants would reoffend. It is notable that almost all of these violations were publicly reported, and that the unethical modeling led to considerable losses for the institutions involved. The systems were also wrong and unjust, because they treated people unequally and discriminated inappropriately against certain groups.

EthiCS[16]: Codification of AI Principles

Among the many approaches to developing and operating sustainable AI, the most prominent is the declaration of AI principles. Many institutions are declaring and codifying their own AI principles that address the ethical aspects of AI, trying to promote AI more morally and to prevent unethical practices. Although the term ‘AI principles’ sounds technical, such principles rest on deep philosophical questions and values related to justice, fairness, and human rights; they reflect considerations of morality and social response as well as strategic research and development (R&D) for better outcomes. For instance, Microsoft's AI principles commit its AI to the following values:

  1. Fairness: AI systems should treat all people fairly.
  2. Reliability & Safety: AI systems should perform reliably and safely.
  3. Privacy & Security: AI systems should be secure and respect privacy.
  4. Inclusiveness: AI systems should empower everyone and engage people.
  5. Transparency: AI systems should be understandable.
  6. Accountability: People should be accountable for AI systems.

These values are deeply tied to philosophical questions such as: Why should we seek fairness rather than favoritism? Why must technologies be reliable? What is privacy, and why should we respect it rather than use all available information? Is it acceptable to monopolize technology? Why should AI be explainable (XAI), and in what form and through what medium should explanations be delivered?

According to the OECD, such “knowledge (. . .) are required to understand and participate in the AI system lifecycle.”[17] In other words, these philosophical foundations are part of the AI system lifecycle. But how closely are they related? Although the relationship may be hard to quantify, philosophy shares a great deal with AI. To illustrate, the next section turns to an article by John McCarthy (1927–2011), who coined the term ‘Artificial Intelligence’ for the 1956 Dartmouth Conference.[18]

Does AI Have Relations with Philosophy?

Yes. Philosophy shares many topics with AI. According to John McCarthy, modern analytic philosophy in particular has many relationships with AI. Below is McCarthy's answer in his article What Is Artificial Intelligence? (2007):

[Question:] What are the relations between AI and philosophy?
[Answer:] AI has many relations with philosophy, especially modern analytic philosophy. Both study mind, and both study common sense.[19]

Given that the person who coined the term ‘Artificial Intelligence’ explicitly affirms these relations between philosophy and AI, we have good reason to accept the relations he points out.

To be specific, we can translate ‘study mind’ philosophically into ‘ontology’, the study of existence or being. It asks questions like: Where is AI? Can it exist without hardware? Is artificiality necessary for Artificial Intelligence? Next, the ‘study of common sense’ (hereafter CoSen) can be read as ‘ethics’, which asks: What should AI do or not do? What must AI do or not do? Who should be prioritized? Why should AI minimize the unjust discrimination of the system? CoSen can also be interpreted as ‘epistemology’, the theory of knowledge, which raises questions like: Can machines think? How can we justify empirical induction? What is intelligence? <Table 1> summarizes these philosophical terms:

Term         | Word Roots                          | Topic     | Example
Ontology     | ont (being) + -logy (study)         | Existence | Can it exist without hardware?
Ethics       | ethos (character) + mores (customs) | Rightness | What AI must or must not do?
Epistemology | episteme (knowledge) + -logy        | Knowledge | Can machines think?
<Table 1>

IEEE and AI Ethics

IEEE explicitly mentions Aristotle, the ancient philosopher, in the introduction of its ethical guideline Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems:

Eudaimonia, as elucidated by Aristotle, is a practice that defines human well-being as the highest virtue for a society. Translated roughly as ‘flourishing,’ the benefits of eudaimonia begin by conscious contemplation, where ethical considerations help us define how we wish to live.[20]

<Figure 1> IEEE, 2019, Ethically Aligned Design, Version 2

The Greek term eudaimonia is composed of two words: “eu”, meaning “well”, and “daimon”, meaning “spirit”.[21] For Aristotle, happiness is not buying a nice car or partying all day but the virtuous (eu) use of reason (daimon). In the Nicomachean Ethics, Aristotle held that the good use of reason, that is, contemplation, constitutes human happiness.

Specifically, IEEE worries about “[the] loss of human free agency, autonomy, and other aspects of human flourishing, [which implies] a reduction in human well-being.” This is because Artificial Intelligence can take over a uniquely human function: rational activity (i.e., intelligence). In other words, AI can replace an essential human ability and thus, in a sense, replace humans themselves. This may play out in two main ways. First, economically, many people may lose their jobs; unsupervised development does not ensure a sustainable job market. Just as most cashiers have been replaced by kiosks, drivers may be replaced by driverless cars, and many other jobs may be taken over by intelligent agents. Second, existentially, artificial intelligence raises ontological worries. In 2019, Lee Se-dol, who played Go against AlphaGo in 2016, retired because he felt that Go was no longer an art but automatable work. As UNI Global Union noted, “tasks done by humans today, will be done by AI (. . .) in the future” and some groups of people can be “complemented or even substituted by AI.” People who merely repeat automatable work may be unhappy, since theirs is a platitudinous life: their lives become something that non-humans can do and be. That is what Aristotle worried about in his book as well.

Prof. Luciano Floridi, director of the Digital Ethics Lab at the University of Oxford, has argued that most AI principles are based on philosophical concepts and theories,[22] since AI guidelines use traditional ethical theories to formulate principles for AI systems. As noted, IEEE adopted and established its ethical guidelines for AI on the basis of Aristotelian ethics.

Conclusion

This paper has examined why and how ethical issues are seriously involved in the development and use of AI, how leading institutions are working to address these issues, and how philosophical foundations relate to those ethical strategies. When we discuss AI, we normally mean an agent that interacts with humans on Earth, not a thing we send to Mars. In other words, AIs are necessarily bound up with our lives and our ethical norms, which implies that ethical consideration is a necessary condition for the development of autonomous intelligent systems. As the two cases illustrated, unethical AI is socially criticized and abandoned, and it damages the institution's investments and credibility.

We seek benefits. But benefit means not only money but also credit, worth, value, well-being, justice, and humanity. We should pursue benefits that fit our values, through valid approaches and in valuable directions. What is AI for?


[1] OECD (2020), “Recommendation of the Council on Artificial Intelligence (OECD)”, OECD Legal Instruments.

[2] Clifford, C. (2019), “Bill Gates: A.I. is like nuclear energy — ‘both promising and dangerous’”, https://www.cnbc.com/2019/03/26/bill-gates-artificial-intelligence-both-promising-and-dangerous.html

[3] OECD. op. cit.

[4] The pentagon is the headquarters building of the United States Department of Defense.

[5] IEEE (2019), “The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems”, The IEEE Global Initiative, p.2.

[6] Levy, A. & Kolodny, R. (2019), “Tesla shares drop after report says its Autopilot system was engaged during a fatal crash”, https://www.cnbc.com/2019/05/17/tesla-shares-fall-on-report-autopilot-system-was-engaged-during-crash.html

[7] Tesla (2017), “A Tragic Loss”, https://www.tesla.com//blog/tragic-loss?redirect=no

[8] Berman, J. (2014), “The Three Essential Warren Buffett Quotes To Live By”, https://www.forbes.com/sites/jamesberman/2014/04/20/the-three-essential-warren-buffett-quotes-to-live-by/?sh=736ce3606543

[9] http://beauty.ai/

[10] Levin, S. (2016), “A beauty contest was judged by AI and the robots didn’t like dark skin”, https://www.theguardian.com/technology/2016/sep/08/artificial-intelligence-beauty-contest-doesnt-like-black-people

[11] Benjamin, R. (2019), “Race After Technology: Abolitionist Tools for the New Jim Code”, Polity Press, p. 35.

[12] Ibid. p.10.

[13] Dastin, J. (2018), “Amazon scraps secret AI recruiting tool that showed bias against women”, https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G

[14] Ibid.

[15] Ibid.

[16] The term ‘EthiCS’ was coined by a project at Harvard University. The project seeks intersections of Ethics and Computer Science (CS). https://embeddedethics.seas.harvard.edu/

[17] OECD. op. cit.

[18] Russell, S., & Norvig, P. (2002). “Artificial Intelligence: A Modern Approach”, Prentice Hall, p. 3.

[19] McCarthy, J. (2007), “What is artificial intelligence?”, http://jmc.stanford.edu/articles/whatisai/whatisai.pdf

[20] IEEE. op. cit. p.2.

[21] Kraut, R. (2018) “Aristotle’s Ethics.” Stanford Encyclopedia of Philosophy, https://plato.stanford.edu/entries/aristotle-ethics/

[22] Floridi, L & Cowls, J. (2019) “A unified framework of five principles for AI in society.” Harvard Data Science Review, no. 1, p. 2.
