Moral Machines

Elisabeth Martínez Delgado
Researcher, Faculty of Mathematics, Complutense University of Madrid

Introduction – Ada Lovelace

The word “technology” is one of the most frequently used words when speaking about modern society. It shapes the environment in which we grow; we tend to see progress in direct relation to it, and we aspire to develop our society into a more technological one, where ever more aspects of our lives are supported by it. Ultimately, we want to introduce more machines into our lives. But here lies an essential distinction: machines can be biased, while technology cannot.

According to the Oxford Learner’s Dictionary, a “machine” is “a piece of equipment with many parts that work together to do a particular task”[i], while “technology” is “scientific knowledge used in practical ways in industry, for example in designing new machines”[ii]. Nonetheless, we lump both under the word “technology”: we call phones technology, although a phone is a piece of equipment, not scientific knowledge. The confusion goes back to Ada Lovelace and the Analytical Engine (Figure 1). That device, designed by Charles Babbage at the very beginning of the Victorian Era, was the first design of a general-purpose computer, and it was Lovelace who first grasped its generality, writing what is considered the first computer program for it[iii]. The idea would not be given rigorous form until a century later, with Alan Turing and his Turing Machine, and only then could it be materialized. Nevertheless, it was Ada who first pictured the physical embodiment of technology: what we might call “the tool of tools.” This is the crucial step in which “machine” blurs into “technology,” because such a machine can be used to create other machines within it, and to accomplish that is to apply scientific knowledge, that is, to use technology. In short, we use one machine and one technology to create a new machine, and this misleads us into thinking that the original machine is itself the technology.

<Figure 1> Watercolour portrait of Ada Lovelace, circa 1840, possibly by Alfred Edward Chalon

Now, the problem that stems from using the terms “machine” and “technology” interchangeably may not be obvious. Whereas technology is the application of abstract knowledge in all possible “practical ways,” a machine focuses on a concrete task. This is not trivial: that multitude of practical applications is a set impossible to exhaust in real life; in a finite amount of time we cannot apply one technology in every conceivable way. Tasks, on the other hand, are actions decided by someone; they are the materialization of one chosen practical application. Machines are therefore inseparable from those who determine their use. Those who create machines carry their beliefs and prejudices, including sexist and racist biases, just as those who use them do. I must clarify that the technology we develop is also filtered by necessity. Nevertheless, technology encompasses all the possible good or bad ends to which we could apply it; in that sense, technology is free from moral judgment. By “good” and “bad” I mean only simplified moral values that most people would agree on; I do not intend to discuss the complexities of moral judgments that depend on context.

To illustrate the idea, consider any human tool, for instance an ax. We could say, first, that it is a simple machine for cutting down trees, and that the technology behind this machine is the edge. Our moral values then concern the machine in the way the ax is used: to obtain wood, or to fight a battle. The original purpose of the ax also matters, and it can differ from the task eventually performed: an ax built to obtain wood can later be used in battle, and the moral implications of creating an ax as a tool are certainly different from those of creating it as a weapon. Finally, when analyzing the technology “edge,” we cannot reach a moral verdict. There are countless good uses of it, like a scalpel that saves lives; countless neutral ones, like hair-cutting scissors; and countless bad ones, like a sword.

In summary, we can argue that technology is exempt from moral judgment; moral consequences depend on how the technology is applied, in other words, on which machines we create with it. In this article, we will take a closer look at the implications of this misconception, the misuse of the word “technology” when we mean “machines,” and at how we ended up creating sexist machines.

World of Machines – Ursula K. Le Guin

In this context, we can begin to define the problems of a world of machines when society sees it as a world of technology. When we talk about our modern, demanding environment, we often perceive it as neutral. The rise of mental illnesses associated with loneliness is usually seen as a consequence of the misuse of cell phones and social media[iv][v]. The self-isolation of a human being is seen as a mistake committed by parents when raising their children, or by the victims themselves. This is the logical deduction if we assume that social media or cell phones are mere objects or tools, available to us, that cannot be inherently good or bad for anyone. But again, machines are subject to someone’s intentions and beliefs; even if the creator of a revolutionary tool does not intend it to be sexist, cultural clashes and preconceptions can make it so.

Science-fiction writers usually provide a good looking-glass for this matter. In her novel The Dispossessed, Ursula K. Le Guin creates two parallel societies: Urras, with a capitalist patriarchal system, and Anarres, with an anarcho-syndicalist one. Their beliefs are dictated by these respective systems. Throughout the story, Le Guin pictures the different realities of the two societies, including their machines. Since the Anarresti have no interest in the production of technology, the lack of progress of their machines is noticeable: the system defines their machines. This is not to say they “fall behind” in mechanical advancement, although it may be perceived that way; Anarres’ society renews and preserves machines, but does not expand them. A clearer example appears when an Anarresti woman suggests the possible creation of a machine for reproduction, since motherhood, as a biological fact, is used to construct a social role for women[vi]. Such thoughts could never develop in Urras’ society, where women are considered inferior to men and people grow up believing that the system sustains gender-oppressive roles because it benefits everyone within it. It is frankly impossible for Urras’ society even to conceive of such a machine, because of its culture.

The problem with gender roles is further explored in The Left Hand of Darkness, another of Le Guin’s works. Here she depicts a genderless world, Gethen, stripping away all the weight that gender carries with it (Figure 2). The machines, once again, reflect every aspect of the society’s beliefs. The story follows Genly Ai, an envoy from a technologically advanced future Earth, as ambassador to Gethen. Throughout the story, Genly notes repeatedly how the machines of Gethen are built merely for survival. He is fascinated by how much less developed their technology is than Earth’s, despite the story being set far in the future, and he hypothesizes that the lack of conflict, a byproduct of the natives’ androgyny, is the reason. I would like to go deeper into the details of the story and the correlation between gender and machines within it, but that would spoil a precious development in the most acclaimed book of one of the most acclaimed authors on the topic.

<Figure 2> The Left Hand of Darkness cover illustration

In both examples, we can see how machines are an intrinsic part of human culture. For us, the mere existence of a machine capable of conducting the gestation of a human being would cause, at the very least, an enormous scandal in our society. And for the people of Gethen, the construct of “gender” would be pointless, since they could not even comprehend the distinction. That is to say, differences in culture create differences in the use of technology, leading to the creation of different machines; this contrast can in turn lead to disagreement in the moral judgment of machines.

Objective AI – Ruha Benjamin

Going back to our non-fictional world, machines are filling ever more space in our lives. While this grants us many benefits, it can also lead to the aforementioned problems, such as sexism and racism. Since 1986, we have been producing technology that can be considered the next technical step toward the perfect general machine[vii]: a machine capable of developing and implementing new technology. I am talking about machine learning, and more specifically about artificial neural networks. The technology behind this remarkable machine is gradient-based error minimization: the automated adjustment of a mathematical model to reduce the error of its output, combined with a brain-like architecture of layered neurons and an activation function that adds non-linearity to each neuron[viii].

With all the parts in place, we have built a computerized mathematical model capable of adapting itself to produce specific values. The trick lies in feeding this model hundreds of thousands of different examples so that it “learns” the relation underlying all of them. Each example comes with an expected output attached: in the typical example of pictures of dogs and cats, the label “dog” or “cat.” The model adapts its processing of the data until it can tell visualizations of dogs apart from cats. And we are just scratching the surface of possible uses. To recapitulate, these models change their representation of the given data until it matches what we want. If you have heard of machine learning before, you may know that similar techniques have been used for language processing, video recognition, media recommendation, and many other applications.
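The whole learning loop described above can be seen in miniature with a single artificial neuron. The sketch below is purely illustrative: the toy dataset (the logical AND function) and all parameter values are my own choices, not anything from a real system. The neuron adjusts its weights step by step to shrink the error between its output and the expected label, which is exactly the “adapt until it matches what we want” process.

```python
import math
import random

def sigmoid(z):
    """Activation function: squashes any number into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

# Toy labelled examples: each input pair comes with its expected output
# (here, the logical AND of the two inputs).
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

random.seed(0)
w = [random.uniform(-1, 1), random.uniform(-1, 1)]  # one weight per input
b = 0.0                                             # bias term
lr = 0.5                                            # learning rate

# Training loop: nudge the weights against the error, over and over.
for epoch in range(5000):
    for (x1, x2), y in data:
        out = sigmoid(w[0] * x1 + w[1] * x2 + b)
        err = out - y          # how far the output is from the label
        w[0] -= lr * err * x1  # gradient step for each parameter
        w[1] -= lr * err * x2
        b    -= lr * err

# After training, the neuron reproduces the labels it was shown.
preds = [round(sigmoid(w[0] * x1 + w[1] * x2 + b)) for (x1, x2), _ in data]
print(preds)  # → [0, 0, 0, 1], matching the labels
```

A real neural network stacks many such neurons in layers, but the principle is the same: the model has no understanding of its task beyond the examples it is given.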

At this point, the problem with these machines may be more obvious than with others. We need data to teach these models, and so we teach them the same biases we reflect in the data. That is why machine learning models have been shown to be racist, sexist, and biased along every axis of discrimination[ix][x]. In the end, all discriminations are linked, and they run deep in our society. Racism in AI, in particular, is a serious matter, and there are countless examples. In her book Race After Technology, Ruha Benjamin writes of Beauty AI, the “First Beauty Contest Judged by Robots”:

In addition to the skewed racial results, the framing of Beauty AI as a kind of preventive public health initiative raises the stakes considerably. […] Given the overwhelming Whiteness of the winners and the conflation of socially biased notions of beauty and health, darker people are implicitly coded as unhealthy and unfit – assumptions that are at the heart of scientific racism and eugenic ideology and policies.

The Beauty AI case is perhaps the clearest example of racism affecting machine learning, and the creators themselves noticed the problem. However, Benjamin explores much subtler cases of engineered racism in her book, and I do not intend to discuss the topic further here. I defer to her on every aspect of racism in machines, and Race After Technology, where she also presents solutions, is highly recommended reading.
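The mechanism by which biased data produces biased machines needs no sophisticated mathematics. The sketch below uses an invented toy dataset of hypothetical past hiring decisions (the records and categories are mine, purely for illustration) and the simplest possible “model”: predict the majority outcome seen for each group. Because the historical decisions were biased, the model faithfully reproduces the bias.

```python
from collections import defaultdict

# Hypothetical historical records: (qualified, gender, hired).
# The invented past decisions are biased: equally qualified
# women were hired less often than men.
records = [
    (1, "m", 1), (1, "m", 1), (1, "m", 1), (1, "m", 1),
    (1, "f", 1), (1, "f", 0), (1, "f", 0), (1, "f", 0),
    (0, "m", 0), (0, "f", 0),
]

# "Train" by counting outcomes per (qualified, gender) group.
counts = defaultdict(lambda: [0, 0])
for qual, gender, hired in records:
    counts[(qual, gender)][hired] += 1

def predict(qual, gender):
    """Predict the majority outcome observed for this group."""
    negative, positive = counts[(qual, gender)]
    return 1 if positive > negative else 0

# Two equally qualified candidates receive different predictions:
print(predict(1, "m"), predict(1, "f"))  # → 1 0
```

Nothing in the code mentions prejudice; the discrimination enters entirely through the data, which is precisely why treating such a system as a neutral “technology” hides the problem.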

Instead, I would like to talk about the reflection of sexism in every aspect of our modern world, not just in AI. For a long time, we have failed to recognize the implicit relation between gender and discrimination in our society. In the case of machines, the problem goes far beyond male-centered design choices, permeating every specification of those machines. A few years ago, a study published in JAMA Internal Medicine looked at how four voice assistants responded to various health-crisis questions[xi]. It found that while Apple’s Siri, Google Now, Samsung’s S Voice, and Microsoft’s Cortana knew exactly how to act in case of a heart attack, statements like “I’ve been raped” left the devices stumped. The irony is that virtual assistants almost always have female voices.

Another telling example is the constant sizing of machines for men. Smartphones are designed around the average hand size of a man. Crash-test dummies are, again, sized to the average man. Medications are often evaluated using data collected from Caucasian men aged 25 to 30. Smartwatches are too big for women’s wrists; health monitors lack period trackers; maps compute the fastest route but not the safest. An entire industry takes the man as the “standard human being”[xii].

Every single problem we have seen here is far from being addressed as long as we consider machines neutral. Furthermore, if we say “it is just an app,” or a phone, or whatever it is, to a person who suffers discrimination because of it, we are saying: “It is not the machine, so it must be you.” When we place tools outside any moral compass because we attribute to them the value of “technology,” we fail to understand the role our culture plays in the use of technology. What is worse, we once again fail to acknowledge our prejudices.

The inability or unwillingness to recognize the systemic nature of the problem results in placing the responsibility on the vulnerable user instead, although it lies with an unjust industry, creators’ prejudices, and biased design. We tend to dismiss each case of discrimination as an anomaly or an algorithmic failure. The technology issue, however, is of a social kind.

[i] “Machine.” Oxford Learner’s Dictionaries.

[ii] “Technology.” Oxford Learner’s Dictionaries.

[iii] Aiello, L. C. (2016). The multifaceted impact of Ada Lovelace in the digital age. Artificial Intelligence, 235, 58-62.

[iv] Pittman, M., & Reich, B. (2016). Social media and loneliness: Why an Instagram picture may be worth more than a thousand Twitter words. Computers in Human Behavior, 62, 155-167.

[v] Mushtaq, R., Shoib, S., Shah, T., & Mushtaq, S. (2014). Relationship between loneliness, psychiatric disorders and physical health? A review on the psychological aspects of loneliness. Journal of clinical and diagnostic research: JCDR, 8(9), WE01.

[vi] Nalivaike, A. (2018). The Politics of Gender in Ursula Le Guin’s “The Dispossessed”. International Journal of Linguistics, Literature and Culture, 5(1), 16-25.

[vii] Mani, G., Chen, F., Cross, S., Kalil, T., Gopalakrishnan, V., Rossi, F., & Stanley, K. (2021). Artificial Intelligence’s Grand Challenges: Past, Present, and Future. AI Mag., 42(1), 61-75.

[viii] Wang, S. C. (2003). Artificial neural network. In Interdisciplinary computing in java programming (pp. 81-100). Springer, Boston, MA.

[ix] Neff, G., & Nagy, P. (2016). Automation, algorithms, and politics| talking to Bots: Symbiotic agency and the case of Tay. International Journal of Communication, 10, 17.

[x] Veale, M., & Binns, R. (2017). Fairer machine learning in the real world: Mitigating discrimination without collecting sensitive data. Big Data & Society, 4(2), 2053951717743530.

[xi] Miner, A. S., Milstein, A., Schueller, S., Hegde, R., Mangurian, C., & Linos, E. (2016). Smartphone-based conversational agents and responses to questions about mental health, interpersonal violence, and physical health. JAMA Internal Medicine, 176(5), 619-625.

[xii] Criado-Perez, C. (2019). The deadly truth about a world built for men–from stab vests to car crashes. The Guardian.
