The Emergence of AI Governance in Korea and Its Orientation

Jongheon Kim
Ph.D. candidate, Lagape, University of Lausanne
JongHeon.kim@unil.ch

The Development of (not-so-reliable) AI and AI Governance

As Artificial Intelligence (AI) stopped being a mere figment of imagination and began to be integrated into everyday life over the last decade, the failures and risks of the technology have been reported one after another. The most recent and famous examples include Amazon abandoning its machine-learning recruitment system in 2018 because the system was biased against women, and PULSE, a face-depixelizing software developed by researchers at Duke University, reconstructing people of color in photos, including Barack Obama, as white people. Researchers also revealed that training large AI models consumes enormous amounts of computing power, to the extent that a particular type of neural architecture search would have produced nearly as much CO2 as the lifetime output of five average American cars.[1]

However, private firms are not there to protect us from the risks of their products. Worse, the aforementioned cases suggest that they have been building AI technologies that are poorer versions of humankind, as the technology imitates and reproduces all-too-human problems, notably stereotypes. Nevertheless, the majority of technology experts have firmly dismissed such criticism. For them, these failures are merely temporary, caused only by a lack of time and resources. All they need – and what politicians and the public should provide them – is our confidence and patience (packaged with more data and less regulation).

This gap between what has actually been happening and the prevailing technological optimism is the main reason why the construction of balanced governance, based on a wide-ranging discussion, has been called for. Indeed, as historians and sociologists of science and technology have demonstrated, knowledge and social order are strongly intermingled – or "co-produced"[2] – which means that the evolution as well as the pitfalls of technologies are not solely technological but also political, economic, and social at the same time. This is even more the case when the technology in question is expected to have considerable impacts on society, as is the case with AI.

Accordingly, from the mid-2010s, the governments of developed countries began to pay attention to AI governance, leading to the establishment of national plans: in Australia, China, the European Union, France, Germany, India, Japan, Singapore, the United States, and so on. The South Korean (hereafter, Korean) government was not late to this global movement either and announced relevant plans, notably the Master Plan for the Intelligence-Information Society in 2016 and the AI National Strategy in 2019. In these plans, governments commonly highlighted the importance of concerted efforts from all sectors to deliver an AI that is not only economically beneficial but also socially acceptable. They emphasized that the participation of experts from diverse sectors would be essential to cover diverging demands and issues.

Then, at the local level, what is the exact context in which each government endorsed those plans, and what does each government suggest? Put differently, how has the meaning of AI been contextualized within a local backdrop, and how has it led to a particular form of governance by interacting with embedded visions of science and technology? To answer these questions, this paper examines the Korean case. Based on the analysis of documents, including publications by the government and think tanks as well as media coverage, it delivers a situated account of how visions and unexpected events have become intermingled in the construction of AI governance in Korea.

The Rise of the 4IR discourse and the Promotion of AI in Korea

One of the most distinctive features of the Korean documents related to AI governance, noticeable even from the titles of the aforementioned national plans, is their emphasis on the term "Fourth Industrial Revolution" (4IR). The 4IR is a discourse – or a hypothesis – proposed by Klaus Schwab in 2016 at the World Economic Forum (WEF), suggesting that the development of technologies, AI in particular, has been imposing a radical transformation on society. Despite the WEF's promotion, the term does not seem to have taken off except in a handful of countries, notably Korea. The results of a Google Trends search are illustrative, as the search interest and the number of results in Korean surpass those in English (Figure 1). In the case of the national AI plans too, while other countries' documents rarely, if ever, mentioned the term, the Korean plans were built "to respond to the 4IR," as their titles indicate.[3] What would be the reason for this particularity?

Figure 1. Google Trends search results

Indeed, there was an important event in Korea regarding the promotion of AI: the Go match in 2016 between Sedol Lee, the Korean world champion, and AlphaGo, an AI player built by Google. Due to its vast number of possible strategies, Go had been considered the most complex board game, one of which Korean and Chinese people had long been proud. Almost no one in Korea, including AI experts, thought the AI would beat Lee. When the event ended with AlphaGo winning the match 4-1, the reaction was sensational. It made headlines day after day, occasionally accompanied by a hint of déjà vu of Asian humanist culture being destroyed by Western materialist technology. As Maeng convincingly suggested[4], it was from then on that AI became a central element of science and technology policy in Korea, creating synergy with the 4IR discourse that promoted AI as the central enabling technology.

In this process, the vision of developmentalism also played an important role. Anchored in the country's imaginary throughout the second half of the last century, this vision, which considers science and technology as a means of making the nation an advanced country by creating economic benefits, has been central to policymaking.[5] As I have discussed elsewhere, although the vision tends to highlight the desirable side of the nation's future, at its core it constantly alludes to bitter experiences of colonization and poverty.[6] This dialectic, which resonates with the dialectic between optimism and pessimism within the 4IR discourse[7], has urged policymakers as well as the public to focus on economic growth in order to catch up with the advanced countries and not to be colonized again. In this vein, the defeat of Lee led the country to believe that AI was no longer a technology to be achieved in the future, since an AI capable of beating humankind had already arrived. Worse, it came, once again, from the Western world, despite Korea's extraordinary economic growth and technological development over the last few decades. It is then unsurprising that the Korean government took the initiative to overcome this challenge.

Hence, on March 17, 2016, immediately after the defeat of Lee, the Park administration announced that it would invest 1 trillion KRW in AI research over the following five years. In June, the Korea Information Society Development Institute published a report titled The Shock of AlphaGo, while the Ministry of Science, ICT, and Future Planning, which became the Ministry of Science and ICT with the establishment of the Moon administration in 2017, published the Nine National Strategic Projects, which included the development of AI.

Dealing with the 4IR also became an essential element of the Moon administration's agenda, established in 2017. Not only did the 4IR discourse occupy a central place in the administration's five-year plan for government operation, but a Presidential Committee on the Fourth Industrial Revolution (PCFIR) was also founded. Since then, the relevant ministries, the PCFIR, and think tanks have bombarded the public with AI- and 4IR-related policies and recommendations. For instance, the Ministry of Science and ICT announced the I-Korea 4.0 Plan for Addressing the 4IR in 2017; no fewer than two more strategies in 2018, notably the AI R&D Strategy for the I-Korea 4.0; and four additional strategies in 2019, including the AI National Strategy.

Given this succession of national plans and the media coverage that followed, I suggest that, while the 4IR discourse has frequently been criticized by scholars as pure speculation promoting a neo-liberal social order[8], it has succeeded in making policymakers as well as citizens consider it a (kind of) reality. The remaining question is, then, what future the plans envision and how that future is to be governed – in other words, what orientation has been suggested by the established governance. In what follows, I try to answer these questions by analyzing the plans.

The Orientation of AI Governance

We now know that, in Korea, AI is not considered a mere technology but the locomotive of a 4IR that is already happening. What, then, do the national plans on AI suggest? How do they intend to govern the technology – what would Korean AI governance look like?

The relevant documents commonly argued that, to deal with the transformation called the 4IR, the nation should be united and concentrate its resources – human resources, education, funding, as well as policy – on AI development.

Figure 2. Country as a functional system[9]

Accordingly, they divided the country into three parts: research (universities and research centers), the ICT industry, and society/institutions (Figure 2). Industry and research were described as the sectors that deliver innovation, while the role of society/institutions was principally defined as the production of the workforce. It is noteworthy that the citizenry as a whole was in turn classified according to its level of AI literacy: three levels of workers (leading-level, professional-level, practical-level) and ordinary citizens (Figure 3). The workers were to be fostered to keep up with other developed countries, while life-long AI education would be conducted from primary school onward, to make non-expert citizens, as President Moon insisted, "first-class citizens…in using the technology without fear."[10]

Put differently, although they did not use the term, the national plans envisioned the entire country as a functional system, which students of systems theory call an "innovation system." This perspective highlighted the importance of the division of sectors and the flow of products among them. The task of the government was accordingly defined as providing a good environment for research and industry. In terms of policy, the former would include increased funding for long-term projects and the establishment of graduate schools specialized in AI. For the latter, the government intended to prime the pump not only by funding R&D but also by buying pioneering products that would not yet have a market. Since regulation had been criticized as the major obstacle to bold innovation, deregulation was largely put forward, notably the shift from ex-ante to ex-post regulation and the introduction of the regulatory sandbox[11], which allows products to be marketed without the prerequisite regulatory process. This approach was justified by identifying industrial development with the advancement of society as a whole.[12]

This penchant for a business-friendly environment clearly resonates with the vision of developmentalism without conflict, as the state, regardless of its politico-economic orientation, constitutes "an integral part of the neoliberal program."[14] However, it is still unclear how this orientation came to be adopted, since little of the governmental decision-making process is transparent. Although it is not feasible in this paper to open that "black box"[15], a look at the PCFIR should be informative, since it was established by the current administration as the advisory committee specializing in the 4IR.

Figure 3. Four Categories according to AI Competence[13]

Initially, the PCFIR was to be composed of 15 ministers and 15 civil members, a composition that was reorganized into 5 ministers and 25 civil members at its launch. While this re-composition was intended to include more voices from civil society, it was criticized because most civil members came from academia and industry in technology fields. At the same time, the Ministry of Health and Welfare, as well as experts in, for instance, ethics and social issues, were excluded.

Hence, from its conception, the committee had a business-friendly orientation. Indeed, the only member representing labor argued that the Recommendations for the Government established by the PCFIR reflected only the industry's interests, as if deregulation were the committee's objective.[16] The orientation was also underpinned by the perceived risk that "the emerging markets might be dominated by foreign nascent firms endowed with vast resources while the Korean government seeks to align domestic stakeholders' interests."[17]

In short, this description of the PCFIR implies that the orientation of AI governance in Korea has considerably favored industry at the cost of social concerns. This bias is alarming and somewhat disturbing, as the 4IR was hypothesized not as mere technological progress but as a transformation of society led by technology.

Who are the Experts of the 4IR and AI?

In this short text, I examined the Korean national plans and the PCFIR as constituting factors of the country's AI governance. I found that, while the "4IR" and the development of AI were defined as a radical transformation not only of technology but also of society at large, only a limited group had been admitted as "experts" to the building of governance, which resulted in an explicitly business-friendly orientation.

At this point, I should assert that the governing orientation described above is unsatisfactory. It is all the more so because the governmental documents indicated that experts from diverse areas and citizens had been consulted through conferences and surveys. It is obvious that they failed to cover ethical and social issues in establishing the national orientation.

Although AI is to be produced by experts in technology, I believe that the interior of this technology should be filled with the knowledge delivered by a wide-ranging discussion about humans and society. Moreover, I assume that the inclusion of experts from diverse fields is not a solution but merely a first step toward responsible governance, which makes the exclusion of non-technological perspectives all the more alarming. I am not implying that social scientists are more ethical than, for instance, computer scientists. Instead, I argue that their professional knowledge is complementary in conceiving what we would call intelligence. I do think that "intelligence," especially as it progresses, should not be built on a narrow knowledge of technology and business.

If we are going to invest an exorbitant amount of money to develop an AI that will transform our society, it should be a better version of ourselves, not a poorer one. That is the pragmatic reason for building more inclusive governance.


[1] MIT Technology Review (December 4, 2020), "We Read the Paper That Forced Timnit Gebru out of Google. Here's What It Says.", https://www.technologyreview.com/2020/12/04/1013294/google-ai-ethics-research-paper-forced-out-timnit-gebru/

[2] Jasanoff, S. (2004), States of Knowledge: The Co-Production of Science and Social Order, London: Routledge.

[3] For instance, Joint Ministries of the Republic of Korea (2016), Mid- to Long-Term Master Plan in Preparation for the Intelligence Information Society: Managing the Fourth Industrial Revolution.

[4] Maeng, M. (2017), "The AlphaGo Shock and the Diffusion of the 'Fourth Industrial Revolution' Discourse" [알파고 쇼크와 '4차 산업혁명' 담론의 확산], Master's thesis, Seoul National University.

[5] Kim, E.-S. (2018) “Sociotechnical Imaginaries and the Globalization of Converging Technology Policy”, Science as Culture, Vol. 27, No. 2, pp. 175–97; Kim, S.-H. (2014), “The Politics of Human Embryonic Stem Cell Research in South Korea”, Science as Culture, Vol. 23, No. 3, pp. 293–319.

[6] Kim, J. (2020), “We Are Late as Always, but Not That Much”, presented at the General Conference of European Consortium for Political Research, Innsbruck.

[7] Schiølin, K. (2019), “Revolutionary Dreams”, Social Studies of Science, Vol. 29, pp. 1–25.

[8] Avis, J. (2018), “Socio-Technical Imaginary of the Fourth Industrial Revolution and Its Implications for Vocational Education and Training”, Journal of Vocational Education & Training, Vol. 21, pp. 1–27.

[9] Translated and recreated from the Presidential Committee on the Fourth Industrial Revolution (2017), Basic Policy Directions for Responding to the Fourth Industrial Revolution [4차 산업혁명 대응을 위한 기본 정책방향], p. 25.

[10] "On-site Visit to the Artificial Intelligence Conference" [인공지능 회의 현장 방문], https://www.gov.kr/portal/gvrnPolicy/view/H1910000000129826?policyType=G00301&srchTxt=%EC%9D%B8%EA%B3%B5%20%EC%A7%80%EB%8A%A5%20%EC%9C%A4%EB%A6%AC

[11] Ringe, W.-G. and Ruof, C. (2020), "Regulating Fintech in the EU: The Case for a Guided Sandbox", European Journal of Risk Regulation, Vol. 11, No. 3, pp. 604–29.

[12] Joint Ministries of the Republic of Korea (2019), AI National Strategy [인공지능 국가전략], pp. 19–20.

[13] Translated and recreated from Joint Ministries of the Republic of Korea (2019), AI National Strategy [인공지능 국가전략], p. 25.

[14] Kim, G. (2019), From Factory Girls to K-Pop Idol Girls, p. 5, Lanham: Lexington Books.

[15] Biesbroek, R. et al. (2015), “Opening up the Black Box of Adaptation Decision-Making”, Nature Climate Change, Vol. 5, No. 6, pp. 493–94.

[16] "Hwang Sun-ja, the Only Labor Representative on the Presidential Committee on the Fourth Industrial Revolution: 'The Committee's Recommendations Reflect Only the Business Community's Position, as the Chairperson Is an IT Entrepreneur'" [4차산업혁명위원회 유일한 노동계 위원 황선자씨 "위원회 권고문, IT 기업가인 위원장이 경영계 입장만 반영"], http://news.khan.co.kr/kh_news/khan_art_view.html?artid=201911052217005&code=940702.

[17] Presidential Committee on the Fourth Industrial Revolution (2019), Recommendations for the Government regarding the Fourth Industrial Revolution [4차 산업 혁명 대정부 권고안], p. 10.
