This is the author’s accepted manuscript of a book chapter that has been published in Andrea Guzman (Ed.), Human-Machine Communication: Rethinking Communication, Technology, and Ourselves (pp. 221–236). New York: Peter Lang, 2018. Please do not circulate.

Ars Ex Machina: Rethinking Responsibility in the Age of Creative Machines

David J. Gunkel – Northern Illinois University (USA)

In May 2015, National Public Radio (NPR) staged a rather informative competition of (hu)man versus machine. In this 21st-century remake of that legendary race between John Henry and steam power, NPR reporter Scott Horsley went up against Automated Insights’s Wordsmith, a natural language generation (NLG) algorithm designed to analyze patterns in big data and turn them into human-readable narratives. The rules of the game were simple: “Both contenders waited for Denny’s, the diner company, to come out with an earnings report. Once that was released, the stopwatch started. Both wrote a short radio story and got graded on speed and style” (Smith, 2015). Wordsmith crossed the finish line in just two minutes with an accurate but rather utilitarian composition. Horsley’s submission took longer to write—a full seven minutes—but was judged to be a more stylistic presentation of the data. What this little experiment demonstrated is not what one might expect. It did not show that the machine is somehow better than or even just as good as the human reporter. Instead it revealed how these programs are just good enough to begin seriously challenging human capabilities and displacing this kind of labor. In fact, when Wired magazine asked Kristian Hammond, co-founder of Narrative Science (Automated Insights’s main competitor in the NLG market), to predict the percentage of news articles that would be written algorithmically within the next decade, his answer was a sobering 90 percent (Ford, 2015, p. 85).

For scholars of communication, however, this demonstration also points to another, related issue, which is beginning to gather interest and momentum in studies of digital journalism (cf. Carlson, 2015; Clearwall, 2014; Dörr & Hollnbucher, 2016; Lewis & Westlund, 2015; Montal & Reich, 2016). Written text is typically understood as the product of someone—
an author, reporter, writer—who has, it is assumed, something to say or to communicate by way of the written document. It is clear, for instance, who “speaks” through the instrument of the text composed by the human reporter. It is Scott Horsley. He is responsible not just for writing the story but also for its formal style and content. If it is a well-written story, it is Horsley who gets the accolade. If it contains formal mistakes or factual inaccuracies, it is Horsley who is held accountable. And if we should want to know about what the reporter wrote and why, Horsley can presumably be consulted and will be able to respond to our query. This conceptualization is not just common; it has the weight of tradition behind it. In fact, it goes all the way back to Plato’s Phaedrus, where writing—arguably the first information technology—was situated as both the derived product of spoken discourse and a mute and abandoned child, always in need of its father’s authority to respond for it and on its behalf (Plato, 1982, p. 275d–e).

But what about the other story, the one from Automated Insights’s Wordsmith? Who or what speaks in a document that has been written—or assembled or generated (and the choice of verb, it turns out, matters here)—by an algorithm? Who or what is or can be held responsible for the writing? Who or what can respond on its behalf? Is it the corporation that manufactures and distributes the software? Is it the programmers at the corporation who were hired to write the software instructions? Is it the data to which the program had access? Is it the user of the application who set it up and directed it to work on the data? Or is it perhaps Wordsmith itself? The problem, of course, is that these questions are not so easily resolved. It is not entirely clear who or what (if anything) speaks in and for this text.1 As Montal and Reich (2016) have demonstrated in their study “I, Robot. You, Journalist. Who is the Author?” the development and implementation of “automated journalism” has resulted in “major discrepancies between the perceptions of authorship and crediting policy, the prevailing attribution regimes, and the scholarly literature” (p. 1).

This uncertainty regarding authorship and attribution opens up a significant “responsibility gap” that affects not only how we think about who or what communicates but also how we understand and respond to questions concerning responsibility in the age of increasingly creative machines.2 These questions are central to, if not definitive of, the project of human-machine communication (HMC). Unlike the dominant computer-mediated communication (CMC) paradigm, which restricts computers, robots, and other kinds of technologies to the intermediate position of being mere instruments of human expression and
message transmittal (Gunkel, 2012a), HMC research investigates whether and to what extent machines are able to be communicative agents in their own right. This chapter investigates the opportunities and challenges that increasingly creative machines pose for our understanding of who or what communicates, who or what can be responsible for generating original content, and who or what occupies the position of “Other” in social interactions and relationships. Since these questions are largely philosophical, the method of the examination will also be philosophical in its orientation, procedures, and objective.

Responsibility 101

The “concept of responsibility,” as Paul Ricœur (2007) pointed out in his eponymously titled essay, is anything but clear and well-defined. Although the classical juridical usage of the term, which dates back to the nineteenth century, seems rather well-established—with “responsibility” characterized in terms of both civil and penal obligations (either the obligation to compensate for harms or the obligation to submit to punishment)—the general concept is confused and somewhat vague.

In the first place, we are surprised that a term with such a firm sense on the juridical plane should be of such recent origin and not really well established within the philosophical tradition. Next, the current proliferation and dispersion of uses of this term is puzzling, especially because they go well beyond the limits established for its juridical use. The adjective “responsible” can complement a wide variety of things: you are responsible for the consequences of your acts, but also responsible for others’ actions to the extent that they were done under your charge or care. . . . In these diffuse uses the reference to obligation has not disappeared, it has become the obligation to fulfill certain duties, to assume certain burdens, to carry out certain commitments. (Ricœur, 2007, pp. 11–12)

Ricœur (2007) traces this sense of the word through its etymology (hence the subtitle of the essay, “A Semantic Analysis”) to “the polysemia of the verb ‘to respond’,” which denotes “to answer for . . .” or “to respond to (a question, an appeal, an injunction, etc.)” (p. 12). Responsibility, then, involves being able to respond and/or to answer for something—some decision, action, or
occurrence that I have either instituted directly by myself or that has been charged or assigned to someone or something else under my direction or care.

This characterization is consistent with the development of the concept of the author, which, as Roland Barthes (1978, pp. 142–143) argued, is not some naturally occurring phenomenon but a deliberately fabricated authority figure introduced and developed in modern European thought. The modern figure of the author, as Michel Foucault (1984) explains, was originally instituted in order to respond to a perceived gap in responsibility. Because a written text is, as Socrates had initially described it (Plato, 1982), cut off from its progenitor and in circulation beyond his (“his” insofar as Socrates had characterized the author as a “father”) control or oversight, the authorities (governments or the church) needed to be able to identify and assign responsibility to someone for what was stated in the text. As Foucault (1984) explains, the author was a figure of “penal appropriation.” “Texts, books, and discourses really began to have authors (other than mythical, ‘sacralized’ and ‘sacralizing’ figures) to the extent that authors became subject to punishment, that is, to the extent that discourses could be transgressive” (p. 108). In other words, texts come to be organized under the figure of an author in order for the authorities to be able to identify who was to be held accountable for a published statement, so that one would know who could be questioned or who could respond on behalf of the text, and who could, therefore, be punished for perceived transgressions.

Instrumental Theory

Accommodating technology to this way of thinking is neither difficult nor complicated. The pen and paper, the paint brush and oil paint, the electric guitar and amplifier are all technologies—essentially tools that are available to and that are used by a human artist or artisan. What ultimately matters is not the equipment used but how these items are employed and by whom to produce what kind of artifact or experience. It is, in other words, not the tool but the user of the tool who is ultimately responsible for what is done or not done with a particular technological instrument. This seemingly intuitive and common-sense way of thinking is persuasive precisely because it is structured and informed by the answer that is typically supplied in response to the question concerning technology. “We ask the question concerning technology,” Martin Heidegger (1977) explains, “when we ask what it is. Everyone knows the two statements that answer our question. One says: Technology is a means to an end. The other
says: Technology is a human activity” (pp. 4–5). According to Heidegger’s analysis, the presumed role and function of any kind of technology—whether it be a simple hand tool, jet airliner, or a sophisticated robot—is that it is a means employed by human users for specific ends. Heidegger terms this particular characterization of technology “the instrumental definition” and indicates that it forms what is considered to be the “correct” understanding of any kind of technological contrivance.3

As Andrew Feenberg (1991) summarizes it, “The instrumentalist theory offers the most widely accepted view of technology. It is based on the common-sense idea that technologies are ‘tools’ standing ready to serve the purposes of users” (p. 5). And because a tool or instrument “is deemed ‘neutral,’ without valuative content of its own,” a technological artifact is evaluated not in and of itself, but on the basis of the particular employments that have been decided by its human designer or user. Consequently, technology is only a means to an end; it is not and does not have an end in its own right. As Jean-François Lyotard (1993) accurately summarized it in The Postmodern Condition:

Technical devices originated as prosthetic aids for the human organs or as physiological systems whose function it is to receive data or condition the context. They follow a principle, and it is the principle of optimal performance: maximizing output (the information or modification obtained) and minimizing input (the energy expended in the process). Technology is therefore a game pertaining not to the true, the just, or the beautiful, etc., but to efficiency: a technical “move” is “good” when it does better and/or expends less energy than another. (p. 33)

According to Lyotard’s analysis, a technological device, whether it be a corkscrew, a piano, or a computer, is a mere instrument of human action. It, therefore, does not in and of itself participate in the big questions of truth, justice, or beauty. It is simply and indisputably about efficiency. A particular technological innovation is considered “good” if, and only if, it proves to be a more effective instrument (or means) for accomplishing a humanly defined end.

Characterized as a tool or instrument of human endeavor, technical devices are not considered the responsible agent of actions that are performed with or through them. This insight
is a variant of one of the objections noted by Alan Turing in his agenda-setting paper on machine intelligence: “Our most detailed information of Babbage’s Analytical Engine,” Turing (1999) wrote, “comes from a memoir by Lady Lovelace (1842). In it she states, ‘The Analytical Engine has no pretensions to originate anything. It can do whatever we know how to order it to perform’ (her italics)” (p. 50). This clarification—what Turing called “Lady Lovelace’s Objection”—has often been deployed as the basis for denying independent agency or autonomy to computers, robots, and other mechanisms. Such instruments, it is argued, only do what we have programmed them to perform. Technically speaking, therefore, everything is “wizard of Oz” technology.4 No matter how seemingly independent or autonomous a technical system is or is designed to appear, there is always, somewhere and somehow, someone “behind the curtain,” pulling the strings and, as such, ultimately responsible and able to respond for what happens (or does not happen) with the technological instrument.

The New Normal

The instrumental theory not only sounds reasonable, it is obviously useful. It is, one might say, instrumental for responding to the opportunities and challenges made available with increasingly complex technological systems and devices. This is because the theory has been successfully applied not only to simple devices like hammers, paint brushes, and electric guitars but also to sophisticated information technologies and systems, like computers, artificial intelligence applications, robots, etc. But all of that may be over, precisely because of a number of recent innovations that challenge the explanatory capabilities of the instrumental theory by opening up significant gaps in the identification and assignment of responsibility.

Machine Learning

Machine capabilities are typically tested and benchmarked with games, like the race between Scott Horsley and Wordsmith with which we began. From the beginning, in fact, the defining condition of machine intelligence was established with a game. Although the phrase “artificial intelligence” (AI) is the product of an academic conference organized by John McCarthy at Dartmouth College in the summer of 1956, it is Alan Turing’s 1950 paper, “Computing Machinery and Intelligence,” and its “game of imitation” that defines and characterizes the field. According to Turing, the immediate and seemingly correct place to begin,
namely with the question “Can machines think?” was considered too ambiguous and ill-defined. For this reason, Turing changed the mode of inquiry. He replaced the question “Can machines think?” with a demonstration that took the form of a kind of parlor game involving deliberate deception and mistaken identity.

The new form of the problem can be described in terms of a game which we call the “imitation game.” It is played with three people, a man (A), a woman (B), and an interrogator (C) who may be of either sex. The interrogator stays in a room apart from the other two. The object of the game for the interrogator is to determine which of the other two is the man and which is the woman. (Turing, 1999, p. 37)

Turing then makes a small modification to this initial set-up by swapping out one of the human participants. “What will happen,” Turing (1999) asks, “when a machine takes the part of A in this game? Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman?” It is this question, Turing concludes, that “replaces” the initial question “Can machines think?” (p. 38).

Since Turing’s introduction of the “game of imitation,” AI development and achievement has been marked and measured in terms of games and human/machine competitions. Already in the late 1950s, Arthur Samuel created a rudimentary application of “machine learning” (a term Samuel fabricated and introduced in 1959) that learned how to play and eventually mastered the game of checkers. In 1997, IBM’s Deep Blue famously defeated Garry Kasparov in the game of chess, compelling Douglas Hofstadter (2001), who had previously rejected this possibility, to retract his original prediction:

We now know that world-class chess-playing ability can indeed be achieved by brute force techniques—techniques that in no way attempt to replicate or emulate what goes on in the head of a chess grandmaster. Analogy-making is not needed, nor is associative memory, nor are intuitive flashes that sort wheat from chaff—just a tremendously wide and deep search, carried out by superfast, chess-specialized hardware using ungodly amounts of stored knowledge. (p. 35)
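The “tremendously wide and deep search” that Hofstadter describes is, at its core, the minimax procedure: enumerate every line of play to the end of the game and back up win/loss values from the leaves. The following sketch illustrates the principle on a deliberately trivial game (a small Nim variant standing in for chess); it is a toy illustration of exhaustive tree search, not Deep Blue’s actual chess-specialized implementation, which added alpha-beta pruning, heuristic evaluation of non-terminal positions, and custom hardware.

```python
def minimax(stones, maximizing):
    """Exhaustively search the game tree of a toy Nim game: players
    alternately remove 1 or 2 stones, and whoever takes the last stone
    wins. Returns +1 if the maximizing player can force a win from this
    position, -1 otherwise."""
    if stones == 0:
        # No stones left: the previous player took the last stone and
        # won, so the player now to move has lost.
        return -1 if maximizing else 1
    results = [minimax(stones - take, not maximizing)
               for take in (1, 2) if take <= stones]
    return max(results) if maximizing else min(results)

def best_move(stones):
    """Pick the removal (1 or 2 stones) that backs up the best value
    for the player to move."""
    return max((take for take in (1, 2) if take <= stones),
               key=lambda take: minimax(stones - take, False))
```

With seven stones, for example, best_move(7) returns 1, leaving the opponent a position from which every continuation loses. Deep Blue applied the same back-up logic to positions it could not search to the end, substituting a heuristic evaluation function for the win/loss values at its search horizon.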
Despite initial appearances, chess—and this match in particular—was no mere game. A lot had been riding on it, mainly because it had been assumed that grand-master chess playing required a kind of genius—the kind of genius that is the defining condition of human exceptionalism. “To some extent,” Kasparov explained, “this match is a defense of the whole human race. Computers play such a huge role in society. They are everywhere. But there is a frontier that they must not cross. They must not cross into the area of human creativity. It would threaten the existence of human control in such areas as art, literature, and music” (Kasparov, 1996, quoted in Hofstadter, 2001, p. 40). But chess was just the beginning. Fourteen years later, IBM’s Watson cleaned up in the game show Jeopardy! Then in 2016, there was AlphaGo, a Go-playing algorithm developed by Google DeepMind, which took 4 out of 5 games against one of the most celebrated human players of this notoriously difficult board game.

AlphaGo is unique in that it employed a hybrid architecture that combines aspects of GOFAI programming,5 like the tree-search methodology that had been utilized by both Deep Blue and Watson, with deep neural network machine learning capabilities derived from and built upon the pioneering work of Arthur Samuel. As Google DeepMind (2016) explained, the system “combines Monte-Carlo tree search with deep neural networks that have been trained by supervised learning, from human expert games, and by reinforcement learning from games of self-play.” For this reason, AlphaGo does not play the game of Go by simply following a set of cleverly designed moves fed into it by human programmers. It is designed to formulate its own instructions and to act on these “decisions.” As Thore Graepel, one of the creators of AlphaGo, has explained: “Although we have programmed this machine to play, we have no idea what moves it will come up with. Its moves are an emergent phenomenon from the training. We just create the data sets and the training algorithms. But the moves it then comes up with are out of our hands” (Metz, 2016c). Consequently, AlphaGo is intentionally designed to do things that its programmers could not anticipate or even understand. And this is, for Hofstadter at least, the point at which machines begin to approach what is typically called “creativity.” “When programs cease to be transparent to their creators, then the approach to creativity has begun” (Hofstadter, 1979, p. 670).

Indicative of this was the now famous move 37 from game 2. This decisive move was unlike anything anyone had ever seen before. It was not just unpredicted but virtually
unpredictable, so much so that many human observers thought it must have been an error or mistake (Metz, 2016b). But it turned out to be the pivotal play that eventually gave AlphaGo the game. As Matt McFarland (2016) described it, “AlphaGo’s move in the board game, in which players place stones to collect territory, was so brilliant that lesser minds—in this case humans—couldn’t initially appreciate it” (p. 1). And Fan Hui (2016), who has undertaken a detailed analysis of all five games against Lee Sedol, has called AlphaGo’s playing “beautiful” (Metz, 2016a). “Unconstrained by human biases and free to experiment with radical new approaches,” Hui (2016) explains, “AlphaGo has demonstrated great open-mindedness and invigorated the game with creative new strategies” (p. 1).

Deep machine learning systems, like AlphaGo, are intentionally designed and set up to do things that their programmers cannot anticipate or answer for. To put it in colloquial terms, AlphaGo is an autonomous (or at least semi-autonomous) computer system that seems to have something of “a mind of its own.” And this is where things get interesting, especially when it comes to questions regarding responsibility. AlphaGo was designed to play Go, and it proved its abilities by beating an expert human player. So, who won? Who gets the accolade? Who actually beat Lee Sedol? Following the dictates of the instrumental theory of technology, actions undertaken with the computer would need to be attributed to the human programmers who initially designed the system and are capable of answering for what it does or does not do. But this explanation does not necessarily sit well for an application like AlphaGo, which was deliberately created to do things that exceed the knowledge and control of its human designers. In fact, in most of the reporting on this landmark event, it is not Google or the engineers at DeepMind who are credited with the victory. It is AlphaGo. In published rankings, for instance, it is AlphaGo that is named as the number two player in the world (Go Ratings, 2016).

Computational Creativity

AlphaGo is just one example of what can be called computational creativity. “Computational Creativity,” as defined by Simon Colton and Geraint A. Wiggins (2012), “is a subfield of Artificial Intelligence (AI) research where we build and work with computational systems that create artefacts and ideas” (p. 21). Wordsmith and the competing product Quill from Narrative Science are good examples of this kind of effort in the area of storytelling and the writing of narratives. Similar innovations have been developed in the field of music composition
and performance, where algorithms and robots produce what one would typically call (or at least be tempted to call) “original works.” In classical music, for instance, there is David Cope’s Experiments in Musical Intelligence (EMI, pronounced “Emmy”) and its successor Emily Howell, which are algorithmic composers capable of analyzing existing compositions and generating new, original scores that are comparable to and in some cases indistinguishable from the canonical works of Mozart, Bach, and Chopin (Cope, 2001). In music performance, there is Shimon, a marimba-playing jazz-bot from Georgia Tech that is not only able to improvise with human musicians in real time but “is designed to create meaningful and inspiring musical interactions with humans, leading to novel musical experiences and outcomes” (Georgia Tech, 2013; Hoffman & Weinberg, 2011). And in the area of visual art, there is Simon Colton’s The Painting Fool, an automated painter that aspires to be “taken seriously as a creative artist in its own right” (Colton, 2012, p. 16).

But designing systems to be creative immediately runs into a problem similar to that originally encountered by Turing. As Amílcar Cardoso, Tony Veale, and Geraint A. Wiggins (2009) explicitly recognize, “creativity is an elusive phenomenon” (p. 16). For this reason, researchers in the field of computational creativity have introduced and operationalized a rather specific formulation to characterize their efforts: “The philosophy, science and engineering of computational systems which, by taking on particular responsibilities, exhibit behaviours that unbiased observers would deem to be creative” (Colton & Wiggins, 2012, p. 21). The operative term in this characterization is responsibility. As Colton and Wiggins (2012) explain, “the word responsibilities highlights the difference between the systems we build and creativity support tools studied in the HCI community and embedded in tools such as Adobe’s Photoshop, to which most observers would probably not attribute creative intent or behavior” (p. 21, emphasis in the original). With a software application like Photoshop, “the program is a mere tool to enhance human creativity” (Colton, 2012, pp. 3–4); it is an instrument used by a human artist who is and remains responsible for creative decisions and for what comes to be produced by way of the instrument. Computational creativity research, by contrast, “endeavours to build software which is independently creative” (Colton, 2012, p. 4).

This requires shifting more and more of the responsibility from the human user to the mechanism. As Colton (2012) describes it, “if we can repeatedly ask, answer, and code software to take on increasing amounts of responsibility, it will eventually climb a meta-mountain, and
begin to create autonomously for a purpose, with little or no human involvement” (Colton, 2012, p. 13). Indicative of this shift in the position and assignment of responsibility is the website for The Painting Fool, which has been deliberately designed so that it is the computer program that takes responsibility for responding on its own behalf:

About me. I’m The Painting Fool: a computer program, and an aspiring painter. The aim of this project is for me to be taken seriously—one day—as a creative artist in my own right. I have been built to exhibit behaviours that might be deemed as skillful, appreciative and imaginative. My work has been exhibited in real and online galleries; the ideas behind my conception have been used to address philosophical notions such as emotion and intentionality in non-human intelligences; and technical papers about the artificial intelligence, machine vision and computer graphics techniques I use have been published in the scientific literature. (The Painting Fool, 2017)

This rhetorical gesture, as Colton (2012) has pointed out, “is divisive with some people expressing annoyance at the deceit and others pointing out—as we believe—that if the software is to be taken seriously as an artist in its own right, it cannot be portrayed merely as a tool which we have used to produce pictures” (p. 21). The question Colton does not ask or endeavor to answer is: Who composed this explanation? Was it generated by The Painting Fool, which has been designed to offer some explanation of its own creative endeavors? Or is it the product of a human being, like Simon Colton, who takes on the responsibility of responding for and on behalf of the program?

Although the extent to which one might want to assign artistic responsibility to these mechanisms remains a contested and undecided issue, what is not debated is the fact that the rules of the game appear to be in flux and that there is increasing evidence of a responsibility gap. Even if this is, at this point in time, what Mark Riedl and others have called mere “imitation” and not real creativity (Simonite, 2016)—which is, we should note, just another version or an imitation of John Searle’s (1984) Chinese Room argument—the work of the machine compels us to reconsider how responsibility comes to be assigned and, in the process, challenges how we typically respond to the questions concerning responsibility.
Conclusions

In the end, what we have is a situation where our theory of technology—a theory that has considerable history behind it and that has been determined to be as applicable to simple hand tools as it is to complex technological systems—seems to be unable to respond to or answer for recent developments in machine learning and computational creativity, where responsibility is increasingly attributable and attributed to the machine. Although this certainly makes a difference when deciding matters of legal and moral obligation, it is also crucial in situations regarding creativity and innovation. Creativity, in fact, appears to be the last line of defense in holding off the impending “robot apocalypse.” And it is not just Kasparov who thinks there is a lot to be lost to the machines. According to Colton and Wiggins (2012), mainstream AI research has also marginalized efforts in computational creativity. “Perhaps,” they write, “creativity is, for some proponents of AI, the place that one cannot go, as intelligence is for AI’s opponents. After all, creativity is one of the things that makes us human; we value it greatly, and we guard it jealously” (p. 21). So the question that remains to be answered is how we can or should respond to the opportunities/challenges of ars ex machina.

We can, on the one hand, respond as we typically have, dispensing with these recent technological innovations as just another instrument or tool of human action. This approach has been successfully modeled and deployed in situations regarding moral and legal responsibility and is the defining condition of computer ethics. “Computer systems,” Deborah Johnson (2006) writes,

are produced, distributed, and used by people engaged in social practices and meaningful pursuits. This is as true of current computer systems as it will be of future computer systems. No matter how independently, automatic, and interactive computer systems of the future behave, they will be the products (direct or indirect) of human behavior, human social institutions, and human decision. (p. 197)

Understood in this way, computer systems, no matter how automatic, independent, or seemingly autonomous they may become, are not and can never be autonomous, independent agents (Johnson, 2006, p. 203). They will, like all other technological artifacts, always and forever be
instruments of human value, decision making, and action. When something occurs by way of a machine—whether for good or ill—there is always someone—some human person or persons—who can respond for it and be held responsible.6

The same argument could be made for seemingly creative applica