This is an edited excerpt from Giorgi Vachnadze’s book Christian Eschatology of Artificial Intelligence published by Becoming Press.
At the advent of the 20th century, before digital computers were invented, ‘a computer’ was quite literally a person who calculated. Computers were mostly women. It wasn’t until World War II that the shift in terminology took place, as calculative practices would increasingly be delegated to computational machines. The following chapter draws from Nancy Katherine Hayles’s book My Mother Was a Computer [1], whose title was in turn borrowed from Anne Balsamo’s Technologies of the Gendered Body [2], leading to my own variation of the phrase used in the subtitle of this article. Not unlike my own mother, who spent 10 years of her life working as an accountant, Balsamo’s mother was also a computer. There is more than one reason why this title, if not shocking, should facilitate a general sense of nauseating discomfort. The phrase is inherently provocative as it purposefully violates a boundary, a psychological if not an ethical threshold marking a moment when the ‘machinations’ of the computational regime would become, and have become, very uncanny. The statement is powerful as it immediately evokes the dangers of computational cosmology. Subjectivity marks the limit; when we think of ourselves and our loved ones in terms of microprocessors or ‘if-then’ statements, something seems to have gone terribly awry.
Hayles’s work offers an extended commentary on the computational episteme (or the Computational Universe, as she writes) with a critical interrogation of contemporary research paradigms in the field. More importantly, My Mother Was a Computer aims to trace the process whereby human enculturation, now already only partially mediated through the child’s actual relationship with the mother, becomes increasingly a matter of cybernetic subjectivation whereby the maternal function is gradually overtaken by digital devices and eventually automated systems like AI. The title therefore is meant as a challenge, a test perhaps, to the reader who really thinks she believes in the promises of technological progress, when the computational regime begins to colonize the most intimate domains of her human lifeworld. After all, these are the kinds of statements we would begin to hear non-ironically as we entered the post-human milieu. “My mother is an AI,” my own variation on Hayles’s phrase, could become one of many unsettling confessional trigger words for the psychoanalysis of the future, when hybrid subjects will start to emerge against the background of bio-engineering experiments and human-AI interactivity.
The algorithmic regime operates through the new ‘canon and scripture’ of programming languages. Code is the new Bible as it operates within a multiple discursive constellation that includes the juridical, economic, social, and political domains of human life and communication. Code threatens to replace traditional narrative structures while activating new technologies of self-constitution and subject-formation. Practices of self-writing (Swonger, 2006) that date at least as far back as the 4th century B.C. are being substituted by the Digital Hypomnemata [3] (Mutchinick, 2021) and the “cultures” of self-programming. The mutative intertwinement between language, code, and bionic embodiment creates the conditions for the production of novel hybrid subjects. “… code and language,” writes Hayles, “operate in significantly different ways” (2010). First, code is used as a communication relay between multiple kinds of “persons” – code is used to interact with both analogue and digital (post-biological) subjects. Second, coding, unlike natural language, is still an expert dialect in the sense that it requires special training to write and read programming languages. And third, artificial languages, by virtue of being readily formatted, are easily inserted into other technical (capitalist) apparatuses like accounting, digital marketing, and mathematical economics. In short: code acts as an effective catalyst for the spread and normalization of economic rationality. It operates as an intermediary between the Homo Oeconomicus and the Homo Algorismus.
If English is the language of Global Capitalism, code is the language of the Universal Computational Regime. And just like the hegemonic Americanised English of mass-culture entertainment, artificial languages still carry the weight, the spectral presuppositions and superstitions, of Western science, religion, and metaphysics. Hayles draws on Jacques Derrida’s writings to elaborate on these themes. Just like Koopman (2019), Hayles underscores how computational and algorithmic practices are operative both within and outside of digital computers; they comprise both the analogue and the digital domain. And similar to Shanker (2002) and others, Hayles uses Turing’s still dominant definition of computability: a computable system starts out with a finite number of the most basic mechanical procedures and eventually ends up rebuilding the most complex systems, living organisms, human thinking, and the entire cosmos from elementary algebraic operations. This is also known, as we discussed in the previous chapters, as the mechanist thesis.
Think of cellular automata (Delorme, 1999): they belong among the most impressive inventions to come out of the computational paradigm. If CAs ever achieve the same levels of complexity we encounter in nature, this will be one way to prove the validity of the mechanist thesis and the relevance of computational epistemology. The underlying rationale behind experimental research on cellular automata is that any system (biological or other) is reducible to, and therefore can be rebuilt from, a binary system of “On” and “Off” configurations (Ilachinski, 2001). The interesting thing is how little human creativity and how much human labor is required to create complex autonomous systems. Machine Learning as a process is incredibly redundant, though the results are, if not ‘positive’ in the absolute and/or ethical sense of the word, at least impressive and highly influential. For better or worse, they shape our future. This marks an important difference between, for instance, the elegance of mathematics on the one hand and the unavoidable redundancy of ‘crunching numbers’ – the crucial component of AI research – on the other. The industrial hype rarely matches the boredom of the grind. Something that in and of itself is very interesting to consider, if not altogether suspicious. Next to industry, we have, of course, the military (Surma, 2024; Szabadföldi, 2021). Information is today the primary means whereby rich countries wage “war”, i.e. controlled extermination and genocide, against smaller and “underdeveloped” nations (Downey, 2024). Coded warfare is superior to centralized command due to the higher mobility and flexibility of networks. Hayles offers several examples of computational warfare that we will not be delving into for the moment.
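To make the mechanist intuition concrete, here is a minimal sketch, in Python, of an elementary cellular automaton (Wolfram’s Rule 110); the rule number, grid width, and number of generations are arbitrary choices of mine, not drawn from Hayles or Delorme. The point is simply that a row of binary “On”/“Off” cells updated by a single local rule already produces intricate, hard-to-predict global patterns.

```python
# A one-dimensional, two-state cellular automaton: each cell looks at itself and
# its two neighbours (8 possible configurations), and the chosen rule number's
# bits decide whether the cell is "On" (1) or "Off" (0) in the next generation.

RULE = 110  # an arbitrary but famously complex elementary rule

def step(cells):
    """Apply the rule once to every cell, wrapping around at the edges."""
    n = len(cells)
    nxt = []
    for i in range(n):
        left, centre, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        neighbourhood = (left << 2) | (centre << 1) | right  # a value from 0 to 7
        nxt.append((RULE >> neighbourhood) & 1)              # read that bit of RULE
    return nxt

if __name__ == "__main__":
    width, generations = 64, 32
    row = [0] * width
    row[width // 2] = 1  # a single "On" cell as the initial condition
    for _ in range(generations):
        print("".join("#" if c else "." for c in row))
        row = step(row)
```

Running the sketch prints successive generations as rows of "#" and "."; even from a single seed cell, the pattern quickly stops looking like anything the three-line rule would lead one to expect.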
The computational episteme, or cybernetic flesh, is quite distinct from both ancient Greek and Christian cosmologies (despite carrying the mythological attitudes and implications of both) in terms of the inherent simplicity of its fundamental ontology. The apparent, dazzling complexity of the computational world is sustained through the essential and redundant simplicity of binary operations: 1s and 0s. This initial redundancy is, however, counteracted by the unpredictable effects of computing achieved at higher levels of complexity. The basic rules may be simple to comprehend, but they serve as hypersensitive initial conditions that, in extreme cases, offer little to no information about the future states of the system. “Consequently, the Regime of Computation provides no foundations for truth and requires none, other than the minimalist ontological assumptions necessary to set the system running” (Hayles, 2010: 23). As discussed before, algorithms, unlike logical operations, contain black boxes and elements of uncertainty, and it is precisely the presence of erratic behaviour that makes them so interesting and effective.
The typical, if not symptomatic, feature of the computational regime is to look at all the other epistemic paradigms, i.e. literary, vitalist, organicist, social, anthropological, physicalist, or other, as sub-systems of the digital and algorithmic frameworks. No matter which philosophical ontology we choose, we are always reminded of the dominant order stating that our worldview can only be a subset, an alternative or specialized formulation, of the computational model. Hayles demonstrates that this need not be the case. In fact, there is little to no compelling evidence in favour of a Turing-Computable Universe; the question is at best undecided (Sprevak et al., 2017). Nonetheless, nature is being forced into electrical circuits and binary gates in a futile attempt to render everything rigid, one-dimensional, and quite literally derivative, “ensuring” thereby that the world remains predictable and governable. The emergence of digital computers has turned the entire multiplicity and phenomenological richness of human experience into a pixelated nightmare. Consumerism trains subjects to accept counterfeit worlds. Baudrillard famously refers to this aspect of consumer culture as hyperfunctionality (Baudrillard & Glaser, 1994), simulation and symbolic exchange (Buchanan, 2010b). Video games are an excellent example. Digital games subjectivate us into uncritical acceptance of the computational model; especially with increasingly powerful graphics cards, the simulations are indeed more real than the real – hyperreal (Thiry-Cherques, 2010). As virtual worlds become more convincing or in general “better” than the real world, the average consumer has little interest in maintaining a critical attitude or questioning the subtle and not-so-subtle forms of coercion built into them.
Speech, writing, and code – these are the three domains of which Hayles offers a detailed study and investigation in the second chapter of her work. As three distinct technologies of subjectivation and production of persons, speech, writing and code criss-cross and overlap in a multiplicity of diverse language games (Wittgenstein, 2010) – including games of power, control, and manipulation. “In the progression from speech to writing to code, each successor regime reinterprets the system(s) that came before, inscribing prior values into its own dynamics. Now that the information age is well advanced, we urgently need nuanced analyses of the overlaps and discontinuities of code with the legacy systems of speech and writing, so that we can understand how processes of signification change when speech and writing are coded into binary digits” (Hayles, 2010: 39). In and of themselves these technologies offer a compact summary of the history and genealogy that constitutes the modern subject. They operate within more or less distinct regimes of truth and modalities of social practice. Importantly enough, their mutual relationship does not constitute a straightforward, linear, historical-causal chain as these apparatuses changed and mutated from speech to writing to code. There is instead a non-linear and circular relationship where each would look back on its predecessor and revise its contents and techniques in novel forms. Writing has affected, changed and re-arranged the ways in which people speak. Similarly, code has instantiated a new way of both reading and writing. “Although speech and writing issuing from programmed media may still be recognizable as spoken utterances and print documents, they do not emerge unchanged by the encounter with code” (Hayles, 2010: 39). The three dispositifs therefore constitute mobile and overlapping feedback loops of constant mutual reinforcement, revision and incitement.
Each medium has resulted in an intensification of human experience and facilitated a substantive transformation in subjectivity. Writing is more than just recorded speech, and code has significantly modified the way we relate to ourselves in both oral and written form. The displacement of speech through writing allowed for its radical over-exposure and spatio-temporal overextension. Through the written medium speech was no longer confined to the time and place of its initial formulation. Writing has permanently modified human memory and communication. What makes coding, as a technology of recording and inscription, distinct is the nature of its addressees. Codes can be read by humans and machines alike. Hayles offers a structuralist and post-structuralist analysis of the computational regime through Saussurean linguistics and Derridian grammatology. One of the underlying questions addresses the strange and seemingly unexplainable complexity of all three mediums, but most importantly: how does the fundamental simplicity of programming languages (binary codes), unlike, say, the irreducible complexity found in writing and speech, allow for the emergence of moving structures that seem to imitate and successfully replicate the spoken and the written word? Put differently, if we assume that the multifaceted nature of human experience can more or less be captured in literary and poetic forms, but not through algorithms, then how does digital technology nonetheless allow such a convincing surface effect; such a compelling simulation of the lifeworld?
Let us start with Saussure. The central structuralist thesis claims that the signifier has nothing to do with the signified. There is no natural order of things and words. The sign is entirely arbitrary (Saussure, 1916). According to Saussure, the spoken word is the primordial form, with writing being nothing more than an extension of speech. Derrida rejects this position by claiming that writing has in fact overshadowed the original ontology of oral communication, enveloping it with its own metaphysics. There is, in a sense, a textuality of spoken discourse that remains irreducible to speech. In addition, Derrida criticizes the alleged independence of the sign, exposing the danger of such an abstraction, which occludes the material workings of human practices, institutions, and power-networks that determine the function of a given sign. The sign may be arbitrary in the absolute sense, but the genealogy of signs speaks of the nexus of struggles and conflicts that are being fought over the way signs are instituted, distributed, maintained, and contested. Hayles refers to this as the erasure of materiality [4]. The critique captures a significant portion of what orchestrates the debate between structuralists and post-structuralists. The placement and role of artificial languages within the complex relationship of structure versus materiality will be shown to be crucial to Hayles’s investigation.
Drawing on the work of Friedrich Kittler (1990, 1999), Hayles offers her own take on the matter: “In the context of code, then, what would count as signifier and signified? Given the importance of the binary base, I suggest that the signifiers be considered as voltages” (2010). Since the binary diagram stands essentially for the presence (1) or absence (0) of an electrical signal, the voltaic schematism seems quite ingeniously appropriate. What is remarkable in Hayles’s analysis (among other things) is the homogenous constitution: the signifier and the signified are essentially made of the same stuff. Higher levels of expressive complexity are achieved through the re-arrangement and multifaceted combination of the base-level binary structure. A post-structuralist linguistic automaton! “Hence the different levels of code consist of interlocking chains of signifiers and signifieds, with signifieds on one level becoming signifiers on another” (Hayles, 2010: 45). The differentiation between voltages remains partially analogous to Saussure’s elaboration on the importance of differences between the signifiers that make signs articulate.
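By way of illustration (a toy example of my own, not one Hayles gives), the same binary pattern can be re-read at successive levels, with the signified of one level serving as the signifier of the next: voltages read as bits, bits as numbers, numbers as characters, characters as a word addressed to a human reader.

```python
# The "voltages" below stand in for the presence (1) or absence (0) of a signal.
voltages = "01001000 01101001"

byte_values = [int(bits, 2) for bits in voltages.split()]  # bits signify numbers
characters = [chr(value) for value in byte_values]         # numbers signify characters
word = "".join(characters)                                 # characters signify a word

print(byte_values)  # [72, 105]
print(characters)   # ['H', 'i']
print(word)         # Hi
```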
Things change, however, through a Derridian reading of difference as discontinuity. For Derrida the difference between the signs plays an active role in the constitution of meaning. The entropic nature of communication and the creative force of noise in information processing come to occupy the center stage. How can this view shed light on the nature of code, which, under the normal interpretation, leaves no place for indeterminacy or ambiguity? There is no place for floating signifiers in a programming language; every change of voltage must have a precise and determined meaning. Computers embody a system of perfect referentiality.
A similar, almost utopian correspondence between world and language is quite expertly executed in Wittgenstein’s (2021) Tractatus Logico-Philosophicus. Every part of the proposition is perfectly correlated with the elements of a picture expressed through it, which in turn picks out only and exactly those objects in the world that are represented by the elements of the picture. The Tractatus is (ideally) the mirror image of a perfect computer. If anything can be stated at all, it can be stated with perfect clarity. “For the machine, obsolete code is no longer a competent utterance” (Hayles, 2010: 47). Satirically, we could claim along with Wittgenstein: “Whereof one cannot speak, thereof one cannot compute.” Unlike written languages, however, artificial languages offer their own materiality, which nonetheless maintains the same level of mobility as the abstraction itself. Coded signifiers do not step outside of their ontological field as they reach out to the signifieds. Both are located within the same matrix. This is an astounding feature in the ontology of software. The sign as well as the context of its application are clumped together within an apparatus that is abstract and concrete at the same time. Video games are worlds that we can carry around; they can be deactivated and then re-deployed in different spaces while maintaining the coordinates of their own world intact.
Ordinary language, unlike artificial languages, has the capacity to become functional through formalization, but it remains irreducible to this feature. Natural Language Processing software aims to overcome this difficulty, but so far without much success. One of the reasons for this, we could argue, is the underlying materiality of language. No matter how advanced the technology becomes, the Tractarian dream of a linguistic or audiovisual digital calculus will forever remain an ideological construct. The most important things lie beyond that which can be articulated clearly. Therein lies the danger of cybernetic flesh. The simulation will always require an ongoing input of human blood, sweat, and nerves; the independence and autonomy of the virtual, no matter how dazzling and compelling it may seem, will never become self-sufficient – it will always remain dependent and parasitic on the corporeality of embodied forms of life. Why, then, is the algorithmic regime so effective? This has more to do with power than with truth, or to put it in Foucauldian terms, it has to do with the illusion that true knowledge is independent of power. And where the illusion is most powerful, i.e. wherever a particular regime of truth becomes pervasive, power becomes most oppressive. Algorithms point to the seduction of a universal language, a cybernetic heaven where things are transparent, where bodies are incorruptible, and where all suffering, reduced to errors and glitches, will have been completely erased, enhanced, corrected, and optimized. But the “truth” is that code acts as a gatekeeper to meaning, effecting multiple forms of constraint on what can and cannot be said and done. These limitations, however, far from being legitimate or “logical”, and often justified through a rhetoric of safety and (national) security, are essentially the surface effects of narrow-minded and often unconscious attempts at domination and control, whereby life becomes bio-politically administered under the Computational Regime (Greenhalgh-Spencer, 2020).
Unlike writing and speech, or at least to a lesser degree, code comes with a built-in system of institutionalized practices. In order to use, understand and manipulate code, especially as a programmer, one must, to a significant degree, already be trained and subjectivated as a Homo Algorismus. This is why artificial languages and digital technology instantiate a whole new epistemic formation of power-relations and struggles. If your mother is not an AI, you are likely to be excluded from the game at the outset. In response to this tendency, Hayles offers an epistemic route of resistance, a line of flight so to speak, aiming to show that certain elements of human experience are irreducible to the computational paradigm. “…human cognition, although it may have computational elements, includes analog consciousness that cannot be understood simply or even primarily as digital computation” (Hayles, 2010: 55). Similarly, and to a significant extent, speech and writing constitute independent and semi-open discursive constellations with unique and irreducible features of their own. Together, speech, writing, and code form a complex heterotopia of both digital and analogue discursive practices that facilitate the production of post-modern subjects.
Hayles defines digitization as the process whereby something continuous and analog is made digital and discrete. Where Christian monastic discipline involved the hierarchization of human conduct (vices, pleasures, thoughts, actions, habits, inclinations, etc.), coding practices similarly subject speech and writing to an ordered system consisting of multiple layers of structured command series (Demircan et al., 2010). The cybernetic dispositif has placed us within a new constellation of computational embodiment where various aspects of our experience are rendered increasingly algorithmic. Each transformation from speech to writing to code introduces a novel element irreducible to the previous one, changing and modifying our conduct to a significant extent. Each mutation in discourse introduces a new series of rules and language games. Compiling languages and executable systems confront users with previously unknown problems that remain unique to programming; certain difficulties and challenges that have no counterpart in traditional media [5].
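As a minimal sketch of what digitization means in this sense (the signal, sample rate, and bit depth below are arbitrary assumptions of mine), a continuous signal can be made discrete twice over: first in time, by sampling it at fixed intervals, and then in value, by rounding each sample to one of a finite set of levels.

```python
import math

SAMPLE_RATE = 8   # samples per second: time made discrete
BIT_DEPTH = 3     # 2**3 = 8 amplitude levels: value made discrete
LEVELS = 2 ** BIT_DEPTH

def analog_signal(t):
    """Stand-in for a continuous, analog signal: a 1 Hz sine wave in [-1, 1]."""
    return math.sin(2 * math.pi * t)

def digitize(duration):
    samples = []
    for n in range(int(duration * SAMPLE_RATE)):
        t = n / SAMPLE_RATE                                        # a discrete instant
        level = round((analog_signal(t) + 1) / 2 * (LEVELS - 1))   # a discrete amplitude level
        samples.append(level)
    return samples

print(digitize(1.0))  # the continuous wave reduced to a short series of integers
```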
At higher levels of expressive complexity, when the machinic operations become difficult or impossible to reduce and interpret as simple shifts in voltage, programming languages begin to modify and change natural languages to an extent where one could speak of the production and gradual constitution of original statements – machinic “creativity”, so to speak. At this point, autonomous systems begin to approach the threshold of the Turing test, as it becomes very easy to anthropomorphize their behaviour and project human intentionality onto their actions (Coghlan, 2024). The line between complicated rule-following and causal determination becomes increasingly blurred as algorithms begin to exhibit emergent properties. But this is not a one-way street. Hayles points out two mutually reinforcing processes taking place. As autonomous systems become increasingly lifelike, it becomes easier to think of human intentionality in increasingly machinic terms. Satirically, then, technological artifacts may tend to “counter-project” machinomorphic features onto human behaviour, where rule-following and causal influence would collapse into the notion of learning, which would apply equally to human as well as machine learning (“calculating humans”). Hayles points out the dangers such mystifications would carry as subjects tend to identify with machines, adapting their behaviour to the rules of programming languages and becoming more and more susceptible to technical manipulation and control.
Moving along a separate line of inquiry: If ‘Mother is an AI’, then the Virgin Mary is a Neutrino. Or so we could claim together with Isabelle Stengers (2023) [6] as we venture further into the theological unconscious of the computational regime. In contrast to Stengers, however, we will not endeavor to “speak well” of either religion or the sciences. As clearly outlined in the preface, our purpose is neither to offend nor necessarily to go out of our way in respecting the feelings of others, but to raise some level of suspicion toward the current knowledge regime and perhaps even cultivate an original experience; a limit-experience that could help us glimpse beyond the confines of the current episteme. Let us begin another journey then, following the clearing made by Stengers, from the genealogy of the neutrino.
Neutrinos are quite abundant in the world. There is no shortage of these fermions in the Universe. Nonetheless, they are very difficult to detect (Reines, 1956), as they interact with matter only very weakly and leave hardly any trace of their activity. We will not delve into the technical details, nor the philosophical and metaphysical subtleties involved in questions concerning, for example, the experimental set-up required to detect the traces left by solar neutrinos (Argon-37 decay, etc.), the ontological status of indirect detection versus direct observation [7], or questions concerning realism and anti-realism in the philosophy of science and so forth; the reader is free to investigate these questions on their own [8]. What does seem interesting from an epistemological and genealogical point of view is the political function played by Davis’s Homestake Experiment (Davis, 1994), which served to bring together three distinct scientific disciplines: Nuclear Chemistry, Solar Physics, and the science of neutrinos. Needless to say, the unification of these formerly distinct fields was a financially lucrative enterprise, bringing the interests of multiple economic and state actors together. An alliance that was already prefigured theoretically in the work of John Bahcall.
Stengers continues to describe the set of complex historical and institutional processes that led to the formation of the scientific apparatus used to detect and measure the behaviour of the neutrino. The scientific construction of the neutrino, far from being an “objective observation”, was in reality a complicated field of conceptual and epistemologico-political interdisciplinary negotiations between various experts and stakeholders in the field, where each sought to impose their own methodology, interpretation, and set of conclusions and constraints on what was being “discovered”. Far from claiming that the neutrino was not discovered, or that neutrinos don’t exist, what Stengers aims to show (along with scholars like Latour) is that the “nature” of truth is such that “facts” operate more like fields than objects – like states of affairs, rather than stable entities. Truth, therefore, is always an arena, a hermeneutic battlefront where multiple artificial apparatuses, among which we could easily count speech, language, and code, produce reality as a heterogeneous assemblage. It is when we begin to think of this assemblage as a homogenous, discrete, and neatly arranged collection of indivisible facts waiting to be discovered, that things like neutrinos, electrons, quarks, electromagnetism, gravity, and so forth begin to acquire a mythological connotation. This is where the neutrino becomes a Holy Virgin, untainted by the sinful errors of uncertainty and the contingencies of human practice and speculation. It is in this sense precisely that the “neutrino” does not exist.
We can speak here of flesh in a way that is entirely convergent with Merleau-Ponty (2013). It is not just humans that have flesh, but the world as a whole is “enfleshed.” In the same sense, scientific discoveries, the so-called facts, measurements, observations, recordings, computer simulations, and so forth, are composed of pre-logical embodied practices that are ontologically prior to their crystallization into formal-deductive models. Science is first and foremost an activity. But the copacetic structure of theories, as well as oversimplified reflections on the scientific method [9], what Foucault terms the connaissance or the surface effect of the Will to Know, tend to disguise the analog practices that sustain and orchestrate their abstract, as well as digital and computational, counterparts. “The literature on the sciences is huge, but, as is the case with theology or apologetics in relation to religion, the assumption is made that science is taken for granted. Outside of this pious literature – a great part of which looks like the Manual for Inquisitors – one can count on the fingers of one hand the few excellent books of the memories and analytical writings written by the scientists themselves… As stimulating as these works might be, they are no remedy for the absence of inquiry, direct observation, and contradiction” (Latour, 2013). By refusing to acknowledge the underlying dynamism of human practice, science, just like Christian monasticism, is trying to resurrect the flesh of savoir, by placing the divine squarely within the material through the construction of a gapless series, abstract systems, and through the “formatting” of human corporeality. The average ‘science enjoyer’ does not know about the existence of the neutrino in the strong sense, in the sense of having been acquainted with the heterogenous multiplicity of the techniques used to construct the neutrino, as well as the history of scientific practices that led to that construction. Instead, they only have faith in the facticity of its “objective” existence. They believe in science and that is the real problem.
But the complexity of the flesh is not just the material aspect of the world that resists total formalization. It also includes the social aspect of modern science; multiple human agents coming together throughout the formation of a research program. What makes thinkers like Foucault, Stengers, and Latour stand out among the various historians, sociologists, and anthropologists of science is how they demonstrate, time and again, that power and knowledge are not complementary and mutually exclusive domains; they interweave and overlap in ways that make them near impossible to separate. “…the differentiation between the inside and outside of this territory is permanently under construction by scientists” (Stengers, 2023: 76). The very content of science, the very heart of “truth”, is heavily influenced by the political economy of research practices. The lines are blurred, the boundaries are fuzzy, and both “science” and the “scientist” are clusters rather than homogenous spaces: complex assemblages of subjectivating apparatuses and discourses. Every scientist today must take up the tasks and duties of an entrepreneur, a programmer, and a mystic, while at the same time hiding this multiplicity behind the glossy veneer of theory, all in order to attract investors and procure enough funds for the next big “revolution” in thinking.
Let us perform an abridged reading of Philippe Pignarre’s (2023) Latour-Stengers: An Entangled Flight in an attempt to tie everything together. As Pignarre notes, both authors share a similar methodological approach together with certain ethical concerns. They offer a pragmatically oriented paradigm used to investigate contemporary techno-scientific practices and their political-economic implications. What cuts across the literature is their common use of the term practices. A general emphasis on human practice holds tremendous subversive potential vis-à-vis cybernetic Capitalism. We mentioned before that the computational regime operates through a totalizing epistemology that refuses to acknowledge the existence of alternative modes of life and research. The computational universe wants to assimilate every ontological region of human experience and the world into the cybernetic episteme. One could in fact argue that the absorption of human activity, arguably the most complex process encountered in nature, would mark the threshold and a kind of final victory of the computational regime, as it would entail the possibility of “real” Artificial Intelligence: an autonomous system that could act and facilitate structural changes in its surroundings. However, this remains a distant dream, since many would argue that human embodied practices bear an irreducible complexity, which can only serve as a starting point, rather than the end goal, of any world-building endeavors. Through the notion of practice, Stengers and Latour seek to shake the foundations of those well-rounded mythological constructions often encountered in prestigious journals, physics textbooks, and within the lion’s share of the literature produced in the natural (as well as social) sciences. Pignarre refers to this as the process of de-epistemologization. An act of resistance in and of itself.
The (anti)ontology of practices offers a critical and deconstructive potential, whereby the study of human knowledge through its particularist lens leaves no stone unturned. Practices are up for grabs, in the sense that every and any social practice can be questioned and contested. Nothing is sacred and nothing is taken for granted. Especially the subject. The pragmatic analyses offered by Stengers and Latour can be, and are, effectively mobilized to offer a genealogy of the modern subject. We are what we do and we do what is and has been done to us. Similar to Foucault’s idea of a subject dispersed in discourse, the Latour-Stengers apparatus sees the subject as a complex actor-network assemblage sustained through multiple practices. Importantly enough, there is no reductionism in the method. As Pignarre notes, Latour-Stengers effectively offer the reader a set of irreductions. Having no intention of domesticating the turbulence of everyday life, they instead introduce complexity into oversimplified theoretical constructions, bringing theory closer to immediate experience. To how things really stand at the level of their appearance, at the level of Foucauldian savoir.
At face value, Latour’s method pairs incredibly well with AI research. Let us return to the question of actors and objects. It is almost as if Latour, by abolishing the sharp boundary between subjects and objects, has set the stage on which subjectivated or socialized objects, especially autonomous systems, will begin to emerge without much resistance. Perhaps artificial intelligence is a bit too literal an interpretation of ANT (Actor-Network Theory)? Does the shoe fit perfectly? All too perfectly? Precisely by noticing how frictionless the application can be, we should practice even more vigilance and suspicion. We have spoken of hybrid subjects with Nancy Hayles and various modes of subject-dispersion with Foucault, Stengers, and Latour. As objects begin to overtake subjects, either by evolving or merging with them, the stage is now set for us to discuss joint-cognitive systems, ANT, AI, and STS [10].
As Venturini [11] aptly puts it, “no one has so skillfully woven anthropology, philosophy, sociology, and political science to unfold the imbroglios of contemporary technology” (2023), referring to Latour. The first thing worth noticing, as Venturini takes care to emphasize, is that Latour himself took little interest in generative AI. “…he would have considered AI as a distraction, or worse he would have been appalled by the enormous quantity of energy consumed by these systems,” writes Venturini. Along with space travel and the metaverse, Latour claims these things are neither sustainable nor necessarily desirable as far as humanity as a whole is concerned. Venturini attempts to reconstruct what would have been Latour’s position concerning the political economy and the myriad of social practices concerned with AI. Far from an anti-science thinker, Latour was an amateur scientist himself who held great hopes and admiration for the potential use of technological constructions. The artificiality of scientific discourse, to him, was never a point of critique; quite the opposite: similar to architecture and engineering, constructions are very real, they have direct effects on human life and the planet at large. The historicization of the concept of nature, according to Latour, only made science all the more enticing. The problem with the work surrounding developments in AI, however, is that these practices could hardly be considered scientific.
Simulations and models comprise a specific subset of digitality. Digital practices are not reducible to computational paradigms. The analytical tools offered by AI would be less than appealing to Latour due to their tendency to render things too simple, straightforward, and homogenous. For Latour, research was a creative process, which implied a rigorous and methodical appreciation of heterogeneity in the universe. Latour does not seek to make analysis easier; quite the contrary, in his writing he always picks the path of most resistance. However, Venturini argues that Latour would be quite intrigued by AI as a tool for digital writing. But this admiration should not be taken at face value. “I believe he would have liked the idea of an interactive writing tool that is less obedient than a typewriter, as this may add another layer of resistance, and thus reflexivity, to the thinking of the researcher who uses it” (Venturini, 2023). We can see here that Latour’s hypothetical “excitement” over Natural Language Processing software is more of a subversive attitude. He would have enjoyed the challenge and the need for resistance that generative AI tools would create, as they would become obligatory, inseparable participants in the work of the researcher.
For Latour, via Venturini, AI would simply be another piece of technology inserted into the existing circulation of artifices, without thereby resulting in any exceptional change or modification to human life. These systems would be efficient, and they would naturally be accompanied by the appropriate actions and reactions, struggles, assimilations, critique, and resistance, just like any other piece of technology that preceded them. Alongside Venturini we can conclude that, as far as ANT is concerned, AI does fit neatly into the paradigm, however trivially so. No more than any other piece of equipment: shovels, hammers, desktop PCs, and so forth. Yes, AI is an agent; yes, it is more lifelike than most other objects… so what? Just like other objects, autonomous systems are effective actors precisely due to the fact that they are different from humans. “AIs are interesting not because they can succeed in Turing’s imitation game, but because they fail it productively” (Venturini, 2023). More powerful, faster, more efficient, often less prone to error (or prone to different and new kinds of errors), and so on. This makes them only more effective as actors. And it makes them more interesting as objects of investigation and study.
Latour offers his own critique of the Turing test. The criticism is directly related to the surface effect of knowledge-connaissance discussed above. No different from the neutrino, the Turing Machine is a sacralized object taken out of its embedded context and made to operate within an artificial, abstract thought experiment, where the computer is designed so as to “scam” the observer into thinking it’s a human being. The complexity of the network, the human milieu where real machines are actually situated, is completely neglected. “Hence the treachery of the Turing test. To pretend that there can be such a thing as an AI separated from the infrastructure that supports its existence, which includes many other machines as well as many other humans. And, symmetrically, that there can be such a thing as an equally isolated human tester” (Venturini, 2023). Just like NLP systems, human minds are culturally situated actors who act and think as collectives rather than individuals. Why would Latour be concerned with the autonomy of an algorithm, when he considers human autonomy no less of a myth? It is not the technology that poses ethical problems, but rather its being plugged into an exploitative system that feeds on click-bait advertisements, profit, and the narcissistic attention economy. Venturini extends the critique to the concept of Singularity and AI takeover. There cannot be a “Mad” AI that “turns against” humans, since there is no clear line demarcating “natural” from artificial systems. There are, however, humans turning against each other and waging genocide through the use and abuse of artificial Gospel [12].
[1] My Mother Was a Computer: Digital Subjects and Literary Texts.
[2] Technologies of the Gendered Body: Reading Cyborg Women.
[3] See also the chapter “Self Writing” in Ethics: Subjectivity and Truth: Essential Works of Michel Foucault edited by Paul Rabinow.
[4] My Mother Was a Computer, p. 43.
[5] For a detailed technical discussion of problems encountered in C++ programming, see My Mother Was a Computer, pp. 56-59.
[6] See Stengers’s Virgin Mary and the Neutrino: Reality in Trouble.
[7] For a detailed study of these questions, I would refer the reader to the “Sociology of Solar-Neutrino Detection”, also cited by Stengers, written by Trevor Pinch and offering a detailed description of the technical set-up.
[8] The Stanford Encyclopedia of Philosophy is an excellent starting point.
[9] Popper’s falsifiability hypothesis being a case in point.
[10] Science and Technology Studies.
[11] “Bruno Latour and Artificial Intelligence”.
[12] “The Gospel”: How Israel Uses AI to Select Bombing Targets in Gaza, https://www.theguardian.com/world/2023/dec/01/the-gospel-how-israel-uses-aito-select-bombing-targets.
Bibliography
Baudrillard, J. (1994). Simulacra and Simulation. Ann Arbor: University of Michigan Press.
Buchanan, I. (2010a). Actual and Virtual. In A Dictionary of Critical Theory. Oxford: Oxford University Press.
Buchanan, I. (2010b). Symbolic Exchange. In A Dictionary of Critical Theory. Oxford: Oxford University Press.
Coghlan, S. (2024). Anthropomorphizing Machines: Reality or Popular Myth? Minds and Machines, 34(3), 25.
Davies, H., McKernan, B., & Sabbagh, D. (2023). “The Gospel”: How Israel Uses AI to Select Bombing Targets in Gaza. The Guardian, https://www.theguardian.com/world/2023/dec/01/the-gospel-how-israel-uses-aito-select-bombing-targets.
Davis, R. (1994). A Review of the Homestake Solar Neutrino Experiment. Progress in Particle and Nuclear Physics, 32, 13-32.
Delorme, M. (1999). An Introduction to Cellular Automata: Some Basic Definitions and Concepts. In Cellular Automata: A Parallel Model (pp. 5-49). Dordrecht: Springer.
Demircan, E., Besier, T., Menon, S., & Khatib, O. (2010). Human Motion Reconstruction and Synthesis of Human Skills. In Advances in Robot Kinematics: Motion in Man and Machine (pp. 283-292). Dordrecht: Springer.
Downey, A. (2024). The Algorithmic Apparatus of Neocolonialism: Counter-Operational Practices and the Future of Aerial Surveillance. In Vision and Verticality: A Multidisciplinary Approach (pp. 109-121). Berlin: Springer.
Foucault, M. (1988). Technologies of the Self. In Technologies of the Self: A Seminar with Michel Foucault (Vol. 18) (p. 170). London: Tavistock.
Foucault, M. (1990). Maurice Blanchot: The Thought from Outside. In Foucault/Blanchot (pp. 7-60). New York: Zone Books.
Foucault, M. (1988). The History of Sexuality: The Use of Pleasure (Vol. 2). London: Vintage Books.
Foucault, M. (1988). The History of Sexuality: The Care of the Self (Vol. 3). London: Vintage Books.
Foucault, M. (1990). The History of Sexuality: The Will to Knowledge (Vol. 1). London: Vintage Books.
Foucault, M. (2005). The Order of Things. London: Routledge.
Foucault, M. (2007). Security, Territory, Population: Lectures at the Collège de France, 1977-78. London: Springer.
Foucault, M. (2008). Psychiatric Power: Lectures at the College de France, 1973-1974 (Vol. 1). New York: Palgrave Macmillan.
Foucault, M. (2011). The Courage of Truth. Berlin: Springer.
Foucault, M. (2013). Archaeology of Knowledge. London: Routledge.
Foucault, M. (2013). Lectures on the Will to Know. New York: Palgrave Macmillan.
Foucault, M. (2016). On the Government of the Living: Lectures at the Collège de France, 1979-1980 (Vol. 8). New York: Palgrave Macmillan.
Foucault, M. (2019). Ethics: Subjectivity and Truth. In Essential Works of Michel Foucault: 1954-1984. London: Penguin.
Foucault, M. (2021). The History of Sexuality: Confessions of the Flesh (Vol. 4). New York: Pantheon.
Greenhalgh-Spencer, H. (2020). Teaching within Regimes of Computational Truth. Philosophy of Education Archive, 686-699.
Hayles, N. K. (2010). My Mother Was a Computer: Digital Subjects and Literary Texts. Chicago: University of Chicago Press.
Ilachinski, A. (2001). Cellular Automata: A Discrete Universe. Singapore: World Scientific Publishing Company.
Kittler, F. A. (1990). Discourse Networks, 1800/1900. Stanford: Stanford University Press.
Kittler, F. A. (1999). Gramophone, Film, Typewriter. Stanford: Stanford University Press.
Koopman, C. (2019). How We Became Our Data: A Genealogy of the Informational Person. Chicago: University of Chicago Press.
Latour, B. (2007). Reassembling the Social: An Introduction to Actor-Network-Theory. Oxford: Oxford University Press.
Latour, B., & Woolgar, S. (2013). Laboratory Life: The Construction of Scientific Facts. Princeton: Princeton University Press.
Merleau-Ponty, M. (2013). Phenomenology of Perception. London: Routledge.
Mutchinick, J. (2021). Hypomnemata Digitali: Memoria, Automatizzazione e Controllo Sociale. Mechane, 1(2), 81-98.
Popper, K. (2002). The Logic of Scientific Discovery. London: Routledge.
Pignarre, P. (2023). Latour-Stengers: An Entangled Flight. Hoboken: John Wiley & Sons.
Shanker, S. G. (2002). Wittgenstein’s Remarks on the Foundations of AI. London: Routledge.
Sprevak, M., Copeland, J., & Shagrir, O. (2017). Is the Whole Universe a Computer? In The Turing Guide: Life, Work, Legacy. Oxford: Oxford University Press.
Stengers, I. (2023). Virgin Mary and the Neutrino: Reality in Trouble. Durham: Duke University Press.
Surma, J. (2024). Deep Learning in Military Applications. Safety & Defense, 10(1), 1-7.
Swonger, M. (2006). Foucault and the Hupomnemata: Self Writing as an Art of Life. Rhode Island: University of Rhode Island.
Szabadföldi, I. (2021). Artificial Intelligence in Military Application – Opportunities and Challenges. Land Forces Academy Review, 26(2), 157-165.
Thiry-Cherques, H. R. (2010). Baudrillard: Work and Hyperreality. RAE-Eletrônica, 9(1).
Venturini, T. (2023). Bruno Latour and Artificial Intelligence. Tecnoscienza – Italian Journal of Science & Technology Studies, 14(2), 101-114.
Wittgenstein, L. (1958). The Blue and Brown Books. Oxford: Blackwell.
Wittgenstein L. (1980). Remarks on the Philosophy of Psychology (Vol. 1). Chicago: University of Chicago Press.
Wittgenstein, L. (1983). Remarks on the Foundations of Mathematics. Cambridge: MIT Press.
Wittgenstein, L. (1989). Lectures on the Foundations of Mathematics. Chicago: University of Chicago Press.
Wittgenstein, L. (2010). Philosophical Investigations. Hoboken: John Wiley & Sons.
Wittgenstein, L. (2021). Tractatus Logico-Philosophicus. London: Anthem Press.
Giorgi Vachnadze is a Foucault and Wittgenstein scholar. He completed his Bachelor’s studies at New Mexico State University and received a Master’s qualification in philosophy at the University of Louvain. Former editor and peer-reviewer for the Graduate Student Journal of philosophy “The Apricot”, he has been published in multiple popular and academic journals worldwide. Vachnadze’s research focuses on philosophy of language and discourse analysis. Some of the questions and themes addressed in his work include: History of Combat Sports, Ancient Stoicism, Genealogies of Truth, Histories of Formal Systems, Genealogy of Science, Ethics in AI and Psychoanalysis, Media Archaeology, Game Studies, and more.