Konstantin Baehrens, August 2021
Technical rationalisation of labour processes up to and including technocratic ‘solutionism’ in the face of socio-economic and ecological problems and crises, horror scenarios of mass unemployment or the deregulation of labour conditions with regard to the “essentially technologically deterministic category of ‘Industry 4.0’” (Butollo/Paiva Lareiro 2021, 363), visions of exclusively machine-made split-second decisions on the stock market, in a nuclear war, or in politics, a better philosophical understanding of what it means to be conscious and to think, or even of the ‘conditio humana’ – the discourse surrounding ‘Artificial Intelligence’ (AI) is often charged with great, socially relevant hopes and fears that are the subject of social confrontation. It therefore stands to reason that the technological development co-constituting the current state of the forces of production is examined in the Historical-Critical Dictionary of Marxism (HKWM) from a historical-critical perspective with respect to its general social and socio-economic, political and cultural underlying conditions, its effects, and the possibilities of modifying it. Thus, the subject of the keyword “Artificial Intelligence” is not presented merely as a “pure history of ideas” (Paul N. Edwards 1996, 239; quoted in the article), but is further situated within its conceptual and social history. The article by Christof Ohm, now translated into English, addresses relationships of competition and processes of negotiation between different theoretical currents and institutions regarding the designations ‘AI’ and ‘cybernetics’, the preconditions for state-public funding of research in its initial phase via military budgets, as well as the political repression and social discrimination to which the mathematician and computer science pioneer Alan Turing was exposed on account of his sexual orientation.
The dictionary article on “Artificial Intelligence” was published in 2012 in German in Volume 8/I of the HKWM. Inter alia, this volume also contains thematically related entries on the “Cybertariat”, “Living Labour”, “Performance, Achievement”, “Direction”, and “Learning”. In recent years, the technology branch and the terminology of Machine Learning have gained further relevance within social practice and within discourses on AI. The article’s critical findings can also be brought to bear upon these current developments, allowing new challenges and conflicts to be reconstructed and thus grasped.
As Sebastian Höfer elaborates in terms understandable for non-specialists, “the successful application of machine learning still requires significant expert knowledge and human intervention”. Specialised professionals must first semantically ‘label’ “large amounts of data” for the training runs, contextualise them, and, once the training has been carried out, evaluate whether and to what extent the results produced by the machine meet the objectives that these specialists themselves have set for the Machine Learning process. The development of Machine Learning therefore necessitates an immense expenditure of human, living labour, but also a constantly growing set of diverse data that is as extensive as possible in order to circumvent so-called ‘overfitting’, the condition in which an algorithm-based machine delivers excellent results for data with which it was ‘trained’, but fails to perform as expected with new data, i.e. fails to ‘generalise’. According to Höfer, despite the large amounts of data employed during training, “the generalisation ability of the learned models in many areas is still far below what one would intuitively expect. In addition, the lack of interpretability of many methods is an important practical problem”, where, “depending on the application”, also “social issues” are raised, for example in the context of “sexist or racist facial recognition models, because the training data being used mainly depict white men” (Höfer 2018). An augmentation of structural racism and sexism is especially relevant when such technology is deployed in police work and surveillance. Donna Haraway’s 1985 criticism of AI’s “military dispositive” referred to in Ohm’s article addresses a similar problem.
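The failure to ‘generalise’ can be caricatured in a few lines of Python. The following deliberately extreme toy sketch (the data and function names are invented for illustration, not taken from any real system) shows a ‘model’ that merely memorises its training examples: it scores perfectly on the data it was ‘trained’ on and fails entirely on new data.

```python
# Toy illustration of 'overfitting' as pure memorisation (hypothetical data):
# perfect accuracy on the training set, no generalisation to unseen inputs.

def train(examples):
    """'Training' here is nothing but memorising (input, label) pairs."""
    return dict(examples)

def predict(model, x, default="unknown"):
    """An unseen input yields no learned response, only the default."""
    return model.get(x, default)

train_set = [("cat photo 1", "cat"), ("dog photo 1", "dog")]
test_set  = [("cat photo 2", "cat"), ("dog photo 2", "dog")]

model = train(train_set)

train_acc = sum(predict(model, x) == y for x, y in train_set) / len(train_set)
test_acc  = sum(predict(model, x) == y for x, y in test_set) / len(test_set)

print(train_acc)  # 1.0 on the 'training' data
print(test_acc)   # 0.0 on new data: the model has not generalised at all
```

Real Machine Learning systems sit between these extremes, which is why ever larger and more diverse training sets are sought to push models away from memorisation and towards generalisation.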
Insofar as the forms of application of such AI are “limited to specific tasks such as the categorisation of images, the detection of patterns in large amounts of data, the representation of human-machine interfaces and the like”, Timo Daum states: “The construction of thinking machines is (still) not on the agenda – the current task at hand for the relevant actors is rather to consolidate the data-extracting business models.” (2022, 253) This is because “machine learning algorithms” not only generally, “like all previous technologies, bear the imprint of their designers and culture” (Wajcman 2022, 16), but, more specifically, the “anonymous constraints of the law of value […] continue to determine the purpose” (Schmiede 1996, quoted in Meyer 2022, 96). More recently, in addition to Daum, one can count Wolfgang Fritz Haug, Sabine Nuss, Florian Butollo, Patricia de Paiva Lareiro, Matteo Pasquinelli, and Judy Wajcman among the theoreticians who have examined and systematised the consequences of application cases of Machine Learning with an eye toward the Marxist formulation of these questions.
Daum differentiates between historically longer-term tendencies of development and more recent specificities: “the constantly growing share of collective social knowledge within capitalist production on the one hand, and the continuous striving of capital to privatise the end products of this process on the other – are common threads throughout the entire history of capitalism” (2022, 248). What is new is the mode of valorisation of “unpaid user labour” (250), through which training data as a resource is provided, labelled, and contextualised in order to be exploited for Machine Learning and at the same time to be processed “to create custom advertising profiles” (251). The latter step constitutes a central moment of profit generation within a corporation (cf. ibid. et sq.), with Daum differentiating between “three mechanisms of capital valorisation” with regard to the “monetisation of the consumer’s [or user’s] activities” (253 et sq.): 1) “valorisation of general knowledge as a service”, 2) “exploitation of the gratuitous labour of users”, 3) “perpetual innovation as a source of profit” (254).
Given that in all three cases this profit is ultimately siphoned off from other capital, it is more precisely referred to as rent. However, while a corporation such as Amazon cross-finances as yet unprofitable AI applications through other sectors, the mode of the technology’s application cannot simply be described as unproductive, as if it were merely absorbing surplus value produced elsewhere. Even if the “current technological thrust is occurring in the context of a long phase of weak economic growth” (Nuss/Butollo 2022, 7), “in the macroeconomic context of a structural over-accumulation of capital” (Butollo/Paiva Lareiro 2021, 359), it may primarily, but not solely consist in the “conquest of larger market shares of an overall stagnating market” (373) and a “redistribution between fractions of capital” (372). Following Haug, who draws on a quote from Karl Marx’s notes to the second volume of Capital, Amazon for example functions as a productivity-increasing mediating instance and in this respect should be considered an (AI-supported) “machine which reduces useless expenditure of energy or helps to set production time free” – currently for the benefit of capital productivity (MECW 36/135 [MEW 24/133]; quoted in Haug 2020, 42). The technical potential thus made available for the rationalisation of total social production is thwarted under capitalist conditions by the real competition between private actors (cf. 46).
Marx himself developed his theory of the political-economic role of machinery, among other things, as a critique of Charles Babbage’s (1791-1871) views on the possibilities of a “mechanisation of management via computation” (Pasquinelli 2020, 124) in favour of capital productivity. With the current cases and forms of Machine Learning’s deployment as an example of AI, a machine is under development whose capacity to facilitate the coordination of machines and planning must necessarily be kept under private control within the capitalist mode of production, i.e. must be ‘made scarce’ through regulation and taxation of access (see also “General Intellect”, HCDM, 212). Such privatisation takes place when corporations like Amazon and Uber offer their coordination capacities as a service on the market (while simultaneously acting as marketplaces). The social division of labour that is mediated through the application of Machine Learning as a so-called ‘platform economy’ occurs as a private-sector service, even though the mass of data required for the development of the applied technology is collected through quasi-public and unpaid user behaviour – at most remunerated with cost-free usage licences (synchronously coupled with the targeted placement of advertisements and the creation of corresponding profiles, which in turn are either utilised as commodities for advertisers or brought to other markets). While platform cooperativism and FLOSS movements have so far remained rather marginal (cf. Kludas et al. 2019), problems connected to a lack of socio-economic democratisation become evident, as is also the case with regard to the development of algorithms, or the juridical handling of technology to which decisions are increasingly being delegated. (In his article, Ohm points to warfare by means of drones.)
Butollo and Nuss state that the technology in question not only provides a “contribution to the rationalisation of the production of surplus value”, but is also conducive to “strategies for accelerating turnover rates of commodities, the diversification of supply, and the improvement of product quality – measures designed to achieve competitive advantages in the realisation of surplus value” (2022, 5; translation modified). According to this approach, there is indeed a potential “on the side of use value” for a “more effective deployment of human labour”, its “qualitative improvement, and a differentiation of supply”, which is however “limited” on the “side of exchange value” by the compulsion to “utilise” the technology “profitably” (Butollo/Paiva Lareiro 2021, 372). Furthermore, the labour objectified in this technology produces an additional ‘industrial reserve army’ and thus puts pressure on the price of living labour. Regarding the question of its “social benefit” (Nuss/Butollo 2022, 8), apart from productivity considered in purely quantitative terms, the “question of the sense of purpose, or of the critique, for example of the new possibilities of surveillance, individualised products, and the increase in the number of consumer articles” (Butollo/Paiva Lareiro 2021, 372, footnote 10) must of course be taken into account. Ohm’s article elaborates on the problematic of the effects of surveillance in the context of the ‘Internet of Things’.
What is considered politically relevant is not limited merely to the ramifications of these technologies in their respective mode of application. It also extends to further causes bringing about these very ramifications (cf. Wajcman 2022, 18), such as the “dominance of a small number of corporations” possessing the required computing capacity (as well as the most users, largest data collections, and corresponding amounts of capital), tending towards monopoly, and “the social consequences thereof” (15). If, in light of the “huge, casual, insecure, low-paid workforce that powers the wheels of the likes of Google, Amazon and Twitter”, political hope is vested in securing a “guaranteed basic income” (18 et sq.), the problem arises of a possible elimination of state social benefits which often cannot be provided in purely monetary terms. The large quantity of barely paid labour power being exploited through ‘crowdsourcing’ in countries of the Global South would in any case be excluded from a basic income in the richest industrialised countries. In contrast, Ohm records hopes associated with the prospect of a “socialisation of automation’s dividends”.
Daum (2020) makes note of socio-ecological questions in respect to the amount of data required for Machine Learning: According to a study conducted by the OpenAI research group, “the computational cost for large AI models already doubles every three and a half months”; at the same time, due to improved energy efficiency, absolute energy consumption has remained “largely constant” even though the “data centre workload” has increased “more than six fold” since 2010. However, this increase in efficiency is currently almost six times slower than the growth in the computational cost of large AI models.
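The disproportion between the two growth rates can be made vivid with some back-of-the-envelope arithmetic. The figures (doubling every 3.5 months; a six-fold workload increase since 2010) come from the text; the ten-year span used to annualise the efficiency gain is an assumption for illustration only:

```python
# Back-of-the-envelope comparison of the two growth rates mentioned in the
# text. The ten-year window for the workload figure is an assumption.

months_per_doubling = 3.5
compute_growth_per_year = 2 ** (12 / months_per_doubling)   # compound growth

workload_growth_total = 6        # "more than six fold" since 2010
years_elapsed = 10               # assumed span, roughly 2010-2020
efficiency_growth_per_year = workload_growth_total ** (1 / years_elapsed)

print(round(compute_growth_per_year, 1))     # ~10.8x per year
print(round(efficiency_growth_per_year, 2))  # ~1.2x per year
```

Even on these rough assumptions, compute demand for large models grows by an order of magnitude per year, while the efficiency gain that has so far kept energy consumption flat amounts to a far smaller annual factor.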
Turing’s model of a universal calculating machine may in principle be capable of running through any calculable process; in reality, however, probably not all of them: complexity-theoretical considerations suggest that the entire time of the existence of the universe might not suffice for such a task, nor, for that matter, for more narrowly delimited problems above a certain degree of complexity. Questions of the required energy and material consumption play a limiting role here as well.
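The gap between what is calculable in principle and what is physically feasible can be illustrated with a standard example (all figures are rounded assumptions chosen for the sketch, not drawn from the text): exhaustively checking the 2^128 possibilities of a 128-bit key, even on a machine performing a quintillion operations per second, would take hundreds of times the age of the universe.

```python
# Illustrative arithmetic: a 'calculable' task that is physically infeasible.
# All figures are rounded assumptions for the sake of the example.

ops_per_second = 10 ** 18          # roughly an exascale supercomputer
search_space = 2 ** 128            # e.g. brute-forcing a 128-bit key
age_of_universe_s = 4.35e17        # ~13.8 billion years, in seconds

seconds_needed = search_space / ops_per_second
universe_lifetimes = seconds_needed / age_of_universe_s

print(f"{universe_lifetimes:.0f}")  # ~780 universe lifetimes
```

A problem whose cost grows exponentially with its size thus ceases to be practically solvable long before it ceases to be ‘calculable’ in Turing’s sense.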
A paradox that often arises when reflecting on AI is fundamentally grounded on the one hand in the ambition to evaluate its success in purely behaviourist terms, and on the other, in the tendency not to regard (human) cognition as the real behaviour of an acting subject that is physically, historically, and socially involved and for whom interpreting necessarily includes changing the respective object, but to regard cognition ultimately only as a computational process in the sense of processing systems of symbolic signs (cf. critically, e.g., Lake et al. 2017). As a contradictory complement to the above stands the widespread attitude towards AI characterised by Wajcman, as one in which people in a changing society “reify technology, treating it as a neutral inevitable force driving these changes” (2022, 18), not taking into account that they are producing it themselves. Nuss and Butollo, too, problematise a “technology fetish that obstructs a differentiated interpretation of contemporary capitalism from which political strategies can be deduced” (2022, 3; cf. Haug 2020, 24). Formulated in the terminology of Georg Lukács, such a “fetishization” is rendered possible when a given phenomenon such as ‘intelligence’ is apprehended in a socially and historically unspecific way and its “abstract concept (in most cases only some aspects of this abstract concept) is fetishized into purportedly independent being, into its own peculiar entity” (1948/1969, 129; in a later interview, Lukács explicitly refused to see “in cybernetics, basically, the ideal of human thinking”; 1970/2009, 409). An understanding of intellectual production presupposes an analysis of the specific historical form of material production (cf. Marx, Theories of Surplus Value, MECW 31/181 [MEW 26.1/256 et sq.]): “All work is both mental and manual” (Pasquinelli 2020, 126; cf. “Immaterial Labour”, HCDM, 177-85).
Konstantin Baehrens (editorial coordinator in the HKWM International project).
Translated by Alexis Ioannides
Florian Butollo, Sabine Nuss (eds.), Marx and the Robots: Networked Production, AI, and Human Labour, translated by Jan-Peter Herrmann with Nivene Raafat, Pluto Press, London 2022.
Florian Butollo, Patricia de Paiva Lareiro, “Technikutopien und säkulare Stagnation: Der Kapitalismus als Treiber und Schranke des Digitalen”, in: Thomas Sablowski, Judith Dellheim, Alex Demirović, Katharina Pühl, Ingar Solty (eds.), Auf den Schultern von Karl Marx, Westfälisches Dampfboot, Münster 2021, 359-75.
Timo Daum, “Artificial Intelligence as the Latest Machine of Digital Capitalism – For Now”, in: Butollo/Nuss 2022, 242-54.
id., “Missing Link: Künstliche Intelligenz und Nachhaltigkeit – und ewig grüßt der Rebound Effekt”, in: www.heise.de, 22 March 2020.
Paul N. Edwards, The Closed World: Computers and the Politics of Discourse in Cold War America, MIT Press, Cambridge (MA) 1996.
Wolfgang Fritz Haug, “Online-Kapitalismus: Eine forschende Auseinandersetzung mit Staabs ‘Digitalem Kapitalismus’”, in: Das Argument: Zeitschrift für Philosophie und Sozialwissenschaften, no. 335, vol. 62 (2020), i. 2/3, 19-56.
Sebastian Höfer, “Algorithmen, Maschinelles Lernen und die Grenzen der künstlichen Intelligenz”, in: Jusletter, 26 November 2018.
Santje Kludas [et al.], “Alle Macht den Plattformen? Genossenschaften, Freie Software und die Möglichkeit einer sozial-ökologischen Plattformisierung”, in: Anja Höfner, Vivian Frick (eds.), Was Bits und Bäume verbindet: Digitalisierung nachhaltig gestalten, oekom, Munich 2019, 120-23.
Brenden M. Lake [et al.], “Building machines that learn and think like people”, in: Behavioral and Brain Sciences, vol. 40 (2017), e253.
Georg Lukács, “On the Responsibility of Intellectuals” [1948; translated by Severin Schurger], in: Telos, vol. 2 (1969), no. 1, 123-31.
id., “‘Das Rätesystem ist unvermeidlich’”, in: Werke, vol. 18, Aisthesis, Bielefeld 2009, 395-430.
Christian Meyer, “‘Forward! And Let’s Remember’: A review of materialist technology debates of the past”, in: Butollo/Nuss 2022, 86-98.
Sabine Nuss, Florian Butollo, “Introduction”, in: id./ead. 2022, 1-11.
Matteo Pasquinelli, “Artificial Intelligence as Division of Labour: Reading Marx and Babbage in the 21st Century”, in: Wolfgang Girnus, Andreas Wessel (eds.), Lebendiges Denken: Marx als Anreger, Leipziger Universitätsverlag, Leipzig 2020, 117-27.
Rudi Schmiede, “Informatisierung, Formalisierung und kapitalistische Produktionsweise: Entstehung der Informationstechnik und Wandel der gesellschaftlichen Arbeit”, in: id. (ed.), Virtuelle Arbeitswelten: Arbeit, Produktion und Subjekt in der „Informationsgesellschaft“, edition sigma, Berlin 1996, 15-47.
Judy Wajcman, “Automation: Is it really different this time? A summary review”, in: Butollo/Nuss 2022, 12-21.