1: Algorithms and the Critical Theory of Technology

Despite the volume of writing on digital politics, and notwithstanding the depth, scope, quality and clarity of the arguments and insights that digital scholarship offers, some matters still require attention. In this spirit Evelyn Ruppert, Engin Isin and Didier Bigo propose a more subtle, nuanced appraisal of ‘data politics’. They propose that digital networks, or more precisely the data they produce, reconfigure ‘relationships between states and citizens’, thereby generating ‘new forms of power relations and politics at different and interconnected scales’ (2017, 1, 2). They contrast this to the related, albeit different, forms of calculation that featured in and facilitated modern European state formation. This comparison is apt given that Andrew Feenberg notes that ‘technology is one of the major sources of public power in modern societies’ (2010, 10). The key difference between these sets of literatures, Ruppert, Isin and Bigo argue, is that the digital one has yet to pin down its ‘subjects’. They suggest that this identification effort can best be achieved by employing the post-structuralist tools bequeathed by Michel Foucault and Pierre Bourdieu. Ruppert, Isin and Bigo summarize their approach by stating that ‘Data does not happen through unstructured social practices but through structured and structuring fields in and through which various agents and their interests generate forms of expertise, interpretation, concepts, and methods that collectively function as fields of power and knowledge’ (Ruppert et al., 2017, 3).

Similarly invoking Foucault, and with an eye on the extensive reach of computational techniques into everyday life, David Beer describes ‘the social power of algorithms’ (Beer, 2017, 1). This power, he suggests, poses several key issues for the prevailing conceptualization of political legitimacy and governance. Much of this comes from ‘the impact and consequences of code’ (Beer, 2017, 3), but also from ‘the powerful ways in which notions and ideas about the algorithm circulate through the social world’ (Beer, 2017, 2). For Beer, the current disciplinary research agenda involves questions of how much agency algorithms have in complex decision-making systems that involve ‘sorting, ordering, and prediction’ (Beer, 2017, 6), with a priority placed upon how norms are established: inter alia, the encoded demarcation of deviance and abnormality, and which elements are opaque to whom (Beer, 2017, 3, 2, 6). Ascertaining this impact involves having a better comprehension of the ordering effects of computation, akin, if you will, to a ‘thick description of algorithms’ that appreciates just how ‘authority is increasingly expressed algorithmically’, to poach Frank Pasquale’s phrase (2015, 8). This is an agenda worthy of wide support. Nevertheless, the two projects do prompt critical data scholars to ask how the purposeful construction of a subject might hide as much as it might illuminate.

One should not lose sight of the fact that there are redeeming elements to these two academic projects. However, there are undue silences about the role of value and capital, surpluses and deficits. This means that these projects’ search for an analysis of a ‘digital subject’ is irrecoverably partial, because they miss the grounding of this subject in processes of valorization and extraction, accumulation and appropriation. In other words, by silently passing over the connection of value and capital – including their digital expressions – the mode of production disappears from their frameworks. And so the intense focus on the social worlds created by platforms underexplores the deeper currents within the increased mobility of transnational capital flows and the asymmetrical antagonisms in capitalism, issues at the very forefront of our mode of production. The consequence, sadly, is that these theorists cannot specify a venue for any coming subject to participate in ‘data politics’, nor can they identify principal agents and agendas for revolutionary social change.

In the spirit of sympathetic critique, in this chapter I treat ‘data politics’, or more precisely digitalization, as a signature element within late neoliberalism. By neoliberalism, I nod towards Wendy Brown’s (2015) frame of analysis wherein neoliberalism operates as a ‘political rationality’. By using this phrase, she refers to the encroachment of financial ways of thinking onto everyday life and how this undermines democratic forms of social interaction, all to cater towards a preferred subjectivity with preferred social relations and norms. For Brown, as capital dominates the labour–capital antagonism (through undermining labour protections, reducing welfare commitments or rolling back redistribution), the sustained, slow weakening of democratic institutions and practices has created the right kind of climate for a more robust authoritarian turn against the liberal democratic order, one that undoubtedly caters towards the imperatives of capital in late modernity.

In the coming sections I use two case studies involving property rights and differential class power to suggest that there are many good reasons to foreground Marxian-inspired contributions to the aforementioned research agenda. In making this intervention in ongoing work on algorithms, computation and data, I want to contrast existing studies that focus on data’s interpellation of subjects, as well as the normative regimes deployed for that interpellation, with a research agenda that clearly recognizes the role of global capitalism and its contradictions in the work that data does. From my vantage, the latter is currently underserved.

Before I turn to the themes present in these cases, I should add that I do not intend to survey the huge literature in these very active fields. Such an overview is a subject in and of itself and not my purpose here. This is because I am interested more in method than content. Thereafter, in the subsequent sections I argue that computation provides a venue for radical political advocacy, something urgently required given that the aforementioned issues suggest the possibility of politics being foreclosed. Therefore, the goal of the second half of the chapter is to attempt to specify a venue and criteria for politically meaningful scholarship. As my target is the conceptual ordering of the present state of the discipline, this task requires that we examine the border between history and philosophy. Ultimately, I advocate that the radical critique of computation and calculation must work from the register of capital. The issue is more than just analytical precision. At stake is the continuing relevance of a critical theory of technology that is politically adequate to understand the latest manoeuvre in the always already impulse of value towards the realization of its own totality.

Data, politics and rights

In late 2017 Strava released its Global Heat Map, a data visualization tool that plots activities logged by the app’s users. Drew Robb (2017), a data engineer at the company, wrote that the dataset covered two years and represented 700 million activities. Yet while this visualization conveyed the seductive elegance of simple numbers, shortly thereafter security researchers like John Scott-Railton (2018) at CitizenLab were able to identify secret military bases, patrols and logistics routes, often in surprising, mappable detail. All of this could be seen using the app’s routine interface. In other words, to employ a common adage, the interface was ‘used as intended, but not as expected’. As one might expect, an unbounded, multisided scandal unfolded. Accordingly, this scandal provides a good illustration of the limits of class-blind accounts of opacity, something that I will address at the end of this section.

A ready response is that much of this scandal could be avoided if persons were more attentive to their privacy settings. This is somewhat true, but also a distortion of the main issue. If privacy-conscious persons like US Special Forces operators could not select the appropriate privacy setting then the issue is beyond any one person’s usage of privacy settings because it speaks to larger questions of the design of ‘privacy’ on these kinds of platforms.

Put otherwise, the Strava case well illustrates the need for a socio-technical approach to the study of platforms. One of the best scholars in this regard is Zeynep Tufekci. ‘The Strava debacle underscores a crucial misconception at the heart of the system of privacy protection in the United States’, she writes. ‘[T]he privacy of data cannot be managed person-by-person through a system of individualized informed consent’ (Tufekci, 2018). In this conception, data privacy is less like an individual consumer good, and more like a public good. Accordingly, for Tufekci there ‘must be strict controls and regulations concerning how all the data about us – not just the obviously sensitive bits – is collected, stored and sold’. Effectively, the adage that ‘informed consent cannot function when one cannot be reasonably informed’ readily applies to this situation. The deeper point lies with a highly individuated conception of rights upon which notions of informed consent rest.

Tufekci’s concern is that hoarding data can have opaque legacy effects from which it is impossible to opt out. Let us take a closer look at Facebook to illustrate the general point. Drawing upon its existing database and new data sources, the company recently filed a patent that seeks to categorize ‘class’ (Facebook, 2018). One interpretation is that this sorting and categorization technique will help with advertising preferences for third-party clients. For example, banks could use targeted ads aimed at the working class, which in the US disproportionately includes historically marginalized and racialized persons. Given their credit history, these ads would probably be for high-interest loans to people who are precarious, desperate and susceptible. In effect, the algorithm would be denying them opportunities for fair loans, thus having an adverse impact on this class. This development continues the disquieting elements in consumer research, but is worse in some respects because Facebook can build profiles based upon user-generated data logged for over a decade. This raises the real prospect of a person being profiled as susceptible to a high-interest loan because of their parents’ credit history. Presently this kind of predictive and presumptive software has several flaws. First, marketing claims overstate the accuracy; in practice these algorithms lack reliable predictive power. Second, because these kinds of software reflect capitalist ideology, poor persons (and in the US that means disproportionately black Americans) are less likely to be treated as fairly as other racial groups. I will return to and develop this point towards the end of the chapter, but for present purposes it is important to note that questionable goals are married with questionable means.
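
To make the mechanism concrete, the following is a minimal, hypothetical sketch in Python. It is not Facebook’s patented system or any real scoring model; the distributions, thresholds and variable names are invented for illustration. The point is simply that a targeting rule fitted to historical outcomes and a proxy feature such as parental credit history will reproduce the class bias baked into that history.

```python
# Hypothetical sketch: a crude ad-targeting rule trained on historical loan
# outcomes plus a class proxy (parental credit history). All data is synthetic.
import random

random.seed(0)

def make_person(working_class):
    # Synthetic history in which outcomes track class position, not conduct.
    parent_credit = random.gauss(580 if working_class else 720, 40)
    defaulted = random.random() < (0.30 if working_class else 0.10)
    return {"parent_credit": parent_credit, "defaulted": defaulted}

history = [make_person(working_class=(i % 2 == 0)) for i in range(10_000)]

def bucket(person):
    return "low_parent_credit" if person["parent_credit"] < 650 else "high_parent_credit"

# 'Training': estimate default rates per bucket of the proxy feature.
rates = {}
for b in ("low_parent_credit", "high_parent_credit"):
    group = [p for p in history if bucket(p) == b]
    rates[b] = sum(p["defaulted"] for p in group) / len(group)

# 'Deployment': children inherit a score from their parents' bucket, so the
# children of working-class parents are shown high-interest offers regardless
# of their own behaviour.
def offer(person):
    return "high-interest loan ad" if rates[bucket(person)] > 0.2 else "prime-rate loan ad"

print(rates)
print(offer({"parent_credit": 560}))
```

Nothing in this toy rule is ‘prejudiced’ in the psychological sense; the discrimination sits entirely in the historical data and the proxy feature, which is precisely why diverse hiring alone cannot dissolve it.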

To help think through the ramifications of the archival nature of digitization I want to briefly discuss some of Jenna Burrell’s observations. Burrell (2016) draws attention to three kinds of ‘opacity’, that is, barriers to understanding how one came to be the recipient of an outcome or decision, especially when the inputs are themselves only partially known. The first kind of opacity stems from proprietary secrecy and the property rights that firms assert to maintain their market share and support their accumulation efforts. Appeals to network security are enrolled to help this line of reasoning, so it is unlikely that this opacity will be suspended. Nevertheless, it is important to state that claims of proprietary opacity are at some level asserting that property rights take priority over regulatory safeguards meant to protect rights that greatly contribute to human flourishing, items like the right to equality or freedom from discrimination.

The second kind of opacity derives from technical illiteracy. The reasoning goes that writing code requires knowledge of a specialized syntax, logic and grammar, which is inaccessible to many people and unpractised by others. I am less convinced that this kind of opacity should be attributed to technical illiteracy alone, for it is downstream from a division of labour. Furthermore, durable, categorical and intersecting inequalities increase the likelihood that certain classes and groups are overrepresented among those who receive programming training. What I mean is that this technical language is not democratic, nor was it designed to be democratic.

The lack of democracy in code links to the third kind of algorithmic opacity that Burrell identifies, one which results from what she terms the ‘depth of their mathematical design’. Her point is, roughly, that researchers occasionally treat technical apparatuses as overdetermined black boxes situated in the social world, inflected by ‘the pressure of profit and shareholder value’. Effectively this black box treatment examines something other than the ‘algorithmic logic’. The potential consequence is that this mode of analysis ‘may not surface important broader patterns or risks to be found in particular classes of algorithms’ (Burrell, 2016, 3). This opacity and its associated risks resemble those of derivative trading prior to the 2008 recession, a systemic risk only comprehended in hindsight. But again, Burrell might be naturalizing the extent to which capitalists court systemic risks irrespective of which ‘black box’ analysts use to conduct their trading practices.

To counter the aforementioned opacity, Burrell points to the need for code auditors and targeted STEM (science, technology, engineering and mathematics) education to help with the substantive inclusion of persons from historically marginalized groups. The general idea is that diverse hiring can lead to ‘AI that’s fair and unbiased’ and so negate prejudices ranging from bigotry and xenophobia to homophobia to ageism. A similar idea is behind a call for greater computational literacy among journalists (see Diakopoulos, 2013); the idea here being that journalists could better mediate technical knowledge to inform the public and civil society more broadly.
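
To give a sense of what such an audit might actually examine, here is a minimal sketch in Python of one common check: comparing decision rates and error rates across groups. The log rows, group labels and field names are hypothetical stand-ins, not any platform’s actual audit data or procedure.

```python
# Hypothetical audit sketch: compare approval rates and wrongful-denial rates
# across groups from a (synthetic) decision log.
from collections import defaultdict

records = [
    # (group, model_decision, actual_outcome) -- invented rows for illustration
    ("group_a", "approve", "repaid"), ("group_a", "deny", "repaid"),
    ("group_a", "approve", "defaulted"), ("group_a", "approve", "repaid"),
    ("group_b", "deny", "repaid"), ("group_b", "deny", "defaulted"),
    ("group_b", "approve", "repaid"), ("group_b", "deny", "repaid"),
]

stats = defaultdict(lambda: {"n": 0, "approved": 0, "wrongly_denied": 0})
for group, decision, outcome in records:
    s = stats[group]
    s["n"] += 1
    s["approved"] += decision == "approve"
    s["wrongly_denied"] += decision == "deny" and outcome == "repaid"

for group, s in sorted(stats.items()):
    print(group,
          "approval rate:", round(s["approved"] / s["n"], 2),
          "wrongful denial rate:", round(s["wrongly_denied"] / s["n"], 2))
```

Such a check can surface disparities after the fact, but, as argued below, it says nothing about why the system was built to sort people in the first place.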

While auditing for inclusion and expanding computational literacy programmes do have practical merits and positive effects, it is here that I have a point of departure from Burrell. From my vantage this line of reasoning demonstrates a limitation of this variety of scholarship, namely that it generally gives insufficient attention to how opacity itself is coloured by the commodification impulse that underpins the organization, conduct and rationalization of digital platform companies. ‘More diversity’ might fit with optics and the politics of progressive neoliberalism, but it overlooks the fact that it is not the beliefs of persons that drive social marginalization, it is the larger imperatives of the social structures in which they live and work. For example, racism is not simply prejudice, it is prejudice that is empowered by the actions and protocols of institutional life, whether governmental or corporate in nature. If the goal is to become ‘less biased’, this can only happen within the broader parameters permitted by capitalist society. For example, it is doubtful whether tech firms seeking to become ‘more diverse’ will implement deconstructive, decolonial or post-capitalist protocols that stem from alternative political frameworks. As such, the tolerance for different logics is limited before conversations about inclusion are even raised; the parameters for alternative constructions of difference are tethered from the beginning.

One problem with ideas like code auditors and inclusive diversity is that they are for the most part oblivious to class and class relations. And through this obliviousness, these ideas lack the basic structural prerequisites to really understand the material components that undergird people’s lived experience. Consider that despite the existence of ombudsmen, watchdogs and press councils, for the most part poverty is an underrepresented issue in the press. In the US nearly 50 million people live in poverty, yet primary coverage of this issue makes up 0.2 per cent of news from major news outlets (Froomkin, 2013), meaning that Barbara Ehrenreich (2015) is correct to say that ‘only the rich can afford to write about poverty’. Effectively, Ehrenreich is underscoring the class bias in the production, circulation and consumption of ideas, pointing to how intellectual forces have material consequences. A similar dynamic is at hand for code and computation, especially if we are already willing to acknowledge that computer code is not meant to be democratic.

Here it is appropriate to think about ‘capitalistic opacity’ and class obliviousness, particularly when machine learning that combines multiple data streams is paired with AI and put in the service of corporate policy that optimizes for extracting profits at the expense of people. The degree of accuracy and fairness is less important than the consequences of the profiling, while the profiling is less important than the mandate that corporations are legally compelled to extract profits. So long as data are legally coded as private property, the kinds of social questions that the Strava Global Heat Map illustrates will continue unabated. Capitalism means the protection of a private property rights regime above all else. Therefore, if digital sociologists wish to have a full comprehension of informed consent it is imperative to recognize that it is downstream from the basic dynamics of capitalism. Arguments that such an analytical hierarchy simply mistreats objects by making them a ‘black box’ misconstrue how the properties of these objects come to be socially valued, put into motion and realized because of their ability to help extract surplus value. As such, I take a contrary view to Tufekci and Burrell: questions of design need to be supplemented with a recognition that platforms and apps take advantage of existing American jurisprudence, itself a reflection of actually existing American capitalism and its necessary commitment to a private property rights regime.

The code of capital

It is helpful to keep the topic of property regimes in mind when reading Andrew Clement and David Lyon’s analysis of how value and wealth are created by platform companies. They write about ‘the hyperactive but hidden world of online-data trafficking with its myriad actors feverishly harvesting, aggregating, packaging, profiling, brokering, targeting, selling and generally monetizing the personal information we generate in rapidly expanding volumes’ (Clement and Lyon, 2018). As a quick illustration, Facebook has upwards of 2 billion active users, while WhatsApp, Messenger and Instagram have 1.2 billion, 1.2 billion and 700 million users respectively. A market capitalization of $445 billion makes the company the fifth most valuable in the world. More broadly, in 2017 Silicon Valley contributed $252 billion to US gross domestic product (GDP) (Hinson et al., 2017). Within this sector, in 2018 Alphabet’s (2018) revenues were $136 billion and its total (unaudited) assets were valued at $232 billion, with total liabilities of $55 billion. In addition to commodification practices, worker exploitation and wage theft, this value has come primarily from two sources: unpaid labour and the looting of other economic sectors.

Regarding unpaid labour: using rudimentary personalization algorithms that put media content distribution and consumption in the service of data capture, Facebook’s revenue model depends on extensive commodification facilitated by intrusive surveillance practices. Nicole Cohen explains that ‘extensive commodification refers to the way in which market forces shape and reshape life, entering spaces previously untouched, or mildly touched, by capitalist social relations’. Cohen continues:

Not only is surveillance the method by which Facebook aggregates user information for third-party use and specifically targets demographics for marketing purposes, but surveillance is the main strategy by which the company retains members and keeps them returning to the site. (Cohen, 2008)

As one of the mainstays of digital society, Facebook derives much of its value from this digital data work, to which consumers consent and which has become an increasingly lucrative commodity to extract from a person’s everyday labour. Yet, as Facebook is so entrenched in the fabric of everyday life, it is uncommon to find critiques of its commodification practices. There is precedent for this kind of naturalization. As Marx wrote,

the advance of the capitalist mode of production develops a working class, which by education, tradition, habit, looks upon the conditions of that mode of production as self-evident laws of Nature […] The dull compulsion of economic relations completes the subjection of the labourer to the capitalist. Direct force, outside economic conditions, is of course still used, but only exceptionally. (1977, 899)

It is users themselves who produce and upload content to the platform, and who then form the audience that consumes it. A variety of techniques are used to keep users’ attention and return them to the platform. Using intimate surveillance, Facebook mines the data that its users produce to provide microtargeting services to advertisers, who in turn try to manipulate users by modifying beliefs, attitudes and affects, with the ultimate end of acquiring and commodifying audiences, practices that often occur below a user’s threshold of awareness. The more users depend upon platforms, the more opportunities exist to acquire data and show advertising. The gains in wealth and power derived from data mining far outweigh the agency gained by the average user. Granted, users recognize the obvious utility of these digital services, but the relationship between consumer and platform is skewed in favour of these corporations.

In short, people have been co-opted into participating in their own commodification, with only a few recognizing the inequalities. Concurrently, through creating a digital content distribution medium, Facebook shook several other economic sectors, most notably news and advertising. But whether rendered as ‘move fast and break things’, ‘disruption’ or ‘creative destruction’, these programmatic mantras in Silicon Valley are little more than efforts to reframe the raw predatory looting of other businesses, the accumulation by dispossession, and present it as innovative and positive.

An enabling practice in this accumulation of value is quantification and classification. Geoffrey Bowker, Susan Leigh Star and David Beer together approach data classification as a socio-technical system that leads to particular material configurations and effects when implemented by institutions (Bowker and Star, 2000; Beer, 2016). Beer’s work suggests that the dominance of a quantitative mode of thinking allows metrics to circulate and become empowered: the capacity to ‘maintain, strengthen, or justify new types of inequality, to define value or worth, and to make the selections is central to affording visibility or invisibility’ (Beer, 2016, 163–4). He calls this the ‘social life of data’. For Bowker and Star, classification schemes represent certain social and technical choices which, however apparently trivial or neutral, have significant ethical and political implications because they are beholden to a political rationality. As Kimberle Crenshaw stipulates, ‘the process of categorization is itself an exercise of power’ (1991, 1297). As such, these classifications invite consequences which do affect a person’s relationships, identities and interactions, even if a person is not fully aware of these effects.

At a greater degree of abstraction, metrics and classifications do shape the trajectory of a society’s development. South African apartheid provides a case study of this kind of path determinacy. While initiated in the pre-war segregationist era, apartheid racial classification was formally consolidated in law in the late 1940s after the National Party came to power. These classifications determined the racial group to which a person was assigned, in turn overdetermining their position within the civic hierarchy and their relative exposure to oppression. These assigned civic ascriptions were linked to the political project of establishing the national identity and legitimacy of the white ruling classes. Ideological naturalization through essentialist conceptions of race sought to conceal the shifting construction of difference and the labour regime it supported, even if people knew how fallacious this ‘scientific racism’ happened to be. Apartheid too had ‘data driven decision making’, seen as objective given the political categories in that society, even if edge cases existed (see Breckenridge, 2014). Granted, apartheid South Africa provides a clear means to see oppression facilitated by data registries and an associated classification system. But due to their institutional opacity, the US state–platform nexus arguably presents an unknown threat. Where the logic of South African apartheid decision making was explicit and open, less is known about decisions in a digitally automated system. Crisply capturing this point, Iyad Rahwan and Manuel Cebrian (2018) write that ‘the internal mechanisms driving these algorithms are opaque to anyone outside of the corporations that own and operate them’.

In contradistinction to this opacity, one branch of critique from progressive neoliberals focuses on the errors that can occur with the input of data, insofar as the data fed into the calculations can be poor, incomplete, poorly designed, outdated or negligently recorded, can contain oversights, or can reflect subjective judgement in its recording. Some combination of these will imprint itself on the outputs, leading to poor data-driven decisions. For example, when it comes to facial recognition technology and its difficulties with capturing black faces, progressive neoliberals argue that racially diverse hiring practices and more attention to the selection of data can overcome matters of discrimination. However, this reasoning fails to appreciate just how much ideology is encoded into algorithmic code itself, not just the results these technologies produce. As such, there is a misplaced trust that creates delusions about impartiality. The point is not to train facial recognition technologies to better locate the faces of minorities or to eliminate bias; the point is to remove the impulse to use these technologies for carceral logics. And given that much of this technology is proprietary and technology companies lobby to avoid regulation, until there is genuine substantive democratic oversight there are no grounds to trust technology companies’ assurances that fairness will be achieved. But there is also another point worth making. Whereas 50 years ago post-structuralist theory began to be translated, and so Anglo-American scholars came to understand well how ideology, doxa, discourse and so on were encoded into language, much of this knowledge is only slowly being recalled and applied to the realm of computation. At times it appears as if there is a theoretical amnesia: theory is forgotten as new technological artefacts are reified through an intense focus on their properties and attributes at the expense of an appreciation of their embeddedness within a social system.
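
To see what the narrower ‘input data’ critique amounts to, consider a minimal synthetic sketch in Python. It is not any vendor’s system; the distributions and numbers are invented. The only flaw in this toy detector is an unbalanced training sample, yet that alone produces unequal error rates across groups, which is the kind of disparity the data-selection argument fixates on.

```python
# Synthetic sketch: a one-dimensional 'face detector' tuned mostly on one
# group misses faces from the under-represented group far more often.
import random

random.seed(1)

def faces(group, n):
    # The groups' feature distributions differ slightly (means are invented).
    mean = 2.0 if group == "majority" else 1.0
    return [random.gauss(mean, 1.0) for _ in range(n)]

def background(n):
    return [random.gauss(0.0, 1.0) for _ in range(n)]

# Skewed data collection: 95 per cent of the training faces come from one group.
train_faces = faces("majority", 950) + faces("minority", 50)
train_background = background(1000)

# The 'detector' is a single threshold midway between the average background
# signal and the average training-face signal.
threshold = (sum(train_faces) / len(train_faces)
             + sum(train_background) / len(train_background)) / 2

def miss_rate(group):
    test = faces(group, 5000)
    return sum(x <= threshold for x in test) / len(test)

print("missed majority faces:", round(miss_rate("majority"), 3))
print("missed minority faces:", round(miss_rate("minority"), 3))
```

Balancing the sample would narrow this particular gap, which is exactly the argument’s limit: it says nothing about the carceral uses to which an accurate detector would then be put.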

In the face of critiques pointing out exclusion and the embedded discriminatory biases in the design of these technologies, technologists (or perhaps more accurately, the public relations departments of technology companies) have responded with a rhetoric that their practices strive to eliminate bias. It is tempting to offer congratulations, but this may be premature or misplaced given that this apparent solution may be ill-conceived. For example, Joanne McNeil (2018) conjectures that much of this rhetoric stems from ‘simplicity’. In her words, ‘addressing “bias” has a concrete aim: to become unbiased. With “bias” as a prompt, the possibility of a solution begins to sound unambiguous, even if the course of action – or its aftereffects – is dubious.’ McNeil suggests that this narrowly solution-oriented remedial action is liable to ‘obscure structural and systematic forces’. Like Herbert Marcuse decades before, she is indicating that the limitations of this mode of thinking can be identified in technical systems and are indicative of a society without critique. To be blunt, it is not simply the case that programmers have overt prejudices. I am sure they do their best not to produce racist outcomes. But this confuses active bigotry with social relations in a racialized society. Put in plainer terms, the issues are not psychological and personal, but sociological and historical.

Notwithstanding these critiques, technologists readily admit that they do not fully understand AI decisions, how those decisions were arrived at and the reasoning steps involved. Accordingly, it seems unwise to mass deploy this technology in state security when the consequences cannot be precisely predicted. But then again, this unknowability can be understood to be a desirable feature – for when there is sufficient opacity it becomes difficult to assign responsibility. In short, because precise effects are unknown, this code makes it convenient to dismiss appeals and otherwise skirt accountability. As such, this opacity erodes the practical tenets of good governance, namely transparency and accountability. All in all, AI decisions are currently more likely to weaken democratic life than aid it.

Granted, epistemological issues are entangled with political considerations, but often in ways that transform the objectives and tasks of inquiry. Certainly, big data science smuggles in a particular philosophy and sociology of science, with internal criteria as to what counts as good, valid science. Yet irrespective of the size of the dataset, when using observational data to infer causal relationships one is susceptible to the fallacy of induction. Therefore no volume of data can substitute for a mechanistic demonstration. Still, there is a much more pressing matter. Much like how C. Wright Mills observed that the bureaucratization of social science in the postwar period changed the discipline of US sociology and approaches to the conceptualization and study of society, a similar kind of dynamic is unfolding at the moment. The resources required to produce big data population research mean that many in academia are excluded from contributing to this research paradigm. This means that we are courting conditions where a corporately beholden epistemology establishes truths and facts. The result is, to modify Mills’ term, that the new digital men of power own and control the vectors of information, maintaining their rule through enforcing information asymmetries and corresponding legitimating ideologies. And given that most scholars are excluded from knowing about this research, let alone partaking in it, they are not well positioned to undertake informed critiques. Whatever judgements they form, technologists can say these critiques are outdated, besides which there are code auditors who work for the corporations. Yet again one sees the effects of a private property rights regime on the construction of knowledge and reality.

Code as material governance

To call back to Ruppert, Beer, Tufekci and Burrell: they offer strong moral criticisms of the injustices of digitalization and are right to investigate how technology is not just instrumental but shapes a way of life through design choices, side effects and secondary instrumentalization. However, this criticism does not necessarily amount to a philosophical comprehension of the new emerging society. Rather, their analysis tends to suggest a politics where facts are useful as demystification and where experts weigh in on matters of distribution. I am not convinced that one can find a suitable ‘subject’ for data politics in this line of inquiry because of the oversight regarding the effects of class rule. Given the ‘enclosure’ of digital public goods, the prevailing theories of data politics, while helpful, require supplementation. When reviewing the history of technology, one cannot help but be immediately swayed by how prescient Marx was in his conceptualization of machinery – what it was and what it did. Despite its initial formulation in the mid-19th century, the Marxian research tradition remains a vibrant and useful mode of analysis that is more than relevant for studies of the political economy of digital technology. Indeed, in many respects this tradition is superior to the general progressive neoliberal critique of digital technology.

Aside from a small committed set of radical political economists, for many researchers ‘Marx is now an irrelevant advocate of outdated economic theories’. But this is arguably a mistaken view insofar as political economy was the ‘principal domain of technology in his time’ (Feenberg, 2010, 69). Technology is of central importance in Marx, in part because modernity and technology are indissolubly linked. To explain how this body of literature is relevant for data politics, a development that has occurred more than a century after Marx’s death, consider how social media became a technique for surplus value extraction through commodifying user-generated data. This in turn has produced a whole field of politics about, over and for this commodity, but in the main this politics is predominantly capitalist in character.

Despite the mainstream rejection of radical analysis, progressive neoliberals do sample from this literature, if in a way that severs the concepts they adopt from their Marxian heritage. As Christian Fuchs and Nick Dyer-Witheford (2013) argue, ‘Marxian concepts ... have been reflected implicitly or explicitly in Internet Studies’. They identify a long list which includes globalization, the public sphere and ideology critique. Yet through a robust literature review they also demonstrate how these ‘Marxian-inspired’ concepts have been divorced from the overarching political philosophy, sometimes without even acknowledging the linkage. However, when selectively using these concepts, the progressive neoliberal analysis stops well short of comprehending how the totality of social relations in capitalism impacts technological use.

For Fuchs and Dyer-Witheford the result of this divorce is that many digital researchers are ‘superficial in their various approaches discussing capitalism, exploitation and domination’. The absence of a class analysis, or of an assessment of the feasibility of finite political goals (like workers controlling production), instead leads to an intellectual politics based on the broad acceptance of cultural difference and self-fashioning but little else. Absent a grounding in class, this kind of analysis is an updated restatement of the third-way accommodation of the horrors of late stage capitalism. Granted, progressive neoliberals argue that perhaps the whole world should not be mediated by algorithms. This is a valuable point, but it must also be set beside common observations about the behaviour of reformist politics in a capitalist society: when undertaking the amelioration of the more acute harms of capitalism, reformism stops well short of addressing the first causes of those acute harms. The result is a position which can accurately be described as capital accommodation.

To me, capital accommodationism is willing to make peace with the ‘comfortable, smooth, reasonable, democratic unfreedom’ that Marcuse argued ‘prevails in advanced industrial civilization’. Indeed, such an attitude is deemed ‘a token of technical progress’ (Marcuse, 2002, 3). He adds to these remarks by saying, ‘the more rational, productive, technical, and total the repressive administration of society becomes, the more unimaginable the means and ways by which the administered individuals might break their servitude and seize their own liberation’ (2002, 9). Thinking about these issues requires political engagement of the kind automated public spheres seek to curtail, and which progressive neoliberals tend to construe as intellectually compromising. By contrast, I see capital accommodationism in digital scholarship as hegemony in action and a depletion of the imaginative capacities to use the newly acquired Marxian concepts in a way that can advance human flourishing.

On the topic of Marxian concepts there is a point about hegemony worth noting. Geoff Mann insists that the point of the labour theory of value is to identify how value functions to reproduce capital’s hegemony. As the ‘paradigmatic instrument of hegemony’, he writes, ‘value is the means by which the particular interests of the hegemonic historic bloc (capital) are generalized, so they become understood as the general interest’ (Mann, 2016, 10). In effect, I think there is a prima facie case that value’s rationality restructures societies to the imperatives of capitalist accumulation. I am going to explore this tendency to restructure in the remaining portion of this section to argue that it is an important component of data politics.

I think we can see restructuring in how bureaucracies shape the social world. David Graeber has recently written about the process of ‘total bureaucratization’. He refers to the ‘gradual fusion of public and private power into a single entity, rife with rules and regulations whose ultimate purpose is to extract wealth in the form of profits’ (2015, 17). Similar sentiments are commonplace when discussing neoliberal economics, settler colonial logics around dispossession, or capitalism’s infiltration of science, rationality and models of technological innovation. So, if it is not too much of a stretch to label the internal procedures of bureaucracies, public and private alike, as algorithmic – they have protocols that compute actions – then it is not too much of a stretch to label value as a kind of procedure for ‘sorting, ordering, and predicting’.

The most apparent example of this restructuring of rationality is how metrics replace professional judgement (that is, judgement acquired through wisdom, experience and talent) with numerical indicators of comparative performance based upon standardized data. These metrics are then used to attach rewards and penalties to measured performance. This kind of performance assessment courts goal displacement. For example, when performance is judged by a few measures, and the stakes are high, like keeping your job, people focus on satisfying those measures, often at the expense of other, more important organizational goals that are not measured or measurable. As Graeber (2018) notes, people are ‘obliged to spend increasing proportions of their time pretending to quantify the unquantifiable’. Similarly, short-term goals are advanced at the expense of long-term considerations. This kind of problem is endemic to publicly traded corporations like Facebook and Google. Even so, data cannot make decisions. Even when there is deference to AI, it is just a change from one set of complex symbolic inputs to another, still conditioned by a social order with its inequalities of power.
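
A toy model can make the goal-displacement dynamic explicit. The sketch below, in Python, is illustrative rather than empirical: the figures and the rate at which effort drifts are invented assumptions, and the ‘metric’ stands in for any single measured indicator to which rewards are attached.

```python
# Toy model of goal displacement: once rewards attach to a single measured
# proxy, effort migrates toward the proxy and away from unmeasured work, so
# the dashboard improves while the unmeasured organizational goal decays.

def step(effort_on_metric, shift=0.1):
    # Each review cycle, a bit more effort shifts toward what the reward tracks.
    return min(1.0, effort_on_metric + shift)

effort = 0.2  # initial share of effort spent satisfying the measured indicator
for quarter in range(1, 9):
    effort = step(effort)
    measured_score = effort           # what the dashboard rewards
    unmeasured_value = 1.0 - effort   # the goal nobody measures or audits
    print(f"Q{quarter}: measured={measured_score:.2f} unmeasured={unmeasured_value:.2f}")
```

The point of the exercise is not the numbers but the structure: the decline of the unmeasured goal is invisible to the metric that governs behaviour.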

The computational reason put into circulation by digitally automated decision-making systems has dramatic social consequences. While I will elaborate upon the matter in Chapters 5 and 6, we know that platform and technology companies sell facial recognition software to US governmental agencies and otherwise cater to the general digital militarization of the border. For example, US Immigration and Customs Enforcement has a $53 million contract with Palantir, while there are rumours that the Department of Homeland Security seeks to mass deploy facial recognition AI (Dellinger, 2018). Platform and technology companies are aware of the optics of enabling the biometric authoritarian tendencies in the state security apparatus. Microsoft, for instance, sought to distance itself from its government contracts when their commercial relationship became publicly known (Bergen and Bass, 2018). In doing so these companies demonstrate an acute awareness that these processes, whether oppression, exploitation or alienation, are produced and facilitated by data and by the products they make to collect and analyse that data. The underlying action is a kind of devaluation of people that makes dispossession easier to enact. This is perhaps the most important part of the carceral state.

So, the specific efficacy of any one product, or even a range of products, does not really matter. What matters is that biometric tracking provides justifications for state officials to license actions in support of the state’s longstanding racial abjection, national security and border-imperialism projects. In conjunction with other surveillance practices, these technologies compromise basic human rights, like freedom of speech, conscience and mobility. The result of the state’s algorithmic gaze is to render everyone, but especially the most vulnerable, even more exposed to state and market forces. This is because data surveillance firms cater to state security imperatives as well as lobby the state to create markets for their products. By way of illustration, Amazon Rekognition’s capabilities form part of a suite of services that Amazon Web Services already provides to US state security forces, while the company has met with state agencies to pitch its services (Rose, 2019; Edmondson, 2019). These are but the most recent developments of the longstanding tendency of the American state to weaponize communication technologies and deploy them against opponents, whether they be citizens or foreign nationals. For example, Customs and Border Protection claims that US citizens are exempted from biometric tracking at the border. But this is a rhetorical sleight of hand, because exempting a person from scrutiny requires that their identity is validated in the first place, meaning that they are subject to some form of biometric screening.1

Accordingly, it is worth asking to what extent class bias is encoded and enacted in the computational realm, what kinds of social conflicts this risks and what side of the capital–labour antagonism it favours. These kinds of questions are especially pertinent given how people are currently being rendered as particular subjects by automated decisions that use data acquired by breaching established privacy norms to perpetuate social stratifications as well as to intensify intersectional inequalities. And so, it is worth pausing to assess the contributions of machine learning to these displacements, alienations and restructurings.

Computational reason

One can understand capitalism’s contradictions, antagonisms and struggles as computations. To better explain what I mean by this, it is instructive to draw upon Andrew Feenberg’s critical theory of technology. I do so because, while it is commonplace to accept that it is a mistake to reify technology as something separate from society, too often, as with the commodity form, there is a peculiar mystification where, to use Feenberg’s turn of phrase, ‘the illusion of technique became the dominant ideology’ (2010, xx). To this extent, I think there is some benefit to incorporating the ‘social critique of reason’ (Feenberg, 2010, 160) into the methodology used to understand digital societies. Much as we are suspicious of rationality restructuring the social world, so too must we be suspicious of technological innovation, lest we misunderstand the epistemic regimes that are implicit in data politics. It is also because, rather than conceptualizing data politics as a radical break from modernity, I think that the internet contains the patterns of the 20th century. Let me explain.

Recalling the motifs introduced by Tufekci and Burrell, I want to examine the opacity of code. For Feenberg, technical code is the combination of two ontological registers, these being social demand and technical specifications. There are translations and interactions between these discursive and technical elements, meaning these codes are not technically neutral entities. Rather they have a formal bias in favour of hegemonic social values while being constrained by the limits of existing technical operations. Still, given the nature of Gramscian hegemony, wherein prevailing beliefs are not outlined with clear propositions, much of the class struggle within technical codes goes unnoticed. ‘Goals are “coded” in the sense of ranking items as ethically permitted or forbidden, aesthetically better or worse, more or less socially desirable’, Feenberg writes, yielding ‘socially rational activities that appear fair when abstracted from their context but have discriminatory consequences in that context’. The opacity of data politics, then, lies not in the relative inattention to Marxist conceptions of technology, but rather, as Feenberg suggests, in the fact that ‘machine design mirrors back the social functions operative in the prevailing rationality’ (2010, 68, 69, 17). To the extent that one ignores the material base of a society, it is likely that attempts to understand technical code will stall more often than they will succeed.

This brings up another kind of opacity that Burrell overlooked, one related to modern experience. Via Heidegger, Feenberg offers a ‘technological revealing’ of the many illusions that structure this experience. He roughly means that when objects and experiences are useful, the human subject appears as a pure decentred rationality, methodically controlling and planning as ‘thought extended’ to its own world. These modifications relate not to Heidegger’s being, but to the consequences of persisting divisions between classes in the Marxian sense, what we could otherwise call the enduring inequalities between rulers and ruled in technologically mediated institutions and modern societies. The goal here is to repurpose Heidegger’s concept of enframing, which Feenberg uses to convey that all persons, without exception, have become ‘objects of technique, incorporated into the mechanism we have created’ (Feenberg, 2010, 7). Beer tends to agree. He relays Heidegger’s adage that ‘calculation refuses to let anything appear except what is calculable’ (Heidegger, cited by Beer, 2016, 58).

To simplify Heidegger, his proposal is effectively that we adopt new attitudes towards technology, attitudes akin to the way being reveals itself. I am less convinced by this proposal. In fact, my materialist inclinations suggest it is insufficient. Nevertheless, I do agree to the extent that data, as the by-products of being, when simply interpellated as ‘objects of technique’, reflect how technology is radically disconnected from the experiences of the people who use it and live with it. This is the general condition of alienation. To reiterate an earlier point, people generally see the utility of platforms, but these platforms exist in their current form primarily to generate profits. This helps explain why platforms are so alienating to their users, even while the users can see potentials in this technology. If an analysis departs from this standpoint, the chief problem is not only one of legal rights but also of initiative and participation, themselves grounded in the experience and needs of people.

Unfortunately, digitization and ‘data politics’ do not yet appear to be harnessed for actually existing democratic decision making. This is because ‘the modern world develops a technology increasingly alienated from everyday experience. This is an effect of capitalism that restricts control of design to a small dominant class and its technical servants’. So when Feenberg argues that ‘The new masters of technology are not restrained by the lessons of experience and accelerate change to the point where society is in constant turmoil’, he is referring to one of the contradictions in capitalist societies, where technological choices are privately made but affect the public. This operational autonomy positions owners as safe from the consequences of their own actions. As Feenberg writes, ‘the entire development of modern societies is thus marked by the paradigm of unqualified control over the labour process on which capitalist industrialization rests’ (2010, 70). Provided profit seeking remains socially desirable, this continues without significant opposition. Nevertheless, we urgently need to replace this technology as well as the kind of reasoning it provides. The value of the critical theory of technology is that it interprets the world in light of potentialities, insisting that a different world is possible and probable. We must confront the paradigms that hinder action on, and fair consideration of, these potentialities.

Feenberg places emphasis on the impact of contextual aspects of technology on design. For him, technology is not just the rational control of nature. Accordingly, we can conceptualize technology in ways that are not simply limited to or predicated upon efficiency as the explanation for technological development, in turn generating possibilities of change that are usually foreclosed. Yet in Western capitalist societies commercial models of innovation and rationality tend to conflate progress with the multiplication of consumer goods. Nor is technology an extra-political domain. For much of the modern period, good results were celebrated as progress, while side effects such as pollution and the deskilling of industrial work were the price of progress. However, the epistemic focus on precision and control that marks ‘good science’ or ‘good technology’ is not only rather limited but also hinders other kinds of collective experimentation. For Feenberg this contextual role is not determining, nor neutral, but rather constitutive. As such, the analysis of technology does not license us to succumb to the ‘dystopian philosophies of technology’ (Feenberg, 2010, 51). Indeed, like his mentor, Herbert Marcuse, Feenberg insists that we push for ‘technologies of liberation’.

Liberation is not opaque

The recognition that algorithms and AI are becoming a venue for radical politics is gathering momentum, even if it is not always expressed in these terms. To be sure, in a memorable adage Tufekci (2014) reminds us that ‘what happens to #Ferguson affects Ferguson’. This crisp expression demonstrates how digital liberties are civil liberties. Another internet rights activist, the late Aaron Swartz, advocated for ‘the freedom to connect’. The loss of connectivity, he said, would effectively ‘delete’ the US Bill of Rights (Democracy Now, 2013). In light of these sentiments, a suitable analysis of data politics should foreground how democratic opportunities in science and technology have been historically suppressed. These steps must be pursued so that we can better identify when these kinds of actions are occurring. In short, the search for a subject for data politics may miss something if it emphasizes the social complexity and embeddedness of technology, as Tufekci and Burrell do, while minimizing the distinctive emphasis on top-down control that accompanies capitalist-led technical rationalization. While it is unlikely that the digital scholars whose work has been covered in this chapter would argue that technology is autonomous, to differing degrees they neglect the totality of socio-historical-material experience; this neglect reflects an estrangement that forgets that technology, digital varieties included, comes from experience. The key site of investigation is not the technological artefact and its attributes, but rather the social purpose it serves. The core problem for ‘data politics’ is that algorithm-led surplus extraction, primarily for capital accumulation, has been naturalized to such an extent that it becomes camouflaged and so escapes comment or critique. Effectively, code is a mode of material governance that encloses reasoning, thereby limiting radical critique.

The lack of transparency and accountability will take on great importance in the coming years as governments consider whether to implement ‘citizen scores’ based on data produced by sensors and networked computing, a development that will exacerbate inequalities and disparities, thereby paving the way for de-democratization and authoritarianism. Already we see how financial credit scores delimit a person’s life chances. Essentially, these issues pose big questions for digital democratic theorists. Yet, much like the state formation literature they appeal to, which too often views industrialization in a partial manner, Ruppert, Isin and Bigo treat digitalization as separate from ‘the social question’. And so, their conception of digitalization could be philosophically richer. By contrast, a Marxist approach can find a ‘subject’ for ‘data politics’ that is constituted by the stakes and venues, inequalities and rationalizations that stem from technology in society. This is not to suggest that the traditions of inquiry Ruppert, Isin and Bigo support have nothing to offer – of course they do – but that these traditions require supplementation from other, more radically avowed approaches.

Sadly, the progressive neoliberal consensus is ensuring that the algorithms that will dispossess and exploit us are thoroughly ethical and transparent, at least regarding gender, race and capability. Unsurprisingly, class tends to be overlooked in this agenda. This conception of digital society shares a ‘family resemblance’ to the recent promissory narrative that the internet would democratize. From my vantage, the current academic interest in critical data politics is not matched by a commitment to radicalism in the contemporary American political sphere where critiques of both capitalism and imperialism are rare. And so, I lament the narrow ideological conformity in academic analysis which is silent on the central antagonisms in capitalism.

These remarks should not be misconstrued to mean that I advocate for the exclusivity of capital and class in the study of the digital world. But without them one cannot have an adequate understanding of the control and distribution of goods and resources by algorithms that encode the forces of social differentiation. And so, within the broad study of the politics of algorithms, researchers must be wary of pursuing projects that promote uncritical categories of analysis or obscure class antagonisms. Focusing on a critical political economy rather than a moralizing distributional critique can avert analytically weak frameworks. The latter agenda does not help us understand how and why, despite its best efforts, labour often loses.

Finally, it is important to focus upon the sheer contingency of outcomes. I mean here that the platforms we have become so accustomed to could have been otherwise. This contingency requires a ruling ideology to both stabilize and justify this line of investment. The result of this naturalization is, as Brian Wynne notes, ‘complex and usually distributed but highly coordinated modern technologies, [that] once established, lay down both material and imaginative pathways and constraints that themselves effectively delimit what may be seen as possible future developments’ (Wynne in Feenberg, 2010, x). Accordingly, ideology is just as important to capitalism as surplus value generation; ignoring it leaves little traction for understanding regimes of technological innovation. Much of this ideology exists to justify the private property rights regime and otherwise hide the way capital structures relations that ultimately form durable stratifications. Promoting this kind of critique can help us develop the means to exit the hegemony of the value form. Pursuing this kind of investigation is important because barriers have been (and are being) created that thwart our participation in our own societies. These are the topics that will matter in the coming decades. And this is even more reason to practise an adequate critical theory. And to the extent that Ruppert, Isin and Bigo help us achieve that goal, I very much welcome their contribution. It is with this that I invite orthodox digital researchers to join in with ‘the ruthless critique of all that exists’.

1 See Timcke (2017) for a discussion of how the US state uses technology to marginalize political dissidents and subordinate the black community.
