Drawing together the threads of the book, this chapter argues that algorithmic thinking can only be understood through an analysis of its tensions. To build on this central argument, it revisits the different forces and tensions explored throughout the book. The chapter then turns to Michel Foucault’s concept of the ‘will to know’, arguing that what we are now seeing is a mutation of this into a desire or will to automate.
This chapter explores the pushing back of the boundaries of the known and the knowable. Taking N. Katherine Hayles’s concept of the ‘cognizer’, the chapter looks at how super cognizers are emerging that act as bridges into an algorithmic new life. The chapter then develops a series of features of these super cognizers and uses them to think about how agency meshes into new forms of knowing. The chapter uses this central concept to think about the tension created by the stretching of the known.
Beginning by thinking about the broader shifts towards algorithmic processes and systems, this chapter reflects on the core issues discussed in the book. In particular, it develops the idea of algorithmic thinking and looks at how this might be contextualized. The chapter introduces the idea of the ‘algorithmic new life’ and how this conception of the changes algorithms will bring is crucial to future developments. The chapter closes by looking at the importance of tensions in understanding algorithms and provides an outline of the two key tensions that structure the book’s content.
Exploring the tensions that are created as different forms of agency mesh, this chapter looks at where the human actor is reintegrated into algorithmic thinking. Using a case study of a large risk-management system, it looks directly at how the boundaries around the acceptability of automation are managed. The chapter argues that notions of overstepping and of too much automation are embedded into understandings of these limits. The chapter looks at how human agency is circumscribed within algorithmic thinking, and how limits and boundaries are managed and breached in the expansion of algorithmic systems.
Algorithmic thinking creates both new knowns and new unknowns. This chapter reflects on the tension generated by unknowability. Drawing on Georges Bataille’s concept of ‘nonknowledge’, the chapter examines the historical development of advancing neural network technologies. The chapter argues that the presence of nonknowledge is now pursued in the advancement of these forms of automation and AI. It closes by reflecting on what the presence of nonknowledge might mean for the development of algorithmic thinking and how we can identify a suspension of knowing that operates in these systems.
Taking case studies of the art market and the smart home, this chapter looks at the sidelining of the human within algorithmic systems. Focusing on the application of blockchain, the chapter looks at the vulnerabilities within systems and how humans are perceived to represent weak points within data systems. The chapter argues that a posthuman security is emerging, in which the human is bypassed in order to produce images of a secure society.
From machine learning and artificial intelligence to blockchain and simpler news-feed filtering, automated systems can transform the social world in ways that are only just starting to be imagined.
Redefining these emergent technologies as the new systems of knowing, pioneering scholar David Beer examines the acute tensions they create and how they are changing what is known and what is knowable. Drawing on cases ranging from the art market and the smart home through to financial tech, AI patents and neural networks, he develops key concepts for understanding the framing, envisioning and implementation of algorithms.
This book will be of interest to anyone who is concerned with the rise of algorithmic thinking and the way it permeates society.
This chapter argues that employers in sectors with a high concentration of migrant workers are most likely to continue to rely on such precarious migrant labour, despite pre- and post-Brexit promises of increased investment in automation in labour-intensive and migrant-dominated sectors of the economy, such as agriculture. Empirically, the argument is supported by an examination of specialist reports and political and media statements, with the Pick for Britain campaign as a case in point because it exemplifies a politically salient friction between the long-standing racialisation of EU migrants and dependency on their labour. By critically engaging with the trope of cheap labour, we show how it co-exists within a discursive reality in which the insufficient deployment of automation technology in the agricultural sector clashes with the significant reliance on precarious and exploitative migrant labour, which is progressively dehumanised by post-Brexit migration policies.
Chapter 4 tells the story of how technological development and the automation of work came to dominate political thinking and policy. It is a story of technological fear, suspicion and inevitability. The chapter surveys and examines a wide range of policy documents and reports on automation, robotics and technological displacement, within a theoretical framework provided by Marx, Polanyi and the operaismo movement. We argue that the competitive relationship between robot workers and human workers, framed by the principles of labour cost, efficiency and productivity, results in a shift: from the integration of workers as a collective in a volatile social and economic environment, to a project of self-realisation that establishes links between performance, knowledge and the ability to remain employable in a competitive automated economy.
The introduction unpacks the trope of stealing jobs, which has become increasingly important in political and public debates and was a key argument in the campaigns leading to Brexit. The chapter introduces key concepts, such as neoliberalism, precarity, and Homo Oeconomicus and xeno Homo Oeconomicus, and maps out the relationships between them. In doing so, the chapter introduces the main argument of the book – that there is a mutually constitutive relationship between discourses of automation and immigration, which legitimises and entrenches a divisive type of neoliberal governmentality – and the importance of this argument in the context of the pre- and post-Brexit British political economy.