Drawing together the threads of the book, this chapter argues that algorithmic thinking can only be understood through an analysis of its tensions. It draws on the different forces and tensions explored throughout the book to build this central argument. The chapter then turns to Michel Foucault’s concept of the ‘will to know’, arguing that what we are now seeing is a mutation of this into a desire or will to automate.
This chapter explores the pushing back of the boundaries of the known and the knowable. Taking N. Katherine Hayles’s concept of the ‘cognizer’, the chapter looks at how super cognizers are emerging that act as bridges into an algorithmic new life. The chapter then develops a series of features of these super cognizers and uses these to think about the tensions created as agency meshes into new forms of knowing. The chapter uses this central concept to think about the tension created by the stretching of the known.
Beginning by thinking about the broader shifts towards algorithmic processes and systems, this chapter reflects on the core issues discussed in the book. In particular, it develops the idea of algorithmic thinking and looks at how this might be contextualized. The chapter introduces the idea of the ‘algorithmic new life’ and how this conception of the changes algorithms will bring is crucial to future developments. The chapter closes by looking at the importance of tensions in understanding algorithms and provides an outline of the two key tensions that structure the book’s content.
Exploring the tensions that are created as different forms of agency mesh, this chapter looks at where the human actor is reintegrated into algorithmic thinking. Using a case study of a large risk-management system, it looks directly at how the boundaries around the acceptability of automation are managed. The chapter argues that notions of overstepping and of too much automation are embedded in understandings of these limits. The chapter looks at how human agency is circumscribed within algorithmic thinking, and how limits and boundaries are managed and breached in the expansion of algorithmic systems.
Algorithmic thinking creates both new knowns and new unknowns. This chapter reflects on the tension generated by unknowability. Drawing on Georges Bataille’s concept of ‘nonknowledge’, the chapter examines the historical development of advancing neural network technologies. The chapter argues that the presence of nonknowledge is now pursued in the advancement of these forms of automation and AI. It closes by reflecting on what the presence of nonknowledge might mean for the development of algorithmic thinking and how we can identify a suspension of knowing that operates in these systems.
Taking case studies of the art market and the smart home, this chapter looks at the sidelining of the human within algorithmic systems. Focusing on the application of blockchain, the chapter looks at the vulnerabilities within systems and how humans are perceived to represent weak points within data systems. The chapter argues that a posthuman security is emerging, in which the human is bypassed in order to produce images of a secure society.
From machine learning and artificial intelligence to blockchain or simpler news-feed filtering, automated systems can transform the social world in ways that are just starting to be imagined.
Redefining these emergent technologies as the new systems of knowing, pioneering scholar David Beer examines the acute tensions they create and how they are changing what is known and what is knowable. Drawing on cases ranging from the art market and the smart home through to financial tech, AI patents and neural networks, he develops key concepts for understanding the framing, envisioning and implementation of algorithms.
This book will be of interest to anyone who is concerned with the rise of algorithmic thinking and the way it permeates society.
Origin stories set the stage for the development of a field of study and are integral to the ways it grows and shifts. Similar to other reclamation projects, fat studies aims to rewrite the history of ‘fat’ by subverting its violent use for surveillance and control, and positioning it as a natural human characteristic. Its origin story is inextricably linked to the activism and scholarship of white and white-passing women, and is often located in gendered expectations of the ‘appropriate’ feminine body. As a result, the racial origins and functionings of fatphobia are erased, creating a normative fat subject that is typically cisgender, female and white, and that is reproduced in much of the research emerging from the field. I, along with other fat activists and scholars, propose a fundamental shift towards an intersectional fat studies, with race as an entry point to analysis, in order to rewrite the field’s history and presence.
In response to COVID-19, many care homes closed to visitors and new ways for carers and residents to stay in touch were tried. This UK study employed an online survey to explore carer experiences of staying in touch from a distance. The research highlighted: the importance of ongoing connections (through visits and remotely); diverse approaches to maintaining contact; and concerns about safeguarding and well-being. Findings underscore the importance of developing personalised approaches to staying in touch during future care home closures and for those who require an ongoing approach to remote contact due to distance, illness or additional caring responsibilities.