AI, Digital Technologies and Society Collection

As a taster of our publishing in AI, digital technologies and society, we have put together a collection of free articles, chapters and open access titles. If you are interested in trying out more content from Bristol University Press subject areas or Global Social Challenges collections, ask your librarian to sign up for a free trial.

Scholars and practitioners from across the allied health disciplines have commented on the use of artificial intelligence (AI) as an adjunct for clinical diagnosis and prognosis, but few have attempted to make sense of AI as a communicating and deliberating agent in health praxis. To fill this gap, this chapter builds on the works of Atul Gawande, Eric Topol, Ruha Benjamin, Safiya Noble, Athena du Pré, Barbara Overton and others to offer a rhetorical-deliberative framework for reconceptualizing AI as a fully realized member of the healthcare team. Taking this view of AI provides a pathway for humanizing the machinic attributes of 21st-century technological medicine while simultaneously (re)humanizing practitioners, patients and the overall medical ecology they inhabit and depend upon for cultivating health and well-being. Humanizing health praxis in this way may augment the quality of healthcare delivery and outcomes as we continue our journey with the artificial beings that do and will inhabit our world.

Full Access

This chapter explains the basic components of trust. Trust is a relational construct, formed in anticipation of a benefit and built on truthful communication and autonomous choices; there is, however, no guarantee that trust will be established. This introduction helps readers navigate more complex trust concepts such as public trust.

Open Access

Although there is a significant body of work exploring uneven Sustainable Development Goal (SDG) gains resulting from the digital inequalities experienced by marginalized groups, the implications for religious and ethnic minorities have not been addressed. This chapter outlines five mechanisms which may result in religious and ethnic minorities being less likely to benefit from digitalization: lack of internet access, an increased likelihood of experiencing barriers once online, greater risk of online hate speech, internet shutdowns, and automated discrimination.

Open Access

This introductory chapter outlines some of the core issues in the relations between trust, technology and power. After discussing the different political forms of trust that inform the debates within the book, the focus shifts from what trust is to what it does, and how it is used by power. A performative understanding of trust is set out that frames the discussion in terms of the norms and roles associated with trust and technology, and the ways these can extract legitimacy and exacerbate inequalities. The chapter focuses on key areas of discussion such as data, AI and regulation, sets out the main argument of the text (that the extractive quantification of trust denies the political potential for mistrust) and outlines the structure of the book.

Full Access

This chapter uses the concept of ‘imaginaries’ as an overarching heuristic to analyse how publics are discursively constructed through imaginaries of measurement technologies. We illustrate this empirically through a case study of a technological drama: the launch of, and responses to, a personalization algorithm at the New York Times. We argue, first, that different imaginaries of the public, and of the press as cultivators of those publics, are invoked when attempting to legitimize or delegitimize emergent technologies. Second, by linking our case study to a historicization of increasingly datafied distribution and audience measurement technologies, we explore how publics/audiences are constructed differently as new measurement technologies emerge: from democratic collectives, to segmented consumers and, finally, with the introduction of personalized recommendations, to aggregated datapoints.

Open Access

This chapter exposes how an employer’s use of automated job candidate screening technologies (algorithms and artificial intelligence) creates risks of discrimination based on class and social background. This includes risks of ‘social origin’ discrimination in Australian and South African law. The chapter examines three recruitment tools: (1) contextual recruitment systems (CRS); (2) Hiretech such as Asynchronous Video Interviewing (AVI); and (3) gamification.

Full Access

Most studies of gig work focus on the Global North; very little research examines how platform workers in Africa are responding to the digital economy. Ostensibly based on freedom and self-employment, this new work order is deepening worker insecurity, undermining worker rights and dramatically increasing inequality between a core group of extremely wealthy senior managers/owners and a growing pool of precarious workers. However, our research among food service delivery couriers in Johannesburg, Accra and Nairobi shows that digital technology is generating forms of counter-mobilisation, often into self-organised network associations. By technologically linking platform workers, the gig economy tends to link their bargaining power, contributing to the emergence of hybrid forms of union-like associations (associational power) and new partnerships with traditional unions and NGOs (societal power). The chapter concludes by suggesting that the new digital technology is a double-edged sword: it extends authoritarian managerial control over workers, increasing their insecurity and deepening inequality, while at the same time increasing workers’ workplace bargaining power and providing them with the ability to develop innovative forms of collective solidarity, organisation and strike action.

Full Access

This chapter presents the context and consequences of the representational harms facing LGBTQ individuals and communities through the older and newer digital media logics of the hybrid media ecosystem. The chapter lays out the argument of the book and the evidence to support it: that representational harms – harms that ignore, criticize or blame minority communities and, in doing so, reinforce inequity – are a form of information warfare. This warfare is manifested through the manufacture of an imagined LGBTQ enemy, and the rationalization, normalization and monetization of stigma-driven marginalization based on that fiction. The transformation of cultural production through digital media platforms has restructured the terms by which culture is distributed and paid for, revealing the workings of stigma-derived anti-LGBTQ economic and political purchase through the hybrid media ecosystem. The chapter identifies digiqueer criminology as an overarching framework through which to articulate the relationship between LGBTQ agency and the technostructural forces that constrain or enable LGBTQ expression. In doing so, it identifies the role of meaning making through knowledge production processes, the material value of identity, and its impact on securing LGBTQ relationship, conduct and expression rights. The chapter concludes with a summary of the chapters in the book.

Full Access

This chapter investigates whether OFQUAL’s briefly used assessment ‘algorithm’ systematically produced unequal grade outcomes along racialised categories. Inspired by student-led protests in August 2020 following initial outcomes produced by the algorithm, this chapter evaluates core concerns raised throughout the protests, which positioned the algorithm’s outcomes as socially discriminatory by design.

Full Access

Drawing on ethnographic data, this article analyses employees’ cultural appropriation of AI systems in delivery platforms and manufacturing in Germany. Cultures of technology appropriation in workplaces emerge in a context of domination, so deviant forms of appropriation constitute a form of organisational misbehaviour for which employees risk repercussions. Employees’ criticism of AI systems at their workplaces therefore differs strongly depending on whether management is present. In the cases studied here, the dysfunctionalities and disciplining functions of AI systems were criticised openly in settings where management was absent, while in situations of co-presence this criticism predominantly took the form of subversive humour. Employees systematically ascribed absurd identities to technologies; this functioned as a low-risk form of criticism, provided continual mutual affirmation of a shared critical stance towards specific technologies, and thus established a critical organisational technoculture. Such practices of subversive humour are indicative of the critical lucidity of employees, but also signify a relative inability to influence workplaces and their technological infrastructures. In some cases, however, subversive humour laid the cultural basis for more practical forms of technological misbehaviour, including the manipulation of algorithms and even sabotage.

Full Access