
This case study discusses the multiple roles of trustification within contact tracing apps during the COVID-19 pandemic. The internal problems of the UK’s design and deployment of a contact tracing app are discussed in relation to trust in governments, companies, research communities and the role of the media. This is brought into perspective by thinking about the global neo-colonial impact of the UK and the apps in general, with examples from specific countries across the Global South. These examples draw on the commonalities and contextual differences that shape the experiences of different marginalized populations. The drive towards quantification and solutionism is critiqued through the assumptions on which it rests, assumptions that conceal and perpetuate inequalities.

This case study confronts the various ‘tech for good’ initiatives that embed solutionist values in humanitarian activities. Drawing on examples of ‘data for good’ in Turkey, ‘AI for good’ at the UN, and various other manifestations of these schemes, the ill-defined nature of ‘good’ and its relation to discursive power are discussed. The metricization of humanitarianism is shown to be dehumanizing in the obscuring of context and lived experience. The application of generic technologies to humanitarian issues is not only extractive in itself, but it also operates as a tool to extract trust in the same technologies and companies for wider applications of technologies against different populations.

This case study focuses on facial recognition technologies. The face is a key part of interpersonal interactions and trust, but when it is quantified as data and assessed by algorithms it entrenches bias and discrimination. The use of facial recognition and emotion recognition technologies to assign social values such as trustworthiness (whether in job interviews or policing) is particularly problematic. The discussion brings in issues of race and gender as part of defining a default quantifiable human as a technologically embedded norm, and outlines technical, psychological and ethical critiques of facial recognition as a legitimizing technology for power and inequality.

This concluding chapter provides an overview of the different relations between trust and technology, particularly the technologization of trust defined as trustification. This is brought back to the twin mechanisms of quantification and discourse that support power over and through technology. The constant displacement of trust across different parts of complex sociotechnical assemblages is discussed in terms of the way false trade-offs are constructed in technology discourse. Drawing together the intertwining threads of the book, a call to action is put forward based on the politicization of technology and the value of collective mistrust in addressing power inequalities.

This chapter looks at the way trustification works within corporate logics of power in technology discourses, beginning with an overview of how corporate priorities such as risk have come to dominate and legitimize specific ways of counting people and society for profit. The links between capital and colonial histories of accounting and managing people are discussed in relation to risk and power. The influence of economic narratives is traced through to contemporary issues such as the platformization of work, the power of data brokers, and algorithmic scoring of people for credit and other opportunities. The dominance of innovation as a legitimizing discourse is linked to the measurement of risk and adoption as proxies for trust, perpetuating economic inequalities in ways that entrench power and avoid scrutiny.

This introductory chapter outlines some of the core issues in the relations between trust, technology and power. After discussing different political forms of trust that inform the debates within the book, the focus shifts from what trust is to what it does, how it is used by power. A performative understanding of trust is set out that frames the discussion in terms of norms and roles associated with trust and technology, and the ways these can extract legitimacy and exacerbate inequalities. The structure of the book is also outlined. The chapter provides a focus on key areas of discussion such as data, AI and regulation, and sets out the main arguments of the text: that the extractive quantification of trust denies the political potential for mistrust.

In this chapter, the focus is on the role of the media, particularly online platforms, in shaping the discourses that support the quantification of trust and the extraction of legitimacy. The sliding of terms and understanding that blurs what any given technology is and what it can do is discussed, as well as the harms it can generate and the inequalities it can perpetuate. Hype is shown to be a performance of functionality, complexity and legitimacy based on these misrepresentations in the media and in the way platforms present themselves to publics. Power over media is power over discourse is power over trust and legitimacy; this chain of proxies is critiqued with perspectives from disability and trans activism that challenge dominant narratives about what technology is and who it is for.

How Technology Discourses Quantify, Extract and Legitimize Inequalities

We are often expected to trust technologies, and how they are used, even if we have good reason not to. There is no room to mistrust.

Exploring relations between trust and mistrust in the context of data, AI and technology at large, this book defines a process of ‘trustification’ used by governments, corporations, researchers and the media to legitimize exploitation and increase inequalities.

Aimed at social scientists, computer scientists and policy makers, the book reveals how trust is operationalized and converted into a metric in order to extract legitimacy from populations and support the furthering of technology to manage society.

This chapter focuses on the role of trust and metricization in academia and global tech research communities. It starts with a discussion of the colonial roots of academic power, and the role of epistemic injustice in defining certain privileged voices as more important than others. Credibility is assessed across academic metrics and discourse to examine the power structures that shape research agendas and funding, and entrench historical biases around knowledge production. Examples discussed include the UK’s REF research assessment system, the uberfication of the university, the role of the lab, and the relations between academia and other forms of power. Discourses of quantification, extraction, objectivity and expertise are confronted in the way they perform barriers to who can shape, do or talk about creating knowledge.

This chapter applies a framework of trustification to state logics of power in technology discourses. The discussion begins with historical precursors in the legitimization of state authority. This authority is then linked to an increasing trend towards measurement as a root of legitimacy, the metricization of proxies for trust in response to a decline in political trust throughout societies. The role of the state is discussed in relation to its own citizens, other states, technical infrastructures, and marginalized or colonized groups. Throughout, the discussion draws on examples from administrative categories, surveillance practices, regulatory strategy and smart cities, among others. The chapter concludes by considering the corporate lobbying influences that shape the drive towards measurement in state logics, the blending of private and state interests in the regulation of technologies.
