Browse

You are looking at 1 - 10 of 233 items for:

  • Race and Crime

Disinformation has been described as a threat to political discourse and public health. Even if this presumption is questionable, instruments such as criminal law or soft law have been utilised to tackle the phenomenon. Recently, technological solutions aiming to detect and remove false information, among other illicit content, have also been developed. These artificial intelligence (AI) tools have been criticised as incapable of understanding the context in which content is shared on social media, thus causing the removal of posts that are protected by freedom of expression. In this short contribution, however, we argue that further problems arise, mostly in relation to the concepts that developers utilise to programme these systems. The Twitter policy on state-affiliated media labelling is a good example of how social media platforms can use AI to label or otherwise affect accounts on the basis of a questionable definition of disinformation.
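The context problem the abstract describes can be made concrete with a deliberately naive sketch. The snippet below is purely illustrative and is not any platform's actual moderation pipeline; the phrase list, function name, and example posts are invented for the sketch. A matcher that inspects only surface text flags a debunking post exactly as it flags the false claim it quotes.

```python
# Illustrative sketch only: a context-blind matcher, not any real
# platform's moderation system. Phrases and examples are invented.

FLAGGED_PHRASES = {"miracle cure", "5g spreads the virus"}  # hypothetical blocklist

def context_blind_flag(post: str) -> bool:
    """Flag a post if it contains a blocklisted phrase, ignoring whether
    the surrounding text asserts, quotes, or refutes the claim."""
    text = post.lower()
    return any(phrase in text for phrase in FLAGGED_PHRASES)

claim = "This miracle cure kills the virus in days!"
debunk = 'The viral "miracle cure" post is false - here is the evidence.'

print(context_blind_flag(claim))   # True: the false claim is caught
print(context_blind_flag(debunk))  # True: so is the post debunking it
```

Both posts are flagged even though the second is counter-speech of the kind the abstract describes as protected by freedom of expression; the limitation lies in ignoring context, not merely in the quality of the phrase list.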

Restricted access

This paper outlines and applies a framework for analysing how technologies can contribute to social harms at individual, institutional, and societal levels. This framework – which we term the technology-harm relations approach – synthesises the insights of postphenomenology and critical realism to detail how harms can emerge from direct human-technology relations (first-order harm relations), as well as from the results of human-technology relations (second-order harm relations). To apply this framework, we explore how, through first- and second-order harm relations, predictive policing algorithms might magnify the harm generated through conventional law enforcement activity. We explain how first- and second-order harm relations are by-products of a system that currently generates harm through false ideals of objective, neutral, and non-discretionary enforcement, and that aims to promote consistency while at the same time eroding accountability for decisions made through automated processes.

Restricted access

In this paper, we take the management crisis in the Finnish Immigration Service, Migri, as an example to illustrate the ambiguous qualities of automated decision-making in the context of the production and alleviation of social harm. The case lies at the crossroads of political and legal discussions on immigration and artificial intelligence (AI) transformation. As a result of Migri's persistent backlog of cases since the 'refugee crisis' of 2015, numerous asylum seekers remain in a state of bureaucratic limbo. Automating part of the decision-making process offered a potential solution to the harms caused by prolonged processing; however, it was hampered by features of the Finnish constitutional system. The applicants most likely avoided the potential algorithmic harms of prematurely implemented automated systems; however, possible algorithmic solutions to pre-existing analogue harms have also been prevented. Through an analysis of policy and legal documents related to immigration and automation, we show that the disconnect between distinct political priorities leaves a variety of harms unaccounted for and may cause fractures in the Finnish harm reduction regime. Given that the development of algorithmic systems is subject to a constant struggle between contradictory values and expectations placed on these systems in terms of the alleviation of harm(s), we argue that a holistic view of harms and their solutions in digitalised societies may help realise the harm reduction potential of algorithmic systems.

Restricted access

Criminological studies of social harms extensively document intersections of power and the production of harm, revealing how the actions of the powerful in the public and private sectors expose (typically) less powerful groups to harm, often with impunity. While this scholarship provides much-needed insight into the often minimised or dismissed harms of the powerful, attention must also be paid to the agency of the victimised and the outcomes of their active efforts to resist such harms, especially in a digital context where concepts such as 'power' and 'capital' might take on a different meaning. To this end, this paper expands existing criminological scholarship on social harms by providing new insights on how the dynamics of resistance by ordinary citizens, that is, people not generally considered part of the powerful capitalist elite, can nevertheless produce secondary social harms. The paper uses the example of online resistance to the COVID-19 digital tracing 'track and trace' app in England and Wales to unravel how ordinary citizens utilise their agency to resist the perceived harms of powerful actors while, at the same time, producing the secondary social harm of information pollution.

Restricted access

Artificial intelligence (AI) systems for both crime prevention and control have been in use for several decades, although only in recent years have they become the subject of growing criminological attention. Despite its transformative potential for societies, AI in general has long existed in a normative void and has been subject to limited regulation and control. The recent draft of the EU AI Regulation can thus be welcomed as the first comprehensive effort to regulate AI in an attempt to set regional, and potentially global, standards. The approach adopted in the Regulation, however, does not seem to adequately address some of the major concerns surrounding AI when it comes, for instance, to its use in criminal justice arenas. This short intervention discusses how a different approach, focusing on the social harms at stake rather than on technological risks, could help overcome some of the limitations of current regulatory attempts.

Restricted access

Predictive policing sits at the centre of a diachronic paradox: algorithmic prediction is presented as innovative, yet it is selectively applied to the archetypes of conventional criminology. Centring on the Italian context, I outline a critique of predictive policing, proceeding from its embeddedness in the neoliberal restructuring of security provision and the increasingly blurred boundaries between private and public agencies. Rejecting the narrative of technical neutrality and operational smartness, I retrace the interdependence between a selective understanding of security that has paved the way for predictive policing and the impact of automated predictions on the governance of crime control. I argue that the production of social harm under predictive policing follows three main patterns: firstly, the continuation of a tolerable rate of street crime; secondly, a dramatic acceleration of the marginalising and stigmatising potential of criminal targeting; and thirdly, the impairment of democratic accountability through tautological schemes of self-legitimation.
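The 'tautological schemes of self-legitimation' can be illustrated with a toy feedback-loop simulation. This is my own sketch under deliberately simplified assumptions, not a model from the paper: two districts offend at identical true rates, offences are recorded only where patrols are present to observe them, and all numbers are invented.

```python
# Toy simulation of a predictive policing feedback loop. Illustrative
# assumptions only: identical true offending in both districts, and
# offences enter the record only where patrols are present to see them.

OFFENCES = 100            # true offences per period, identical in each district
recorded = [55, 45]       # historical records with a small initial skew

for period in range(8):
    hot = 0 if recorded[0] >= recorded[1] else 1   # the model's "hotspot"
    patrol_share = [0.2, 0.2]
    patrol_share[hot] = 0.8                        # patrols follow the prediction
    for d in (0, 1):
        # True offending never changes; only its visibility to patrols does.
        recorded[d] += int(OFFENCES * patrol_share[d])
    print(f"period {period}: records = {recorded}, flagged district = {hot}")
```

Although both districts offend identically, the record gap widens every period and the model keeps flagging the initially over-recorded district, so the data the system produces appear to confirm its own prediction.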

Restricted access

In this paper we suggest that theoretically and methodologically creative interdisciplinary research can benefit the study of social harms in an algorithmic context. We draw on our research on automated decision-making within public authorities and the ongoing legislative reform on the use of such systems in Finland. The paper suggests combining socio-legal studies with science and technology studies (STS) and highlights an organisational learning perspective. It also points to three challenges for researchers. Firstly, the visions and imaginaries of technological expectations oversimplify the benefits of algorithms. Secondly, the design of automated systems for public authorities has overlooked the social and collective structures of decision-making, and the citizen's perspective is absent. Thirdly, as social harms are unforeseen from the perspective of citizens, we need comprehensive research both on the contexts of those harms and on transformative activities within public organisations.

Restricted access