Our article seeks to understand the contours of what has been termed a ‘dual pandemic’ in the UK: twin crises of increasing domestic violence and abuse (DVA) alongside the spread of COVID-19, both of which have disproportionately affected Black and minoritised communities. Our article draws upon the perspectives of 26 practitioners who provide specialist DVA services for Black and minoritised women and girls in England and Wales. Based on interviews with these practitioners, we explore the nature and patterns of the DVA which their Black and minoritised women clients experienced during the pandemic. Our findings highlight the pandemic-related risks and challenges that lead to specific manifestations of DVA within Black and minoritised communities and reveal the practice and policy landscape of the ‘by and for’ DVA sector during the pandemic and beyond.

This article introduces and evaluates ‘scrapbooking’ as a critical pedagogic approach to gender-based violence (GBV). This approach is inspired by the rapid development of conceptual and methodological tools for researching violence and abuse and the need for their translation into transformative teaching. Drawing on a feminist methodology of ‘research conversations’, but original in its development of ‘pedagogic conversations’, this research advocates further empirical attention to GBV teaching and presents its own four ‘lessons learnt’ from experimenting with scrapbooking. Scrapbooking is argued to facilitate not only the translation of GBV research into teaching, but also affective and embodied consciousness-raising and continuum-thinking in both students and tutors.

This paper outlines and applies a framework for analysing how technologies can contribute to social harms at individual, institutional, and societal levels. This framework – which we term the technology-harm relations approach – synthesises the insights of postphenomenology and critical realism to detail how harms can emerge from direct human-technology relations (first-order harm relations), as well as from the results of human-technology relations (second-order harm relations). To apply this framework, we explore how, through first- and second-order harm relations, predictive policing algorithms might magnify harm through conventional law enforcement activity. We explain how first- and second-order harm relations are by-products of a system that currently generates harm through false ideals of objective, neutral, and non-discretionary enforcement, and that aims to promote consistency while at the same time eroding accountability for decisions utilising automated processes.

Research has demonstrated that algorithmic and AI (AAI) technologies produce significant social harms. This short intervention turns the focus from social harm production to social harm reduction and seeks to map which AAI technology features affect harm reduction efforts by AAI system designers, and how.

The intervention argues that six cumulative features are relevant. The technological agency embedded in AAI systems makes conscious harm reduction by design possible but also potentially strips away previous human harm safeguards. Complexity destabilises designers’ capability to predict and control system performance, justify outcomes, and analyse and ensure the legitimacy of system ontologies. Uninterpretability further complicates harm reduction efforts by making analytical tracing of system logics difficult and introducing alien ontologies. Non-linear performance further destabilises outcome prediction capabilities, while indeterminacy and dynamicity present the ultimate challenges for harm reduction. If an AAI system’s performance patterns are indeterminate in nature or prone to change during use due to lifelong learning, designers lose direct control over the system. While these technology features challenge harm reduction, concerted efforts to manage and contain their effects allow designers to engage in it.

Criminological studies of social harms extensively document intersections of power and the production of harm, revealing how the actions of the powerful in the public and private sectors expose (typically) less powerful groups to harm, often with impunity. While this scholarship provides much needed insight into the often minimised or dismissed harms of the powerful, attention must also be paid to the agency of the victimised and the outcomes of their active efforts to resist such harms, especially in a digital context where concepts such as ‘power’ and ‘capital’ might take a different meaning. To this end, this paper expands existing criminological scholarship on social harms by providing new insights on how the dynamics of resistance by ordinary citizens, that is, people not generally considered part of the powerful capitalist elite, can nevertheless produce secondary social harms. The paper uses the example of online resistance to the COVID-19 digital tracing ‘track and trace’ app in England and Wales to unravel how ordinary citizens utilise their agency to resist the perceived harms of powerful actors while, at the same time, producing the secondary social harm of information pollution.

Predictive policing embodies a diachronic paradox between the innovativeness of algorithmic prediction and its selective application to the archetypes of conventional criminology. Centring on the Italian context, I outline a critique of predictive policing, proceeding from its embeddedness in the neoliberal restructuring of security provision and the increasingly blurred boundaries between private and public agencies. Rejecting the narrative of technical neutrality and operational smartness, I retrace the interdependence between a selective understanding of security, which has paved the way for predictive policing, and the impact of automated predictions on the governance of crime control. I argue that the production of social harm under predictive policing follows three main patterns: firstly, the continuation of a tolerable rate of street crime; secondly, a dramatic acceleration in the marginalising and stigmatising potential of criminal targeting; and thirdly, the impairment of democratic accountability through tautological schemes of self-legitimation.

In this paper, we take the management crisis in the Finnish Immigration Service, Migri, as an example to illustrate the ambiguous qualities of automated decision making in the context of the production and alleviation of social harm. The case lies at the crossroads of political and legal discussions on immigration and artificial intelligence (AI) transformation. As a result of the persistent backlog of cases held by Migri for processing since the ‘refugee crisis’ of 2015, numerous asylum seekers remain in a state of bureaucratic limbo. Automating part of the decision-making process offered a potential solution to the harms caused by prolonged processing; however, it was hampered by features of the Finnish constitutional system. The applicants most likely avoided the potential algorithmic harms of prematurely implemented automated systems. However, possible algorithmic solutions to pre-existing analogue harms have also been prevented. Through the analysis of policy and legal documents related to immigration and automation, we show that the disconnect between distinct political priorities leaves a variety of harms unaccounted for and may cause fractures in the Finnish harm reduction regime. Given that the development of algorithmic systems is subject to a constant struggle between contradictory values and expectations placed on these systems in terms of the alleviation of harm(s), we argue that a holistic view of harms and solutions to these harms in digitalised societies may facilitate the harm reduction potential of algorithmic systems.

Disinformation has been described as a threat to political discourse and public health. Even if this presumption is questionable, instruments such as criminal law or soft law have been utilised to tackle this phenomenon. Recently, technological solutions aiming to detect and remove false information, among other illicit content, have also been developed. These artificial intelligence (AI) tools have been criticised for being incapable of understanding the context in which content is shared on social media, thus causing the removal of posts that are protected by freedom of expression. However, in this short contribution, we argue that further problems arise, mostly in relation to the concepts that developers utilise to programme these systems. The Twitter policy on state-affiliated media labelling is a good example of how social media can use AI to affect accounts by relying on a questionable definition of disinformation.
