Abstract

In his article ‘Selective incapacitation revisited’, Thomas Mathiesen (1998) addresses the dominance of a technical and scientific language associated with the risk prediction culture that originated from a criminological research community where risk is considered objective and measurable. In this article I discuss how practitioners perceive these aspects of risk prediction. For policymakers, targeting means using thresholds to target groups of offenders, but for frontline officers, it means targeting an individual. The officer must set an individualised assessment against the aggregated assessments from risk predictions. I will analyse how this has manifested in three Norwegian risk assessment projects: the offender assessment system, risk assessment of violent extremism, and early intervention to prevent youth crime. This article contributes to the understanding of how the political aspects of risk impact practitioners, and of how the concept of risk as an artifact is understood by them. I will first present the context of selective incapacitation and the history of research in this field. I will then contextualise the different understandings of risk within policy and practice. The main section is an analysis of the three cases. I end by discussing how acknowledgement of the political aspects of risk can promote sensitivity around the use of risk assessment tools.

The large number of books, articles, and reports of prediction often gives the impression that we are confronted by a kind of scientific, theoretical, abstract, unpolitical interest in the question of how to make prediction more accurate. If we make clear to ourselves the goal of prediction actually is incapacitation, the political function of prediction activity becomes clearer. (Mathiesen, 1998: 455)

Thomas Mathiesen (1998) introduces his article ‘Selective incapacitation revisited’ with a call for risk prediction to be viewed as an activity with a political function. He shows how risk prediction has dominated the research culture since the 1900s and is critical of the way research legitimises incapacitation. He argues that the goal of the sophisticated calculation behind selective incapacitation is twofold: to legitimise indefinite sentences for those classified as violent or dangerous, and to justify the use of imprisonment. His article therefore addresses the dominance of the technical and scientific language associated with the risk prediction culture, which originated in a criminological research community where risk is considered something objective and measurable.

This article is part of a special issue in memory of Thomas Mathiesen’s work. I will not discuss the possibilities of accuracy, or the research culture Mathiesen describes, but rather how practitioners perceive risk predictions. I will provide a framework for a specific type of prediction, where risk assessment compares people within a group and their inter-individual variation. These risk assessment tools should be distinguished from causal research designs: the former have a much lower degree of internal validity than the latter and are based on uncertain data, such as antisocial behaviour registered in police databases.1 What further complicates the use of this type of risk assessment tool in practice is that, while for policymakers targeting means targeting groups of offenders based on thresholds, for frontline officers it means targeting an individual. The frontline officer must set an individualised assessment against the aggregated assessments from risk predictions (Kemshall, 2003: 100). Inspired by Mathiesen’s view of the risk prediction culture in the research community, I will take a closer look at how the concept of risk is currently handled by decision makers and practitioners in the Norwegian criminal justice system. I will analyse how this has been manifested in three Norwegian risk assessment projects: the offender assessment system, the risk assessment of violent extremism, and early intervention to prevent youth crime. This article is a contribution to the understanding of how the political aspects of risk impact practitioners. I will address the inherent challenges related to prediction accuracy by examining practitioners who put risk research into practice. How is the concept of risk as an artifact understood and handled by these decision makers?

I will first present the context of selective incapacitation and the long tradition of research in this field. I will then contextualise different understandings of risk within policy and practice, highlighting the connections between selective incapacitation and practitioners’ perceptions. The main section is an analysis of the three cases, and I end by discussing how acknowledgement of the political aspects of risk can promote sensitivity around the use of risk assessment tools and a more nuanced discussion.

Selective incapacitation and the research culture

Risk products are used to measure and manage risk in areas ranging from medicine and health to finance, law, business and industry. The field of risk analysis comprises ‘hard’ subjects such as mathematics, biostatistics, toxicology and engineering, and ‘softer’ subjects such as law and justice, psychology, sociology, criminology and economics. The technical aspects of risk prediction and predictive instruments have always been important to decision making in the context of criminal justice. In the penal field, the quest for statistically valid and predictively useful risk factors for recidivism and parole violation became important at the beginning of the 20th century. Parole predictors were the first real attempt to provide empirically grounded and statistically valid tools (Burgess, 1928 in Harcourt, 2007). In the 1970s, researchers set about developing predictions for a range of criteria (Farrington and Tarling, 1985). It is this research culture that Mathiesen found fault with. The research community and politicians aspired to learn, through sophisticated methodologies, whether a convicted person was likely to reoffend, escape from prison, be rehabilitated in prison, or be eligible for parole. The idea was that detention budgets should be spent on those identified as dangerous to society, and on identifying those with the necessary motivation for rehabilitation programmes.

As Mathiesen (1998) points out, predictive methods were used to make criminal justice more manageable through the selective incapacitation of dangerous individuals. The proponents of incapacitation wanted to use forecasts to calculate how much less crime there would be if people who had been convicted of serious crimes, or who had reoffended a number of times, were imprisoned. This was termed collective incapacitation, with no prediction of which members of the group were high-risk. Mathiesen describes some collective incapacitation experiments that led to a dramatic increase in the prison population in the United States. They were based on the unethical principle of punishing people for crimes they might commit in the future, not crimes they had committed in the past (Mathiesen, 1998: 457). Various risk-prediction tools and techniques were subsequently introduced to improve accuracy. One of the earliest initiatives was the famous Rand study, led by Greenwood and Abrahamse (1982), which developed tools for selective incapacitation based on broad categories of historic and static risk indicators for classifying dangerousness: 1) previously punished for the same offence; 2) imprisoned for more than 50% of the preceding two years; 3) conviction before the age of 16; 4) having served time in a juvenile institution; 5) drug use in the last two years; 6) drug use as a young person; 7) unemployed for more than 50% of the previous two years. Mathiesen argues that, although researchers claim to recognise variations in the predictive power of these indicators, the variations are all minor or marginal, and all the indicators have low accuracy in predicting dangerousness:

The more dangerous the behaviour, the more difficult it is to predict. What the studies, taken as a totality, actually show very clearly is that you have to detain a much larger number of people than those who are actually dangerous in order to reach the dangerous. (Mathiesen, 1998: 461)

This is also well known in the ‘psy’ disciplines: dangerousness has ‘poor predictive power’ (Kemshall, 2003: 52). Although this has created tension between the scientific episteme and risk prediction, it has also spurred further efforts to develop sophisticated risk indicators in the research field. In his article, Mathiesen (1998) questions to what degree this research activity has any utility, given its lack of accuracy.
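
Mathiesen’s argument here is, in effect, the base-rate problem. A purely illustrative calculation, using hypothetical numbers rather than figures from any study cited in this article, shows how quickly false positives swamp true positives when the predicted behaviour is rare:

```python
# Hypothetical illustration of the base-rate problem behind Mathiesen's claim.
# None of these numbers come from the studies discussed in this article.

population = 10_000   # people screened
base_rate = 0.02      # share who would actually commit serious violence
sensitivity = 0.80    # share of the truly dangerous the tool flags
specificity = 0.80    # share of the non-dangerous the tool correctly clears

dangerous = population * base_rate                               # 200 people
true_positives = dangerous * sensitivity                         # 160 flagged correctly
false_positives = (population - dangerous) * (1 - specificity)   # 1,960 flagged wrongly

flagged = true_positives + false_positives
print(f"Flagged as dangerous: {flagged:.0f}")                         # 2,120
print(f"Truly dangerous among the flagged: {true_positives / flagged:.1%}")  # ~7.5%
# To 'reach' 160 of the 200 dangerous people, 2,120 must be detained.
```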

Risk prediction cultures in policy and practice

The background to the introduction of risk prediction tools was a critique of the clinical and diagnostic models of the 1970s (Hannah-Moffat, 2013). Mathiesen (1998) says that, in the penal realm, this was also driven by the ‘what works’ movement, which developed from criticism of the rehabilitation paradigm, expressed in Martinson’s (1974) slogan ‘nothing works’ as regards recidivism. The clinical approach has a long tradition and is practised by healthcare professionals such as psychologists, doctors and psychiatrists. Using their own discretion, practitioners make assessments of individual clients based on information found in medical records and interviews, often in interdisciplinary teams. The advantage of clinical methods and discretionary assessments is that they can provide a better understanding of the individual’s developmental course and of predisposing factors related to risk. The goal is to implement tailored, accurate measures, and to minimise the risk behaviour. Researchers have, however, pointed out that clinical assessments can be inaccurate and too subjective (Hannah-Moffat, 2004). Clinical assessments can also be prejudiced and biased by the professional’s age, social class and experience, which affect interpretation skills, attitudes and perceptions. Demands came to be made for therapists’ assessments to be more objective, and risk assessment tools based on probability and consequence statistics were developed (Hannah-Moffat, 2004). The actuarial methods Mathiesen points to emerged in the 1980s in response to criticism of the deficiencies of practitioners’ assessments.

These statistical tools were developed by researchers at a time when risk assessment technologies were becoming more widespread among such practitioners as social workers, child welfare professionals, prison officers and the police. This is what Feeley and Simon (1992) term ‘actuarial justice’, where essentially statistical techniques from insurance and risk management became part of the penal system’s approach to assessing the risk of offending and recidivism. Based on statistical models, the actuarial methods were designed to provide a basis for determining which factors predict the recurrence of negative behaviour, such as recidivism or the repeated use of violence. Such mapping is often used to enable services to use resources more efficiently. Selective incapacitation, mentioned above, is a good illustration of this: indicators used to predict for it include previous convictions, age, number of prison sentences, types of offence, drug use and gender. Actuarial approaches are therefore based on information about risk at group level by applying static historical data.

However, it became clear that, even if a person belongs to a high-risk group, it is not possible to make reliable predictions at the individual level. There is always uncertainty in relation to predictions, as Mathiesen (1998) points out, and rare events are more difficult to predict than common ones – the ‘black swan’ phenomenon (Taleb, 2007). Predictions are based on historical information, and if records are not updated, changes resulting from, for example, the implementation of measures are not taken into account. Actuarial risk assessments have therefore been criticised for being too static for practitioners assessing probability at the individual level, and what are known as structured mapping tools have emerged that combine risk assessment tools with professional judgement: so-called third-generation risk-assessment tools (Andrews and Bonta, 1994; see also Berkman Klein Center, 2024). By incorporating human judgement into the work, the method is regarded as much more dynamic and flexible: one that can be adapted to changes in the individual. For example, structured mapping tools, together with professional judgement, are used to assess the risk of violence (Hannah-Moffat, 2004; 2013).
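
How the combination of statistical tool and professional judgement might work can be sketched schematically. The following is a minimal illustration on my own assumptions, not the design of any named tool: the actuarial band is a starting point, and the practitioner may revise it, provided the revision is recorded and justified:

```python
# A minimal sketch (my own assumptions, not any named tool) of the
# third-generation idea: structured professional judgement can revise
# the actuarial banding, but the revision must be recorded and justified.

from dataclasses import dataclass

@dataclass
class Assessment:
    actuarial_band: str      # banding produced by the statistical tool
    practitioner_band: str   # banding the practitioner settles on
    rationale: str = ""      # recorded grounds for any revision

def final_band(a: Assessment) -> str:
    """Professional judgement decides, but departures must be justified."""
    if a.practitioner_band != a.actuarial_band and not a.rationale:
        raise ValueError("revising the actuarial band requires a rationale")
    return a.practitioner_band

a = Assessment("high", "medium",
               rationale="offence was situational; stable housing and employment")
print(final_band(a))  # medium
```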

Although risk assessments have become increasingly tailored, the issue is how they should be deployed in practice. Kemshall argues that the third generation of risk-assessment tools is still characterised by a probability discourse: ‘While pure actuarialism has been superseded by combined tools, the approach is still underpinned by the use of aggregates derived from meta-analysis and probabilistic thinking’ (Kemshall, 2003: 79). Previous studies also demonstrate that practitioners in the police and correctional services are opposed to perceiving risk assessment as an objective process with no element of discretion (Kemshall, 1996; 1998; 2003; Robinson, 2003; Gundhus, 2005). However, although practitioners have reservations about the objectivity of prediction, it has also been perceived as helpful in guiding experienced practitioners in pre-trial risk assessments (Terranova et al, 2020). Research suggests that risk assessments may have some utility in targeting treatment interventions. There is no evidence that they can predict future crime, but they have had some success in targeting offenders for intervention programmes in prison and probation (Kemshall, 2003: 71–2). This research shows that disagreements about the tools between, for example, practitioners and those who create strategies and risk tools are not primarily about the validity or refinement of the instrument. The disagreement is about different understandings of risk, the extent to which risk can be measured, and the interpretation and use of the calculations. Studies show that, for practitioners, risk is dependent on several factors, which are highly contextualised and situation-specific (Kemshall, 1998; 2003).

This article is not about what risk elements can contribute to improving predictions, but about how practitioners handle the risk indicators that have been selected, and the probability scores formed on their basis. Understanding risk as situation-specific and embedded belongs to a social-constructivist discourse on risk. It emerged from the work of Mary Douglas (1986; 1992), who did not think that risk should be seen as an artifact. This view is concerned with how risk is always embedded in an institution and negotiated by actors, making it impossible to talk about risk as something objective; it attends to how we experience the world around us and what we value. Lupton (1999) argues that perceptions of risk are anchored differently among employees and managers in risk-focused organisations. Risk-assessment tools lack an understanding of how institutional practices affect the assessments on which they are based: they tend to omit interpretation and to disregard the influence of the social context. Despite being described in technical-scientific language, risk factors ‘are by nature imprecise and difficult to target, and many risk factors are not amenable to minimisation’ (O’Malley, 2001). Drawing on these insights, I will look at how the concept of risk and assessments of it are made use of in specific cases.

The cases were selected to shed light on different aspects of crime and recidivism prevention. The analysis of offender risk assessments draws on previous research evaluations (Hansen et al, 2014). The analysis of risk assessments aiming to prevent extremism, violence and terror also builds on an evaluation conducted by Lid and Christensen (2023). The third case draws on original empirical research consisting of 14 semi-structured interviews with 18 practitioners, carried out by Pernille Erichsen Skjevrak and me. The interviews were conducted between 2021 and 2023. In addition, we conducted 98 hours’ participatory observation of the development of a risk-assessment tool and I observed the development of a report. Seven of the interviews were with police officers who directly participated in the project. In addition to them, we interviewed lawyers, and conducted a group interview with five bureaucrats from the Norwegian Police Directorate about their views on these new approaches to youth crime prevention and risk assessment. The empirical data is theorised by applying risk assessment as a sensitising concept. Such concepts encourage an exploratory and open-ended approach, allowing researchers to uncover the diverse forms and perspectives that a concept can assume in different contexts (Bowen, 2006). I have used it in this way, to navigate complexity without imposing predefined measures.

Offender assessment systems

Risk/need assessment tools such as OASys came with the emergence of evidence-based treatment and are designed to sort offenders into cognitive programmes (Andersson, 2004). According to the logic of the tool, structured mapping reveals the individual’s criminogenic needs, which are also the risk predictors and which, if intervened upon, can prevent criminal behaviour. As Kemshall formulates it: ‘The tools’ major contribution is in the area of criminogenic needs assessment and the targeting of offenders for accredited programs of intervention in both prison and probation’ (Kemshall, 2003: 71). Treatment must be matched to the ‘needs’ of the offender linked to the criminal behaviour, and only certain needs legitimise intervention. Needs are therefore reframed in the language of risk, with low risk matched to low-intensity services and high risk to more intensive treatment. This also means that having a need makes you a risk (Kemshall, 2003: 71; Hannah-Moffat, 2004). The risk factors amenable to change are those targeting behaviours and thinking patterns, and interventions have largely consisted of cognitive-behavioural programmes.
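
The matching logic described above can be sketched as a simple mapping from assessed level to service intensity. The bands and programme labels below are hypothetical, for illustration only, and are not drawn from OASys or any other named tool:

```python
# Hypothetical sketch of the risk/need matching principle described above:
# intervention intensity follows the assessed level. Bands and labels are
# illustrative, not taken from OASys or any other named tool.

def match_intervention(risk_need_level: str) -> str:
    return {
        "low": "low-intensity service, minimal intervention",
        "medium": "standard cognitive-behavioural programme",
        "high": "intensive cognitive-behavioural programme",
    }[risk_need_level]

print(match_intervention("low"))   # low-intensity service, minimal intervention
print(match_intervention("high"))  # intensive cognitive-behavioural programme
```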

In the original Canadian version, OASys matches the mapping to suitable interventions, mainly cognitive and behavioural programmes scaled to the assessed risk level. The risk assessment tool is based on the theory that the reasons for crime are psychological, and it lacks consideration of the influence of the social context of political processes and institutional frameworks. The advantage for policymakers is that mapping and matching involve the introduction of apparently objective measuring instruments. The weakness is that the tools are presented as unaffected by political processes and moral judgements, which gives them an aura of scientific objectivity, as pointed out in Mathiesen’s (1998) critique of selective incapacitation. Risk assessment tools are thus based on the belief that risk can be measured as something natural and objective. However, the risk that is measured exists only in the artificial microworld created by the risk analyst.

OASys was originally designed to identify both dangerousness and needs (Andrews and Bonta, 1994). A major point here is that, in Norway, it is only used to identify needs in order to prevent crime (Hansen et al, 2014; Ugelvik, 2022). This is in line with the social crime-prevention approach and the welfare state logic of Norwegian criminal justice, which generally aims for inclusion rather than exclusion (Egge and Gundhus, 2012). The Norwegian Correctional Service piloted OASys in 2004 to map various criminogenic needs related to offender recidivism (OASys, 2004; Giertsen, 2006). After the pilot, the concept of risk was taken out of the Norwegian version of the assessment tool (FOR, 2011), which was reframed as a ‘need and resource mapping assessment’ (BRIK). This was because of the negative consequences of talking about offenders as risks. Although the needs in question relate to risk factors that contribute to crime – so-called criminogenic risk factors – it is a legal requirement that the tool should only map needs and resources. Talking about need rather than risk also made the assessment tools more acceptable to practitioners (Hansen et al, 2014). Did it, however, change the role of the instrument? It still connects interventions to criminogenic needs. Interviewees in the evaluation found it limiting that the term ‘need’ was so narrowly defined and related only to criminal behaviour. The evaluation in the correctional-service field, covering both prison and probation, makes clear that the use of the hybrid term ‘criminogenic needs’ also has disadvantages for treatment. First, the approach became even more individualised, rather than making possible the group-based interventions in prison that, in the practitioners’ view, had worked better (Hansen et al, 2014). Second, several of the interviewees, both prison and probation officers and offenders, said that it was a major challenge to use the standardised form to identify individual criminogenic needs: it simply did not identify the person’s needs.

In the evaluation, it is claimed that the advantage of BRIK is that it connects several administrative bodies in follow-up and reintegration work (Hansen et al, 2014). BRIK is based on research on the effects of serving a sentence, which shows that reintegration into society is just as important as preventive measures taken in prison. The reintegration aim of the tool also differs from the interventions in the Canadian version (Andrews and Bonta, 1994). This means that the actual interventions used go beyond the narrow behavioural and cognitive measures that are part of the original OASys risk-assessment tool. In the Norwegian version, follow-up measures are applied to create networks and support reintegration, by means, for example, of work and education.

Risk assessments and violent extremism

The prevention of violent extremism is an area where drawing conclusions from group to individual level is particularly problematic. Research on predicting dangerousness and the development of tools for violence prediction derives predominantly from the mental health field and reflects psychiatric attempts to predict dangerousness accurately. Mathiesen (1998) claims that risk-prediction tools are not the value-free, objective instruments their technical representation suggests, but inevitably reflect a series of policy and value decisions, including the choice of variables and the relative weight given to false positives and negatives. Tools have largely been generated from statistics on institutional groups, either in prison or in mental hospitals. This has led to resistance from practitioners, as Kemshall observed:

Traditionally violence prediction has been plagued by unacceptable levels of unreliability (Monahan, 1981). Practitioners have been resistant to prediction on the grounds of both possible litigation and ethics (Walker, 1996). (Kemshall, 2003: 75)

In Norway, since 2014, when the Government presented its action plan against radicalisation and violent extremism, there have been widespread efforts in schools, health and child welfare, the municipalities, the correctional services, the police and the Police Security Service (Ministry of Justice and Public Security, 2020). Various stakeholders, from prevention and outreach staff in the municipalities to the police and the Police Security Service, carry out assessments based on reports or concerns about individuals, and they experience a need for tools that can indicate the level of risk (Lid and Christensen, 2023). Based on information about typical cases, previous experience, or other information, they have to decide what to do with a person of concern. Should they report the person to the police? Should the police initiate supervision measures?

Lid and Christensen’s (2023) evaluation of risk assessment tools used to prevent violent extremism in the Scandinavian countries shows that the division of labour between the Police Security Service and other areas of police responsibility is unclear: the two partly overlap, and there are no clear guidelines for the services’ responsibilities in preventive work, especially in Norway. The division of duties appears to be determined by the degree of threat. The police’s responsibility is to prevent people they believe may develop an intention and capacity to commit acts of extremist violence from actually doing so. The police assessment of intention and capacity to commit an act of extremist political violence is seen as particularly complex by practitioners. This work is done at the stage before the Police Security Service’s risk assessment, but after the initial consideration of whether there are grounds for concern about radicalisation. It is therefore an assessment carried out before someone has been involved in terrorism-related activities, that is, in a pre-crime phase, but it may also involve risk assessment in connection with reintegration, for example of travellers returning from Syria.

Based on interviews and previous research, Lid and Christensen (2023) argue, in agreement with the research on dangerousness discussed above, that risk assessment has clear limitations as a tool for predicting future violent extremism. It is unclear how a previous history of violence and protective factors should be assessed. Research shows that there are many pathways to violent extremism (Lid and Christensen, 2023: 22). Both internationally and nationally, findings show that there are strong links between crime and terrorism, yet these trajectories are almost impossible to predict in advance. They argue, however, that the purpose of the risk-assessment tool should not be to make predictions, but to make it easier to prioritise among what have been judged to be worrying signals and cases. The type of information that is needed is largely determined by when early intervention is initiated and by how much data is available about the concern. What information is available depends on prior monitoring, reports from teachers, and assessment of tip-offs from interdisciplinary groups where anonymised cases have been discussed.

Lid and Christensen (2023) show that Norwegian practices in this field vary greatly between police districts and, in most cases, consist of unstructured risk assessments – or the use of the districts’ own forms. Norwegian interviewees said that completely unstructured prioritisation of concerns has its challenges. The dominant narrative about the causes of radicalisation and the potential for violent extremism in Scandinavia concentrates on mental disorders and diagnoses, and this makes the situation all the more complex. It has created a need for information from health authorities before prevention work can be done. Only in Denmark is this type of information shared among practitioners. In the report, it is somewhat unclear what is gained from sharing health data at the ‘info houses’ in Denmark, except for helping participants feel more confident when ‘everything has been assessed’. Nevertheless, the evaluation report shows that the absence of professional guidelines for identifying risk has meant that police officers create profiles without having the necessary professional expertise (Lid and Christensen, 2023: 19, 65). One conclusion of the evaluation is that it is important to raise the individual practitioner’s awareness of when they are moving from professional knowledge to personal assumptions about the phenomenon, of the basis for their interpretation, and of the information the assessment rests on. In principle, risk profiles should be constructed by qualified personnel, but in practice this does not happen.

Even the structured assessments of the third-generation tools do not escape subjectivity in their interpretation of risk (Lid and Christensen, 2023: 23). Research shows that structured guidelines can lead to uncertain knowledge being used in an unreflective way in a busy workday (Briggs, 2013; Molander, 2016). The purpose for which information and assessments are shared is also important. Lid and Christensen (2023: 43) say:

Knudsen and Stormoen (2020) point out that actors in the health and care services may find it ‘particularly problematic’ to participate in a risk-assessment process driven by safety rather than treatment or rehabilitation considerations. This indicates that the purpose of the assessments may be important for whether the health and welfare services perceive it as legitimate to participate in the assessments. (Author’s translation)

The purpose of the risk assessments should therefore be clear, and distinct from the purpose of selective incapacitation. The practitioners’ ethical considerations are thus consonant with Mathiesen’s (1998) concerns about the ethical issues connected with selective incapacitation (Knudsen, 2018; Knudsen and Stormoen, 2020). The elusive and imprecise concept of dangerousness can nevertheless trigger urgent efforts. What is most interesting in this case is that the notion of risk is perceived as too controversial to use. For example, the Danish ‘infohus’ (info house) does not call what it produces ‘risk assessment’, because this term is associated with the work of the police intelligence services (Lid and Christensen, 2023: 29). A clear distinction is therefore made between risk/needs assessments for support or help and security assessments, in the same way as in the BRIK example above. The police and the security service may also decide that a concern about high risk should not be passed on to the info house but handled by them alone, as it should have nothing to do with support and assistance.

Early intervention for children and young people

The third case is a research project that Pernille Erichsen Skjevrak and I carried out in the Oslo police district, on the use of risk-assessment tools to identify young people who are the ‘most suitable candidates’ for crime prevention. Drawing on 14 interviews with 18 people, one of the topics we examine is how risk is conceptualised by the various practitioners involved. How are crime preventers in the police and the intelligence analytics unit actually using risk-assessment tools? How much do risk-assessment technologies increase the intrusive scrutiny of young people?

The background to this is that, in the Norwegian police, dedicated units are striving to prevent youth crime and delinquency. To identify young people early in their potential criminal career, preventers can request a report from the intelligence unit to identify individuals at risk who have not yet come to the attention of the police. Intelligence analysts use software to construct a profile of such young people and their associates. However, the data used by the intelligence unit must be available in police databases and registered by police officers, that is, beat officers or crime preventers in the local community. The (street-level) police gaze is therefore a key element in being selected as a candidate for attention. The final report is also shared with patrols, which increases the chances of intrusive and stigmatising control by them.

The intelligence analyst looks for young people registered as associated with negative incidents, and for connections between unknown youths and those involved in several crime incidents. The extracts are first imported into Excel to be structured and categorised – what the analyst calls cleaning the data to prepare it for analysis. The data is then imported into Analyst’s Notebook (ANB), a software tool used to compile, process and visualise information based on entity-link-property (ELP) methodology. An entity can be a person, a criminal case, an event, or a piece of information. The links are the roles the person has in the criminal case, event, or piece of information. Once the data has been collected and visualised in ANB, the analyst goes through the clusters and zooms in on individuals to see details of the relationships between entities and links. If it is decided that an individual should continue to be viewed as a candidate, further searches in the police systems are made to see if there is more information available, or if there are reasons why the individual should not be further scrutinised. The person might already be under scrutiny, for example, or might after all not belong to this geographical unit. If no such information is uncovered, the analyst goes on to search the registers for what have been selected as eight risk indicators: parents who do not live together, signs of drug use in the home, indications of mental health problems in the home, family members involved in criminal cases (and what kind), association with other people linked to crime, use of alcohol, use of other drugs, and being involved in violence or abuse cases in the home. There are ‘no’ and ‘yes’ tick boxes for the indicators: each ‘yes’ scores one point, so the possible top score is eight. A high score is taken to indicate an increased risk of criminal behaviour and is thus the predictive element of the intelligence analysis.
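
The final scoring step is simple enough to reproduce schematically. The sketch below uses the eight indicators listed above; the field names are my own paraphrase, and the code is a reconstruction of the described logic, not the unit’s actual software:

```python
# Reconstruction of the additive scoring step described in the text.
# The eight indicators are those listed above; the names are paraphrased,
# and this is not the intelligence unit's actual software.

INDICATORS = [
    "parents_do_not_live_together",
    "signs_of_drug_use_in_the_home",
    "mental_health_problems_in_the_home",
    "family_members_in_criminal_cases",
    "association_with_people_linked_to_crime",
    "use_of_alcohol",
    "use_of_other_drugs",
    "violence_or_abuse_cases_in_the_home",
]

def risk_score(ticked: dict) -> int:
    """Each 'yes' tick scores one point; the possible top score is eight."""
    return sum(1 for name in INDICATORS if ticked.get(name, False))

candidate = {"use_of_alcohol": True, "use_of_other_drugs": True}
print(risk_score(candidate))  # 2 of a possible 8
```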

The use of risk indicators for classifying young people was seen as problematic by researchers and teachers when the project was presented at the Norwegian Police University College in 2019. Internally it was also criticised by police crime preventers for drawing conclusions from limited information. According to the internal evaluation and our observations, this immediately led to less weight being given to the risk factors used.

Our findings show that the intelligence analyst’s personal consideration of risk is more important in decision making than the direct application of quantified risk scores produced by the system. This is in agreement with Briggs’ (2013) study of practitioners in Youth Offending Teams (YOTs), who are required to use the Asset standardised risk-assessment tool to predict the risk of reoffending by children and young people. These practitioners also found it difficult to apply the risk scores, and Briggs mentions how ‘practitioners go beyond narrow conceptions of risk and, by going deeper, embrace the complexities of children’s and young people’s biographies and social and material circumstances’ (Briggs, 2013: 24). This approach, which also encompasses broader structural issues and incorporates social science theories and perspectives, appears to be more constructive and useful for understanding future criminal careers than narrower individual characteristics considered in isolation from structural explanations of crime. This logic is also more consonant with social crime-prevention strategies in Norway, which aim to foster the social inclusion, rather than exclusion, of young people. The analysts and preventers exchange views on who should be selected for attention. The use of risk technologies turns out to be a predominantly manual process, strongly influenced by the human police gaze. The introduction of digital tools into preventive police work has led to a surprising increase in communication and discussion between police officers and analysts about the selection of candidates. They are therefore more likely to co-produce the output, working it out together in stages over several meetings.

As with the example of criminogenic needs considered above, the risk factors are concerned only with previous negative incidents, unlike the preventers’ view of the person. The preventers apply a more constructivist notion of risk. On the one hand, intelligence analysts are more interested in recorded incidents in the police registers than in social background. One of them says, ‘The most important thing is what a person has done, not the risk indicators in their family or lifestyle’. On the other hand, analysts and preventers agree that negative incidents alone do not justify intervention at an early stage. This sparked discussion about how young people’s resilience has been overlooked. The risk analysis products, the preventers said, gave a one-dimensional portrait of young people, and they spoke of how the emphasis on looking for risks often led youngsters to be seen as threatening and dangerous.

The police took a cautious view of their own data and were more aware of the need to be careful about how it was used, and this also affected how they worked. They used different language and had different ideas; they made an effort to avoid labelling and stigmatising young people. Of specific interest here is the fact that the notion of risk became problematic and ethically dubious. This was possibly due to their taking a constructivist view of risk, rather than treating it as an objective artifact. They were critical of the technical and instrumental aspects of the language of risk and the terms used in it. Making a report will have the effect of widening the net, since more young people are ‘discovered’ in the database and brought under the police gaze. The knowledge base, however, was no longer seen as solid enough to initiate police responses. The digital prism makes young people visible in the police registers (see Flyverbom, 2019), but what this information is used for is important. The police gaze has in fact turned out to be more concerned and reflective than suspicious.

The changes in organisational collaboration, such as the face-to-face collaboration between analysts and preventers and the discussions it led to, affected how risk indicators were interpreted. This shows how risk technologies can also be framed as sense-making technologies, rather than practical instruments for revealing the truth and making interventions. In the case just described, it also led to the avoidance of applying the word ‘risk’ to young people. The point of departure was reflection on the output of the data analysis, and their conviction that no mapping tools are accurate. Various types of errors, such as false positives and false negatives, occur when moving from the statistical to the individual level. They also took into consideration the criticism of risk factors for directing attention towards indicators (symptoms) rather than causes.

Risk cultures and politics

I have analysed how practitioners in the correctional services and the police practise a constructivist conceptualisation of risk through the way they interpret what constitutes risk in context. Those who create strategies and produce assessment tools tend to use an artificial concept of risk, framing it in a technical and statistical discourse where the multiplicity of unforeseen factors and events is replaced by the probability of a selected factor or event. When risk-assessment tools are introduced as neutral and objective, conflict and resistance are created precisely because of the divergent ways of thinking and understanding of the various professions and professional cultures. The findings therefore echo Kemshall’s (2003: 98) observation:

There is a key difference between developments that are ‘risk driven’ in principle and risk delivered on the ground…. Tool use and risk classification in particular have proved to be important sites of worker and manager mediation, and traditional practices are adapted to new demands rather than replaced.

Her argument is that, in this case, risk-based tools have had an adaptive and evolutionary, rather than transformative, impact. This is also a consistent finding in the selected cases: risk assessments can be useful in ways other than predicting an individual’s future offence. They can help with the selection of appropriate measures, as in the risk assessment of offenders and extreme violence, or they can shed more light on causal relationships or serve as a starting point for discussions between practitioners. For this to happen, the imaginaries, mechanisms, knowledge bases and processes by which risk prediction and assessment tools are developed must be subject to high standards of visibility and accountability (Kaufmann, 2023). Now that the concept of risk is understood to be more unpredictable, diverse and uncertain, ever more formalised systems for assessing and managing it are being developed (Power, 2004). Since the 1990s, researchers have explored how professional groups such as judges, the police and prison officers handle the concept of risk (Chan et al, 2001; Robinson, 2002; 2003; Aas, 2005; Gundhus, 2005; 2009; 2013; Kemshall, 2008). The risk assessment tools used in criminal justice rest on the unrealistic theoretical assumption that science is neutral and consonant with reality, and that, if not contested, it should be used as actionable knowledge. They assume that risk assessment is an objective scientific process.

This contrasts with a contextual view of risk, which assumes a situated understanding of science, where risk depends not only on what is being assessed, but also on who is doing the assessing (Haraway, 1988). Other assessors of risk may assess uncertainty differently, according to their own imaginaries, conditions and frameworks (Kaufmann, 2023). Since science is a situated practice that cannot simply correspond to reality, it is important to highlight the imaginaries and perspectives through which risk is generated, processed and assessed. The knowledge base for predictions is also of importance. Preventing risk inevitably involves dilemmas: risk reduction can lead to improved practice, but may also lead to greater inequality. Evidence bases for predicting risk often use arrest history, previous offences and the use of violence as indicators. Predictive instruments must be distinguished from cause-orientated research designs, as they have a much lower degree of internal validity and rest on uncertain data. An important reason for this is that the knowledge base for risk assessments is created by comparing people within a specific group of, for example, offenders (inter-individual variation) rather than by comparing larger population groups. The indicators that are created are thus based on antisocial behaviour, where criminogenic factors in practice become an intermediary for other underlying factors (root causes). These can be a lack of work, education and so on, that is, risk factors that can be remedied in other ways (dynamic factors). There may also be biological factors (static factors). In practice, it is also difficult to distinguish between what are seen as static and what are seen as dynamic factors. One criticism of the use of risk assessments is therefore that they focus on symptoms rather than causes (see Prins and Reich, 2018). Statistical calculations about future events are not based on exact science, but on imprecise, uncertain and qualitative data.

The three cases demonstrate that deciding how to navigate and negotiate risk is a complex activity, and that risk scores do not provide objective and clear answers. For example, the boundaries between help and control are blurred. The three practitioner groups share a concern that the notion of risk creates mistrust, tied as it is to negative incidents such as violence and to criminogenic needs and behaviour. Trust is the cornerstone of successful efforts: for example, it enables public and community actors to help vulnerable people (see Christensen, 2015; Lid and Christensen, 2023: 52). Risk factors are difficult to work with because there are no clear thresholds beyond which prevention or intervention is warranted. In all three cases, what practitioners do not want to lose is trust. They must have the trust of both the general public and the groups they work with, including offenders, potential violent extremists and young people.

There are also issues to do with the blurring of help and control when institutional logics get mixed up. The downside of a holistic, overlapping approach is that the police become social workers and social workers become police officers. Anja Dalgaard-Nielsen and Jakob Ilum (2020) argue that the Scandinavian model’s emphasis on action and cooperation with municipal actors beyond the security professionals is a strength, but that it also creates challenges and grey-area activities when disparate actors are involved. It increases surveillance in society, widening the security net, and social workers find they are being securitised. The field is characterised by the interweaving of help and control: preventive work and security control are mixed, and boundaries are blurred (Lid and Heiersted, 2019: 31, 37–40, 43).

Different conceptualisations and epistemologies of risk can be discerned in the successive generations of risk-assessment tools and the risk-management approaches that followed them. As Mathiesen (1998) argues, developments in risk tools can be seen in a broader context than that of the mere improvement of risk methodologies. Early statistical predictors used in selective incapacitation reflect the imperatives of criminal justice at the time, that is, to accurately classify and respond to habitual criminals (Pratt, 1997). The methodological techniques of actuarialism spread into many areas of policy in that period. Third-generation tools are rooted in more recent criminal-justice trends: the aim now being to evaluate interventions, promote evidence-based treatment, classify and target offenders for change programmes, and allocate scarce resources in a rational and transparent manner. In this endeavour, sexual and violent offenders have presented a particular challenge to the use of risk-assessment tools, due to low base rates and the difficulty of grounding the tools sufficiently in actuarial factors. It is these offences, however, that attract most media coverage and public attention when things go wrong, and it is here that there is most uncertainty and complexity. In this area, claims about the neutrality of technologies are the most problematic.

Conclusion

In the 1970s and 1980s, risk was communicated via experts in the form of precise calculations, and a sharp distinction was drawn between real (objective) risk and perceived risk. In the 2000s, these tools were criticised on the grounds that they were being used to do away with professional judgement by creating distrust in the subjective. This led to the creation of expert systems such as OASys, which have an autonomous life of their own but are deployed differently in practice. Such expert systems thus take on a dual role, both producing and reproducing risk and uncertainty. Their presentation of the results of calculations as objective means that supporters of this tradition can be described as naive positivists. They view risk assessment as an objective, value-free activity that can be performed neutrally. It is this assumption that leads to expert assessments being regarded as objective facts, while the assessments of ‘most people’ are seen as subjective opinions. However, few in the risk-research community claim that there are universal criteria for choosing a theory and that these guarantee rationality in science – which is what the logical positivists argued in the 1920s (Aven and Renn, 2010). Knowledge gained from qualitative approaches to risk, therefore, can remind us of political, social and cultural understandings of the concept, which in turn enriches and extends the basis for expert opinion on risk.

I have shown that, in the cases considered, risks are perceived as associated with crime and criminal behaviour. However, practitioners have developed an approach based on a broader, more nuanced definition of risk: one that I referred to earlier as the qualitative social science tradition. ‘Risk’ here describes all aspects of people’s experiences of and feelings about the kinds of hazards they face and the consequences of those hazards, and it therefore also refers to what is an acceptable level of risk (Aven and Renn, 2010). In detailed surveys of people’s perceptions of risk, this definition has been useful. Most people include all possible considerations when assessing risk, for example the absence of available help, and not just an abstract probability figure for uncertainty and loss or risk factors for criminal behaviour. I have argued that this might be as useful as the risk scores, and knowledge about the causation of crime in populations might also be helpful. Research has shown that different social and cultural groups have varying experiences and assessments of risk. These variations are in turn influenced by the historical origins of risk phenomena; for example, one might have a risk culture that regards risk as dangerousness, to be handled in a particular way by the governing authorities – there are both social and political factors to consider.

Mark Brown (2000) is inspired by sociocultural understandings of risk and examines why risk has largely been conceptualised from top-down perspectives, which are poorly suited to the ways in which risk is operationalised in practice. He is critical of research on risk as a top-down enterprise, which attempts to deliver understandings of it to those working in the criminal justice system, and which rejects other ways of thinking about prevention and potential offenders. The advantage of structured judgement is that it is a more holistic approach to risk, one that creates awareness of discretion. Moreover, viewing those working in the criminal justice system as merely passive actors reacting to orders from above, and as agents within a broader ‘inexorable logic of risk’, is to oversimplify matters. There are significant firewalls to policy implementation. The roles of workers are important sites for mediation and resistance (O’Malley, 2001).

In the last ten years, algorithms have become widely used in criminal justice systems (Kehl et al, 2017), and have legitimised a more automatic and technical view of risk (Mehozay and Fisher, 2018). The science of risk assessment is still developing (Hannah-Moffat, 2016). One important question that needs to be answered is what type of interventions should be implemented on the basis of mapping. Mathiesen (1998) argues that if the risk markers cannot be matched by an intervention to change the person, mapping is used to justify selective incapacitation. It also reduces the practitioner’s options and their room for selective navigation. One problem is that the focus on identified risk indicators shifts attention to precipitating causes rather than more fundamental ones. Another is that both knowledge and measures are shaped by the practice on which they are based, such as what is observed by the police (Prins and Reich, 2018). The life cycle of the data is generated and curated by the form itself (Kaufmann, 2023). A young person who comes under the police spotlight is more likely to be seen again when an offence is repeated. Since risk assessments are so dependent on past history, measures based on them can reinforce inequality by giving the problems of certain groups more attention than others. The idea of criminogenic needs also obscures how risk is linked to more complex socio-structural factors. As Hannah-Moffat puts it:

I do not suggest that agency and individual responsibility are unimportant – just that there is also a need to empirically examine how socio-structural processes and factors interact with and inform agency. This type of analysis requires an analytical and practical decoupling of risk and need and a more complex, situated socio-structural understanding of how unmet needs can produce risk. (Hannah-Moffat, 2016: 35)

Risk assessment can also lead to negative publicity, with some groups being labelled as more deviant, as is seen in the fact that youth crime is largely associated with the working class and immigrants, rather than with young people growing up in wealthy homes. Without a debate on risk, we end up intervening on symptoms rather than causes and assuming that everything has been settled.

Note

1. The basis for risk assessments is the comparison of people within a specific group of, for example, offenders (inter-individual variation), rather than of larger population groups. The indicators created are then based on antisocial behaviour, where criminogenic factors effectively become an intermediary for other underlying factors (root causes). These might be lack of work, education and so on, that is, risk factors that can be remedied in other ways (dynamic factors). There may also be biological factors (static factors).

Funding

This work was supported by the Norwegian Research Council under Grant 313626, ‘Algorithmic Governance and Cultures of Policing: Comparative Perspectives from Norway, India, Brazil, Russia, and South Africa’, and NordForsk under Grant 106245, ‘Critical perspectives of predictive policing’.

Acknowledgements

I would like to thank two anonymous reviewers for their constructive comments and suggestions. I am very grateful for collaboration with Pernille Erichsen Skjevrak on the CUPP project.

Conflict of interest

The author declares that there is no conflict of interest.

References

  • Aas, K.F. (2005) Sentencing in the Age of Information: from Faust to Macintosh, London: GlassHouse Press.

  • Andersson, R. (2004) Behandlingstankens återkomst: från psykoanalys till kognitiv beteendeterapi, Nordisk Tidsskrift for Kriminalvidenskab, 91(5): 384–403. doi: 10.7146/ntfk.v91i5.71608

  • Andrews, D.A. and Bonta, J. (1994) The Psychology of Criminal Conduct, Cincinnati, OH: Anderson Publishing.

  • Aven, T. and Renn, O. (2010) Risk Management and Governance: Concepts, Guidelines and Applications, Berlin: Springer-Verlag.

  • Berkman Klein Center (2024) Risk assessment tool database. https://criminaljustice.tooltrack.org/tool/16629

  • Bowen, G.A. (2006) Grounded theory and sensitizing concepts, International Journal of Qualitative Methods, 5(3): 12–23. doi: 10.1177/160940690600500304

  • Briggs, D.B. (2013) Conceptualising risk and need: The rise of actuarialism and the death of welfare? Practitioner assessment and intervention in the youth offending service, Youth Justice, 13(1): 17–30. doi: 10.1177/1365480212474732

  • Brown, M. (2000) Calculations of risk in contemporary penal practice, in M. Brown and J. Pratt (eds) Dangerous Offenders: Punishment and Social Order, London: Routledge.

  • Chan, J., Brereton, D., Legosz, M. and Doran, S. (2001) E-policing: The Impact of Information Technology on Police Practices, Brisbane: Criminal Justice Commission.

  • Christensen, T.W. (2015) A Question of Participation: Disengagement from the Extremist Right: A Case Study from Sweden, Roskilde: Roskilde Universitet.

  • Dalgaard-Nielsen, A. and Ilum, J. (2020) Promoting disengagement from violent extremism in Scandinavia: what, who, how? in S.J. Hansen and S. Lid (eds) Handbook of Deradicalization and Disengagement, London: Routledge.

  • Douglas, M. (1986) Risk Acceptability According to the Social Sciences, London: Routledge & Kegan Paul.

  • Douglas, M. (1992) Risk and Blame: Essays in Cultural Theory, London: Routledge.

  • Egge, M. and Gundhus, H.I. (2012) Social crime prevention in Norway, in P. Hebberecht and E. Baillergeau (eds) Social Crime Prevention in Late Modern Europe, Brussels: Brussels University Press, pp 255–77.

  • Farrington, D. and Tarling, R. (eds) (1985) Prediction in Criminology, New York: State University of New York.

  • Feeley, M. and Simon, J. (1992) The new penology: notes on the emerging strategy of corrections and its implications, Criminology, 30(4): 449–74. doi: 10.1111/j.1745-9125.1992.tb01112.x

  • Flyverbom, M. (2019) The Digital Prism: Transparency and Managed Visibilities in a Datafied World, Cambridge: Cambridge University Press. doi: 10.1017/9781316442692

  • Forskrift om forsøksprosjekt kartlegging av domfeltes behov (2011) Forskrift om Forsøksprosjekt om Kartlegging av Domfeltes Behov for Tiltak Med Sikte på å Lette Tilbakeføring til et Kriminalitetsfritt Liv, FOR-2011-09-30-978, Lovdata, https://lovdata.no/dokument/SF/forskrift/2011-09-30-978.

  • Giertsen, H. (2006) Oppdelt i småbiter og satt sammen på nytt: OASys ‘Offender Assessment and management SYStem’, et lovbrytermålesystem, Materialisten. Tidsskrift for Forskning, Fagkritikk og Teoretisk Debatt, 34(1): 23–45, https://www.nb.no/items/aeb0950eb304b582e6f4f26a29dd3c0c?page=1&searchText=seriestitleid:%22oai:nb.bibsys.no:998340939444702202%22.

  • Greenwood, P. and Abrahamse, A. (1982) Selective Incapacitation, Santa Monica, CA: Rand.

  • Gundhus, H.I. (2005) ‘Catching’ and ‘targeting’: risk-based policing, local culture and gendered practices, Journal of Scandinavian Studies in Criminology and Crime Prevention, 6(2): 128–46. doi: 10.1080/14043850500391055

  • Gundhus, H.I. (2009) For Sikkerhets Skyld: IKT, Yrkeskulturer og Kunnskapsarbeid i Politiet, Oslo: Unipub forlag.

  • Gundhus, H.I. (2013) Experience or knowledge? Perspectives on new knowledge regimes and control of police professionalism, Policing, 7(2): 178–94. doi: 10.1093/police/pas039

  • Hannah-Moffat, K. (2004) Criminogenic needs and the transformative risk subject: hybridizations of risk/need in penality, Punishment & Society, 7(1): 29–51. doi: 10.1177/1462474505048

  • Hannah-Moffat, K. (2013) Actuarial sentencing: an ‘unsettled’ proposition, Justice Quarterly, 30(2): 270–96. doi: 10.1080/07418825.2012.682603

  • Hannah-Moffat, K. (2016) A conceptual kaleidoscope: contemplating ‘dynamic structural risk’ and an uncoupling of risk from need, Psychology, Crime & Law, 22(1–2): 33–46. doi: 10.1080/1068316x.2015.1114115

  • Hansen, G.V., Dahl, U. and Samuelsen, F. (2014) Evaluering av BRIK: Behovs- og Ressurskartlegging i Kriminalomsorgen, Research Report, Østfold: Høgskolen i Østfold. http://hdl.handle.net/11250/22715

  • Haraway, D. (1988) Situated knowledges: the science question in feminism and the privilege of partial perspective, Feminist Studies, 14(3): 575–99. doi: 10.2307/3178066

  • Harcourt, B.E. (2007) Against Prediction: Profiling, Policing, and Punishing in an Actuarial Age, Chicago, IL: University of Chicago Press.

  • Kaufmann, M. (2023) Making Information Matter: Understanding Surveillance and Making a Difference, Bristol: Bristol University Press.

  • Kehl, D.L., Guo, P. and Kessler, S.A. (2017) Algorithms in the Criminal Justice System: Assessing the Use of Risk Assessments in Sentencing, Cambridge, MA: Berkman Klein Center for Internet & Society, Harvard Law School. http://nrs.harvard.edu/urn-3:HUL.InstRepos:33746041

  • Kemshall, H. (1996) Risk assessment: fuzzy thinking or ‘decisions in action’? Probation Journal, 43(1): 2–7. doi: 10.1177/026455059604300101

  • Kemshall, H. (1998) Risk in Probation Practice, Aldershot: Ashgate.

  • Kemshall, H. (2003) Understanding Risk in Criminal Justice, Maidenhead: Open University Press.

  • Kemshall, H. (2008) Risks, rights, and justice: understanding and responding to youth risk, Youth Justice, 8(1): 100–16. doi: 10.1177/1473225407087040

  • Knudsen, R.A. (2018) Measuring radicalisation: risk assessment conceptualisations and practice in England and Wales, Behavioural Sciences of Terrorism and Political Aggression, 12(1): 37–54. doi: 10.1080/19434472.2018.1509105

  • Knudsen, R.A. and Stormoen, O.M. (2020) Risikovurderingsverktøy mot terrorisme og ekstremisme: Erfaringer fra kriminalomsorgen i Nederland, Storbritannia, og Sverige, Report, Oslo: Norsk Utenrikspolitisk Institutt (NUPI).

  • Lid, S. and Christensen, T.W. (2023) Risikovurderinger og Reintegrering av Radikaliserte Individer i Norden, Report, Oslo: NIBR.

  • Lid, S. and Heierstad, G. (2019) Norske handlemåter i møte med terror. Den gjenstridige forebyggingen nå og i framtiden, in S. Lid and G. Heierstad (eds) Forebygging av Radikalisering og Voldelig Ekstremisme, Oslo: Gyldendal Akademisk, pp 15–48.

  • Lupton, D. (1999) Risk, London: Routledge. doi: 10.4324/9780203980545

  • Martinson, R. (1974) What works? Questions and answers about prison reform, The Public Interest, 35: 22–54.

  • Mathiesen, T. (1998) Selective incapacitation revisited, Law and Human Behavior, 22: 455–69. doi: 10.1023/a:1025727111317

  • Mehozay, Y. and Fisher, E. (2018) The epistemology of algorithmic risk assessment and the path towards a non-penology penology, Punishment & Society, 21(5): 523–41. doi: 10.1177/1462474518802336

  • Ministry of Justice and Public Security (2020) Handlingsplan mot radikalisering og voldelig ekstremisme. https://www.regjeringen.no/contentassets/a7b49e7bffae4130a8ab9d6c2036596a/handlingsplan-mot-radikalisering-og-voldelig-ekstremisme-2020-web.pdf

  • Molander, A. (2016) Discretion in the Welfare State: Social Rights and Professional Judgment, Abingdon: Routledge.

  • O’Malley, P. (2001) Risk, crime and prudentialism revisited, in K. Stenson and R. Sullivan (eds) Crime, Risk and Justice: The Politics of Crime Control in Liberal Democracies, Cullompton: Willan.

  • OASys (Offender Assessment and management SYStem) (2004) Pilot Report, Oslo: Kriminalomsorgens Sentrale Forvaltning.

  • Power, M. (2004) The Risk Management of Everything: Rethinking the Politics of Uncertainty, London: Demos.

  • Pratt, J. (1997) Governing the Dangerous, Sydney: Federation Press.

  • Prins, S.J. and Reich, A. (2018) Can we avoid reductionism in risk reduction? Theoretical Criminology, 22(2). doi: 10.1177/1362480617707948

  • Robinson, G. (2002) A rationality of risk in the probation service: its evolution and contemporary profile, Punishment & Society, 4(1): 5–25. doi: 10.1177/14624740222228

  • Robinson, G. (2003) Implementing OASys: lessons from research into LSI-R and ACE, Probation Journal, 50(1): 30–40. doi: 10.1177/0264550503501001

  • Taleb, N.N. (2007) The Black Swan: The Impact of the Highly Improbable, London: Penguin.

  • Terranova, V.A., Ward, K., Slepicka, J. and Azari, A.M. (2020) Perceptions of pretrial risk assessment: an examination across role in the initial pretrial release decision, Criminal Justice and Behavior, 47(8): 927–42. doi: 10.1177/0093854820932204

  • Ugelvik, T. (2022) The transformative power of trust: exploring tertiary desistance in reinventive prisons, British Journal of Criminology, 62(3): 623–38. doi: 10.1093/bjc/azab076
