Pre-problem families: predictive analytics and the future as the present

Authors:
Rosalind Edwards, University of Southampton, UK
Val Gillies, University of Westminster, UK
Sarah Gorin, University of Warwick, UK
Hélène Vannier-Ducasse, University of Southampton, UK


Abstract

Predictive analytics is seen as a way of identifying the risk of future problems in families. Integral to such automated predictive analysis is a shift in time frames that redraws the relationship between families and the state, to potentially intervene on an anticipatory basis of ‘what hasn’t happened but might’. In the process, human subjects are reformulated as disembodied objects of data-driven futures. The article explains this process and fills a significant gap in knowledge about parents’ views of this development. We draw on group and individual discussions with parents across Great Britain to consider their understanding of predictive analytics and how comfortable they are with it. Parents’ concerns focused on inaccuracies in the data used for prediction, the unfair risk of false positives and false negatives, the deterministic implications of the past predicting the future, and the disturbing potential of being positioned in what was a pre-problem space. We conclude with policy implications.

Introduction

States have always harnessed a range of strategies to address and deter anti-social and criminal behaviour in families, from the stigmatisation and labelling of parents as morally undeserving built into the poor laws, through positioning children and families as needing early intervention to break genetic, cultural and traumatic intergenerational cycles, to versions of families as carriers of risk that need to be assessed, scored and managed (Gillies et al, 2017; Edwards and Ugwudike, 2023). Latterly, however, there has been a step change in the practice of assessing and monitoring: that of pre-emptive surveillance, involving collecting and linking digital sources of administrative data on families to support the identification of potential problem behaviours in the future through algorithmic processing. Integral to such automated predictive analysis is a shift in time frames that redraws the relationship between families and the state (Couldry and Mejias, 2019). There is a move away from the ‘what’s happening’ of established foundations and accepted rationales for state intercession in families, to intervention in family lives on an anticipatory basis of ‘what hasn’t happened but might’. In the process, human subjects are reformulated as disembodied and decontextualised objects of data-driven, high-, medium- or low-risk futures.

This article considers these relational shifts where algorithmic technologies interpolate families into a digital depersonalised pre-problem space that draws the future into the present. Work on the use of predictive analytics in the field of family welfare and child protection, whether enthusiastic or critical, has not ascertained or addressed the views of parents. Here we fill what is a serious ethical and democratic omission given that data practices to inform service interventions are pushing ahead with little attention to parents’ knowledge or even their consent to the use of their administrative and other data. We begin by laying out the territory of predictive analytics in the family and child welfare field, and the key critical concerns raised about this, before moving on to consider how parents view this technological process and its implications. We draw on group and individual discussions with parents across Great Britain to explore the key concerns they express about accuracy and fairness, stereotyping and prejudice, and determinism and its consequences.

Predictive data analytics in the family and child welfare field

Social workers have long been involved in forms of predicting the future, from whether or not a parent will benefit from referral to a particular service to assessing not only if children in a particular family have suffered significant harm but also whether they are at immediate risk of harm (Wilkins and Forrester, 2020). Latterly, severe cuts to public spending in the UK have led to increased targeting of family and child welfare provision (Boddy, 2023), and a major emphasis by the government on digital data innovation in the field through new funding streams specifically for manipulating child and family data (MHCLG, 2021). Encouraged by the availability of additional revenue and claims of budget savings in straitened circumstances, child welfare authorities have been turning to algorithmic systems, often contracting data analytics companies that offer the promise of being able to forecast whether, which and when families need intervention through mass digital monitoring (Gillies et al, 2017; Redden et al, 2020). Such initiatives are being taken up not just in the UK but also more widely (for example, Eubanks, 2018; Glaberson, 2019; Keddell, 2019; Jørgensen et al, 2021).

Algorithms – sequences of programmed instructions, rules and calculations – can be built to model the tendency for correlations of certain characteristics, behaviours and social associations to occur in families with undesirable outcomes. Everyday routine digital information that is held about all families by various administrative sources – education, health, taxation, benefits, police and so forth – can be linked. The modelling algorithms can automatically scan the merged dataset for families that might partially or wholly match the patterns of factors indicated for families facing and causing social and/or criminal problems at any point in time, with the intention of anticipatory action to avert them from becoming the problematic families their data denote they potentially could be. A simple example provided by US non-governmental agency proponents of the process is: ‘a predictive risk model might indicate that a child under three with fewer than two siblings and a mother with substance abuse problems may be more likely to experience future harm than other children’ (Chadwick Center and Chapin Hall, 2018: 2). In the UK, an automated system to profile and score families for risk proposed by the private data analytics company Sentinel Partners lists ‘warning’ combinations of data about a family for predicting future problems, such as multiple schools, housing association tenancy, benefit claims and single adult family, alongside features such as being stop checked and older siblings with a history of criminal activity (Sentinel Partners and Liverpool City Council, 2019).
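
To make the mechanics concrete, the following minimal sketch (in Python) illustrates how a rule-based scan and risk banding of the kind described above might look. The field names, weights and thresholds are entirely hypothetical, invented here for illustration; they are not drawn from the Sentinel Partners system, the Chadwick Center and Chapin Hall example, or any other model cited in this article.

# Illustrative only: hypothetical field names, weights and thresholds,
# not any real deployed model.

# A merged administrative record for one family, assembled from education,
# housing, benefits and police data sources.
family = {
    "schools_attended": 3,
    "housing_association_tenant": True,
    "benefit_claim": True,
    "single_adult_household": True,
    "stop_checked": False,
}

# Hypothetical weights attached to each 'warning' characteristic.
WEIGHTS = {
    "multiple_schools": 2,
    "housing_association_tenant": 1,
    "benefit_claim": 1,
    "single_adult_household": 1,
    "stop_checked": 2,
}

def risk_score(record):
    # Sum the weights of the warning characteristics present in a record.
    score = 0
    if record["schools_attended"] > 1:
        score += WEIGHTS["multiple_schools"]
    for key in ("housing_association_tenant", "benefit_claim",
                "single_adult_household", "stop_checked"):
        if record[key]:
            score += WEIGHTS[key]
    return score

def risk_band(score):
    # Convert a numeric score into a high/medium/low band.
    if score >= 5:
        return "high"
    if score >= 3:
        return "medium"
    return "low"

print(risk_band(risk_score(family)))  # prints 'high' for this hypothetical record

In practice, deployed systems typically derive weights statistically or through machine learning from historical case data rather than setting them by hand; the point of the sketch is simply that a family becomes a vector of data points scored against a pattern.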

Predictive analytics, then, involves the construction of families as units of quantifiable digital data points that are ranked as high, medium or low risk, and these computations can become equated with accurate factual projections of future family problems. Advocates for predictive data analytics argue that these systems enable streamlined, efficient and objective decision making, and effective targeting of scarce resources (see for example, Chadwick Center and Chapin Hall, 2018; What Works for Children’s Social Care, 2020; Edwards et al, 2022) in the face of sustained austerity.

Creation of pre-problem families

While predictive analytics is seen by national and local governments as the way forward for identifying risk in families and intervening, there are concerns that data analytics in itself poses a risk to families.1 Notably, the inclusion of flawed proxies for child maltreatment in algorithms can result in prediction errors, as can the systemic biases that result in the overrepresentation of racialised and poor communities in the data about problematic families that algorithms are trained on (Capatosto, 2017; Glaberson, 2019). This overrepresentation of certain groups of families as problematic leads to feedback loops that perpetuate these injustices in algorithmic predictions because errors are not fed back into algorithmic development. There are also concerns about the loss of privacy for marginalised families (Almedom et al, 2021).

Epidemiologists have raised caveats about the ecological fallacy of extending generalised risk to any given individuals where the epidemiological focus is on populations and conditions, not individuals (for example, Kelly-Irving and Delpierre, 2019). Stephanie Glaberson (2019) refers to misapplication in attempts to graft epidemiological prediction methods onto child welfare work. Risks to families posed by algorithmic modelling that Glaberson identifies include that the information about them that the algorithms are working on can be inaccurate, and a failure to account for resilience and change over time, in particular static projections of behaviour from a family’s past into the present and future. Errors in digital information may be impossible to identify retroactively or correct (Capatosto, 2017), but even holding accurate data about the past does not guarantee correct prediction (Glaberson, 2019). An international mass collaboration study drawing on extensive longitudinal information about families to test predictive modelling techniques against known lifecourses in the dataset found a lack of accuracy in forecasting future outcomes (Salganik et al, 2020; see also Clayton et al, 2020; Waller and Waller, 2020). In England, while child protection investigations doubled in ten years from 2010, investigations that did not find any harm tripled in that period (Bilson, 2021). There are difficult balances between high-stakes scenarios here. Clearly, there will be tragic consequences borne by children whose families are identified as low risk when the reverse is the case (false negatives) and there is no service intervention. But equally, there can be deleterious implications for children and parents where a family is labelled as high risk but is not (false positives), with unwarranted state intervention and possibly children removed into care. These worst-case scenarios take place within (or even are heightened by) the shifts away from family support and child welfare service responses to needs arising from poverty and marginalisation, and away from state responsibilities for redistribution and public service provision, towards a central focus on risk assessment and surveillance as the basis for intervention (White and Wastell, 2017; Featherstone et al, 2018).
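
A simple worked calculation, with figures invented purely for illustration rather than taken from any of the studies cited here, helps to show why false positives loom so large when a rare outcome is screened for across a whole population: even a model that correctly flags most families who will experience harm, and correctly leaves most other families unflagged, still flags far more families for whom no harm would have occurred.

# Hypothetical illustration of the base-rate problem in population-wide screening.
# All numbers are invented; they do not come from the studies cited in the text.

population = 100_000      # families screened
prevalence = 0.01         # 1% will actually experience the predicted harm
sensitivity = 0.80        # share of true cases the model flags
specificity = 0.90        # share of non-cases the model correctly leaves unflagged

true_cases = population * prevalence                              # 1,000 families
false_negatives = true_cases * (1 - sensitivity)                  # 200 families missed
true_positives = true_cases * sensitivity                         # 800 correctly flagged
false_positives = (population - true_cases) * (1 - specificity)   # 9,900 wrongly flagged

flagged = true_positives + false_positives
precision = true_positives / flagged   # share of flagged families actually at risk

print(f"Families flagged: {flagged:.0f}")                         # 10700
print(f"Flagged families actually at risk: {precision:.1%}")      # about 7.5%

On these illustrative assumptions, fewer than one in ten flagged families would ever have experienced the predicted harm, while a fifth of those who would experience it are missed altogether.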

In the face of risks to families from predictive analytics, some call for epidemiological logic to be followed, anticipating need at the area level of neighbourhoods and towns. Rather than indulging in high-stakes algorithmic predictions about future harms in specific families, agencies should direct preventive and supportive services towards communities (Capatosto, 2017; Glaberson, 2019). Beyond this, responses to disquiet over the accuracy and ethics of predictive data analytics often invoke technological solutions. There are assertions that more extensive and better-quality data about children and families are required for the anticipatory promise of predictive analytics to be borne out (for example, Shafiq, 2020), and that bias can be removed to achieve data accuracy and neutrality through improved data handling within the framework of an ethical code (Capatosto, 2017). Ethical values, practical principles and professional virtues, it is claimed, will enable responsible harnessing of predictive data analysis, empowering parents to optimise family functioning and child development through data-driven interventions (What Works for Children’s Social Care, 2020).

Even should such technocratic fixes be possible where the society that generates the data is riven with inequalities (Broussard, 2023) and power imbalances in family-state relations (Dencik et al, 2018), there are other worrying issues associated with the use of predictive algorithms. These concerns are rooted in the adoption of statistical variables as proxies for anticipated rather than actual family difficulties. The population of parents are interpolated into a ‘pre-problem’ space before culpability has occurred, analogous to the ‘pre-criminal’ space that is generated by programmes such as the UK’s anti-terrorism strategy Prevent, which aims to identify and forestall anyone who may support or take part in extremism before anything of the sort has happened. As Jude McCulloch and Dean Wilson (2015) point out in relation to crime, the pre-space is a temporal paradox suggesting simultaneously that something has not yet occurred and that it is a foregone conclusion, so that the future is brought into the present and acted on. Substance and form are given to a hypothetical future through the rendering of families’ characteristics, relationships and actions as sets of depersonalised, decontextualised and quantified digital data points for predictive algorithmic processing. For example, the ‘Hello Baby’ AI predictive model implemented in Pennsylvania, USA, screens administrative databases of characteristics for all families with a newborn baby to produce risk scores for the likelihood of child welfare intervention in the longer term (Brico, 2019; see also Eubanks, 2018).

While state governance of poor and marginalised families has always involved recording and categorising their behaviour, deductive inquiry into individual family members’ current behaviour followed by corrective intervention has morphed into inductive prediction and pre-emptive intervention. There is a push towards a step change from assessment of and intervention in family problems based on their behaviour in the present and perhaps the short-term future, towards one where the future is constructed based on other families’ propensities, and past experiences in a family are projected forward into a non-imminent anticipation of negative outcomes. In both cases, the aim is to develop algorithmic models that can predict and identify families where child maltreatment is not currently in evidence but may occur in that family sometime in the future. The focus has shifted from actualities to potentialities through a foregrounding of risk and probability (Rouvroy, 2013), and we are taken from the ‘what’s happening’ of established foundations and accepted rationales for state intervention in families to ‘what hasn’t happened but might’. Human subjects in the present are reframed as projected data objects. Glaberson (2019: 348) poses the question: ‘[W]hen the government obtains and uses our data in ways we might not have expected – such as to influence decisions about the future integrity of our families – questions arise about whether we as a community are comfortable with this new use’.

Both supportive and critical appraisals of algorithmic modelling and predictive analytics in the field of family welfare and child protection approach this use of digitised data profiles from a ‘top-down’ perspective. In contrast, the rest of this article considers parents’ views on this issue. What are parents’ assessments and concerns about data analytics systems for service decision making and targeting of resources, and how comfortable are they about families being positioned in a predictive ‘pre-problem’ space?

Research focus and methods

Our analysis draws on focus group and individual interview data from our research project, investigating the views of parents with dependent children about linking up administrative digital data on families and exploring their thoughts about the use of predictive data analytics in operational service delivery.2

We held online discussions with homogeneous groups of parents, with participants in each group sharing an element of the same social location. Dialogue focused on how they viewed operational data practices including predictive analytics to understand how the parents articulated and negotiated their perspectives with each other. Topics covered included generalised assessments of justifications for and oppositions to data linkage and analytics rather than personal experiences, prompted by open questions and hypothetical case examples to facilitate discussion.3 Participants were recruited through social media and via child- and family-focused organisations. There were nine focus groups comprising an average of four parents in each, involving 36 mothers and fathers overall. We held one group discussion with, respectively, parents of children with disabilities, home-maker mothers, and fathers, while there were two groups each with parents in professional occupations, Black mothers, and lone mothers. (There could be overlaps for group parameters; for example, a parent in the professional occupation group might be a Black mother or vice versa.) The majority of parents in the focus groups had not had contact with family- and children-based interventions beyond everyday universal provisions, other than the group of parents of children with disabilities who accessed specialist support. The choice of characteristics for the group discussions draws on findings from a probability-based survey of parents with dependent children, conducted as an initial stage of our investigation (Edwards et al, 2021) and not reported here because the survey itself did not address the use of predictive analytics directly. As parents per se, the group participants are located in the pre-problem space because the data for all families are pulled in for a predictive ‘pre-problem’ algorithmic scan.

To discuss situated lived experience directly, we also held individual online interviews with 23 mothers and fathers who had experience of varying levels and types of family support and intervention services. This ranged from help with aspects of their parenting through to having had children removed from their care for a time. The parents were recruited through family and child welfare services and specific support organisations, which meant that they were likely to have ongoing assistance available if the interviews raised difficult personal issues. Topics covered the parents’ experiences of digital data practices by services and views on predictive analytics, using open questions. For the most part, their circumstances had pulled these parents into a pre-space where their past and present profiles and statistically predicted futures are merged, going beyond the application of algorithms to all parents and into specific interventions in their own family lives.

The focus group and individual interview data were analysed using inductive code and theme development (Braun and Clarke, 2022). Two members of our research team separately generated codes for each transcript, and together compared and agreed on the codes. The codes for all transcripts were then systematically sorted, reviewed and refined into themes by the whole team. One of the recurrent themes in both the group and individual interviews was ‘the past is not predictive’, and we pursue that understanding and its connection to other themes below. Ethical approval for the research was obtained from the University of Southampton (see footnote 2). In the following discussion of parents’ perspectives, we have removed any details that might identify individuals. Our use of numerical indicators for quotes is deliberate, recreating and reminding readers of an estranging feature of algorithmic prediction, where parents become dehumanised and treated as scores.

The past is not predictive

Assertions that the ‘past is not predictive’ relate to the knotty issues of pre-problem families being constructed through predictive analytics, and the pulling of a putative future into the present through anticipatory service intervention. In the following discussion of parents’ understandings and concerns, we look at their struggles with the idea of intervention and prevention, where the notion of anticipatory state monitoring and intercession raised tensions between valuable and harmful purposes and applications. We consider parents’ concerns about predictive data and the pre-space it constructs, raising themes of data inaccuracy and injustices, and challenging the deterministic logic of algorithmic risk modelling and its enduring consequences. We then review parents’ concerns about the depersonalisation of both families and professionals through predictive analytics, along with their scepticism about the ability of austerity-diminished services to meet predicted needs in the pre-problem space.

Intervention and prevention

Parents sometimes struggled to understand the idea and viability of algorithmic analysis to predict the future, and of anticipatory action where particular families may not be currently experiencing any difficulties. They wrestled with, variously, ideas they regarded as beneficial, such as early help, where individual families are exhibiting some indicative difficulties that might be signals of future problems; notions that raised ambivalences about prevention in the epidemiological sense, involving identifying populations for more generalised collective strategies; and purposes they regarded as pernicious, notably prediction, where a family’s particular data profile or past difficulties at the least indicate and at the most determine the future.

Some parents acknowledged the positive versions of algorithmic risk modelling and predictive analytics put forward by advocates, as enabling the prevention of harms through justified early intervention and collective support. They regarded automated scanning of digital information to highlight families with here-and-now problems as helpful, providing the ability to intervene where difficulties are starting to manifest and as necessary to prevent child mistreatment. But they could also see such monitoring as useful in offering opportunities for help that parents in need in the present may not be aware of:

‘I think if you’re struggling it’s probably good to be flagged up because obviously the kids are probably at risk, aren’t they? … And if you needed help, I guess they could signpost you to the help you wanted. If you didn’t need help, I guess that would be the end.’ (Mother in receipt of services for child with disabilities: interview 10)

Although parents did not express the epidemiological rationale about generalised risk overtly, they could echo it indirectly. The subjecting of administrative data to predictive algorithmic analysis could be viewed in a supportive way, collectively as against individually interventionist, in terms of area-based prevention initiatives and planning services for the future. But there were also ambivalences present in the notion of a particular community or demographic in need of targeting, as captured in the following discussion among a group of lone mothers (Group 4), which heralds some of the concerns about the algorithmic generation of a pre-problem family space that we look at later:

Mother 4.2: I suppose you’re predicting what will happen in the future, having areas that are slightly deprived where you know that people are below the poverty line or anything like that, it is good for the councils to know to then start putting the odd social clubs and things like that in. And playgrounds or extra police to stop the old ASBOs and things like that …

Mother 4.1: So it’s tricky, yeah, you cannot judge families and predict that they will get worse or better …

Mother 4.4: So we’ve just had, actually, in our school, we’ve just had the police in doing county lines, talking about guns and knives and things that you wouldn’t – we live in a very, you know, we think it’s a very affluent area but actually it does have the same problems … I think it’s narrowing down, so it’s deciding that a particular family – so I remember reading when my ex-husband left, children of lone parents always do worse in school. And it really hit home that this was such a generalisation that because now I was on my own that my children would do so much worse than they would as two parents together.

For the most part, then, parents could see merit in algorithmic applications to administrative data that identified individual families with difficulties in the present and who might mistreat their children. But there were serious concerns raised about predicting the future.

Predictive data and inaccurate feedback loops

In both group discussions and individual interviews, parents expressed concerns about the accuracy of the administrative data that were being used as the basis for algorithmic risk modelling and the injustices that could result. Drawing on their own experiences or those of people they knew, parents did not have a lot of confidence in the accuracy and fairness of the sources of administrative information that predictive analytics were drawing on. They raised examples of recorded information that was incorrect, based on biased assumptions and judgemental views, even fabrications. Black parents had especially strong concerns and recounted instances where they became aware of unjustified racialised labelling of themselves and their children as well as other family and friends, in their records:

‘I had a situation where information was shared … Unfortunately, the information wasn’t accurate … And sadly, [biased stereotypes] kind of happened in my situation, which is why I challenged it. I was like no, no, no, you’re not painting us out to be like that, no we’re not having that … Some workers are perhaps not as, you know they’re not as used to working with families that are of a different background, different culture, so they stereotype. And unfortunately, that was definitely what happened in my case. And I had to challenge it, I had to challenge it, I really did, yeah … Because of the law they weren’t able to actually omit that entry that they put in about my child, okay, which wasn’t true, but they were certainly able to update it and change it, you know, add entries into his record to say, well, this is, like, not correct.’ (Mother 2.3: Group 2 Homemaker mothers)

Parents who had major social care intervention in their families provided some distressing evidence of misinformation about them and their children being recorded, whether by mistake or even malicious intent, and shared. Getting wrong information corrected, however, was not possible where parents had attempted this, as we saw in the account above. Under GDPR (General Data Protection Regulation) Article 5(1)(d) organisations need to ensure they keep accurate and up-to-date records, but if mistakes are made records will not necessarily be erased and replaced; rather, the mistake should be documented (Gorin et al, 2024; Information Commissioner’s Office, n.d.). The flawed data can stay on record, with the potential to be fed endlessly into predictive algorithms, part of the magnification of injustices in predictive data analysis referred to by Glaberson and others, which can contribute to false flagging for parents generally. In other words, the poor judgements and fabrications that are already evident in administrative data are perpetuated and magnified in the process of algorithmic risk modelling and predictive analytics (Edwards and Ugwudike, 2023).

Predictive data and pre-problem space

In discussing predictive data, parents often defaulted to talking about families experiencing problems in the present because they found the sense of a pre-space, where we move away from what is happening into the territory of what might happen, to be incomprehensible or even dystopian. Parents were worried by the unfairness of false positives and false negatives, the deterministic implications of the past predicting the future, and the potentially devastating implications of being propelled into what in effect was a pre-problem space.

Algorithmic flagging was a point of concern: families may be identified as low risk when the reverse is the case (false negatives), while others are labelled as high risk but are not (false positives). Parents worried about data analytics anticipating problems when in fact a family not only does not have any issues but would not go on to develop them in the future. In the following exchange between parents in professional occupations (Group 3), the idea of intervention as acceptable where algorithmic modelling identifies early signs, as against dubious predictive analytics that attempt to model ahead from a non-problematic present into a non-imminent future, is evident. The parents also raised scenarios where something had happened in the past and was not happening in the present, yet problems were extrapolated into the future:

Mother 3.2: I think I’m kind of not surprised if there are sort of efforts to identify families that might have problems before they happen. I can see why they would really want to do that … and there would have probably been problems happening even a few years before it got to the sort of crisis point … but at the same time, as we’ve all been saying, it can go too far and sort of problems can be sort of found where they shouldn’t be, where actually it’s the family’s fine …

Mother 3.4: I agree … it comes back to that kind of making judgements about people that are not necessarily based on fact and that something might happen to somebody but there might be a million different ways that things then play out.

Similarly, echoing the debate between the mothers in Group 4 concerning the social make-up of areas where collective prevention needed to be focused (discussed in the ‘Intervention and prevention’ section), parents were also bothered about families being passed over by predictive analytics because they did not fit the data profile of dysfunctional parents that the algorithm worked with. Affluent middle-class families were mentioned several times in this respect: “Algorithms are not flawless either and who will slip through the cracks … if your profile doesn’t have that because, you know, so-and-so is a dentist and so-and-so is a lecturer, but actually who could slip through the gaps of that [predictive analysis]” (Father 7.1: Group 7 Parents in professional occupations). The parents’ concerns about the ability to forecast behaviour and outcomes are not dispelled by the evidence on algorithmic prediction success rates; we noted the lack of accuracy in tests of predictive modelling techniques earlier in the article (Clayton et al, 2020; Salganik et al, 2020; Waller and Waller, 2020).

Parents in both group discussions and individual interviews often expressed indignation about the use of predictive data analytics – a positioning of parents and children in a ‘pre-problem’ space and a deterministic pulling together of the past and the future into the present. The earlier assessment of Mother 3.4 about “a million different ways that things then play out” and the remarks of Father 7.1 in Group 7, who went on to speak about families that were “trying to break the cycle”, signal this unease about the deterministic labelling involved in the generation of pre-problem families:

‘… really trying to make a difference, be different than what’s gone before them, and yet they’ve still got people going, “Are you sure you’re okay? Are you sure you’re okay? Are you sure you’re okay?” and then they start to get that negative self-view on themselves almost. I think, yeah, computer algorithms and things like that are great but there’s, it’s what you put into them.’ (Father 7.1: Group 7 Parents in professional occupations)

Not only did inaccurate data risk false positives and negatives, but the whole endeavour of anticipatory prediction was subject to question. Parents did not understand the logic of flagging up where families had never experienced any problems at all because their data profile meant it was predicted that sometime in the future they might. They referred to every situation as being individual rather than following the same pattern as other families with whom they may share some characteristics and behaviours: “I think all situations are different and I think it’s quite dangerous to use someone else’s situation and try and predict someone else because you could be right, but you could be massively wrong. And you could start getting services involved that aren’t needed” (father in receipt of services for child with disabilities: interview 12).

Ideas about determinism, and the notion that digitised data speak for themselves, were challenged by parents as part of the questioning of anticipatory algorithmic analysis. Where parents and children had problems in the distant past, they spoke of how circumstances and people change over time, how children get older and adults move on:

‘A criminal record generally only stands for six years if there’s no other issues. And then it’s wiped. Same with debts. If you’ve got debts and they’re on your record, they’re there for six years … But it’s not the same for social services and child services, things like that, that’s there for life. And they treat you as the person that you were when they first met you and it’s so wrong.’ (Mother in receipt of services for a child with disabilities who had extensive intervention in her youth: interview 15)

Further, the importance of context and the inability of administrative digital data to convey situated meaning were evident worries for some parents. Here they echoed arguments that algorithmic risk modelling treats parenting and family behaviour as if they are objectively knowable through data about sets of characteristics and behaviour, separating them from an understanding of the wider social world and structural context (Keddell, 2015):

‘Individuals that might be victim of circumstances, at that point in time had to get in debt just to put food on the table. And that’s a moment in time. Is that going to be reflected? Then you’re taking raw data and creating a meaning without having all the information to hand, which I think potentially could be quite dangerous.’ (Father 5.1: Group 5 Fathers)

Comments from the fathers quoted above indicate the strong apprehensions parents could hold about the potentially devastating implications of predictive data analytics. Father 7.1 from Group 7 expressed worries that parents doing their best may develop a “negative self-view” as a consequence, while the father in interview 12 went on to mention that projecting from long-past misdemeanours into a putative future might make a “fantastic” parent feel worried. The potential damage to families posed by an algorithmic analytic process that was ostensibly intended to identify damaging families was broached during group discussions and individual interviews. Intervention was posed as equally perilous as the risk from no intervention because it could affect families badly in the short and/or longer term, even to the extent of creating self-fulfilling prophecies – the bringing about of the predicted future that otherwise would not occur:

‘So, if we’re saying the government can pull all these sources of data together, and is that acceptable, and then they can identify families that may be at risk, even though potentially some of them won’t ever end up being at risk, but just because of the algorithm it suggests that they’re ones to sort of look out for, I guess the question for me would be what happens to those families? Are they put on some kind of watch list? Is there kind of extra interference in their lives from any government bodies? Will it affect the way the school treats those children because I think there’s a number of studies that the way that teachers perceive pupils can affect the education they then get or how they actually achieve.’ (Mother: Group 8 Lone mothers in professional occupations)

Further, some parents expressed worries about another form of bringing about the very problems that predictive data analysis was supposed to counter: that the type of prevention that could occur would prevent parents from approaching services they might need to use to avert problems. This echoes a concern expressed among public and third-sector stakeholders (Dencik et al, 2018). Parents felt that some families experiencing problems were already wary of engaging with services when they needed support, a wariness that predictive analytics would amplify: “There’s dozens of forums online, where parents are telling other parents, ‘Don’t ask for help because it will be used against you’” (mother subject to extensive interventions from services: interview 5).

Some parents referred to the notion of flagging up data from the past and then projecting onto present and future scenarios as being “there like a stain” (Mother 2.3: Group 2) and “an unnecessary weight on your shoulders” (mother, false positive intervention: interview 18). Some conjured potentially dystopian scenarios – “Terminator 2 stuff” (Mother 9.2: Group 9) and “Big Brother watching” (mother: interview 11) – to convey their discomfort. These similes and metaphors point towards the depersonalisation that parents sensed would be generated by predictive analytics positioning families in a pre-problem space.

Depersonalisation and decreased resources

Advocates of algorithmic risk modelling and predictive analytics champion the greater efficiency and effectiveness obtained by minimising professional human subjectivity and harnessing the objective power of automated analysis (Edwards et al, 2022). Yet, it is these very benefits – the extent to which digital data analysis technologies for service delivery purposes cut human staff out of decision-making processes or shape the conditions and mindset under which they make the decisions – that form an issue of concern for both researchers (Eubanks, 2018; Redden et al, 2020; Lighthouse Reports, 2022) and parents. Parents feared that families could be reformulated as disembodied and decontextualised, projected data objects through algorithmic processes, with an attendant inability to recognise families and their members as individuals and to treat them ethically or with compassion. Parents were bothered that predictive analytics would have no sense of the impact that may be wrought on families: it depersonalised them. For those in a group of fathers (Group 5), all working in family support themselves, it was the very objectivity of digital data subject to algorithmic analysis that brought about this dehumanisation:

Father 5.4: It’s this point that algorithms can be great, but, as with all systems and all data systems, it’s about what does that mean, and you lose the individual …

Father 5.3: … for me, I have a thing that it wouldn’t be the whole picture, it’d be numbers, numbers on a spreadsheet …

Father 5.2: … I was just going to say, I think one of its real big limitations is it’s really objective, isn’t it? So, you’re looking at one rule that fits the entire country, and there’s no chance to look into each individual circumstances …

Father 5.4: … It’s about relationships … So, it’s got to be a more nuanced approach than just simply, ‘You popped up on our data system, and, therefore, you’re going to get help now, and this is what you need.’

The fathers’ perspectives as family support workers encompassed concerns about professionals being depersonalised by predictive data analytics as well as parents. Other parents also regarded algorithms as no replacement for doctors, social workers and so on. Data were becoming detached from the professionals who were personally aware of the family concerned:

‘If the people aren’t, for example, working alongside that person or whichever, what is the point of having that [data]? Because you have got information based off somebody else, on somebody you don’t know, and that you are not sort of working alongside, so what is the relevance of that?’ (Mother: interview 2).

Policy makers pose the collection of digital data and innovation in its analysis through predictive algorithmic modelling as the technological solution to dysfunctional families in government initiatives – that is, problems in families will be pre-empted through the knowledge contained in the data (for example, MHCLG, 2021). At the same time, diminished funding for service provision, shortages of staff and other resource dysfunctions in all public sector fields have intensified through policies of austerity. Some parents were sceptical about how families propelled into the pre-problem space were going to receive interventions, with the possibility of more harm than good:

‘But the reality is [services] have been cut till there’s nothing there. There’s just that stump. It’s to say that you’re going to put that information into an algorithm to identify the needs of a young person when there are no services for them to access. There’s no help that they can have … We know that the support for these people isn’t there. We know that. So, you know, to say that it’s in order to help people feels really disingenuous.’ (Mother: Group 6.2 Black mothers)

Even on this pragmatic level of resource availability, then, propelling families into the pre-problem space through predictive data analytics seems unwarranted and misguided, let alone reframing human subjects as projected data objects.

Conclusion

Artificial intelligence offers national and local governments the promise of identifying families at risk by modelling and forecasting potential social and criminal problem behaviours in the future to enable anticipatory preventive intervention. As we have discussed, digitised administrative records from a range of everyday public services can be merged into one extensive dataset. Algorithmic scanning of this dataset made up of whole family populations can be undertaken to identify which of them have data characteristics that deem them to be ‘pre-problem’ families in need of pre-emptive intervention. In effect, this is mass digital monitoring where families are affected by other families’ data just as much as they are by data about themselves. Yet despite it being data about their families that are fed into algorithms for risk modelling and predictive analytics, the views of parents themselves are a significant omission in discussions about data practices in the family welfare and child protection fields. This article provides a much-needed corrective to this ethical and democratic gap in our understanding.

Predictive algorithmic endeavours give a veneer of being scientific and value-free but they replicate and perpetuate stereotypes and inequalities built into the data they are developed from. More than this, as we have elaborated, the relationship between families and the state is redrawn through profound shifts in time frames and in what it means to be a human subject. The move away from the conventional ‘what’s happening’ to a ‘what hasn’t happened but might’ rationale for state intervention in family life collapses the future into the present to open up a pre-problem space. In occupying this digital space for predictive algorithmic analysis, families become transformed into quantifiable elements, reformulated as disembodied mass units of data and decontextualised objects of projected data-driven futures. This automated and opaque process of adopting predictive analytics to identify the risk of possible future problems in families is one that parents find deeply uncomfortable and concerning because of the counter-risks that it poses for all families.

A recent report from the Office of the United Nations High Commissioner for Human Rights (OHCHR, 2021) has called for a moratorium on the use of artificial intelligence systems including data profiling, automated decision making and other machine-learning technologies that pose threats to human rights until sufficient safeguards are implemented. OHCHR concerns about algorithmic risk modelling and predictive analytics include: erosion of individual rights to privacy and the potential harms that may be inherent in bringing together datasets; administrative information about individuals being collected, shared, merged and analysed in multiple and opaque ways; and the data and systems that inform their development being discriminatory, flawed, out of date or irrelevant. Parents in our study wrestled with similar apprehensions about data-processing technologies in the field of family and child welfare, viewing early help for those experiencing difficulties in the here and now as beneficial, but regarding intervention based on algorithmic modelling of ‘what hasn’t happened but might’ as potentially dangerous. At the very least, parents’ apprehensions about the erroneous and prejudiced data that can be fed into risk models, and worries about the determinism and accuracy of predictive analytics, should be met by three main provisions: (1) straightforward, enforceable rights for individuals to view all the personal data that is held about them online in local authority databases; (2) robust systems that allow parents to report data errors and receive speedy investigation and correction; and (3) regular independent review and publication of the accuracy rates and utility of predictive models applied to public administrative data. Ultimately though, a serious public conversation needs to take place about the legitimacy of a state construction of a ‘pre-problem family’ space using administrative data held on them. In a pre-problem family space, family members are viewed by the state not as people and citizens but as disembodied and decontextualised digital data objects. It is, moreover, a space in which open-to-doubt anticipated futures are acted on as foregone conclusions in the present through pre-emptive service interventions in family lives – anticipatory interventions that may not be pre-empting anything.

Notes

1

The European Union Artificial Intelligence Act intends to rate AI-based technologies as themselves being a high or low risk to citizens (https://artificialintelligenceact.eu).

2

The Parental Social Licence for Data Linkage for Service Intervention project. Ethical approval for the focus group and individual interviews was given by University of Southampton ERGO 56997. Data generated by the study is available on registration with the UK Data Archive.

3

An example scenario and follow-up questions: ‘Local councils have been told by the government to gather information to identify families that have multiple problems, such as in trouble with the police, anti-social behaviour, truancy, unemployment, mental health problems and domestic abuse. Once families are identified as having more than one problem, they are given a key worker to work with them. Families may or may not want this support. What do you think about this? Why?’ Follow-ups: ‘What if families don’t want this help?’ ‘Councils are given additional government funding for every family they identify and work with. Do you think that this makes a difference?’

Funding

This work was supported by the Economic and Social Research Council (ESRC RCUK) under grant no. ES/T001623/1.

Conflict of interest

The authors declare that there is no conflict of interest.

References

  • Almedom, E., Sampath, N. and Ma, J. (2021) Algorithms and child welfare: the disparate impact of family surveillance in risk assessment technologies, Berkeley Public Policy Journal, Fall: https://bppj.berkeley.edu/2021/02/02/algorithms-and-child-welfare-the-disparate-impact-of-family-surveillance-in-risk-assessment-technologies.

    • Search Google Scholar
    • Export Citation
  • Bilson, A. (2021) ‘The investigative turn and child protection evidence base’, presentation to Independent Review of Children’s Social Care, 31 May, https://bilson.org.uk/presentations/presentation-to-independent-review/?doing_wp_cron=1681232137.2219009399414062500000.

    • Search Google Scholar
    • Export Citation
  • Boddy, J. (2023) Engaging with uncertainty: studying child and family welfare in precarious times, Families, Relationships and Societies, 12(1): 12741. doi: 10.1332/204674321x16704251101652

    • Search Google Scholar
    • Export Citation
  • Bouchal, P. and Norris, E. (2014) Implementing Sure Start Children’s Centres, York: Joseph Rowntree Foundation/Institute for Government, www.instituteforgovernment.org.uk/sites/default/files/publications/Implementing%20Sure%20Start%20Childrens%20Centres%20-%20final_0.pdf.

    • Search Google Scholar
    • Export Citation
  • Braun, V. and Clarke, V. (2022) Thematic Analysis: A Practical Guide, London: Sage.

  • Brico, E. (2019) How an algorithm meant to help parents could target poor families instead, Talk Poverty, 26 November, https://talkpoverty.org/2019/11/26/algorithms-parents-target-low-income/index.html.

    • Search Google Scholar
    • Export Citation
  • Broussard, M. (2023) More than a Glitch: Confronting Race, Gender and Ability Bias in Tech, Cambridge, MA: MIT Press.

  • Capatosto, K. (2017) Foretelling the Future: A Critical Perspective on the Use of Predictive Analytics in Child Welfare, Kirwan Institute Research Report, Columbus, OH: The Ohio State University, February: www.researchgate.net/publication/318405111_Foretelling_the_Future_A_Critical_Perspective_on_the_Use_of_Predictive_Analytics_in_Child_Welfare.

    • Search Google Scholar
    • Export Citation
  • Chadwick Center and Chapin Hall (2018) Making the most of predictive analytics: response and innovative uses in child welfare policy and practice, Policy Brief, September, www.chapinhall.org/research/5022.

    • Search Google Scholar
    • Export Citation
  • Clayton, V., Sanders, M., Schoenwald, E., Surkis, L. and Gibbons, D. (2020) Machine learning in children’s services: summary report, September, What Works for Children’s Social Care, whatworks-csc.org.uk/wp-content/uploads/WWCSC_machine_learning_in_childrens_services_does_it_work_Sep_2020.pdf.

  • Couldry, N. and Mejias, U.A. (2019) The Costs of Connection, Redwood City, CA: Stanford University Press.

  • Dencik, L., Hintz, A., Redden, J. and Warne, H. (2018) Data Scores as Governance: Investigating Uses of Citizen Scoring in Public Services: Project Report, Cardiff: Data Justice Lab/Open Society Foundations, https://orca.cardiff.ac.uk/id/eprint/117517/1/data-scores-as-governance-project-report2.pdf.

    • Search Google Scholar
    • Export Citation
  • Edwards, R. and Ugwudike, P. (2023) Governing Families: Problematising Technologies in Social Welfare and Justice Systems, Abingdon: Routledge.

    • Search Google Scholar
    • Export Citation
  • Edwards, R., Gillies, V. and Gorin, S. (2021) Data linkage for early intervention in the UK: parental social licence and social divisions, Data & Policy, 3(e34): 115.

    • Search Google Scholar
    • Export Citation
  • Edwards, R., Gillies, V. and Gorin, S. (2022) Problem-solving for problem-solving: data analytics to identify families for service intervention, Critical Social Policy, 42(2): 26584. doi: 10.1177/02610183211020294

    • Search Google Scholar
    • Export Citation
  • Eubanks, V. (2018) Automating Inequality: How High-Tech Tools Profile, Police and Punish the Poor, New York: St Martin’s Press.

  • Featherstone, B., Gupta, A., Morris, K. and Warner, J. (2018) Let’s stop feeding the risk monster: towards a social model of ‘child protection’, Families, Relationships and Societies, 7(1): 722. doi: 10.1332/204674316x14552878034622

    • Search Google Scholar
    • Export Citation
  • Gillies, V., Edwards, R. and Horsley, N. (2017) Challenging the Politics of Early Intervention: Who’s ‘Saving’ Children and Why?, Bristol: Policy Press.

    • Search Google Scholar
    • Export Citation
  • Glaberson, S.K. (2019) Coding over the cracks: predictive analytics and child protection, Fordham Urban Law Journal, 46(2:3), https://ir.lawnet.fordham.edu/ulj/vol46/iss2/3.

    • Search Google Scholar
    • Export Citation
  • Gorin, S., Edwards, R., Gillies, V. and Vannier Ducasse, H. (2024) ‘Seen’ through records: parents’ access to children’s social care records in an age of increasing datafication, British Journal of Social Work, 54(1) 22845. doi: 10.1093/bjsw/bcad192

    • Search Google Scholar
    • Export Citation
  • Information Commissioner’s Office (n.d.) Guide to General Data Protection Regulation (GDPR), https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/principles/accuracy.

    • Search Google Scholar
    • Export Citation
  • Jørgensen, A.M., Webb, C., Keddell, E. and Ballantyne, N. (2021) Three roads to Rome? Comparative policy analysis of predictive tools in child protection services in Aotearoa New Zealand, England and Denmark, Nordic Social Work Research, 12(3): 37991. doi: 10.1080/2156857x.2021.1999846

    • Search Google Scholar
    • Export Citation
  • Keddell, E. (2015) The ethics of predictive risk modelling in the Aotearoa/New Zealand child welfare context: child abuse prevention or neo-liberal tool?, Critical Social Policy, 35(1): 6988. doi: 10.1177/0261018314543224

    • Search Google Scholar
    • Export Citation
  • Keddell, E. (2019) Harm, care and babies: an inequalities and policy discourse perspective on recent child protection trends in Aotearoa New Zealand, Aotearoa New Zealand Social Work, 31(4): 1834. doi: 10.11157/anzswj-vol31iss4id668

    • Search Google Scholar
    • Export Citation
  • Kelly-Irving, M. and Delpierre, C. (2019) A critique of the adverse childhood experiences framework in epidemiology and public health: uses and misuses, Social Policy and Society, 18(3): 44556. doi: 10.1017/s1474746419000101

    • Search Google Scholar
    • Export Citation
  • Lighthouse Reports (2022) The algorithm addiction, 20 December: www.lighthousereports.com/investigation/the-algorithm-addiction.

  • McCulloch, J. and Wilson, D. (2015) Pre-Crime: Pre-Emption, Precaution and the Future, Abingdon: Routledge.

  • MHCLG (Ministry of Housing, Communities and Local Government) (2021) Vulnerable children and families being better supported through new data sharing projects, Press release, 3 September: www.gov.uk/government/news/vulnerable-children-and-families-better-supported-through-new-data-sharing-projects.

    • Search Google Scholar
    • Export Citation
  • OHCHR (Office of the United Nations High Commissioner for Human Rights (2021) The right to privacy in the digital age, 15 September, www.ohchr.org/EN/Issues/DigitalAge/Pages/cfi-digital-age.aspx.

    • Search Google Scholar
    • Export Citation
  • Redden, J., Dencik, L. and Warne, H. (2020) Datafied child welfare services: unpacking politics, economics and power, Policy Studies, 41(5): 50726. doi: 10.1080/01442872.2020.1724928

    • Search Google Scholar
    • Export Citation
  • Rouvroy, A. (2013) The end(s) of critique: data behaviourism versus due process, in M. Hilderand and K. de Vries (eds) Privacy, Due Process and the Computational Turn, Milton Park: Routledge, pp 14368.

    • Search Google Scholar
    • Export Citation
  • Salganik, M.J., Lundberg, I. and Kindel, A.T. + 108 others (2020) Measuring the predictability of life outcomes with a scientific mass collaboration, PNAS, 117(15): 8398403. doi: 10.1073/pnas.1915006117

    • Search Google Scholar
    • Export Citation
  • Sentinel Partners and Liverpool City Council (2019) Integrated data: the foundation for innovation, presentation, October, www.ukauthority.com/media/8505/liverpool-cc-sentinel-partners-how-to-master-your-data_the-liverpool-evolution.pdf.

    • Search Google Scholar
    • Export Citation
  • Shafiq, W. (2020) Data sharing, supported by machine learning, can deliver better outcomes for children and families, CommunityCare, 21 September: www.communitycare.co.uk/2020/09/21/data-sharing-supported-machine-learning-can-deliver-better-outcomes-children-families.

  • Waller, M. and Waller, P. (2020) Why predictive algorithms are so risky for public sector bodies, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3716166.

  • What Works for Children’s Social Care (2020) Ethics Review of Machine Learning in Children’s Social Care, Oxford: WWCSC/The Alan Turing Institute/Rees Centre, https://whatworks-csc.org.uk/research-report/ethics-review-of-machine-learning-in-childrens-social-care.

  • White, S. and Wastell, D. (2017) The rise and rise of prevention science in UK family welfare: surveillance gets under the skin, Families, Relationships and Societies, 6(3): 427–45. doi: 10.1332/204674315x14479283041843

  • Wilkins, D. and Forrester, D. (2020) Predicting the future in child and family social work: theoretical, ethical and methodological issues for a proposed research programme, Child Care in Practice, 26(2): 196–209. doi: 10.1080/13575279.2019.1685463
