Between algorithmic and analogue harms: the case of automation in Finnish Immigration Services

Authors:
Hanna Maria Malik, University of Turku, Finland
Nea Lepinkäinen, University of Turku, Finland

Abstract

In this paper, we take the management crisis in the Finnish Immigration Service, Migri, as an example to illustrate ambiguous qualities of automated decision making in the context of the production and alleviation of social harm. The case lies at the crossroads of political and legal discussions on immigration and artificial intelligence (AI) transformation. As a result of the persistent backlog of cases held by Migri for processing since the ‘refugee crisis’ of 2015, numerous asylum seekers remain in a state of bureaucratic limbo. Automating part of the decision-making process offered a potential solution to the harms caused by prolonged processing; however, it was hampered by features of the Finnish constitutional system. The applicants most likely avoided the potential algorithmic harms of prematurely implemented automated systems. However, possible algorithmic solutions to preexisting analogue harms have also been prevented. Through the analysis of policy and legal documents related to immigration and automation, we show that the disconnect between distinct political priorities leaves a variety of harms unaccounted for and may cause fractures in the Finnish harm reduction regime. Given that the development of algorithmic systems is subject to a constant struggle between contradictory values and expectations placed on these systems in terms of the alleviation of harm(s), we argue that a holistic view of harms and solutions to these harms in digitalised societies may facilitate the harm reduction potential of algorithmic systems.

The elementary task of the administration, taking care of people’s basic and human rights, will be significantly enhanced when it is possible to meet people’s personal needs without committing to time and place, mechanically. MoE, 2017: 34 (Era of AI)

In line with the goal of Finnish policies on AI to harness the potential of algorithms in business and everyday life (MoE, 2019), several Finnish public authorities have, in recent years, adopted automation to enhance the quality of public services (Chiusi et al, 2020). A key objective of these developments has been to centralise and accelerate decision making to improve efficiency, cut costs and enhance customer service. However, at the time of writing, the Finnish Constitutional Law Committee (CLC) has hampered these ‘automation experiments’. Given that algorithmic systems may violate constitutional rights, the legal status quo was deemed neither clear nor sufficient to ensure the legality of the automated decision-making (ADM) process (Koulu et al, 2019; Suksi, 2020). Thus, a preemptive approach, in addition to adherence to the rule of law and transparency principles intrinsic in the Finnish legal system, has served as a safeguard to shield society from the potential harms caused by automated systems, which we call algorithmic harms (Malik et al, 2022). On the other hand, possible algorithmic solutions have also been prevented. The lack of alternatives has left those in receipt of public services exposed to a variety of analogue harms, which we understand to be harms mediated by preexisting political, economic, and societal structures.

We position this dilemma in a zemiological framework, which supports a ‘progressive struggle to remove obstacles to human realisation’ (Soliman, 2019: 14) by developing novel modes of intervention to achieve positive social change (Canning and Tombs, 2021). The emergence and proliferation of ADM and AI have inevitably led to social change, transforming everything from work and leisure to communication patterns and social and political organisations (Floridi, 2014). Significantly, the design and deployment of these systems are subject to a constant struggle between contradictory values and expectations. Such values and expectations influence the relationship between these systems and different dimensions of social harm, and thus further shape the features of what Pemberton (2015) terms ‘harm reduction regimes’. If conceptualised as a multidisciplinary, cross-sectional matter, the struggle for the responsible design and accountable development of AI and ADM necessarily overlaps with racial, gender, ecological and class struggles (Crawford et al, 2019; Benjamin, 2019).

In this paper, we use the case of the Finnish Immigration Service (Migri) to illustrate different dimensions of potential and documented social harms. Migri deals with a diverse immigrant population. We focus on the temporal harms experienced by asylum seekers and the potential algorithmic harms to the entire population of applicants, including refugees and asylum seekers, international students and researchers, and low- and high-skilled workers. We pose two research questions: first, how the current approach to automation in public administration corresponds with the traditions of the Finnish harm reduction regime; and second, whether and how the intertwining of harms is accounted for in the policy and legislation affecting the current and upcoming reality of asylum seekers and Migri’s operations.

We begin this paper by introducing the Migri management crisis and the proposed analogue and algorithmic solutions. The current picture will then be contextualised within a zemiological framework, revealing the intertwining of analogue and algorithmic harms. This leads us to the main body, where we examine features of Finnish society in the algorithmic context, drawing special attention to the differences between the vision and reality of Finnish AI policies, as well as the disconnected approaches to harm framing in policy and legislation. We conclude by discussing harm reduction regimes in the algorithmic context. In the past, Finland has been considered relatively resistant to a neoliberal turn in diverse social domains and, therefore, it has been viewed as an example of a less harmful version of a capitalist society (Pemberton, 2015). However, through analysis of policy and legal documents related to immigration and automation, we reveal possible gaps between theory and practice and, by extension, fractures in the Finnish harm reduction regime.

Data and method

Our analysis is conceptual in nature. First, we offer an interpretation of Migri’s attempt at automation, based on the reports of investigative journalists, press releases and previous research, as well as publicly available official documents, including legislation, official statements from Finnish ministries and reports from overseeing authorities.

To answer our first research question and to map the evolution of Finnish AI policies and their implementation, we reread policy documents concerning AI and automation, paying special attention to the priorities and values promoted by the reports produced by the projects Era of AI and AI 4.0 (MoE, 2017; 2019; 2021a; 2021b), as well as the Government Programmes of 2015 and 2019, which set the basic goals for sectoral policies.

To answer the second question and gain understanding of the relationship between immigration policies, AI policies, the constitutional framework for automation in public administration and the relevant regulatory suggestions, we explore how risks and harms are accounted for in selected documents. Our goal is to provide a holistic view of the perceived harms and solutions by combining legal and policy documents often examined in isolation. Thus, we concentrate here on the Action Plan for the Prevention of Irregular Entry and Stay for 2021–2024 (MoI, 2021, hereafter the Action Plan); the Report on Possible National Solutions to the Situation of People without a Right of Stay in Finland (MoI, 2022, hereafter the Report); two government bills on the data processing and automation of Migri (HE 224/2018 and HE 18/2019); the Finnish Strategy on AI (MoE, 2019) and newer, cross-sectoral AI policy documents (MoE, 2021a and 2021b); the statement by the CLC (PeVL, 7/2019); and, finally, the first draft of the Government Bill for the ADM Law (MoJ, 2022)1.

We chose these documents as they provide a broad sample of the law and policy affecting the current and upcoming reality of asylum seekers and Migri’s attempt to automate its processes. We started our analysis by perusing the documents to understand the general picture and the weight given to harms and risks. We then searched for terms such as risk (riski, risk*) and harm (haitta, hait*), after which we examined the context in which they occurred. As the term risk is often used when referring to legal requirements, for example by the EU’s General Data Protection Regulation (GDPR), we limited our enquiry to search results referring not to specific legal phrases but to possible societal or human implications. These parts of the texts were read meticulously, fostering understanding of how possible risks and harms are evaluated or considered. In the case of the government bills and the statement by the CLC (PeVL 7/2019), we paid special attention to sections directly concerning automation. Although some of the documents are also available in English, we concentrate on the original Finnish versions to ensure consistency of analysis. However, when possible, the quotes we provide come from the official English versions of the documents. Our analysis is limited in scope and exploratory in nature. By bringing together these disconnected elements of the Finnish harm-reduction regime, we hope to provide a context for future, in-depth empirical research. Such research could include, for instance, textual analysis of diverging narratives emerging from legal and policy documents concerning immigration and automation.
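To make the search step concrete, the following minimal Python sketch illustrates the kind of stem-based term search and context extraction described above. The file layout, stem patterns and context window are illustrative assumptions for this sketch only; they do not describe the tooling used in our analysis, which relied on close manual reading of the retrieved passages.

```python
import re
from pathlib import Path

# Hypothetical stem patterns corresponding to risk (riski, risk*) and harm (haitta, hait*).
PATTERNS = [re.compile(r"\brisk\w*", re.IGNORECASE),
            re.compile(r"\bhait\w*", re.IGNORECASE)]

def find_term_contexts(text: str, window: int = 200) -> list[str]:
    """Return snippets of text around each hit, to be read and classified manually."""
    snippets = []
    for pattern in PATTERNS:
        for match in pattern.finditer(text):
            start = max(match.start() - window, 0)
            snippets.append(text[start:match.end() + window])
    return snippets

# Hypothetical corpus: policy and legal documents saved as plain-text files.
for doc in Path("documents").glob("*.txt"):
    hits = find_term_contexts(doc.read_text(encoding="utf-8"))
    print(doc.name, len(hits), "occurrences to review in context")
```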

Crisis in the Finnish Immigration Services

In 2015, many European countries faced an unprecedented spike in asylum applications due to the enormous number of people fleeing war and conflicts. In Finland, at the forefront of the crisis was Migri, the agency that grants residence permits to foreign nationals entering Finland, registers the right of residence of EU citizens and determines citizenship status. It also processes asylum applications and decides on refusals of entry and deportations. With the remarkable rise in the number of applications for international protection, Migri’s operations became subject to public scrutiny. At the time, the number of asylum applications rose from a typical 1,500–6,000 annually to a peak of over 32,000. Admittedly, in subsequent years, fewer people sought asylum in Finland. However, Migri still had to deal with the backlog of cases from the crisis period.

At the beginning of 2017, matters related to the extension of residence permits and the residence of EU citizens were transferred from the police to Migri (HE 64/2016). In 2018, population growth in Finland was the smallest since 1970 (MoI, 2019: 15). Thus, both the incumbent right-wing government led by Sipilä and the subsequent left-leaning government, led first by Rinne and then by Marin, highlighted the significance of work-based migration. To support population growth and remedy labour shortages in key industries, both governments launched multiple initiatives to attract foreign workers, students and researchers, and to facilitate migrant workers’ access to the labour market (MoI, 2019: 9). This, in turn, led to an ongoing rise in applications for residence permits.

In 2019, the number of work-based residence permit applications submitted rose from 10,805 in 2018 to 12,687 (MoI, 2019: 9). At the time, both the Chancellor of Justice (K 12/2020: 150–152) and the Parliamentary Ombudsman (K 15/2020: 224) reprimanded Migri for repeatedly violating the legal limits on processing time. The same problems arose in asylum procedures, where average processing times are in general considerably longer than for work-based residence permit applications (Yle News, 2020b). Hence, the Ministry of the Interior (MoI) ordered an investigation into Migri’s asylum process by an independent research and consulting company. The investigation found that, in 2017, a decision on an application for international protection took about a year on average. In the following year, this period had decreased to 237 days (Owal Group, 2019: 13). From July 2018 onwards, Migri was required to decide each case regarding international protection within six months. Accordingly, the processing times of new applications have shortened. However, due to changing priorities, the situation of older cases still pending has worsened (Owal Group, 2019: 95).

Analogue harms of long processing periods

The social impact of long processing periods varies depending on the type of applicant. While in the case of a first work permit, prolonged waiting periods may lead to applicants forgoing the original job offer and other job opportunities (Yle News, 2018b; 2019c), in the case of asylum applications the consequences could be more severe. In 2022, the MoI estimated that about 3,000 asylum seekers whose cases dated from 2016 or earlier were still in Finland, waiting for a decision on international protection (MoI, 2022: 15). During the asylum process, state-funded reception centres offer accommodation and essential welfare and healthcare services (Reception Act section 26.1) to applicants waiting for their cases to be handled. Applicants are free to find accommodation outside of the reception centres (Reception Act section 18). However, given their limited right to work (Aliens Act 301/2004 section 79.2) and the lack of state support for private accommodation, their options are scarce. Only a residence permit secures possibilities for working or studying, guarantees full access to healthcare services and enables full societal integration. In the case of a failed asylum application, the applicant can no longer use the services of the reception centre after a certain time (Reception Act section 14a), and healthcare becomes limited to urgent situations (Health Care Act 1326/2010 section 50) and essential social welfare services (Social Welfare Act 710/1982 section 12). In addition, the applicants often cannot continue their studies or work previously carried out (Aliens Act section 73).

Harmful analogue solutions

To deal with the management crisis in Migri, multiple analogue solutions were proposed. At first, Migri was allocated additional resources, and 500 new employees were hired to process applications (Yle News, 2017). However, the authority’s operations raised doubts about the quality and, by extension, the fairness of asylum processing. Reports by investigative journalists at the beginning of the crisis indicated that caseworkers employed through the emergency appointments of 2016 were given templates of negative asylum decisions (Yle News, 2017). Indeed, a comparison of asylum decisions before and after the so-called crisis (Vanto et al, 2022) revealed that asylum caseworkers’ assessments of similar facts changed considerably. In particular, trust in the credibility of applicants’ claims decreased. Notably, there were no legal changes that could explain this interpretative shift. The considerable scope of discretion given to immigration officers allowed them to draw on legal frameworks in a flexible and instrumental manner and, under political pressure to curb the increased number of arrivals, generated a large-scale shift in asylum decision making (Vanto et al, 2022).

Efforts to mitigate the crisis continued with more restrictive regulations introduced into the asylum process (Finnish Government, 2015) that limited the right to free legal aid, shortened time frames for appeal, increased the number of deportations and intensified the scrutiny of immigrants’ income (Yle News, 2020a). The multiple legal changes implemented in a brief period formed part of a ‘race to the bottom’, leading to undesirable consequences in the form of reduced attention to the basic human rights of asylum seekers (Pirjatanniemi et al, 2021: 229). The unprecedented situation contributed to the erroneous processing practices of Migri, shifting the burden to the administrative courts (Yle News, 2018a). The need for appeal procedures and repeated asylum processes increased, especially in the first years following the crisis, when the courts dealt with a peak in asylum cases: 8,854 cases in 2017 and 6,821 cases in 2018. About a third of these cases were returned to Migri for new processing (Owal Group, 2019: 13).

Concurrently, however, the number of first-time asylum seekers fell decisively and, at the end of 2018, Migri reduced its personnel resources, leaving fewer than 40 employees to process asylum applications (Yle News, 2020b). Due to the concomitant backlog of about 20,000 cases and the additional tasks connected to the rising number of residence permit applications, processing times increased further (Yle News, 2019a), and the management crisis continued. Nevertheless, Migri’s budget allocation did not increase. Citing disagreement with the Ministry of Finance (MoF) ‘over what constitutes adequate resourcing for the agency’, Migri planned to lay off a further ten percent of its staff at the end of 2019 (Yle News, 2019b). Meanwhile, investment in information and communications technology services to develop Migri’s capacities for ADM was considered a solution to the problem.

Promise of algorithmic solutions

The right-wing government, elected just before the Migri crisis in 2015, placed digitalisation at the forefront of its political programme. The programme promoted a culture of experimentation as a way of addressing societal issues such as health and well-being, employment, competitiveness and growth, and education and skills. Digitalisation and the dismantling of regulatory boundaries were considered essential to cultivate experimentation (Government Programme, 2015: 27). To this end, the Ministry of Economic Affairs and Employment (MoE) began to develop the Finnish AI strategy under the auspices of the project Era of AI (MoE, 2017). The concluding report of this project (MoE, 2019), often referred to as the Finnish Strategy on AI, promoted self-regulation among private and public actors and encouraged an ‘AI sandbox’, a limited area for experimentation on AI and the development of new regulatory solutions (MoE, 2019: 54). Despite the neoliberal orientation of right-wing politics, the initial AI policies often referred to the notion of ‘human-centric AI’, highlighting the trustworthiness of AI development and its ethical operation in the private as well as the public sector.

In line with these strategic goals, digitalisation and automation were considered a solution to the management crisis in Migri. First, basic customer service was transferred to chatbots, launched in May 2018 to reduce human service interactions. Then, to ensure compliance with the constitutional principle of legality, that is, that all use of administrative power requires a legal basis established by a parliamentary act, the MoI prepared a bill (HE 224/2018) concerning the processing of personal data and the use of automation by Migri. The idea was to fully automate a portion of the decision-making process in straightforward cases in order to allocate the scarce human resources to cases requiring discretion and case-by-case assessment (HE 224/2018). It is important to note that the Finnish Strategy on AI refers to AI as systems ‘that are able to learn and to make decisions in almost the same manner as humans’ (MoE, 2019: 16). The discussion on automation relating to Migri, however, concerned systems with pre-coded rules derived from parliamentary acts, simple calculations and unambiguous reasoning, and hence with no AI component (MoJ, 2020: 63).
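For readers less familiar with this distinction, the following minimal sketch illustrates what such rule-based ADM looks like in principle. The fields, income threshold and decision rules are hypothetical illustrations of the general idea of pre-coded rules and unambiguous reasoning; they are not the rules proposed in the Migri bills.

```python
from dataclasses import dataclass

@dataclass
class Application:
    holds_valid_permit: bool      # applicant holds a valid previous residence permit
    monthly_income_eur: float     # declared monthly income in euros
    register_check_flag: bool     # a register check raised an issue needing assessment

INCOME_THRESHOLD_EUR = 1_300.0    # hypothetical statutory income requirement

def decide(app: Application) -> str:
    """Apply pre-coded rules; anything requiring discretion goes to a human caseworker."""
    if app.register_check_flag:
        return "refer to caseworker"              # no automated refusal in ambiguous cases
    if app.holds_valid_permit and app.monthly_income_eur >= INCOME_THRESHOLD_EUR:
        return "grant extension automatically"    # straightforward case, fully automated
    return "refer to caseworker"
```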

Despite the limited scope of the proposed automation, the bill did not pass constitutional review, which in the Finnish legal system occurs ex ante, during the legislative process. According to the CLC, the proposal failed to adequately address the principles of good governance or to comply with GDPR rules. Eventually, the work on the first Migri bill was discontinued due to a change of government. The new left-leaning government, like its predecessor, championed the possibilities of AI and encouraged experimentation and broad utilisation of AI (Government Programme, 2019). Thus, the work on legislation enabling ADM by Migri continued (HE 18/2019: 30). The goal was again to modernise and streamline Migri’s processing to mitigate the increased number of applications, while guaranteeing the constitutional requirements of good governance, in particular the right to have one’s case processed without undue delay (HE 18/2019: 29). Nevertheless, even though the updated bill delineated the scope of automation more clearly, the CLC deemed the suggested regulation too vague to ensure compatibility with the constitutional principles of transparency, justifiability and accountability of administrative decisions. Instead, the CLC called for an assessment of the need to launch a legislative project on the matter (CLC, PeVL 7/2019).

The statement of the CLC effectively meant a total ban on automation in all branches of public administration until appropriate legislation could be implemented. Given that many public agencies considered automation practices essential to their operations (Yle News, 2019d), the preparatory process for the General Law on Automated Decision Making in Public Administration (the ADM Law) started at the end of 2019. After extensive preparatory work, the Ministry of Justice (MoJ) published the first draft of a bill on the ADM Law in February 2022 (MoJ, 2022). The draft leans towards relatively strict regulation and limited utilisation of ADM in public administration, allowing rule-based ADM systems in cases that do not require discretion, or when the required discretion is exercised on a case-by-case basis by caseworkers prior to the decision (MoJ, 2022: 47).

The legislative process has progressed slowly. Meanwhile, Migri’s problems with high expenditure and a tight budget have continued. Hence, at the end of 2019, in its fourth supplementary budget proposal, the Government allocated 4.7 million euros to the agency to guarantee sufficient resources for its tasks and personnel costs. The decision enabled Migri to avoid the planned reduction of its personnel. An additional 1.1 million euros was set aside for maintaining the agency’s compulsory IT systems (HE 67/2019: 37). However, the additional funds were designated for securing the continuity of the agency’s operations (HE 67/2019: 37) and thus offered no new means of solving the ongoing management crisis. Consequently, given the persistent backlog of cases, numerous applicants, especially asylum seekers, remain in a bureaucratic limbo, a state of ‘in-betweenness’ that pushes them towards surviving rather than living (Hartonen et al, 2021: 41). Next, we structure the harms connected to the Migri case within a zemiological framework, forming a basis from which to analyse the Finnish harm-reduction regime in the context of algorithmic transformation.

Dimensions of harm in zemiological framing

Analogue harms

Social harm studies (Hillyard et al, 2004), which later developed into zemiology, emerged within critical criminology, arguing that mainstream accounts of criminology fail to capture harmful conditions embedded in social structures. To achieve meaningful social change (Canning and Tombs, 2021), social harm scholars shift the focus from individuals to collective entities and socioeconomic structures, exposing how individuals and groups are harmed in the course of different social ordering practices.

Significant social harms are attributed to restrictive immigration policies (Canning and Tombs, 2021: 83), in particular to the ‘illegalisation of movement’ (Soliman, 2019). Border controls force people, especially those deemed ‘undeserving’ (Ustek-Spilda and Alastalo, 2020), to take riskier, often illegal journeys, causing harms ranging from financial dependency on smugglers to an increased likelihood of physical harm or death. Those who manage to enter the country experience what Tervonen et al (2018) refer to as ‘internal everyday bordering practices’, that is, the confusing if not harmful bureaucracy of ‘endoborders’: ‘micro and macro level controls which restrict everyday interactions and civil liberties’ (Canning, 2019: 43). While some of these ‘borders’ are entrenched in the legal frameworks of immigration management, others stem from defective processing procedures during which migrants are exposed to anguish and uncertainty.

Western states’ immigration policies are frequently explored in the context of neoliberalism (Joppke, 2021) and securitisation (Palander and Pellander, 2019). These trends have shaped bifurcated models of internal and external bordering practices directed at different categories of migrants, combining competitive recruitment of high-skilled migrants with restrictive attitudes towards low-skilled, family, asylum, or irregular migration (Joppke, 2021). However, the role of immigration is also dichotomous in the rhetoric of welfare nationalism (Heino and Jauhiainen, 2020). Accordingly, since the early 2000s, labour migration has been presented in Finnish governmental policies as a solution to demographic problems and concomitant labour shortage (Ollus and Alvesalo-Kuusi, 2012), and thus as beneficial, if not necessary, for the Finnish economy. Simultaneously, the restrictive governmental policies following the crisis in 2015 aimed at ‘making Finland less appealing to potential asylum seekers’ (Palander and Pellander, 2019), consequently limiting the number of arrivals through the external borders.

The restrictive regulation of the asylum process (Pirjatanniemi et al, 2021) and legal changes, such as removing humanitarian protection status from the Aliens Act and tightening the criteria for family reunification (Heino and Jauhiainen, 2020), created a hostile environment for those deemed inherently unwanted from the perspective of the Finnish welfare state. While these were undoubtedly harmful to people seeking refuge in Finland, in this paper we focus on the role of defective processing procedures in Migri, in particular the long waiting periods and the increased number of erroneous decisions. The long waiting periods of immigration processes restrict applicants’ ability to participate fully in social relations and thus create the aforementioned ‘endoborders’. Beyond the regular processing periods, erroneous decisions lead to complicated appeal procedures that may span years. Uncertainty over the future and a constant sense of insecurity form the basis of what Canning (2019) calls temporal harms. These harms are connected to a prolonged lack of permanence that amplifies feelings of worry and anxiety (Hartonen et al, 2021: 42).

Temporal harms may manifest as mental health problems and psychological trauma, as well as cultural harms often framed as harms of misrecognition (Soliman, 2019; Pemberton, 2015: 24). In the Finnish context, research on the subjective well-being of asylum seekers conducted in one of the reception centres found that over half of the asylum seekers in the centre experienced suffering, a third were struggling, and only about ten percent considered themselves to be thriving (Hartonen et al, 2021: 41). The insecurity of the immigration process impairs applicants’ ability to formulate choices and act on them effectively. This leads to a systematic dismissal of their wants, needs and concerns, amounting to the disablement of their attempts at self-actualisation and causing autonomy and relational harms (Pemberton, 2015: 29). In the case of Migri, partial automation was considered a plausible solution to the long processing periods (Migri, 2021). However, the feasibility of algorithmic decision making in public administration has been contested.

Algorithmic harms

Given the ubiquity of algorithmic transformation, attempts to use algorithmic systems to coordinate human activity have been conceptualised as a distinctive form of social ordering (Yeung, 2018). Hence, we argue that social harm scholars should pay attention to socio-techno-economic structures, which point to the technological dimension of harm (Malik et al, 2022). In official policies, the social change caused by algorithmic transformation has been couched in positive terms as driving social progress, prompting economic growth and creating efficiency in public administration. This framing links the increased use of algorithmic technologies and datafication by the welfare state (Dencik and Kaun, 2020) to a broader paradigm shift that has, since the 1970s, led to a transformation of welfare services. At first, the proliferation of new public management (NPM) approaches, which emphasise competition and incentivisation (Dunleavy et al, 2006), resulted in a fragmentation and decentralisation of decision-making processes (Temmes, 1998). Later, it was hoped that the turn to digital governance – a complex of changes centred around IT and information handling that preceded algorithmic transformation (referred to as post-NPM) – would reverse the ‘gains’ of the NPM era (Dunleavy et al, 2006). Optimistically, the post-NPM approach promised the reintegration of governmental tasks and the creation of larger, needs-oriented administrative blocks to simplify access to public services (Dunleavy et al, 2006: 478, 480). This framing aligns with the deeply rooted Finnish optimism towards new technologies, which have enabled the country’s economic progress in the past (Schienstock, 2007: 99).

In the pessimistic narratives, the gains from digitalisation and automation are irrelevant, as automation creates new power relationships leading to severe harms. For instance, in public decision making, automated systems comprehend individual cases via reductive models and are thus likely to exacerbate socially embedded bias and discrimination (Eubanks, 2017). Furthermore, underlying problems in data quality or in the models used for data collection and analysis may lead to false positives or false negatives (Whittaker et al, 2018; Chiusi et al, 2020) and, while accelerating decision-making processes, may also restrict avenues for redress (Citron and Pasquale, 2014), violating the fundamental rights of affected individuals (Wachter et al, 2021).

From this vantage point, ADM and the growing datafication of public administration are framed as the natural outgrowth of neoliberal managerialism, which prioritises deregulation, cost efficiency and the individualistic logic of competition (Waldman, 2019). Hence, technologies are inherently value-laden, serving particular political ends. In line with this, Ustek-Spilda and Alastalo (2020), analysing Norwegian and Finnish practices of population registration, show that the software utilised in modern welfare services is built on the logic of capitalism, prioritising lower costs and increased profits over universal access to welfare or decommodification as a human right. In zemiological framing, the use of ADM in the public domain, especially without adequate human oversight, may exacerbate the damage already caused by privatisation, deregulation and other neoliberal policies frequently explored in social harm studies (Malik et al, 2022).

Similarly, Mann (2020) shows how ‘technology facilitates and enables the punitive, unfair, and unjust policies of the neoliberal state to be executed at great(er) scale and speed’. However, Mann argues that the centrality of technology should not overshadow the non-technological dimension. As ‘algorithmic fairness’ can never be achieved in a fundamentally unfair and unequal society (Mann, 2020: 7), technology should be decentred in discourses on social injustice. The crucial point she rejects is the capacity of technology to contribute to positive social change. In particular, she rejects the optimistic idea that harms can be mitigated by a better design of technology: without interventions into the underlying social organisation, technological innovations are deemed to perpetuate harmful conditions.

However, the deterministic framing of technology as working merely for the powerful presents an insufficient conceptualisation of human-technology relations (Wood, 2020). An abundance of progressive proposals has been made employing ex ante perspectives that centre on greater democratic oversight of, and deliberation about, the deployment of algorithmic systems (for example, Pasquale, 2015). These approaches highlight the importance of the cross-sectional struggle over the design and deployment of algorithmic systems. In the same vein, McCarthy (2013) recognises that the development of technology results from a political struggle over the gestalt of certain technologies and their capacity to solve certain societal problems. This politically-contested process is ‘embedded within structured relations of social power’ and ‘cannot be fully closed due to the necessary ambivalence of technological objects’ (McCarthy, 2013: 477). Even if technology is a creation of the powerful, there are forces that champion alternative solutions and resist potentially oppressive technologies. These forces are sometimes strong enough to enable a more just reality to be built. However, the forms, extent and results of these struggles vary across societies. In what follows, we describe how the current approach to automation in public administration corresponds with features of Finnish society, and whether and how algorithmic and analogue harms are accounted for in policy and legal documents.

Features of Finnish society in algorithmic context

Finnish society is an interesting example through which to explore the impact of algorithmic transformation on harm production and alleviation. The country is considered one of the front runners in the ethical development and implementation of AI systems. With a strong technology sector, good infrastructure and extensive availability of data representative of the population, Finland scores highly in rankings of AI readiness (Shearer et al, 2020: 10, 135). At the same time, the country is praised for its responsible approach to AI in terms of transparency and privacy. The low levels of socioeconomic inequality, protected access to information, and citizens’ ability to challenge irresponsible use of AI systems by governments have the potential to result in stronger accountability and inclusivity (Shearer et al, 2020: 21, 141). These qualities build on the preexisting sociolegal environment and cultural conditions (van Berkel et al, 2020) that amount to a less harmful version of capitalist society (Pemberton, 2015).

While the nature of capitalist harms remains the same, the distinct characteristics of socioeconomic governance, institutional architecture and political traditions across advanced capitalist systems influence the extent and experience of capitalist harms (Pemberton, 2015). In this framing, the Nordic welfare states have proven relatively resilient to the pressures of globalisation, the neoliberal rationale of economic restructuring, and the extensive austerity strategies that disproportionately affect marginalised groups and lead to an array of social harms in other contexts (Hillyard et al, 2004). Finnish political-economic governance is broadly framed as a ‘social-democratic’ system with relatively strong national regulation, generous welfare services, a lenient criminal policy, well-grounded trade union traditions and high levels of social solidarity and trust. It therefore presents a less harmful alternative to more ‘liberal’ systems characterised by individualism and competition (Pemberton, 2015). In AI policy research, countries with higher levels of socioeconomic cohesion and more respect for the rule of law are expected to provide a better environment for citizens to voice their concerns and to secure an ethical character for algorithmic transformations (Shearer et al, 2020). Correspondingly, the commitment to trust, citizens’ well-being and the rule of law are all pronounced in Finland’s approach to AI. Unfortunately, a closer look at the evolution of Finnish AI policies and their implementation reveals a gap between policy and practice. These contradictions reveal fractures in the features of the Finnish harm-reduction regime, in addition to those already demonstrated by the tension between restrictive immigration and welfare policies.

Between vision and reality of Finnish AI policies

As in other Nordic countries, the Finnish legal system and administrative culture are marked by simplicity, transparency, equality, and the avoidance of extremes (Letto-Vanamo and Tamm, 2019). This translates into a shared sense of responsibility, a relatively broad awareness of rights, and trust in public administration (Husa et al, 2007). In addition, the civil society sector is large and active (Helminen, 2019). Building on this preexisting culture of trust and citizen participation, the Finnish Strategy on AI (MoE, 2019: 103, 109, 111–112) underscores the need both to promote the ethical use of AI and to foster citizens’ trust in, and capacity to comprehend, the transformation.

However, practice differs from the ideal. The Finnish approach to automation in public administration has also been marked by a tension between strong commitments to competitiveness and to good governance. The governmental work on AI, from the Government Programme of 2015 to the newest action plans under the project AI 4.0 (MoE, 2021a and 2021b), adheres surprisingly strongly to neoliberal values. Even though the change of government in 2019 moved political reality to the left, with the most left-wing government elected in Finland in decades, neoliberal values became even more prevalent in the new AI policies. While the first project, Era of AI (MoE, 2017 and 2019), focused on society more broadly, the following project – AI 4.0, developed under Marin’s Government – sharpened the focus on enterprises, competitiveness and economic growth. In the project AI 4.0, AI and other advanced digital technologies are framed as tools to accelerate systemic change in the business community (MoE, 2021a: 73). In the second report of the project AI 4.0 (MoE, 2021b), citizens and civil society are not mentioned at all. Instead, following a neoliberal ethos, the emphasis is placed on economic values and efficiency. As the approach considers well-being from the perspective of employment and economic gains, it downplays the complex relations between different areas of life that contribute to the well-being of society. Thus, it fails to evaluate the crosscutting character of AI transformation and the ways in which diverse social groups experience this transformation.

The initial Finnish strategies highlighted the need for active, multidisciplinary discussion between diverse societal actors on how AI can be applied in a trust-generating manner (MoE, 2019: 102, 106). Despite this ambition to involve civil society in the development of algorithmic solutions, the national Strategy on AI (MoE, 2019), as well as the subsequent action plans (MoE, 2021a and 2021b), were prepared under the auspices of the MoE. This places the emphasis strongly on the benefit to enterprises and corporations, economic growth and cutting red tape for innovation. Although citizens had a chance to comment and attend workshops during the project Era of AI, the project was carried out primarily by enterprise representatives, ministry representatives and academic experts.

The largest digitalisation project in the country, AuroraAI (MoF, 2020), which aims to develop human-centric public services, is headed by the MoF. The project enables citizens to take part in the development process through open, real-time communication channels; however, the responsible stakeholders are limited to municipalities, ministry representatives and enterprises. Given the ministry’s objective ‘to stabilise general government finances, safeguard sustainable economic growth and to ensure that public services and administration are effective and efficient’ (MoF, 2022), the danger ensues that human-centric, needs-based services will be subordinated to financial policies aimed at ensuring efficiency and balancing state expenditure. In addition, the lack of civil society involvement in the legislative process of the ADM Law is noticeable. Despite active participation at the consultation stage, no civil society organisations were chosen to participate in the drafting of the bill. This choice reveals the priorities of the ongoing regulatory process, as the bill establishes the grounds of future AI governance in Finland (Lepinkäinen and Malik, 2022).

This overview shows that while the current approach to automation corresponds with the Finnish socioeconomic, political and legal system as regards the initial policies, a shift towards more liberal values is notable. Similarly, the incremental turn from human-centric to business-oriented frameworks could mean readjusting the interpretation of the Finnish Constitution (731/1999) (Lepinkäinen and Malik, 2022), which currently balances liberal and social ideals of the welfare state (Länsineva, 2012). The former emphasises the principle of individual freedom and the negative protection of rights, whereas the latter highlights the promotion of justice in society, solidarity, and concomitant affirmative action by the government to ensure equality and social security (Länsineva, 2012: 119). Between these two ideals, reflected in the broad catalogue of constitutional rights, the Finnish constitutional system leaves space for the negotiation of welfare policies and their implementation, leaning towards either social or liberal values. This includes the possibility of shifting the priorities of automation in public administration.

Disconnected approaches: harm framing in Finnish official documents

The case of automation in Migri reveals contradictions between political priorities in different societal areas, as it lies at the crossroads of discussions on immigration and AI transformation in the political and legal dimensions. To foster the alleviating qualities of algorithmic technologies, these overlapping issues should not be considered in isolation. Hence, we explore how the risks and harms connected to these societal areas are accounted for in Finnish immigration policies, AI policies, the constitutional framework for automation of public administration and relevant regulatory suggestions. Our short analysis shows that the intertwining of analogue and algorithmic harms and solutions is considered inadequately in these documents, thus creating a fragmented political-legal reality.

In immigration policies, the Action Plan (MoI, 2021) as well as the Report (MoI, 2022) consider harms to people without a right of stay in Finland in a broad sense. These considerations form a rather significant part of the documents, starting with the risks that follow from strict border controls and attempts to evade them (MoI, 2021: 15). The Report (MoI, 2022: 15) describes how ‘the vulnerability of persons without a right of residence to exploitation and even human trafficking came to the fore’. The framing of the risks that newcomers face after arriving in Finland is often connected to different types of criminal abuse of individuals. However, the documents also acknowledge feelings of insecurity and fear (MoI, 2021: 28; MoI, 2022: 36). The various harms mentioned could, especially if prolonged, lead to mental health issues and an overall reduction in quality of life (MoI, 2022: 14). These are associated with risks for society, as the problems these individuals face may lead to difficulties in integrating into society (MoI, 2022: 36).

In the AI strategies, explicit mentions of risks or harms are rare. Moreover, while immigration policies consider harms and risks from the perspective of individuals who are conceptualised as active subjects, in AI policies, harms are perceived from the perspective of institutions and business life and framed as problems of efficiency due to the high costs of personnel or revenue risks (MoE, 2021a: 65). Individuals are conceptualised as customers, objects benefitting from the utilisation of modern technologies.

The Finnish Strategy on AI (MoE, 2019) contemplates the risks of an AI system strictly from a technological perspective. It examines the security of chosen technologies and the risks they pose to society, but harms to individuals are not considered. In addition, the report’s categorisation of risks ignores the possibility of what Wood (2020) refers to as the technicity level of harms. In this account, technology has an agency of its own in facilitating the unexpected and unintended, and can have unintended impacts on its users. In contrast, the Strategy on AI (MoE, 2019) only differentiates between ‘harmful AI’ (AI meant to cause harm), ‘harmful action on AI’ (the exploitation of AI’s vulnerabilities by a malicious actor) and ‘fallible AI’ (AI making inaccurate or biased decisions) (MoE, 2019: 109). Of these three, only the last group is considered to include possible unintended harms. However, even this last group of harms is simplified as being a consequence of insufficient training data. The strategy fails to consider the social implications of harmful AI, as it remains limited to technological risks and ways to prevent them.

The reports published in project AI 4.0 (MoE, 2021a and 2021b) take no stance at all on the risks or harms to individuals or society. Instead, taking risks is described as desirable in business life: ‘Growing and new innovative companies renew industry structures, challenge large companies to innovate and create role models for business that is flexible, risk-taking, using modern technologies and growing’ (MoE, 2021a: 64).

In legal documents framing the automation of public administration, risks and harms are often portrayed as questions of legality, constitutional rights and freedoms. This is not surprising, as the documents adhere to legal terminology, and the perspective is constructed through requirements found in written law. The sections of the Migri bills (HE 224/2018 and HE 18/2019) concerning automation adopt a fairly similar perspective to the AI policies, arguing that the use of ADM systems will make the institution more efficient and enhance its processes, which will also bring benefits to customers (HE 224/2018: 26; HE 18/2019: 29). The possible risks are rather implicitly equated with risks to due process (HE 18/2019: 40–41), and restricting the use of automation is seen as potentially amplifying those risks: ‘Automated decision-making improves efficiency, for example, by allowing human resources to be allocated to decision-making activities that require discretion. There is a possibility of error in decisions made by natural persons, while the possibility of human error is minimised in automatic decisions’ (HE 224/2018: 64; see also HE 224/2018: 30; HE 18/2019: 41).

The careful approach shown in the CLC’s statement (PeVL 7/2019), which set a ban on the use of ADM systems in public administration, could be interpreted as being oriented towards the mitigation of the potential threat of algorithmic harms. Nevertheless, the harm reduction goal is merely implicit, as the CLC focuses on compliance with the constitutional standards of administrative processes (CLC, PeVL 7/2019: 11). The CLC statement includes no direct mention of risks and harms in the sections connected to automation. The risks and harms mentioned in the other sections of the statement predominantly relate to questions of privacy and data protection, which follows the main focus of Government Bill 18/2019 on data processing.

Finally, the draft of the Government Bill on the ADM Law (MoJ, 2022) rarely uses the terms risk and harm without signposting the specific legal requirement to which the risk relates, or the source of the legal requirement for risk minimisation. Importantly, the draft repeatedly relies on the notion of risks to individual rights and freedoms, and includes a reminder that the EU’s GDPR requires a risk-based approach to minimise risks in data processing.

Conversely, the immigration policies, despite their broad understanding of harms to individuals, show a rather weak understanding of automation, ADM and AI. Admittedly, automation is mentioned among the 52 actions to improve immigration management (MoI, 2021, Action 33). Yet the goal of automation is framed from the perspective of harms to institutions, as it will be used to monitor and prevent abuses of the system. In addition, the use of automation is conceptualised as part of the efforts to enhance the agency’s permit procedures. Automation is considered crucial to meet the objective set in the Government Programme of 2019, which is to process work-based residence permit applications in (on average) one month (MoI, 2021: 53). The Action Plan takes no stance on the question of how automation could or should be utilised in the agency’s decision making. The only risk mentioned in relation to automation concerns possible irregular entries and attempts to evade entry provisions (MoI, 2021: 48), which follows the logic of the Finnish Strategy on AI (MoE, 2019) and connects risks with intentional malicious actions. Otherwise, the risks of automation or algorithmic harms are not accounted for, and the plan offers no guidance for the utilisation of algorithmic systems in the specific context of immigration matters. The subsequent Report (MoI, 2022) does not mention AI, automation or digitalisation at all. It concentrates on the legal possibilities for solving the problem of irregular stay in Finland, but takes no stance on the question of algorithmic harms or solutions. This may follow from the fact that the Report was written and published at a time when the ADM Law was already under preparation. Still, a contextual understanding of the possibilities of, and the need for, algorithmic technologies would have been valuable.

Given the scale and unprecedented character of the 2015 immigration management crisis and the early stage of the concurrent digitalisation process, the disconnected framing of harms and solutions in these documents may not directly relate to the poor handling of immigration and asylum cases. However, more broadly, the management crisis in Migri illustrates the overlap of pressing societal issues: approaches to different forms of immigration, the future of the welfare state, and the algorithmic transformation of public tasks. This is emblematic of the embedded character of algorithmic technologies in preexisting societal structures, where algorithmic harms are only one layer in a broader stratigraphy of harms (Wood, 2020). This raises the question of what the hierarchy of social harms is. More importantly, it also points to the cross-sectional character of AI transformation, which overlaps with racial, gender, ecological and class struggles, and should not be analysed in isolation. The disconnect between different political priorities pursued in separate policy sectors leaves a variety of harms completely unaccounted for. Bringing together the legal and policy documents that address these issues is necessary to gain a holistic view of the intertwining of possible harms and their prospective solutions. A deeper understanding of how the elements of immigration and automation policies affect each other would make solutions more operational. Still, the most recent immigration policy documents (MoI, 2021 and 2022), while focusing on different dimensions of analogue harms, take no stance on the question of algorithmic harms. The Action Plan considers automation in general terms, but the sectoral characteristics are not reflected, and even though many risks for individuals are recognised, no consideration of the impact of AI or ADM on applicants can be found. If AI policies are handled without reference to the area to which they will ultimately be applied, the danger arises that disconnected approaches will distort the perception of harms and prompt inadequate solutions.

Concluding discussion: harm reduction regimes in algorithmic context

The case of the Finnish Immigration Services illustrates different dimensions of social harms. The restrictive immigration policies implemented in response to the perceived ‘refugee crisis’ produced several undeniable harms to those seeking asylum in Finland. The defective processing procedures, including the long processing periods and the erroneous decision-making practices, are but a few examples of internal, everyday bordering practices, a form of ‘endoborders’ that shifts mobility control from external borders to the level of employment, education, welfare and healthcare services. The state of in-betweenness experienced by those awaiting an asylum decision limits access to non-essential welfare and healthcare services, education and employment. This creates a hostile environment for those implicitly deemed unwanted by Finnish immigration policies and produces a range of temporal and other social harms. If these analogue harms are systematised and accelerated by an increasing use of algorithmic technologies, they amount to what we call algorithmic harms (Malik et al, 2022). In other words, automation creates new relationships of power that exacerbate and perpetuate harms already caused by neoliberalism.

However, the Migri management crisis also illustrates the ambiguity of algorithmic transformation, which creates potential for both harm production and harm alleviation. In line with Finnish AI strategies, automating part of Migri’s decision-making process offered a potential solution to the crisis; yet it was hampered by features inherent in the Finnish constitutional system. This highlights that caution and the commitment to legality have possibly prevented algorithmic harms. Simultaneously, however, the lack of alternative solutions leaves a great number of people exposed to temporal harms while awaiting a decision on their cases. Applicants are left in uncertainty and without access to full welfare and other services. Thus, the management calamity becomes a humanitarian crisis.

Against this background, the underlying tone of our analysis is decidedly pessimistic. Despite the fact that the Finnish system is conducive to the responsible development and use of AI and automation, our analysis reveals contradictions between vision and reality in the approach to AI transformation, and disconnected approaches to the intertwining of analogue and algorithmic harms and solutions. The Finnish AI policies recognise the embedded nature of AI transformation in society, yet conceptualise it in isolation from important societal issues. At present, the AI policy documents frame automation in abstract terms, leaving its operationalisation to lower-level bureaucrats responsible for other societal issues, such as the implementation of immigration policies. Moreover, the gaps between AI policies and their implementation show that, in Finland, neoliberal voices are garnering stronger support.

The incremental shift in the priorities of digitalisation points to fractures in those features of Finnish social organisation that are usually deemed less harmful. While Finland has previously proved relatively resistant to the neoliberal turn in diverse social domains, it may prove less resistant in the context of algorithmic transformation. Indeed, a drift towards neoliberal approaches in public administration, in particular the turn to new public management and other neoliberal, result-oriented management strategies, has been observable in Finland since the 1990s. In addition, the analogue harms signposted in this paper are connected to inherent contradictions underpinning the welfare paradigm. Welfare services are often limited to those who have the right to remain in the country in question, while those whose residency status gives them limited (or no) access to the labour market or social assistance are neglected (Ustek-Spilda and Alastalo, 2020).

The contradictions between immigration policies and the welfare paradigm remain a pressing problem that exceeds the scope of this analysis. However, the contradictions between the vision and the reality of AI transformation show that the design and deployment of algorithmic systems are subject to a constant struggle over the hierarchy of societal values and, by extension, the hierarchy of harms. The development and use of algorithmic systems is a politically contested process of continuous negotiation and renegotiation of the values and expectations placed on these systems in terms of the alleviation of harm(s). To foster the alleviating qualities of AI transformation, constant regulatory pressure is needed. In this regard, we ultimately assume the position of technology pragmatists, considering both the positive and negative sides of technological innovations.

In modern societies, where action takes place in highly technology-intensive environments, social harm scholars and, most importantly, policymakers and legislators should be attuned to the ambiguous relationship between harm and algorithmic systems and to the role of automation in harm reduction regimes. Even if, as Mann (2020) argues, technology will not solve the problems underlying advanced capitalist societies, the intertwining of algorithmic and analogue harms and solutions requires explicit attention to the value choices that shape both algorithmic and analogue alternatives. Thus, while we agree with Mann’s argument that neoliberal rationalities guide the design, development and deployment of algorithmic technologies, we contend that ignoring the struggle over these issues would be a mistake.

The danger that ensues from such deterministic framings is the abandonment of bottom-up efforts by social movements, pressure groups, academics and individual citizens. Achieving social mobilisation requires fostering citizens’ digital literacy and raising awareness of AI as a collective issue. In this context, part of the zemiological agenda is to empower bottom-up social change (Canning and Tombs, 2021). By developing conceptual and empirical accounts of harm and harm reduction in both the analogue and the algorithmic dimensions, future research has the potential to strengthen the position of marginalised groups. Thus, we argue that understanding the gaps and overlaps between these dimensions is a prerequisite for a holistic view of harms, and of solutions to these harms, in a digitalised society.

Note

1

After this article was submitted, the MoI initiated a legislative project (SM025:00/2022) to enable ADM in Migri. Moreover, the latest draft of the Government Bill on the ADM law is expected to be presented to Parliament in autumn 2022.

Funding information

This research was conducted under the AALAW project (Academy of Finland project number 315007) and the ETAIROS project (Academy of Finland project number 327357).

Conflict of interest

The authors declare that there is no conflict of interest.

Legal references

Aliens Act. Ulkomaalaislaki 301/2004.

The Finnish Constitution. Suomen perustuslaki 731/1999.

Health Care Act. Terveydenhuoltolaki 1326/2010.

Reception Act. Laki kansainvälistä suojelua hakevan vastaanotosta sekä ihmiskaupan uhrin tunnistamisesta ja auttamisesta 746/2011.

Social Welfare Act. Sosiaalihuoltolaki 1301/2014.

References

  • Aradau, C. and Blanke, T. (2018) Governing others: anomaly and the algorithmic subject of security, European Journal of International Security, 3(1): 121, doi: 10.1017/eis.2017.14.

  • Benjamin, R. (2019) Race After Technology, Cambridge: Polity Press.

  • Canning, V. (2019) Abject asylum: degradation and the deliberate infliction of harm against refugees in Britain, Justice, Power and Resistance, 3(1): 3760, https://research-information.bris.ac.uk/ws/portalfiles/portal/207660459/Abject_Asylum.pdf.

  • Canning, V. and Tombs, S (2021) From Social Harm to Zemiology, Oxford: Routledge.

  • Chancellor of Justice (K 12/2020) Oikeuskanslerin kertomus vuodelta 2019, https://www.eduskunta.fi/FI/vaski/Kertomus/Documents/K_12+2020.pdf.

  • Chiusi, F., Fischer, S., Kayser-Bril, N. and Spielkamp, M. (2020) Automating society Report 2020, https://automatingsociety.algorithmwatch.org.

  • Citron, D.K. and Pasquale, F. (2014) The scored society: due process for automated predictions, Washington Law Review, 89(1): 133.

  • CLC (Constitutional Law Committee) PeVL (7/2019) vp. Perustuslakivaliokunnan lausunto, HE 18/2019 eduskunnalle laiksi henkilötietojen käsittelystä maahanmuuttohallinnossa ja eräiksi siihen liittyviksi laeiksi.

  • Crawford, K. et al. (2019) AI Now 2019 Report. New York: AI Now Institute, https://ainowinstitute.org/AI_Now_2019_Report.html.

  • Dencik, L. and Kaune, A. (2020) Datafication and the welfare state, Global Perspectives, 1( 1): 12912, doi: 10.1525/gp.2020.12912.

  • Dunleavy, P., Margetts, H., Bastow, S. and Tinkler, J. (2006) New public management is dead: long live digital-era governance, Journal of Public Administration Research and Theory, 16(3): 46794, doi: 10.1093/jopart/mui057.

  • Eubanks, V. (2017) Automating Inequality, New York: St. Martin’s Press.

  • Finnish Government (2015) Action plan on asylum policy, https://vnk.fi/documents/10184/1058456/Hallituksen_turvapaikkapoliittinen_toimenpideohjelma_08122015+EN.pdf/3e555cc4-ab01-46af-9cd4-138b2ac5bad0.

  • Finnish Immigration Service (2021) Strategy 2021, https://migri.fi/documents/5202425/9320472/strategia2021_en/b7e1fb27-95d5-4039-b922-0024ad4e58fa/strategia2021_en.pdf.

  • Floridi, L. (2014) The Fourth Revolution: How the Infosphere is Reshaping Human Reality, Oxford: Oxford University Press.

  • Government Programme (2015) Ratkaisujen suomi, publications of the finnish government, 10/2015.

  • Government Programme (2019) Osaava ja osallistava Suomi – Sosiaalisesti, taloudellisesti ja ekologisesti kestävä yhteiskunta, publications of the finnish government, 2019:25.

  • Hartonen, V.R., Väisänen, P., Karlsson, L. and Pöllänen, S. (2021) ‘Between heaven and hell’: subjective well‐being of asylum seekers, International Journal of Social Welfare, 30(1): 3045. doi: 10.1111/ijsw.12435

  • HE (18/2019) vp. Hallituksen esitys eduskunnalle laiksi henkilötietojen käsittelystä maahanmuuttohallinnossa ja eräiksi siihen liittyviksi laeiksi, Government Bill.

  • HE (224/2018) vp. Hallituksen esitys eduskunnalle laiksi henkilötietojen käsittelystä maahanmuuttohallinnossa ja eräiksi siihen liittyviksi laeiksi, Government Bill.

  • HE (64/2016) vp. Hallituksen esitys eduskunnalle laeiksi ulkomaalaislain ja eräiden siihen liittyvien lakien muuttamisesta, Government Bill.

  • HE (67/2019) vp. Hallituksen esitys eduskunnalle vuoden 2019 neljänneksi lisätalousarvioksi, Government Bill.

  • Heino, H. and Jauhiainen, J.S. (2020) Immigration in the strategies of municipalities in Finland, Nordic Journal of Migration Research, 10(3): 7389, doi: 10.33134/njmr.345.

  • Helminen, M. (2019) Finnish Civil Society Organizations in Criminal Justice: Exploring Their Possibilities to Fulfil Mission Values and Maintain Autonomy From a Comparative Perspective, Doctoral Dissertation, Turku: Turun yliopisto [University of Turku], https://www.utupub.fi/bitstream/handle/10024/148177/AnnalesBHelminen.pdf?sequence=1&isAllowed=y.

  • Hillyard, P., Pantazis, C., Tombs, S. and Gordon, D. (2004) Beyond Criminology, London: Pluto Press.

  • Husa, J., Nuotio, K. and Pihlajamäki, H. (eds) (2007) Nordic Law: Between Tradition and Dynamism, Antwerp-Oxford: Intersentia.

  • Joppke, C. (2021) Immigration policy in the crossfire of neoliberalism and neonationalism, Swiss Journal of Sociology, 47(1): 7192, doi: 10.2478/sjs-2021-0007.

  • Koulu, R., Mäihäniemi, B., Kyyrönen, V, Hakkarainen, J. and Markkanen, K. (2019) Algoritmi päätöksentekijänä? Tekoälyn hyödyntämisen mahdollisuudet ja haasteet kansallisessa sääntely-ympäristössä, Valtioneuvoston selvitys- ja tutkimustoiminnan julkaisusarja [Publication series of the Government’s investigation and research activities], 44/2019, https://julkaisut.valtioneuvosto.fi/handle/10024/161700.

  • Länsineva, P. (2012) Fundamental principles of the constitution of Finland, in K. Nuotio, S. Melander and M. Huomo-Kettunen (eds) Introduction to Finnish Law and Legal Culture, Turku: Forum Iuris, pp 11126.

  • Lepinkäinen, N. and Malik, H.M. (2022) Discourses on AI and regulation of automated decision-making, Global Perspectives, 3(1): 33707, doi: 10.1525/gp.2022.33707.

  • Letto-Vanamo, P. and Tamm, D. (2019) Nordic Legal Mind, in P. Letto-Vanamo, D. Tamm and B.O.G. Mortensen (eds) Nordic Law in European Context, Berlin: Springer, pp 119.

  • Malik, H.M., Viljanen, M., Lepinkäinen, N. and Alvesalo-Kuusi, A. (2022) Dynamics of social harms in an algorithmic context, International Journal for Crime, Justice and Social Democracy, 11(1): 18295, doi: 10.5204/ijcjsd.2141.

  • Mann, M. (2020) Technological politics of automated welfare surveillance: social (and data) justice through critical qualitative inquiry, Global Perspectives, 1(1): 12991, doi: 10.1525/gp.2020.12991.

  • McCarthy, D.R. (2013) Technology and ‘the International’ or: how I learned to stop worrying and love determinism, Millennium, 41(3): 47090, doi: 10.1177/0305829813484636.

  • Migri (Finnish Immigration Services) (2021) Maahanmuuttoviraston strategia 2021 [Strategy of the Finnish Immigration Services 2021], https://migri.fi/documents/5202425/9320472/strategia2021_fi/c34606c9-a468-4a7d-b5ce-93187493f31d/strategia2021_fi.pdf.

  • MoE (Ministry of Economic Affairs and Employment) (2017) Finland’s age of artificial intelligence, Turning Finland into a leading country in the application of artificial intelligence: objective and recommendations for measures, Publications of the Ministry of Economic Affairs and Employment, 41/2017, https://julkaisut.valtioneuvosto.fi/bitstream/handle/10024/160391/TEMrap_47_2017_verkkojulkaisu.pdf?sequence=1&isAllowed=y.

  • MoE (Ministry of Economic Affairs and Employment) (2019) Leading the way into the age of artificial intelligence, Final report of Finland’s Artificial Intelligence Programme, Publications of the Ministry of Economic Affairs and Employment, 2019:23, https://julkaisut.valtioneuvosto.fi/bitstream/handle/10024/161688/41_19_Leading%20the%20way%20into%20the%20age%20of%20artificial%20intelligence.pdf?sequence=4.

  • MoE (Ministry of Economic Affairs and Employment) (2021a) Artificial Intelligence 4.0, First interim report: From start-up to implementation, Publications of the Ministry of Economic Affairs and Employment, 2021:29, https://julkaisut.valtioneuvosto.fi/bitstream/handle/10024/163663/TEM_2021_53.pdf?sequence=1&isAllowed=y.

  • MoE (Ministry of Economic Affairs and Employment) (2021b) Finland to become a winner in a dual transition: getting goals into practice, Artificial Intelligence 4.0 programme, Second interim report, Publications of the Ministry of Economic Affairs and Employment, 2021:64, https://julkaisut.valtioneuvosto.fi/bitstream/handle/10024/163693/TEM_2021_64.pdf?sequence=1&isAllowed=y.

  • MoF (Ministry of Finance) (2020) Decision to inaugurate AuroraAI. VN/1332/2020, https://vm.fi/documents/10623/16264993/aurora+asettamispaatos+korjattu.pdf/fd7831ba-d4b8-d0a8-e7f2-9cb38407c698/aurora+asettamispaatos+korjattu.pdf?t=1599049693743.

  • MoF (Ministry of Finance) (2022) Task and objectives, https://vm.fi/en/task-and-objectives.

  • MoI (Ministry of the Interior) (2019) International Migration 2018–2019 – Report for Finland, Publications of the Ministry of the Interior, 2019:32, https://julkaisut.valtioneuvosto.fi/bitstream/handle/10024/161871/SM_2019_32.pdf?sequence=1&isAllowed=y.

  • MoI (Ministry of the Interior) (2021) The action plan for the prevention of irregular entry and stay for 2021–2024, 2021:9, 2021:34, https://julkaisut.valtioneuvosto.fi/bitstream/handle/10024/163072/SM_2021_9.pdf?sequence=4&isAllowed=y.

  • MoI (Ministry of the Interior) (2022) Report on possible national solutions to the situation of people without a right of stay in Finland. Publications of the Ministry of the Interior 2022:16, https://intermin.fi/en/-/potential-solutions-investigated-to-address-the-situation-of-individuals-with-no-right-of-residence.

  • MoJ (Ministry of Justice) (2020) Assessment memorandum on the need to regulate automated decision-making within public administration in general legislation, Publications of the Ministry of Justice, 2020:14, http://urn.fi/URN:ISBN:978-952-259-802-8.

  • MoJ (Ministry of Justice) (2022) Julkisen hallinnon automaattista päätöksentekoa koskeva lainsäädäntö, Työryhmämietintö. Publications of the Ministry of Justice, 2022:7, https://julkaisut.valtioneuvosto.fi/handle/10024/163847.

  • Ollus, N. and Alvesalo-Kuusi, A. (2012) From cherry-picking to control: migrant labour and its exploitation in Finnish governmental policies, Nordisk Tidsskrift for Kriminalvidenskab, 3/2012, 37598.

  • OwalGroup (2019) Turvapaikkaprosessia koskeva selvitys, https://intermin.fi/documents/1410869/3723692/Turvapaikkaprosessia+koskeva+selvitys+27.6.2019/60bd290f-ffbd-2837-7f82-25fb68fe172c/Turvapaikkaprosessia+koskeva+selvitys+27.6.2019.pdf.

  • Palander, J. and Pellander, S. (2019) Mobility and the security paradigm: how immigration became securitized in Finnish law and policy, Journal of Finnish Studies, 22(1–2): 17393.

  • Parliamentary Ombudsman (K 15/2020) Eduskunnan oikeusasiamiehen kertomus vuodelta 2019 [Annual Report], https://www.oikeusasiamies.fi/documents/20184/42383/2019-fi/51758de7-f75b-449c-8967-a5372e40df0b.

  • Pasquale, F. (2015) The Black Box Society, Cambridge, MA: Harvard University Press.

  • Pemberton, S. (2015) Harmful Societies: Understanding Social Harm, Bristol: Policy Press.

  • Pirjatanniemi, E., Lilja, I., Helminen, M., Vainio, K., Lepola, O. and Alvesalo-Kuusi, A. (2021) Ulkomaalaislain ja sen soveltamiskäytännön muutosten yhteisvaikutukset kansainvälistä suojelua hakeneiden ja saaneiden asemaan, Valtioneuvoston selvitys- ja tutkimustoiminnan julkaisusarja [Publication Series of the Government’s investigation and research activities] 2021:10, https://research.abo.fi/en/publications/ulkomaalaislain-ja-sen-soveltamiskäytännön-muutosten-yhteisvaikut.

  • Schienstock, G. (2007) From path dependency to path creation: Finland on its way to the knowledge-based economy, Current Sociology, 55(1): 92109, doi: 10.1177/0011392107070136.

  • Shearer, E., Stirling, R. and Pasquarelli, W. (2020) Governmental AI Readiness Index, Oxford Insights, https://www.oxfordinsights.com/government-ai-readiness-index-2020.

  • Soliman, F. (2019) States of exception, human rights and social harm: towards a border zemiology, Theoretical Criminology, 25(2): 119, 136248061989006.

  • Suksi, M. (2020) Administrative due process when using automated decision-making in public administration: some notes from a Finnish perspective, Artificial Intelligence and Law 29(1): 87110, doi: 10.1007/s10506-020-09269-x.

  • Temmes, M. (1998) Finland and new public management, International Review of Administrative Sciences, 64(3): 44156. doi: 10.1177/002085239806400307

  • Tervonen, M., Pellander, S. and Yuval-Davis, N. (2018) Everyday bordering in the Nordic countries, Nordic Journal of Migration Research, 8(3): 139, doi: 10.2478/njmr-2018-0019.

  • Ustek-Spilda, F. and Alastalo, M. (2020) Software-sorted exclusion of asylum seekers in Norway and Finland, Global Perspectives, 1(1), doi: 10.1525/gp.2020.12978.

  • Van Berkel, N., Papachristos, E., Giachanou, A., Hosio, S. and Skov, M.B. (2020) A systematic assessment of national artificial intelligence policies: perspectives from the Nordics and beyond, Proceedings of the 11th Nordic Conference on Human-Computer Interaction: Shaping Experiences, Shaping Society, pp 112, doi: 10.1145/3419249.3420106.

  • Vanto, J., Saarikkomäki, E., Alvesalo-Kuusi, A., Lepinkäinen, N., Pirjatanniemi, E. and Lavapuro, J. (2022) Collectivized discretion: seeking explanations for decreased asylum recognition rates in Finland after Europe’s 2015 ‘refugee crisis’, International Migration Review, 56(3): 75479. doi: 10.1177/01979183211044096

  • Wachter, S., Mittelstadt, B. and Russell, C. (2021) Why fairness cannot be automated: Bridging the gap between EU non-discrimination law and AI, Computer Law & Security Review, 41: 1055667, doi: 10.1016/j.clsr.2021.105567.

  • Waldman, A.E. (2019) Power, process, and automated decision-making, Fordham Law Review, 88(2): 61332, https://ir.lawnet.fordham.edu/flr/vol88/iss2/9.

  • Whittaker, M., Crawford, C., Dobbe, R., Fried, G., Kaziunas, E., Mathur, V., Myers West, S., Richardson, R., Schultz, J. and Schwartz, O. (2018) AI Now 2018 Report, New York: AI-Now Institute, https://ainowinstitute.org/AI_Now_2018_Report.pdf.

  • Wood, M.A. (2020) Rethinking how technologies harm, British Journal of Criminology, 61(3): 62747, doi: 10.1093/bjc/azaa074.

  • Yeung, K. (2018) Algorithmic regulation: a critical interrogation, Regulation & Governance, 12(4): 50523, doi: 10.1111/rego.12158.

  • Yle News (2017) Report: Immigration Service circulates model negative asylum decisions for ‘assembly line’ use, https://yle.fi/news/3-9594858.

  • Yle News (2018a) Asylum appeals returned at record rates in 2017, https://yle.fi/news/3-10054005.

  • Yle News (2018b) Interior Ministry aims to block repeat asylum applications, https://yle.fi/news/3-10288892.

  • Yle News (2019a) Slow processing of foreign experts’ permits ‘a catastrophe’, https://yle.fi/news/3-11006939.

  • Yle News (2019b) Migri to slash staff, https://yle.fi/news/3-11017691.

  • Yle News (2019c) Business lobby pushes for one-week work permit processing, https://yle.fi/news/3-11021230.

  • Parviala, A. and Yle News (2019d) Oikeusasiamies kieltäisi automaattiset viranomaispäätökset – Virastojen johtajat kauhuissaan: Tarvitaan tuhansia virkamiehiä lisää, https://yle.fi/uutiset/3-11122069.

  • Yle News (2020a) Steep rise in deportations from Finland in 2019, https://yle.fi/news/3-11202911.

  • Yle News (2020b) Wait for work-based residence permits averages 152 days, https://yle.fi/news/3-11548307.
