
Predictive analytics and governance: a new sociotechnical imaginary for uncertain futures

Published online by Cambridge University Press:  03 October 2022

Christophe Lazaro
Affiliation:
UCLouvain, Belgium
Marco Rizzi*
Affiliation:
UWA Law School, Australia

Abstract

In an era of global sanitary, economic and ecological crisis, beliefs in the predictive power of artificial intelligence (AI) progressively penetrate the legal and political spheres, in search of new ways to anticipate and govern the future. In this context, it is critical to understand the idiosyncratic nature of the interplay between governance and algorithmic logics of prediction. This contribution discusses how the association between governance and AI makes the future knowable in the present and shapes a programmatic way of formalising, justifying and deploying action in the here and now. We focus on three principles of institutional mobilisation in the face of uncertainty and indeterminacy: precaution, pre-emption and preparedness, each of which is affected by the use of AI relying on so-called ‘real-time predictions’. Drawing from risk theory and Science and Technology Studies, we argue that the current convergence between AI and governance is shaping a new sociotechnical imaginary, promoting a distinctive conception of life and of the future in the age of the Anthropocene.

Type
Special Issue Article
Creative Commons
CC BY
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
Copyright © The Author(s), 2022. Published by Cambridge University Press

1 Introduction

Contemporary algorithmic devices are establishing themselves as essential methods for optimising decision-making processes and anticipating risks. Whether based on conventional statistical modelling systems or artificial intelligence (AI) and machine-learning algorithms, these systems foster the belief in the possibility of anticipating the future and reducing the complexity of life. They seemingly do so by providing reliable projections of future unfoldings. Arguably, their functional capacities reveal a striking feature of our age: an obsession with prediction and anticipation of the future.

Innovations in the field of AI are spreading from the body to the world, permeating many aspects of contemporary human life (Adams et al., 2009). These include health, communication, education, economic activities and beyond, and arguably allow both the private and public sectors to optimise decision-making. In this paper, we are not so much concerned with dissecting a particular sector, but rather preoccupied with teasing out some overarching themes inherent in the use of these technologies in decision-making.

Technologies such as predictive modelling, machine learning and data mining facilitate the analysis of past and present data to make predictions about the future. In this paper, we use the term ‘predictive analytics’ to refer to these technologies (Finlay, 2014). Modern discourse on predictive analytics, whether in the fields of journalism, politics, hard sciences or humanities, often resorts to a semantics of magic or divination, identifying algorithms as the oracles of our contemporary societies: ‘The modern oracles of our networked digital age are Big Data and data analytics …. They provide a targeted look into the crystal ball’ (Romeike and Eicher, 2016, p. 168; Baker and Gourley, 2015; Timms, 2017).

We increasingly expect AI to solve the world's biggest challenges: treating chronic diseases, predicting pandemic and epidemic outbreaks, reducing fatality rates in traffic accidents, fighting climate change and fostering sustainable development (European Commission, 2020b). Beliefs in predictive analytics thus progressively penetrate the legal and political spheres. In this contribution, our main objective is to reflect on contemporary beliefs in the divinatory power of digital technologies (Lazaro, 2018) in the broad domain of governance.

Taking these ‘apparently irrational beliefs’ seriously (Sperber, 1982) requires understanding how the association between governance and AI makes the future knowable in the present (epistemic practices) (Cetina, 1999), shaping a programmatic way of formalising, justifying and deploying action in the here and now (normative logics). The contemporary debate surrounding AI is dominated by the analysis of risks and human rights impacts of algorithmic systems (such as the violation of privacy, problems of discrimination or lack of transparency) (Council of Europe, 2020; European Commission, 2019). The literature on the risks inherent to biases and the implications of AI for individual consent is growing alongside awareness of the complexity of the challenges posed by the technology for the protection of individual rights (Andreotta et al., 2021). In this paper, we take a complementary approach. We conduct a review of a large body of literature spanning a number of fields, drawing together what we understand to be the constitutive threads of a complex tapestry depicting the impact of predictive analytics on governance.

We acknowledge that tackling ‘governance’ as a theme means casting the net very wide. However, an overarching theme of governance, in whatever form or setting, is its preoccupation with providing direction and exercising control over entities. As such, we suggest that governance is particularly invested in shaping the future. It is important here to distinguish between prediction and anticipation, the former being but one modality of the latter. For example, law, a key instrument of governance, can be described as a discrete mode of anticipation (Ost, 1999).Footnote 1 Law does not predict but, through a variety of rules, is a vector of anticipation and serves as a guide. It operates as a cognitive and pragmatic resource as well as a constraint, thereby supporting the co-ordination of human actors among themselves and with the world. It is thus essential to grasp the idiosyncratic nature of the progressive convergence between governance (particularly legal) and algorithmic anticipatory logics in order to move beyond a risk-based approach and appreciate in full the impact of the use of predictive analytics on the governance of ‘what is not and may never happen’ (Massumi, 2007).

A growing body of literature has emerged in recent years to examine the nuts and bolts of ‘algorithmic governance’ (Cantero Gamito and Ebers, 2021; Danaher et al., 2017; Gritsenko and Wood, 2022; Kalpokas, 2019) or ‘algorithmic regulation’ (Yeung, 2018). These are identified as analytical constructs developed in scholarship to unpack ‘the role of algorithms as a mode of social coordination and control’ in concrete contexts of application (Ulbricht and Yeung, 2021, p. 18). In an ideal dialogue with this literature, we come at the topic from a different angle. Drawing from Science and Technology Studies (STS) (Cole and Bertenthal, 2017), we argue that the current convergence between algorithmic technology and governance is shaping a new sociotechnical imaginary, promoting a distinctive conception of the future in the age of the Anthropocene. Sheila Jasanoff defines sociotechnical imaginaries as follows: ‘collectively held, institutionally stabilized, and publicly performed visions of desirable futures, animated by shared understandings of forms of social life and social order attainable through, and supportive of, advances in science and technology’ (Jasanoff and Kim, 2015, p. 25).

This new imaginary appears to be rooted at the heart of an enigmatic synchrony: predictive analytics are fast emerging at a time when the future appears more unpredictable and ungovernable than ever. The cosmology of the modern world is filled with radical threats. To mention but one very recent example, the latest report by the Intergovernmental Panel on Climate Change paints a sobering picture of the planet's future in the face of rising temperatures (IPCC, 2021). As a result, a variety of post-apocalyptic narratives have been flourishing for some time, from collapsology to radical trans-humanism and the most basic survivalism (Chateauraynaud and Debaz, 2019). In this context, new attempts to grasp the future through predictive analytics involve a special kind of ‘ontopolitics’ (Chandler, 2018) characterised by normative stances about which forms of life are to be valorised and preserved (or not) in order to cope with uncertain futures. Or, in Jasanoff's terminology, governance through algorithms arguably constitutes a peculiar form of co-production, which she defines as ‘shorthand for the proposition that the ways in which we know and represent the world (both nature and society) are inseparable from the ways in which we choose to live in it’ (Jasanoff, 2004, p. 3).

Building on an extensive interdisciplinary body of literature, we thus provide a first outline of the emerging sociotechnical imaginary as well as the modalities of its development. We supplement our findings by drawing examples from official documents emanating from European institutions devoted to AI and predictive analytics. We do not claim this to be a comprehensive discursive analysis. Rather, the samples are provided instrumentally as qualitative augmentations of descriptive and theoretical propositions (Baxter and Jack, 2008; Stake, 1995). Indeed, official discourses of the state provide a particularly fertile ground to grasp ‘the coalescence of the collective imagination with scientific and technological production’ (Hajer, 2010, p. 27) and in this context ‘law emerges as an especially fruitful site in which to examine imaginaries in practice’ (Jasanoff and Kim, 2015, p. 25). We focus particularly on statements in which ‘the future’ as an abstract category is disclosed and related to (Rieder, 2018). Amid the widespread use of broad concepts such as ‘prediction’, ‘prevision’ and ‘anticipation’, we pay particular attention to the pervasive notions of ‘real-time’ and ‘real-time prediction’. This peculiar and counter-intuitive idea is symptomatic of the tensions that characterise a profound dynamic of reconfiguration of the temporalities of our world, stemming from real-time calculations, and affecting the links between past, present and future (Amoore and Piotukh, 2015). This reconfiguration carries significant normative consequences, particularly as regards the valorisation or exclusion of certain forms of life.

The paper is structured in three parts. First, we analyse the contemporary theme of ‘real-time prediction’ and contextualise the growing popularity of this oxymoron in the context of what we define as ‘life as pure contingency’. Second, we discuss the epistemological and normative dimensions of ‘governing the future’ through predictive analytics. Finally, we sketch the contours of the new sociotechnical imaginary that emerges from the convergence between governance and AI. As this paper represents the first step of a broader project on ‘AI, Law and the Future’, we conclude by setting the scene for future research.

2 Real-time predictions of contingent life

2.1 From complexity to pure contingency

Whether forged in hard or social sciences, contemporary theories (in Western societies) tackling fundamental questions about life increasingly converge towards conceiving of it as pure contingency as opposed to a system marked by linear and deterministic temporality (Anderson, 2010a). This entails three crucial aspects.

First, life is conceived of in terms of irreducible complexity (Holland, 2014; Morin, 2008). This complexity is among other things the result of a globalised world woven by a multiplicity of heterogeneous flows and connections, embodied in the figure of the ‘network’. The governance of complex life revolves around the problem of the relationship between ‘good’ and ‘bad’ flows or connections (such as transnational terrorists, personal data, epidemics, etc.). The complexity of life can also be explained by the infinite nature of its intrinsic risks. For instance, risks tend to exceed the limits of the insurable in two directions: the infinitely small (e.g. biological, natural, health risks related to food consumption) and the infinitely large (e.g. major technological risks or technological disasters) (Ewald, 1993).

Second, life is conceptualised according to the principle of the included middle. This aspect creates a major problem: the entanglement or ‘heterogenesis of the bad within the good’ (Anderson, 2010a, p. 781), which deviates from the law of non-contradiction. The causes of a disaster are presumed to incubate within life itself, requiring intervention before (or as) the catastrophic process incubates and certainly before it exceeds the threshold of catastrophe. Brian Massumi gives an insightful account of how life and its underlying threats are conceived of today: ‘[t]his is the figure of today's threat: the suddenly irrupting, locally self-organising, systemically self-amplifying threat of large-scale disruption. This form of threat is not only indiscriminate, coming anywhere, as out of nowhere, at any time, it is also indiscriminable’ (Massumi, 2009, p. 154).

Finally, if life is contingent, the future remains open as disasters are themselves emerging phenomena (Christen and Franklin, 2002). The effects or impacts of disasters change and evolve as they circulate. This idea implies that one can take advantage of a crisis to invest and earn money. For example, Michael Lewis has produced a masterful account of the large profits made by certain market players in the lead-up to the Global Financial Crisis of 2007–2008 in his compelling book, The Big Short (Lewis, 2011). The uncertainty characterising life is therefore both a promise and a threat to be simultaneously neutralised and nurtured (Amin, 2013). Anticipatory actions based on predictive analytics emerge in a situation in which the very contingency of life generates at once threat and opportunity, danger and profit.

These aspects of ‘life as pure contingency’ are particularly noticeable, for example, in the unfolding of the COVID-19 crisis. The phenomenon is complex not only because of its global scale, but also because of the participation of an intricate network of human and non-human actors, including public health responses, asymptomatic carriers of the virus, vaccine discovery and access, and viral mutations. The causes of disaster incubate indiscriminately within life in ways that render certain social and/or cultural lifestyles problematic. Yet, despite the dramatic consequences of the COVID-19 crisis, the future remains open. The effects of the pandemic hint at new opportunities in environmental issues, as lockdowns appear to have positively affected the environment: carbon emissions have fallen due to drops in traffic, power usage and industrial production (Le Quéré et al., 2020). Similarly, the challenges posed by the so-called Delta variant of the original SARS-CoV-2 virus are raising awareness about the importance of adequate ventilation systems in closed indoor shared spaces (WHO, 2021).

In a world permeated by contingency, real-time prediction becomes the tool that allows a fresh injection of control and autonomy (Misuraca et al., 2012). Indeed, despite today's perception of life as contingent and indeterminate, humans must still engage with it. The yearning for a recovery of control is apparent in the discourse of institutions and experts:

‘Today a growing number of societal challenges (such as climate change, natural disasters, urban planning and pandemics) are not only extremely complex, but also interrelated. Data represents a key raw material to deal with such challenges. The huge amount of data produced every day can reveal real-time information that is critical to understanding patterns of human behaviour and activities.’ (European Commission, 2020a, p. 15)

In a report for the Council of Europe, Karen Yeung points out that there is an intimate link between living in a complex world and the potential value of AI in helping us to govern it. However, she stresses the future challenges that lie ahead for computer science research in this respect:

‘The challenge of devising solutions that will enable us reliably to predict, model and take action to prevent unwanted and potentially catastrophic outcomes arising from the interaction between dynamic and complex socio-technical systems generates a new and increasingly urgent frontier for computational research.’ (Yeung, 2019, p. 67)

‘Potentially catastrophic outcomes’, ‘complex socio-technical systems’, ‘to predict’ and ‘model’ emerge as key discursive elements of the contemporary bond between the ontology of life as pure contingency and the emerging epistemology of digital devices based on AI.

2.2 From conditional future to real time

When reading official documents of European institutions as well as the literature on predictive analytics, one notion keeps surfacing, and its meaning remains difficult to grasp: ‘real-time’ or ‘real-time prediction’, often presented as one of the key characteristics of AI (Yeung, 2019, p. 22).

For example, the Communication of 24 April 2018 of the European Commission, entitled Towards a Common European Data Space, mentions this notion several times (explicitly or implicitly):

‘In manufacturing, real-time sensor data supports predictive maintenance. Data-driven innovation … can help with crisis management and in developing environmental and financial policies. Sharing research data on the outbreak of epidemics can advance relevant research much faster and contribute to a more timely response. High-resolution satellite data … contributes to the real-time monitoring of natural water resources to prevent drought or pollution.’ (European Commission, 2018a, p. 2)

To predict in real time is a convoluted notion, comprising predicting as time passes, predicting through instantaneous translations of reality, predicting as an immediate adjustment to reality. To pre-dict: what meaning is left for the prefix and the term as a whole? Does it still refer to anticipation or merely to a constant adjustment to events? And what is the exact reality of this time called ‘real’ – does it encompass the quantification of its unfolding? This kind of reality would only account for what is happening now and what we grasp from this particular irruption of time. The future would thus be reduced to the actualisation of its imminence and to the digital capture of an almost-already-happening-here-and-now. In other words, ‘what is real is what unfolds in real time’ (Hui Kyong Chun, 2011, p. 96).

Predicting in real time sounds contradictory. Even computer scientists acknowledge the ambiguity of the term and the fact that ‘a predictive model cannot be built in “real time” in its true sense’ (Sangireddy, 2015) because it is not possible to predict the here-and-now of what is still becoming.

The expression highlights a confusion between thought and action – a collision between the future and the present. The reality of this time appears to be intrinsic to its conjuration of the future or its ‘de-futurization’ (Esposito, 2011b, p. 180). The only future is the one that has triumphed over countless possible others by becoming actualised, entirely subsumed into the present that is emerging while simultaneously forming the object of predictions. This ambition to ‘predict the present’ has led researchers to forge the neologism ‘nowcasting’ to supplement the more conventional ‘forecasting’ (Choi and Varian, 2012; Sanila et al., 2017; Wu et al., 2020).

The promise of real-time prediction has been described as the new avatar of ‘presentism’ (Hartog, 2003) or the domination of a perpetual present. Presentism entails a way of articulating the universal categories of past, present and future entirely subject to the reign of immediacy. It presupposes that our temporal horizon has been invaded by an ‘increasingly inflated, hypertrophied present’, imposing demands of productivity, flexibility and mobility upon us all (Baschet, 2018). This analysis assimilates real-time predictions to a form of alienating injunction, thrusting upon human beings the strict normativity of a life lived for the sake of permanent and vigilant adaptation (Stiegler, 2019). We believe this interpretation to be a simplistic shortcut that neglects to question the conceptual tensions that lie at the heart of ‘real-time prediction’. These tensions, we argue, crystallise a series of reconfigurations that signal a transition towards a more complex and novel sociotechnical imaginary.

The salient features of this new imaginary are embedded in the epistemological and normative dimensions discussed in the next sections. In summary, these include a series of reconfigurations of the relationships between (1) temporality and materiality; (2) knowledge and action; (3) subject and object; (4) the virtual and the possible; and (5) the past, present and future. Before engaging with these reconfigurations, we must discuss the emerging phenomenon of governing the future through predictive analytics. This exercise sheds light on two forms of heterogeneity – epistemological and normative – that are intrinsic to the use of predictive analytics in governance and revelatory of tensions that often remain concealed by virtue of the apparent immediacy of the medium.

3 Governance of the future and predictive analytics

The multiplication and increasing prevalence of predictive analysis systems, as well as the legitimacy they are gradually acquiring, put Ulrich Beck's theory into question. Beck described our contemporary world as a risk society in which catastrophic, incalculable and uninsurable risks proliferate, to the point that incalculability moulds the transformation of society (Beck, 1992). His analysis shows how the development of technology generates risks, the effects of which are unlimited in time and space, and can affect future generations around the globe. The consequence of incalculability is that modern risks cannot be contained, anticipated or even diverted (Sørensen, 2018).

However, predictive analytics and the ‘politics of temporality’ they foster (Adams et al., 2009, p. 247) postulate the calculability of all phenomena. These systems aim to establish a predictive score (in the form of a probability or a profile) for any entity (customer, employee, patient, product, machine, etc.) in order to determine, inform or influence organisational processes. The indeterminacy and complexity of life have not defeated the urge to quantify and calculate risks and, more broadly, the probability of events occurring. On the contrary, recent years have arguably brought a shift from a risk society to a score society (Citron and Pasquale, 2014).

3.1 Epistemological heterogeneity: constructions of knowledge

The development of analytical tools such as deep-learning algorithms has made it possible to develop ‘scores’ emerging from immense datasets to identify forms of regularities, patterns or modes of behaviour. These tools create new modes of knowledge acquisition that enable predictions. Far from being neutral and declarative, these predictions shape the future in very visible ways. Reverting again to the notion of ‘real-time prediction’, we identify further aspects particularly problematic for governance purposes. Indeed, the notion is misleading because it suggests that ‘real time’ refers to what is im-mediate and un-mediated. However, this belief ‘ignores the formal structure and materiality of the technologies that make real time itself … possible’ (Thomas, 2014, p. 290).

What we call ‘real time’ is the result of a constant technical mediation involving a structural time lag, which is unavoidable for two reasons. First, time is required for the construction of new datasets, the potential modifications of the analytical model's parameters or the updating of technical infrastructure (Thomas, 2014). Second, time is needed to make newly collected data intelligible and appropriately usable in accordance with the objectives pursued (Kaufmann et al., 2019). This time lag makes the capture of the future incomplete, imperfect and in need of constant readjustment as the data change and the analytical tools evolve. This defeats the magical idea that the processing of information can be concomitant with the event or phenomenon that the information purports to describe.

The mediating role of technical devices also implies that the type of knowledge obtained from predictive analytics varies according to their specificity, with consequences for the ways in which they make the future present. The systems used in predictive policing, for example, rely on patterns for the identification of future crimes (Benbouzid, 2018). These patterns stem from the association of specific algorithms with equally specific datasets (e.g. a cartography of areas more prone to arrests for violent crimes). The relevance of these patterns varies according to the different algorithmic models used, the data on which they rely and the analytical approaches applied to data collection and identification. It also depends on human decisions evaluating their possible meaning in the specific context of police activities.

The ability of AI systems to unveil new information hidden within datasets provides them with an aura of ‘epistemological authority’, the apparent unquestionability of which masks the collaborative efforts, methodological options and value judgments involved in identifying patterns (Amoore, 2019). For example, recent research reveals the full extent of the constructive process leading to the emergence of patterns structuring police activity – a process that ‘makes patterns political’ (Kaufmann et al., 2019, p. 684). The complex assemblage of actors, algorithms, theories and decisions that contribute to the emergence of patterns (Ananny, 2015) signals their intrinsically normative dimensions (Winner, 2020): they formalise conceptions of crime that are themselves based on specific ideas about how to govern it.

This type of analysis not only debunks the alleged neutrality of AI systems; it also breaks with representational schemes that narrow the complexity and richness of experience in favour of abstract formalisms or questionable reifications. In contrast, what emerges is the importance of teasing out the plurality of forms of knowledge that can be inferred from AI systems (Kaufmann et al., 2019, p. 680) – what we call epistemological heterogeneity. Predictive practices therefore do more than just gather the knowledge that is necessary to know the future: they enable performative operations that establish the presence of the future in different ways (Aykut et al., 2019).

This epistemological heterogeneity questions the type of rationality that predictive analytics are based upon. Early modern rationality rests on the notion that the human observer occupies an external, neutral and objective position with respect to the world they are studying (Esposito, 2011a; Latour, 2002). The distinction between a knowing subject and a knowable object is ill-equipped to discern the self-referential circularities of predictive or oracular logicsFootnote 2 – that is, the consequences that the actions of the observer and the act of observing itself have on events (Barad, 2007). For instance, a recent analysis of the 2008 Global Financial Crisis has highlighted how the ‘models being used to forecast future developments in the markets have not taken into account the extent to which current predictions would affect the future’ (Esposito, 2011a, p. 16).

Additionally, in the context of predictive analytics, even linear relationships of cause and effect are transformed through retroactive loops (Hofstadter, 2008) by virtue of which desired effects end up becoming originating causes that are difficult to control (Esposito, 2011a, p. 15). Paradoxically, the development of devices capable of performing complex tasks through reflexive processes, similar to those of humans, makes predicting the effects of predictive systems increasingly hard (European Commission, 2018a). This (rather ironic) tension is apparent in a recent report of the Council of Europe on advanced digital technologies. The report highlights the extreme difficulty of making accurate predictions about the long-term effects of the digital revolution (European Commission, 2018b), as well as the great complexity of these technologies:

‘[m]achine learning and deep learning systems become progressively complex, not only due to the availability of data, but also due to increased programming complexity. As a result, these systems are subject to three types of vulnerability: first, increased programming complexity increases the propensity of these systems to generate stochastic components (i.e. make mistakes); secondly, this complexity opens the door to a wide range of adversarial attacks; and thirdly, the unpredictability of their outputs can generate unintended yet highly consequential adverse third party effects (‘externalities’).’ (European Commission, 2018a, p. 21)

Contemporary digital technologies are therefore just as likely to provoke new crises as they are to help us solve existing and emerging ones (Hui Kyong Chun, 2011, p. 92). In this context, we must question the assumption, inherent in current European (but arguably global) policy discourse, that complexity necessarily requires new ways of governing based on predictive analytics’ performances.

3.2 Normative heterogeneity: logics of action

Once a score is calculated or a pattern identified, a spectral reality takes shape. Made present in the here and now through the analysis of past data, the uncertain future becomes almost palpable and visible. Anticipation of the future is a form of ‘generative truth’ that requires a transition from knowledge to action and eventually imposes itself as a ‘moral imperative, a will to anticipate’ or an ‘injunction’ of sorts (Adams et al., 2009, p. 254; Andersson, 2018, p. 30).

The authority of predictive practices – whether in the form of machine-learning algorithms or ancestral divinatory rites – is thus not only epistemic (Vernant et al., 1974, p. 10). It is also normative, as it both requires and justifies the deployment of certain logics of action in the here and now. Alongside the strictly predictive function, providing legitimate grounds for decision-making is integral to the role of predictive practices. In this respect, digital signals have replaced divine signs. Governance is done by numbers (Supiot, 2015), but numbers have the same function as the omens of the past: to immunise decision-making against failure. Of course, the quantification of risk does not always prevent bad outcomes and cannot provide a guarantee or insurance against an uncertain future. But regardless of success or failure (Chandler, 2016), it serves as a justification for action. Although it cannot make the future really foreseeable or the world really controllable, it can provide the means to act as if it could.

It has been observed that ‘the unknowability of complex life itself comes to constitute the rationality of its governance’ (Chandler, 2014, p. 58), which also ends up taking very complex detours. AI can serve different logics of anticipatory action, prompting diverse initiatives to prevent, compensate for, prepare for or adapt to the emergence of a specific future. An examination of the plurality of logics of action leads to the identification of more or less coherent ways of justifying and carrying out political or legal interventions in the present. Crucially, these are not necessarily aimed at neutralising the future before it occurs. Taking into account this normative heterogeneity enriches an analysis too often limited to an indictment of the inhibiting pre-emptive powers of ‘algorithmic governmentality’ (Rouvroy and Stiegler, 2015).

The institutional governance of complex life is performed through a number of normative principles that pursue different logics of action and require different interventions in the here and now. These are revelatory of different ways of anticipating uncertain futures. From an analytical standpoint, AI systems can support at least three distinct logics of action: (1) precaution; (2) pre-emption; and (3) preparedness (Anderson, 2010a).

The logic of precaution involves situations in which the risk or threat has been identified and demands action before the damage becomes irreversible. The precautionary logic is probably the best known because it has been established as a legal principle over the course of the past decades at both the national and international levels. In February 2000, the European Commission outlined the essential characteristics of the principle (now enshrined in Art. 191 TFEU):

‘Whether or not to invoke the precautionary principle is a decision exercised where scientific information is insufficient, inconclusive, or uncertain and where there are indications that the possible effects on the environment, or human, animal or plant health may be potentially dangerous and inconsistent with the chosen level of protection.’ (European Commission, 2000, p. 7)

At the heart of the precautionary principle is the idea that, in the face of uncertainty, the occurrence of a potentially catastrophic event and its impact on different forms of life must be prevented. The impossibility of determining the exact probability or severity of a potentially catastrophic event should not prompt decision-makers to refrain from taking preventative action. Recourse to the precautionary principle therefore presupposes a particular ‘epistemic situation’, which is one of uncertainty (Guillaume, 2012, p. 494): the impact of a situation on the environment or human health may be probable, but the probabilities are unknown – or, more broadly, the potentially hazardous effects of a phenomenon, product or process have been identified but the scientific assessment does not allow for an exact determination of the risk of harm (Bourguignon, 2015).

Much of the debate on AI and the precautionary principle to date has focused on a precautionary approach to AI and how this could potentially stifle technological progress. The fear is that an excess of precaution by public authorities will hinder innovation, thus creating unnecessary obstacles to the fulfilment of AI's potential (Castro and McLaughlin, 2019). However, an analysis of relevant policy documents suggests this fear is unwarranted. For example, the recent EU ‘White Paper on Artificial Intelligence’ (European Commission, 2020c) does not mention the precautionary principle at all, and no concession is made for the possibility of a precautionary approach to AI. Conversely, the potential of AI-powered predictive analytics is heavily relied upon for purposes as varied as disease prevention and climate change mitigation. Scepticism towards precaution as a logic is of course not new (Clarke, 2005; Sunstein, 2002). What is interesting here is the apparent paradox inherent to AI, which refracts precautionary attempts while at the same time being increasingly embedded in large-scale preventative actions. Indeed, in contrast with rule-makers’ hesitancy in applying precaution to AI as an object of regulation, the use of predictive analysis systems for precautionary purposes is establishing itself in a number of areas.

Perhaps the most obvious example is the environmental sphere, where predictive analytics improve understanding of a series of phenomena as complex as climate change or the evolution of biodiversity (Hallgren et al., 2016; Hampton et al., 2013). Experts in these fields emphasise the need for massive data collection, from disparate sources, over a long period of time and on a large spatial scale, in order to better grasp the characteristics of these phenomena and take action.Footnote 3

The logic of pre-emption takes preventative action one step further. It prompts action prior to any formation and identification of a real threat, which is considered in the abstract as likely to have considerable impact. Recourse to this logic implies a different ‘epistemic situation’: no longer one of uncertainty, but one of ignorance. Pre-emption is aimed at cases in which both the impacts of a potential situation on society and their probabilities are unknown. In this sense, while the precautionary logic can be said to react to ‘known unknowns’, the pre-emptive logic entails the neutralisation of risks before knowledge of their potential even consolidates: ‘unknown unknowns’ (Massumi, 2007; Rasmussen, 2004).

It may appear paradoxical or oxymoronic that predictive analytics would be used to inspire pre-emptive action since, by definition, their predictions are based on known and existing data, in contrast with the ignorance that inspires the pre-emptive logic. Yet, reliance on predictive analytics for pre-emptive purposes has become increasingly common in the fight against crime and terrorism, in particular through ‘predictive justice’ (McCulloch and Wilson, 2015). Based on profiling and risk analysis, this process of neutralisation aims at anticipating individuals’ capabilities, intentions or desires in order to intervene by structuring the possible scope of their action. In other words, ‘pre-emptive predictions are intentionally used to diminish a person's range of future options’ (Kerr and Earle, 2013, p. 67, emphasis in original). For this reason, this form of algorithmic governance predominantly focuses on the predispositions of individuals, as evidenced by the establishment of profiles or scores, evaluating the potential for dangerousness, failure or fallibility.

This new mode of governance has been described as emblematic of our contemporary ‘societies of clairvoyance’ (Neyrat, 2010). Increasing recourse to this logic of action signals how our societies nurture a very limited and problematic relationship with the future: they foster an actuarial and pre-emptive temporality that crushes the present into predetermined courses of action (Mantello, 2016; Neyrat, 2009).Footnote 4 In Europe, the use of pre-emptive logic in the context of predictive justice has been problematised with a particular focus on the threat to individual rights (such as privacy, or the GDPR protection against fully automated decision-making) (Jansen, 2018; Lynskey, 2019; Williams and Kind, 2019). Less attention has been dedicated to the broader collective implications of accepting ‘clairvoyance’ as a method of governance. When the logic shifts away from prevention in its traditional sense to embrace a pre-emptive turn, ensuing measures equally change in nature. The focus moves away from causes towards intervention ‘on the information and physical environment of individuals’ to prevent certain things or actions from being actualised or even possible (Rouvroy and Stiegler, 2015, p. 125).

The key tenet of this mode of governance is the connection between traditional approaches to risk assessment, based on a risk-utility calculus, and the notion of ‘clairvoyance’, provided by predictive analytics, which refutes the ontological uncertainty of future events in favour of a more reassuring form of artificially designed determinism.

Finally, the logic of preparedness is engaged at a different point in the timeline, when a particular event is either unfolding or producing its impact on life. Much like precaution, the logic of preparedness is designed to be applied to ‘epistemic situations’ in which threats are neither calculable nor controllable. Unlike precaution, however, preparedness does not prescribe the avoidance of a threatening event. Rather, it ‘assumes that the occurrence of the event may not be avoidable and so generates knowledge about its potential consequences’ (Lakoff, 2017, p. 19).

The logic of preparedness and that of precaution represent the two ends of the same paradox. The worst-case scenario is perceived as something to be avoided at all costs while, at the same time, it is understood as fundamentally unavoidable. In this sense, the two logics are ‘increasingly joined’ to inspire ‘operational criteria of response’ (Aradau and Van Munster, 2008, p. 30).

Preparedness simultaneously engages speculative and reactive dimensions. The aim is to be prepared for the worst as if its devastating consequences were already present. Making a disaster fictitiously occur in the here and now requires an artificial projection into a state of emergency in order to formulate potential responses to the crisis (imaginatively) at hand. In this sense, preparatory logics tend to rely upon resilience (Zebrowski, 2013), as well as crisis management planning and the protection of vital infrastructures for society (Collier and Lakoff, 2015). Specifically, they require taking into account the various phases of initial rescue operations (e.g. medical triage, evacuations, provision of food and water supplies), as well as initial actions to be taken in the immediate aftermath of the event generating the crisis to minimise its consequences.

The use of predictive analytics in the context of preparedness and crisis management is increasingly common, whether for the purpose of monitoring epidemics (Jayalakshmi and Anuradha, 2017; Raza, 2020; Zeng et al., 2021) or mitigating the impacts of a humanitarian crisis (Raymond and Al Achkar, 2016) or natural catastrophe (Yu et al., 2018). In these contexts, AI systems are supposed to provide and help maintain high levels of ‘situational awareness’, which in turn are necessary to ensure adequate response to the emergency (Mehrotra et al., 2013).

Preparing for the worst entails the capacity to respond adequately once the worst materialises. Thus, preparedness also engages the response phase in the aftermath of a crisis, which necessitates the deployment of appropriate reactions constantly alive to mutating circumstances. In this very specific context, the expression ‘real-time prediction’ carries understandable value and meaning, as algorithmic devices are able to inform reactions ‘live’ while a disastrous event is unfolding:

‘Big Data analysis in real time can identify which areas need the most urgent attention from the crisis administrators. With the use of the GIS and GPS systems, Big Data analysis can assist the right guidance to the public to avoid or move away from the hazardous situation. Furthermore, analysis from prior crisis could help identify the most effective strategy for responding to future disasters.’ (Dontas and Dontas, 2015, p. 480)

Given the characteristics of predictive analytics, it is unsurprising that their deployment in the context of preparedness has followed increasing levels of securitisation of potential threats, and health threats in particular. A progressive expansion of the scope of biosecurity governance and regulations, both globally and at the European level, has recently been highlighted (Dijkstra and De Ruijter, 2017; Roberts, 2019). The availability of modern algorithmic devices is exacerbating the permanent surveillance and data collection practices that the logic of preparedness, being dependent on them, inherently generates. With the rise of predictive analytics, the logic of preparedness, originally conceived to ready societies for the unknown, paradoxically entrenches pre-existing experiences as the sole source of preparatory inspiration, thereby limiting the range of action.

While we cannot exclude the possibility that there may be more, precaution, pre-emption and preparedness constitute three major logics of action that predictive analytics can serve. It is possible and necessary to separate these logics from an analytical standpoint, yet it is equally crucial to appreciate that they can be juxtaposed or used in conjunction in the course of attempts to apprehend complex phenomena. These principles can operate in pairs. For example, the precautionary and preparatory logics can operate in tandem in the face of emerging health threats.Footnote 5 Similarly, it is conceivable that matters of public order may prompt authorities to combine pre-emptive with preparatory actions. It is also possible to observe a simultaneous deployment of all three logics of action by public authorities, as attested by the European policies for the collection and use of Passenger Name Records (PNR) data in the context of anti-terrorism and trans-border criminal laws (European Commission, 2011).

In an early working paper on the topic, the European Commission explicitly refers to three potential uses of PNR data: ‘reactive’, ‘in real time’ and ‘proactive’. The paper insists on the necessity of combining different logics of action in the use of the data:

‘The combined pro-active and real-time use of PNR data thus enable law enforcement authorities to address the threat of serious crime and terrorism from a different perspective than through the processing of other categories of personal data: as explained further below, the processing of personal data available to law enforcement authorities through existing and planned EU-level measures such as the Directive on Advance Passenger Information, the Schengen Information System (SIS) and the second-generation Schengen Information System (SIS II) do not enable law enforcement authorities to identify “unknown” suspects in the way that the analysis of PNR data does.’ (European Commission, 2011, p. 12)

PNR data can be used reactively in the context of criminal investigations for the purpose of disentangling networks after a crime has been committed – thus falling within the logic of preparedness. It is equally crucial to use PNR data in real time upon arrival or departure of identified passengers to observe or arrest individuals before a crime is committed, because it is about to be or is in the course of being committed – therefore encompassing both the precautionary and preparatory logics. Finally, the Commission's working paper underlines the specific utility of using PNR data by reference to predetermined evaluative criteria in order to identify individuals without criminal records. PNR data can then be used pro-actively to further develop analytical benchmarks and evaluative criteria for assessing passengers prior to their arrival or departure – thus following a pre-emptive logic.

The complex discussion of epistemic and normative heterogeneity developed in this section prompts us to interrogate, in the final part of the paper, the significance of this modern conjunction between predictive analytics and governance practices from a broader societal perspective.

4 A new sociotechnical imaginary?

4.1 Governing through real-time predictions

The association between predictive analytics and governance strives to make the future knowable in the present (epistemic practices) and shapes a programmatic way of formalising, justifying and deploying action in the here and now (normative logics). We argue that this convergence is shaping a new sociotechnical imaginary – to paraphrase Jasanoff: a collectively held, institutionally stabilised and publicly performed vision of a desirable future, animated by shared understandings of social life and social order attainable through advances in science and technology. We now interrogate its scope and limits.

This emergent vision is not quite institutionally stabilised and yet it is increasingly performed in the public sphere. It enshrines a capricious and indomitable future that is always indeterminate, unpredictable and complex, requiring permanent taming and, in some cases, total neutralisation. We have seen that ‘real-time’ predictions are the providential tools for the task: anticipatory techniques allowing humans to regain a measure of control and autonomy in a contingent world. The convergence between algorithmic and politico-legal logics is in this sense both the result and the vector of the new imaginary (Jasanoff, 2015, p. 26).

As discussed, predicting in real time is counter-intuitive. Conceptually, this expression uncovers many contradictions and tensions inherent in current predictive practices. These tensions perform a profound reconfiguring role that sketches the contours of the new sociotechnical imaginary. We identify five critical ones:

  1. The idea of real-time prediction reconfigures the relations between temporality and materiality, the (im)mediate and the mediated. The sparkling velocity of big data and the immediacy of real time render technology invisible and distract from the materiality of predictive analytics. Through a process of ‘blackboxing’ (Kallinikos, 2002) or ‘camouflage’ (Dubey and de Jouvancourt, 2018), the apparently immediate result obliterates the mediating role of technology (Verbeek, 2016) and all the material work and significant time involved in algorithmic modelling, data cleansing, system testing, etc.

  2. The idea of real-time prediction blurs the relations between knowledge and action because it implies that following the digital traces of a phenomenon in real time is tantamount to acting on the phenomenon itself. The time of observation and the time of action merge into an epistemic trap, prompting people to believe that knowledge about a phenomenon or behaviour provides normative guidance on how to act in its face. Big data generates a different type of ‘knowledge’ than ordinary science. Based on correlations, it is ‘more akin to the translation or interpretation of signs rather than … understanding chains of causation’ (Chandler, 2015, p. 836). Can the performativity of a score or profile justify action without adequate understanding of the multiplicity of causes behind a phenomenon?

  3. The idea of real-time prediction questions the distinction between subject and object, and more specifically between a predicting subject and a knowable object. These predictions do not operate in accordance with modernist rationality because they follow an oracular logic of self-referential and self-fulfilling circularities, where the observer is located inside the world under observation and subject to its principles. This requires taking into consideration the consequences that the actions of the observer and the acts of observing and predicting themselves have on events. The self-fulfilling nature of predictions is of course not new and has been discussed in sociological literature for decades (Merton, 1948). It is the fact that predictions seemingly occur in ‘real time’ that exacerbates these tendencies, which are indeed inherent to the very idea of predicting.

  4. The idea of real-time prediction blurs the distinction between the virtual and the possible. As real time points to ‘real world’ events, real-time predictions only relate to what is possibly happening as it has already happened. By identifying correlations from past regularities, this kind of prediction reduces the real to the possible and overlooks the virtual dimension of life and its multiple potentialities. Gilles Deleuze argued that the virtual is not opposed or alternative to the real. The virtual is what exists potentially and can materialise through actualisation. Thus, the virtual differs from the possible in that it is not predetermined: it is therefore unpredictable and responds to an open multiplicity of variables from which one or more forms of actualisation may emerge (Deleuze, 2013).

  5. The idea of real-time prediction reconfigures the links between the past, the present and the future. Because these predictions rely exclusively on past regularities, the future made present in the here and now is impoverished and reduced to a mere repetition of the possible, of what has already happened at least once. In this sense, predictive analytics end up ‘reducing the future to the past, or, more precisely, to a past anticipation of the future’ (Hui Kyong Chun, 2011, p. 92). Additionally, predictions are no longer made with a view to anticipating today what could happen in the future: they are made to anticipate right now what immediately comes next. The temporal unit radically changes with real-time predictions – it shrinks and therefore modifies the notions of short-, medium- and long-term.

These reconfiguring tensions are moulding the transition towards the new imaginary, which appears eminently characterised by the ‘necessity of continuous adaptation to the world in its emergence’ (Chandler, 2016, p. 410). However, this way of envisioning the future has important limitations, particularly given its role in inspiring normative logics of action.

4.2 Pluralising the future

The first limitation of the emerging imaginary relates to the issue of epistemological heterogeneity and requires questioning the place occupied by predictive analytics among other anticipatory techniques. If governance practices yearn to address a purely contingent life, to what extent can or should they rely on data-driven science and quantified future visions? The question involves the hierarchical place of ‘algorithmic governance’ vis-à-vis radically different modes of anticipatory knowledge such as foresight (Cazes, 1986), imagination (Engélibert, 2019) or performance (Anderson, 2010b). As the epistemological authority of quantified knowledge gains momentum in the era of big data, the risk is a move towards a future monopolised by data-driven science with reduced normative options. This is problematic because reliance on a multiplicity of anticipatory techniques (Aykut et al., 2019) is essential for alternative ‘visions of desirable futures’ to be contested, negotiated or reconciled by different actors. Maintaining plurality is critical to preserve an ‘ecology of futures’ (Michael, 2017) within which policy-makers, lawyers, experts, stakeholders and citizens can navigate and make decisions to face the adversities of a contingent life.

The analysis of the normative logics of action above is revealing of the need for this plurality. Let us take preparedness as an example. The very idea of this logic of action is to accept the inevitability of the unknown, and prepare for its aftermath. But this necessarily requires a measure of imagination as a mode of anticipation. Where the operation of the logic is subsumed within the crushing limitations of correlative patterns identified in existing datasets – even where these are updated in real time – the normative force of the logic changes in nature. When deploying preparatory action becomes inextricably tied to a predictive score, the logic arguably ceases to be one of preparedness and becomes one of adjustment. A risk inherent to allowing a data-driven monopoly of anticipation is therefore an involution of normative logics of action projected towards unknown futures into a data-directed adjustment to emerging presents – the continuous adaptation Chandler refers to. This prompts us to raise a strong caveat against the hegemony of quantification in governance, and argue instead for decision-making processes that rely on a diversity of ‘arts and technologies of imagining the actionable future’ (de Goede and Randalls, 2009, p. 860). The inability of predictive scores to imagine beyond set patterns and therefore capture the unexpected is well exemplified by the failure of the ‘hundreds of AI tools’ developed to ‘catch’ COVID-19 (Heaven, 2021).

In this regard, a report drafted by a group of foresight experts and addressed to the European Commission in 2015 rightly insists on the need for a ‘co-production of knowledge’ in a context of radical uncertainty:

‘In an era of big data, some are optimistic that real-time data mining combined with continued increases in computational power and speed will enable more reliable predictive modelling. However, being able to run the modelling process faster and deliver more detail does not guarantee better outcomes. Achieving this requires the worlds of theory and practice to be effectively bridged in a co-production of knowledge. In the foresight philosophy of non-deterministic, still emerging and open multiple futures, this bridging needs to be done in a way that effectively grapples with problematic situations or enables the management of unprecedented large-scale transitions in the context of unpredictability and uncertainties.’ (Wendeling, 2015, p. 31)

Crucially, a combination of diverse anticipatory techniques would have the ability to balance out one of the intrinsic weaknesses of predictive analytics – their conservative or reactionary nature. Emerging as it is from correlative patterns in past data, the future deduced from predictive analytics is one that makes it impossible to apprehend the new, the abnormal or the spontaneous: a future that ignores the virtual and reduces itself to the possible (Deleuze, 2013). This inevitable characteristic leads us to reflect upon the second major limitation of the new sociotechnical imaginary: the uncertain fate of forms or modes of life that do not conform to the dominant sociopolitical model. Indeed, visions of a desirable future are ‘animated by shared understandings of forms of social life and social order’ (Jasanoff and Kim, 2015, p. 4). This raises three questions: (1) Which conceptions of (social) life and (social) order are involved in the new imaginary? (2) Are they truly shared, and by whom? (3) Which normativity is associated with these conceptions or, in other words, how do they valorise (or not) certain forms of life – particularly ‘non-conforming’ ones?

Answering these questions requires a firm grasp of the constructivist nature of predictive practices. Indeed, these practices can only claim to possess knowledge about the future because they inherently imply specific ways of defining what counts as the real world, along with its various constituents (Holbraad, 2013). In this sense, predictive practices can be described as ‘demiurgic’ (De Boeck and Devisch, 1994) as they shape the world by bringing about the presence of the future in the here and now. This demiurgic power makes people and things exist in a certain way: it is a process of ‘institution’ (Castoriadis, 1975) or ‘instauration’ of persons and things – a process that allows existence to gain reality (Lapoujade, 2017, p. 73; Souriau, 1939).

In the face of their intrinsic link to the possible, which drastically limits their ability to address novelty or abnormality, the process of instauration performed by predictive analytics opens itself to questions, particularly with regard to the way algorithmic governance contributes to forging and valorising certain forms of life (human or non-human) to the detriment of others. Indeed, consubstantial to any recognition or instauration of some form of life is a reflex of immunisation towards others, deemed expendable because of their difference or abnormality. This type of reflex generates a form of ‘immunopolitics’ (Esposito, 2008) that translates into indifference, discrimination or even alienation:

‘we can get a sense of how anticipatory action (re)distributes the relationship that lives within and outside liberal democracies have to disaster. To protect, save and care for certain forms of life is to potentially abandon, dispossess and destroy others.’ (Anderson, 2010a, p. 791)

Large-scale manifestations of this process of immunisation are surfacing in a multitude of domains. Two recent books, both evocatively titled The Uncounted (Cobham, 2020; Davis, 2020), develop the theme of underrepresentation of marginalised groups in datasets spanning economic welfare, demographics and public health (with a particular focus on the fight against HIV/AIDS). A further example is the progressive emergence of a ‘digital welfare state’ in which social protection and assistance are data-driven and digital technologies are used for diverse purposes, including ‘to automate, predict, identify, surveil, detect, target and punish’ (Alston, 2019b, p. 4; Madden et al., 2017). The recent Dutch case involving the system SyRI (System Risk Indication) is very instructive in this regard as it is one of the first examples of successful litigation against the governmental use of a digital tool in the context of welfare provision (de Rechtspraak, 2020; Gantchev, 2019).

Although the exact technology used in SyRI has not been publicly released, this automated system allows the Dutch central and local governments to identify risks of social security fraud. Processing data from a range of datasets relating to education, credit-worthiness, health insurance, welfare benefits, etc., SyRI performs opaque risk-assessment modelling to determine whether individuals warrant investigation for potential fraud and unlawful claims under (and/or non-compliance with) legislation.

Since its adoption in 2014, SyRI has been under scrutiny by several non-governmental organisations and other interest groups, which filed a lawsuit in 2018. This was due to serious concerns about the specific targeting of people from low socio-economic backgrounds and other vulnerable groups such as immigrants and ethnic or religious minorities. It is now apparent that SyRI was disproportionately focused on ‘difficult neighbourhoods’, further undermining their reputation and that of their inhabitants (Leijten, 2020; Vervloesem, 2020). The use of SyRI raises critical issues both in terms of procedural fairness and human rights – be it the right to privacy or the right to social security. Philip Alston, the UN Special Rapporteur on extreme poverty and human rights, submitted a brief as Amicus Curiae in the case, in which he makes several remarks about the potential detrimental impact of the use of systems like SyRI:

‘The use of digital tools to pursue welfare fraud is … not a neutral development, but part of a partisan political trend. In this environment, welfare recipients, especially those who receive non-contributory assistance designed to assist the poorest in society, are regularly depicted as second-class citizens intent on defrauding the state and the community. In such an atmosphere … digital tools are being mobilized to target disproportionately those groups that are already more vulnerable and less able to protect their social rights.’ (Alston, 2019a, p. 7)

In February 2020, the District Court of The Hague held that the SyRI legislation violated Article 8 of the European Convention on Human Rights (ECHR), which protects the right to private and family life. The court's reasoning focused chiefly on the issue of privacy and on the proportionality of SyRI's interference with private life. Surprisingly, the issue of discrimination and the stigmatisation of the poor and recipients of social benefits, put forward by the plaintiffs and discussed by Alston, received little attention. Yet, this dimension is crucial, as SyRI had been demonstrably used to disproportionately target groups of already vulnerable individuals, with serious impact on their rights and no due process. The case is all the more interesting as it reveals how predictive analytics can cause social unfairness beyond the sensitive categories explicitly identified and protected by the law (Timan and Grommé, 2020). This is troublesome, particularly in the current ‘move to predicting risk instead of the ex post enforcement of rules violations’ (Alston, 2019b, p. 19) – a move that animates the increasing reliance on predictive analytics in the context of governance practices inspired by the normative logic of actions analysed above.

These reflexes of immunisation, which exclude certain groups or forms of life deemed intrinsically suspect or pernicious, are symptomatic of ‘the inequality of the value of lives’ (Fassin, 2018, p. 32) in the digital age. Thus, as governance and algorithmic logics keep converging, a fundamental question remains open: how to avoid entrenching indifference towards forms of life neglected by dominant sociopolitical models – in other words, how to develop ‘novel and less exclusive cosmologies’ (Chateauraynaud and Debaz, 2019, p. 132). Any positive answer will require safeguarding a plurality of anticipatory techniques as a necessary premise for alternative ‘visions of desirable futures’ to emerge.

5 Conclusion

In this paper, we have problematised contemporary beliefs in the miracles of big data and the divinatory power of AI with reference to the implications of such beliefs for the anticipatory aspirations of governance practices. By way of conclusion, we suggest the contours of what we understand to be the necessary objectives of future research.

The link between access to knowledge and the ability to govern the future is inevitably determined by our relationship to the real and our recognition of its qualities, which may vary from epoch to epoch and from culture to culture, as ‘anticipation is a regime of being in time’ (Adams et al., 2009, p. 247).

The ways in which we describe reality or ‘the state of the world’ are critical for two reasons. The first is to elucidate our comprehension of how societies operate and organise themselves to anticipate the future; the second is to reveal the cosmologies, the worldviews, that lie at the heart of these descriptions (Reith, 2004). Analysing modes of anticipation allows us to understand the means by which a certain imaginary frames and represents alternative futures, links the past to the future, enables or hinders action and naturalises specific ways of contemplating possible worlds (see footnote 6).

This process is not merely of epistemological consequence for the ways in which reality is represented through emerging means of knowledge. In a world in which life is envisaged and understood in terms of pure contingency, the challenge of governing extreme complexity gains a specific ontological dimension. The issue ceases to be that of ‘knowing more’ about a certain reality and turns into delimiting ‘what is to be known’ (Chandler, 2014, p. 50).

Awareness of the ontological dimensions of anticipatory logics deployed in governance and predictive analytics is critical to grasp the profound meaning of any truth-claim posited by the association between the two. By claiming to possess knowledge about the future, these logics necessarily imply specific modes of defining what matters, what does not and what can be legitimately regarded as constituting part of the world. In this sense, prediction is indeed a ‘project of world making’ (Andersson, 2018, p. 23).

The logics of anticipation enshrined in the association between predictive analytics and governance thus aspire to make the future present in the here and now and, in so doing, impose themselves as world-shaping instruments, fostering specific ways of instauring people and things. ‘Instauration’, however, comes at the price of ‘immunisation’, which can occur as distantiation, detachment or complete indifference towards forms of life that are not afforded any credit or value. Therefore, future research on the assemblage between governance and AI must consider the fate of those forms of life that do not adapt, do not align or even actively resist dominant sociopolitical models. The risk is otherwise for the assemblage to crystallise into a mode of protection reserved exclusively for forms of life aligned with and acknowledged by these models (Schinkel, 2011).

As masterfully put by Jenny Andersson, the fundamental problem that we have today vis-à-vis the future is not the passage from progress to crisis, as some believe. It is rather the challenge of managing a potentially infinite plurality of futures and of moulding a society that is truly plural, where access to the widest spectrum of modes of engagement in and with the world is genuinely open.

Acknowledgements

The authors wish to thank Amy Thomasson for her assistance in finalising this article.

Conflicts of Interest

None

Footnotes

1 An argument can be made that such overlap exists between governance and law that the distinction is, conceptually, minimal. This is particularly true where one accepts the conclusions of Bruno Latour, who refers to law as an ambivalent phenomenon, both institutional and pragmatic: law as an institution (legislation, regulation, governance) and law as a practice (‘law in action’, adjudication) (Gutwirth, 2013; Latour, 2009). However, we will not be making these arguments here and will maintain the analytical distinction whereby law is an instrument of governance. A parallel argument suggests that ‘law in action’, beyond its technical adjudication, embraces the broader spectrum of law's ‘life’ beyond the black letter of legal rules (Friedman et al., 1995). Again, we cannot engage with the complexity of this strand of socio-legal scholarship here, but we acknowledge its potential relevance to our arguments.

2 In these, the observer is located within the world they observe and is subject to its inescapable principles. This is wonderfully illustrated by the myth of Oedipus and its self-fulfilling prophecy: the observer who wants to escape the prediction announced by the oracle contributes to its realisation (Rosset, 2012).

3 There is a deeply shared conviction that climate modelling powered by big data, while not a ‘silver bullet’, will allow governments to soften the blow of climate change and prevent some of its consequences. For example, predictive analytics has been described as capable of driving a ‘sustainability revolution’ (Herweijer and Ramchandani, 2018) and of improving local biodiversity and conservation efforts (Norouzzadeh et al., 2018).

4 In criminal law, the pre-emptive logic triggers a threefold phenomenon: (1) the shift from the category of act to that of intention; (2) the emergence of linguistic avatars of terrorism (such as ‘dangerousness’) as predisposition to crime; and (3) the adoption of laws acting on these predispositions.

5 For example, the (to date) successful public health countermeasures enacted by the governments of Australia and New Zealand in the face of the COVID-19 pandemic, while initially inspired by a logic of preparedness, have become increasingly informed by a logic of precaution – if not pre-emption, with blanket border closures.

6 The link between past and future can take different forms depending on the normative logic under consideration. The precautionary principle involves a future that manifests itself in the form of an absence (it is not yet here and we do not know what its impact will be); with the principle of pre-emption, the future is apprehended in the form of imminence (it is about to happen and we must act immediately before it does); finally, with the principle of preparedness, the future is characterised by its presence (the worst is happening and we must respond appropriately).

References

Adams, V, Murphy, M and Clarke, AE (2009) Anticipation: technoscience, life, affect, temporality. Subjectivity 28, 246–265.
Alston, P (2019a) Brief by the United Nations Special Rapporteur on Extreme Poverty and Human Rights as Amicus Curiae in the Case NJCM c.s./Der Staat Der Nederlanden (SyRI) before the District Court of The Hague (Case Number: C/09/550982/HA ZA 18/388). New York: United Nations. Available at: https://www.ohchr.org/Documents/Issues/Poverty/Amicusfinalversionsigned.pdf (accessed 29 December 2020).
Alston, P (2019b) Extreme Poverty and Human Rights – A/74/493. United Nations. Available at: https://undocs.org/A/74/493 (accessed 29 December 2020).
Amin, A (2013) Surviving the turbulent future. Environment and Planning D: Society and Space 31, 140–156.
Amoore, L (2019) Doubt and the algorithm: on the partial accounts of machine learning. Theory, Culture & Society 36, 147–169.
Amoore, L and Piotukh, V (2015) Introduction. In Amoore, L and Piotukh, V (eds), Algorithmic Life: Calculative Devices in the Age of Big Data. New York: Routledge, pp. 15–32.
Ananny, M (2015) Toward an ethics of algorithms: convening, observation, probability, and timeliness. Science, Technology, & Human Values 41, 93–117.
Anderson, B (2010a) Preemption, precaution, preparedness: anticipatory action and future geographies. Progress in Human Geography 34, 777–798.
Anderson, B (2010b) Security and the future: anticipating the event of terror. Geoforum 41, 227–235.
Andersson, J (2018) The Future of the World: Futurology, Futurists, and the Struggle for the Post-Cold War Imagination. Oxford: Oxford University Press.
Andreotta, AJ, Kirkham, N and Rizzi, M (2021) AI, big data, and the future of consent. AI & Society, published online 30 August, 1–14.
Aradau, C and Van Munster, R (2008) Taming the future: the dispositif of risk in the War on Terror. In Amoore, L and de Goede, M (eds), Risk and the War on Terror. New York: Routledge, pp. 23–40.
Aykut, SC, Demortain, D and Benbouzid, B (2019) The politics of anticipatory expertise: plurality and contestation of futures knowledge in governance: introduction to the Special Issue. Science and Technology Studies 32, 2–12.
Baker, P and Gourley, B (2015) Data Divination: Big Data Strategies. Boston, MA: Cengage Learning PTR.
Barad, K (2007) Meeting the Universe Halfway: Quantum Physics and the Entanglement of Matter and Meaning. Durham, NC: Duke University Press.
Baschet, J (2018) Défaire La Tyrannie Du Présent: Temporalités Émergentes et Futurs Inédits [Undo the Tyranny of the Present: Emerging Temporalities and New Futures]. Paris: La Découverte.
Baxter, P and Jack, S (2008) Qualitative case study methodology: study design and implementation for novice researchers. The Qualitative Report 13, 544–559.
Beck, U (1992) Risk Society: Towards a New Modernity, 1st edn. London: SAGE Publications.
Benbouzid, B (2018) Quand prédire, c'est gérer [When to predict is to manage]. Réseaux 5, 221–256.
Bourguignon, D (2015) Le principe de précaution: définitions, applications et gouvernance: analyse approfondie [The precautionary principle: definitions, applications and governance: an in-depth analysis]. Strasbourg: European Parliament, Directorate-General for Parliamentary Research Services. Available at: https://doi.org/10.2861/96978 (accessed 13 August 2022).
Cantero Gamito, M and Ebers, M (2021) Algorithmic governance and governance of algorithms: an introduction. In Ebers, M and Cantero Gamito, M (eds), Algorithmic Governance and Governance of Algorithms. Cham: Springer, pp. 1–22.
Castoriadis, C (1975) L'institution Imaginaire de La Société [The Imaginary Institution of Society]. Paris: Éditions du Seuil.
Castro, D and McLaughlin, M (2019) Ten Ways the Precautionary Principle Undermines Progress in Artificial Intelligence. Available at: https://itif.org/publications/2019/02/04/ten-ways-precautionary-principle-undermines-progress-artificial-intelligence (accessed 3 December 2020).
Cazes, B (1986) Histoire Des Futurs: Les Figures de l'avenir, de Saint Augustin Au XXIe Siècle [History of Futures: Figures of the Future, from Saint Augustine to the 21st Century]. Paris: Seghers.
Cetina, KK (1999) Epistemic Cultures: How the Sciences Make Knowledge. Cambridge, MA: Harvard University Press.
Chandler, D (2014) Beyond neoliberalism: resilience, the new art of governing complexity. Resilience 2, 47–63.
Chandler, D (2015) A world without causation: big data and the coming of age of posthumanism. Millennium: Journal of International Studies 43, 833–851.
Chandler, D (2016) How the world learned to stop worrying and love failure: big data, resilience and emergent causality. Millennium: Journal of International Studies 44, 391–410.
Chandler, D (2018) Ontopolitics in the Anthropocene: An Introduction to Mapping, Sensing and Hacking. New York/London: Routledge.
Chateauraynaud, F and Debaz, J (2019) Agir avant et après la fin du monde, dans l'infinité des milieux en interaction [Acting before and after the end of the world within infinite milieus of interaction]. Multitudes 76, 126–132.
Choi, H and Varian, H (2012) Predicting the present with Google Trends. Economic Record 88, 2–9.
Christen, M and Franklin, LR (2002) The Concept of Emergence in Complexity Science: Finding Coherence between Theory and Practice. New York/Zurich. Available at: https://www.encyclog.com/_upl/files/2002_emergence.pdf (accessed 3 December 2020).
Citron, DK and Pasquale, F (2014) The scored society: due process for automated predictions. Washington Law Review 89, 1–33.
Clarke, S (2005) Future technologies, dystopic futures and the precautionary principle. Ethics and Information Technology 7, 121–126.
Cobham, A (2020) The Uncounted. Cambridge: Polity Press.
Cole, SA and Bertenthal, A (2017) Science, technology, society, and law. Annual Review of Law and Social Science 13, 351–371.
Collier, SJ and Lakoff, A (2015) Vital systems security: reflexive biopolitics and the government of emergency. Theory, Culture & Society 32, 19–51.
Council of Europe (2020) Recommendation CM/Rec(2020)1 of the Committee of Ministers to Member States on the Human Rights Impacts of Algorithmic Systems. Council of Europe. Available at: https://search.coe.int/cm/pages/result_details.aspx?ObjectId=09000016809e1154 (accessed 2 December 2020).
Danaher, J et al. (2017) Algorithmic governance: developing a research agenda through the power of collective intelligence. Big Data & Society 4, 1–21.
Davis, S (2020) The Uncounted: Politics of Data in Global Health. Cambridge: Cambridge University Press.
De Boeck, F and Devisch, R (1994) Ndembu, Luunda and Yaka divination compared: from representation and social engineering to embodiment and worldmaking. Journal of Religion in Africa 24, 98–133.
de Goede, M and Randalls, S (2009) Precaution, preemption: arts and technologies of the actionable future. Environment and Planning D: Society and Space 27, 859–878.
de Rechtspraak (2020) SyRI Legislation in Violation of the European Convention on Human Rights. The Hague. Available at: https://www.rechtspraak.nl/Organisatie-en-contact/Organisatie/Rechtbanken/Rechtbank-Den-Haag/Nieuws/Paginas/SyRI-wetgeving-in-strijd-met-het-Europees-Verdrag-voor-de-Rechten-voor-de-Mens.aspx (accessed 3 December 2020).
Deleuze, G (2013) Différence et répétition [Difference and Repetition]. Paris: Presses Universitaires de France.
Dijkstra, H and De Ruijter, A (2017) The health-security nexus and the European Union: toward a research agenda. European Journal of Risk Regulation 8, 613–625.
Dontas, E and Dontas, N (2015) Big data analytics in prevention, preparedness, response and recovery in crisis and disaster management. Recent Advances in Computer Science 32, 476–482.
Dubey, G and de Jouvancourt, P (2018) Mauvais Temps: Anthropocène et Numérisation Du Monde [Bad Times: Anthropocene and the Digitalization of the World]. Paris: DEHORS.
Engélibert, J-P (2019) Fabuler La Fin Du Monde: La Puissance Critique Des Fictions d'apocalypse [Make Up the End of the World: The Critical Power of Apocalypse Fiction]. Paris: La Découverte.
Esposito, E (2011a) A Time of Divination and a Time of Risk: Social Precondition for Prophecy and Prediction. In Fate, Freedom and Prognostication: Strategies for Coping with the Future in East Asia and Europe, Modena, International Consortium for Research in the Humanities, 30 November 2011.
Esposito, E (2011b) The Future of Futures. Cheltenham: Edward Elgar Publishing.
Esposito, R (2008) Bíos: Biopolitics and Philosophy. Minneapolis: University of Minnesota Press.
European Commission (2000) Communication from the Commission on the precautionary principle. COM(2000) 1 final. Brussels: Commission of the European Communities. Available at: https://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=COM:2000:0001:FIN:EN:PDF (accessed 3 December 2020).
European Commission (2011) Commission Staff Working Paper – Impact Assessment: Accompanying Document to the Proposal for a European Parliament and Council Directive on the Use of Passenger Name Record Data for the Prevention, Detection, Investigation and Prosecution of Terrorist Offences and Serious Crime. Brussels: European Commission. Available at: https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:52010SC0132&from=EN (accessed 3 December 2020).
European Commission (2018a) Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions: ‘Towards a Common European Data Space’. Brussels: European Commission. Available at: https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:52018DC0232&from=EN (accessed 1 September 2022).
European Commission (2018b) Staff Working Document on Liability for Emerging Digital Technologies Accompanying the Document Communication from the Commission to the European Parliament, the European Council, the Council, the European Economic and Social Committee and the Committee. Brussels: European Commission. Available at: https://ec.europa.eu/digital-single-market/en/news/european-commission-staff-working-document-liability-emerging-digital-technologies (accessed 3 December 2020).
European Commission (2019) Ethics Guidelines for Trustworthy AI | Shaping Europe's Digital Future. Brussels: European Commission. Available at: https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai (accessed 2 December 2020).
European Commission (2020a) Towards a European Strategy on Business-to-Government Data Sharing for the Public Interest: Final Report Prepared by the High-Level Expert Group on Business-to-Government Data Sharing. Brussels: European Commission. Available at: https://op.europa.eu/en/publication-detail/-/publication/d96edc29-70fd-11eb-9ac9-01aa75ed71a1 (accessed 3 December 2020).
European Commission (2020b) What Can Big Data Do for You? | Shaping Europe's Digital Future. Brussels: European Commission. Available at: https://ec.europa.eu/digital-single-market/en/what-can-big-data-do-you (accessed 2 December 2020).
European Commission (2020c) White Paper on Artificial Intelligence – A European Approach to Excellence and Trust. Brussels: European Commission. Available at: https://ec.europa.eu/commission/sites/beta-political/files/political-guidelines-next-commission_en.pdf (accessed 3 December 2020).
Ewald, F (1993) Two infinities of risk. In Massumi, B (ed), The Politics of Everyday Fear. Minneapolis: University of Minnesota Press, pp. 221–228.
Fassin, D (2018) Life: A Critical User's Manual. Cambridge: Polity Press.
Finlay, S (2014) Predictive Analytics, Data Mining and Big Data. Basingstoke: Palgrave Macmillan.
Friedman, L, Macaulay, S and Stookey, J (1995) Law & Society: Readings on the Social Study of Law. New York: W.W. Norton & Co.
Gantchev, V (2019) Data protection in the age of welfare conditionality: respect for basic rights or a race to the bottom? European Journal of Social Security 21, 3–22.
Gritsenko, D and Wood, M (2022) Algorithmic governance: a modes of governance approach. Regulation & Governance 16, 45–62.
Guillaume, B (2012) L'esprit de la précaution [The spirit of precaution]. Revue de Métaphysique et de Morale 76, 491–509.
Gutwirth, S (2013) Le contexte du droit ce sont ses sources formelles et les faits et moyens qui exigent son intervention [The context of the law is its formal sources and the facts and means which require its intervention]. Revue interdisciplinaire d'études juridiques 70, 108–116.
Hajer, MA (2010) Authoritative Governance: Policy Making in the Age of Mediatization. Oxford: Oxford University Press.
Hallgren, W et al. (2016) The biodiversity and climate change virtual laboratory: where ecology meets big data. Environmental Modelling and Software 76, 182–186.
Hampton, SE et al. (2013) Big data and the future of ecology. Frontiers in Ecology and the Environment 11, 156–162.
Hartog, F (2003) Régimes d'historicité: Présentisme et Expériences du Temps [Regimes of Historicity: Presentism and Experiences of Time]. Paris: Seuil.
Heaven, WD (2021) Hundreds of AI Tools Have Been Built to Catch Covid: None of Them Helped. MIT Technology Review. Available at: https://www.technologyreview.com/2021/07/30/1030329/machine-learning-ai-failed-covid-hospital-diagnosis-pandemic/ (accessed 31 December 2020).
Herweijer, C and Ramchandani, P (2018) Harnessing Artificial Intelligence for the Earth. Fourth Industrial Revolution for the Earth Series. Available at: http://www3.weforum.org/docs/Harnessing_Artificial_Intelligence_for_the_Earth_report_2018.pdf (accessed 31 December 2020).
Hofstadter, D (2008) Gödel, Escher, Bach – Les Brins d'une Guirlande Éternelle [Gödel, Escher, Bach: The Strands of an Eternal Garland]. Paris: Dunod.
Holbraad, M (2013) Turning a corner: preamble for ‘The relative native’ by Eduardo Viveiros de Castro. HAU: Journal of Ethnographic Theory 3, 467–470.
Holland, JH (2014) Complexity: A Very Short Introduction. Oxford: Oxford University Press.
Hui Kyong Chun, W (2011) Crisis, crisis, crisis, or sovereignty and networks. Theory, Culture & Society 28, 91–112.
IPCC (2021) Climate Change 2021: The Physical Science Basis. Contribution of Working Group I to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change. New York/Cambridge: Cambridge University Press.
Jansen, F (2018) Data Driven Policing in the Context of Europe. Working paper. Cardiff University, Wales. Available at: https://www.digitalmarketplace.service.gov.uk/digital-outcomes-and-specialists/opportunities/1227 (accessed 3 December 2020).
Jasanoff, S (2004) States of Knowledge: The Co-production of Science and the Social Order. London/New York: Routledge.
Jasanoff, S (2015) Future imperfect: science, technology, and the imaginations of modernity. In Jasanoff, S and Kim, S-H (eds), Dreamscapes of Modernity: Sociotechnical Imaginaries and the Fabrication of Power. Chicago/London: University of Chicago Press.
Jasanoff, S and Kim, S-H (eds) (2015) Dreamscapes of Modernity: Sociotechnical Imaginaries and the Fabrication of Power. Chicago/London: University of Chicago Press.
Jayalakshmi, G and Anuradha, T (2017) Big data technologies for predicting epidemics and enhancing the quality of human life. 2017 International Conference on Big Data Analytics and Computational Intelligence, 23–25 March. Available at: https://doi.org/10.1109/ICBDACI.2017.8070831 (accessed 30 December 2020).
Kallinikos, J (2002) Reopening the black box of technology artifacts. ICIS 2002 Proceedings, published online December 2002, https://aisel.aisnet.org/icis2002/26.
Kalpokas, I (2019) Algorithmic Governance: Politics and Law in the Post-human Era. Cham: Springer International Publishing.
Kaufmann, M, Egbert, S and Leese, M (2019) Predictive policing and the politics of patterns. British Journal of Criminology 59, 674–692.
Kerr, I and Earle, J (2013) Prediction, preemption, presumption: how big data threatens big picture privacy. Stanford Law Review Online 66, 65–72.
Lakoff, A (2017) Unprepared: Global Health in a Time of Emergency. Oakland: University of California Press.
Lapoujade, D (2017) Les Existences Moindres [Lesser Existences]. Paris: Les Éditions de Minuit.
Latour, B (2002) We Have Never Been Modern. Cambridge, MA: Harvard University Press.
Latour, B (2009) The Making of Law: An Ethnography of the Conseil d'Etat. Cambridge: Polity Press.
Lazaro, C (2018) Le pouvoir « divinatoire » des algorithmes [The ‘divinatory’ power of algorithms]. Anthropologie et Sociétés, published online 5 October, https://doi.org/10.7202/1052640ar.
Le Quéré, C et al. (2020) Temporary reduction in daily global CO2 emissions during the COVID-19 forced confinement. Nature Climate Change 10, 647–653.
Leijten, I (2020) The Dutch SyRI Case: Some Thoughts on Indivisible Interferences and the Status of Social Rights. IACL-AIDC Blog. Available at: https://blog-iacl-aidc.org/social-rights/2020/5/19/the-dutch-syri-case-some-thoughts-on-indivisible-interferences-and-the-status-of-social-rights (accessed 29 December 2020).
Lewis, M (2011) The Big Short: Inside the Doomsday Machine. New York: W.W. Norton & Co.
Lynskey, O (2019) Criminal justice profiling and EU data protection law: precarious protection from predictive policing. International Journal of Law in Context 15, 162–176.
Madden, M et al. (2017) Privacy, poverty, and big data: a matrix of vulnerabilities for poor Americans. Washington University Law Review 95, 53–125.
Mantello, P (2016) The machine that ate bad people: the ontopolitics of the precrime assemblage. Big Data and Society 3, published online 12 December, https://doi.org/10.1177/2053951716682538.
Massumi, B (2007) Potential politics and the primacy of preemption. Theory & Event 10.
Massumi, B (2009) National enterprise emergency: steps toward an ecology of powers. Theory, Culture & Society 26, 153–185.
McCulloch, J and Wilson, D (2015) Pre-Crime: Pre-Emption, Precaution and the Future. London: Routledge.
Mehrotra, S et al. (2013) Technological challenges in emergency response. IEEE Intelligent Systems, published online July, https://doi.org/10.1109/MIS.2013.118.
Merton, RK (1948) The self-fulfilling prophecy. Antioch Review 8, 193–210.
Michael, M (2017) Enacting big futures, little futures: toward an ecology of futures. Sociological Review 65, 509–524.
Misuraca, G, Broster, D and Centeno, C (2012) Digital Europe 2030: designing scenarios for ICT in future governance and policy making. Government Information Quarterly 29, S121–S131.
Morin, E (2008) On Complexity. New York: Hampton Press.
Neyrat, F (2009) Rupture de défense [Defence break]. Lignes, published online, https://doi.org/10.3917/lignes.029.0046.
Neyrat, F (2010) Avant-propos sur les sociétés de clairvoyance [Foreword on clairvoyance societies]. Multitudes 40, 104–111.
Norouzzadeh, MS et al. (2018) Automatically identifying, counting, and describing wild animals in camera-trap images with deep learning. Proceedings of the National Academy of Sciences of the United States of America 115, E5716–E5725.
Ost, F (1999) Le Temps Du Droit [The Time of Law]. Paris: Odile Jacob.
Rasmussen, MV (2004) ‘It sounds like a riddle’: security studies, the War on Terror and risk. Millennium: Journal of International Studies 33, 381395.CrossRefGoogle Scholar
Raymond, N and Al, Achkar Z (2016) Data Preparedness: Connecting Data, Decision-making and Humanitarian Response. Cambridge: Harvard Humanitarian Initiative. Available at: https://hhi.harvard.edu/files/humanitarianinitiative/files/data_preparedness_update.pdf?m=1607547533 (accessed 3 December 2020).Google Scholar
Raza, K (2020) Artificial intelligence against COVID-19: a meta-analysis of current research. In Hassanien, A, Dey, N and Elghamrawy, S (eds), Big Data Analytics and Artificial Intelligence Against COVID-19: Innovation Vision and Approach. Cham: Springer International Publishing, pp. 165176.Google Scholar
Reith, G (2004) Uncertain times: the notion of ‘risk’ and the development of modernity. Time & Society 13, 383402.Google Scholar
Rieder, G (2018) Tracing Big Data Imaginaries through Public Policy: The Case of the European Commission. New York: Routledge.Google Scholar
Roberts, SL (2019) Big data, algorithmic governmentality and the regulation of pandemic risk. European Journal of Risk Regulation, 1–22.
Romeike, F and Eicher, A (2016) Predictive Analytics – RiskNET – The Risk Management Network. FIRM Yearbook. Available at: https://www.risknet.de/en/topics/news-details/predictive-analytics-1/ (accessed 2 December 2020).
Rosset, C (2012) The Real and Its Double. Chicago: Seagull Books.
Rouvroy, A and Stiegler, B (2015) Le régime de vérité numérique [The digital truth regime]. Socio 4, 113–140.
Sangireddy, R (2015) Unraveling Real-time Predictive Analytics. IOT Central. Available at: https://www.iotcentral.io/blog/unraveling-real-time-predictive-analytics (accessed 2 December 2020).
Sanila, S, Subramanian, DV and Sathyalakshmi, S (2017) Real-time mining techniques: a big data perspective for a smart future. Indian Journal of Science and Technology 10, 1–7.
Schinkel, W (2011) Prepression: the actuarial archive and new technologies of security. Theoretical Criminology 15, 365–380.
Sørensen, MP (2018) Ulrich Beck: exploring and contesting risk. Journal of Risk Research 21, 6–16.
Souriau, E (1939) L'instauration Philosophique [The Philosophical Establishment]. Paris: Alcan.
Sperber, D (1982) Apparently irrational beliefs. In Lukes, S and Hollis, M (eds), Rationality and Relativism. Oxford: Blackwell, pp. 149–180.
Stake, R (1995) The Art of Case Study Research. Thousand Oaks, CA: SAGE Publications.
Stiegler, B (2019) Il Faut s'adapter – Sur Un Nouvel Impératif Politique [‘You Have to Adapt’ – On a New Political Imperative]. Paris: Gallimard.
Sunstein, C (2002) Risk and Reason: Safety, Law, and the Environment. Cambridge: Cambridge University Press.
Supiot, A (2015) La Gouvernance Par Les Nombres [Governance by Numbers]. Paris: Fayard.
Thomas, L (2014) Pandemics of the future: disease surveillance in real time. Surveillance and Society 12, 287–300.
Timan, T and Grommé, F (2020) A framework for social fairness: insights from two algorithmic decision-making controversies in the Netherlands. SSRN Electronic Journal. Available at: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3756755 (accessed 29 December 2020).
Timms, A (2017) Data divination [Technically Speaking by Paul McFedries]. IEEE Spectrum, published online 31 January, https://doi.org/10.1109/MSPEC.2017.7833501.
Ulbricht, L and Yeung, K (2021) Algorithmic regulation: a maturing concept for investigating regulation of and through algorithms. Regulation & Governance 16, published online 27 August, https://doi.org/10.1111/rego.12437.
Verbeek, P-P (2016) Toward a theory of technological mediation: a program for postphenomenological research. In Friis, JKBO and Crease, RP (eds), Technoscience and Postphenomenology: The Manhattan Papers. Maryland/London: Lexington Books, pp. 11–89.
Vernant, JP et al. (1974) Divination et Rationalité [Divination and Rationality]. Paris: Éditions du Seuil.
Vervloesem, K (2020) How Dutch activists got an invasive fraud detection algorithm banned. Algorithm Watch. Available at: https://algorithmwatch.org/en/story/syri-netherlands-algorithm/ (accessed 29 December 2020).
Wendeling, C (2015) Concurrent Design Foresight: Report to the European Commission of the Expert Group on Foresight Modelling. Publications Office of the European Union. Available at: http://ec.europa.eu/research/swafs/pdf/pub_governance/concurrent_design_foresight_report.pdf (accessed 3 December 2020).
Williams, P and Kind, E (2019) Data-driven Policing: The Hardwiring of Discriminatory Policing Practices Across Europe. Brussels: European Network Against Racism.
Winner, L (2020) Do artefacts have politics? In Winner, L (ed.), The Whale and the Reactor: A Search for Limits in an Age of High Technology, 2nd edn. Chicago: University of Chicago Press.
World Health Organization (WHO) (2021) Roadmap to Improve and Ensure Good Indoor Ventilation in the Context of COVID-19. Geneva: WHO Headquarters.
Wu, JT, Leung, K and Leung, GM (2020) Nowcasting and forecasting the potential domestic and international spread of the 2019-nCoV outbreak originating in Wuhan, China: a modelling study. The Lancet 395, 689–697.
Yeung, K (2018) Algorithmic regulation: a critical interrogation. Regulation & Governance 12, 505–523.
Yeung, K (2019) A study of the implications of advanced digital technologies (including AI systems) for the concept of responsibility within a human rights framework. SSRN Electronic Journal. Available at: https://ssrn.com/abstract=3286027 (accessed 29 December 2020).
Yu, M, Yang, C and Li, Y (2018) Big data in natural disaster management: a review. Geosciences 8, 165.
Zebrowski, C (2013) The nature of resilience. Resilience 1, 159–173.
Zeng, D, Cao, Z and Neill, DB (2021) Artificial intelligence–enabled public health surveillance – from local detection to global epidemic monitoring and control. In Xing, L, Giger, M and Min, J (eds), Artificial Intelligence in Medicine: Technical Basis and Clinical Applications. Amsterdam: Elsevier, pp. 437–453.