Abstract
In recent years, we have learned much about how police, defendants, and prosecutors are affected by different policies. At the same time, economic theory is being forgotten or disregarded. Even more so today, economists and political scientists treat “the enforcement apparatus of police, courts, prosecutors, and legislature as a philosopher-king, with imperfect knowledge but only the best of motives” (Friedman, 2001). This special issue contains a sampling of papers at the intersection of criminal justice and public choice. Our introduction discusses the recent literature on criminal justice and calls for a multifaceted empirical approach that incorporates the insights of public choice theory.
Over the last few decades, scholars have increasingly turned to identifying the causal impact of changes in policies and laws on criminal justice outcomes. This literature, too large to survey here, has yielded important insights into the actual functioning of crime and judicial systems. Typically, however, legal actors are not modeled, and identification strategies assume they act naively (or even randomly). Moreover, “policy implications” are drawn with far too little concern for unanticipated consequences or methodological limitations.
“Statistics first” is the zeitgeist in many areas of the social sciences.[1] As Hipp and Williams (2020) summarize, “the data and methods used by researchers in recent years are so novel that theory is at best used in a post-hoc manner to explain findings rather than to generate testable hypotheses”. While the balance of theory and empirics is an active research area (Todd & Wolpin, 2023; Imbens, 2021; Al-Ubaydli et al., 2021; Card, 2022),[2] recent research clearly documents the problems with a “statistics first” mentality (Vivalt, 2020; Akerlof, 2020; Brodeur et al., 2020; Huntington-Klein et al., 2021; Weill et al., 2021; Eliaz et al., 2021; Jardim et al., 2022; de Chaisemartin & D’Haultfoeuille, 2022; Young, 2022; Frankel & Kartik, 2022; Beiser-McGrath & Beiser-McGrath, 2023; Simonsohn, 2022). Criminal justice research is not immune. Our knowledge of crime and courts does not improve lexicographically, progressing along rungs of a statistical hierarchy with a single gold standard of evidence on top.
While many empirical fields face tradeoffs between identifiability and practical relevance, criminal justice research faces unique measurement issues (Adamson & Rentschler, 2023). Most of our empirical records result from the actions of criminal actors (who do not want any record of their activity) as well as of civilians, police, coroners, and many others. Scholars often assume that the behavior of some, or even all, of these actors is exogenous in order to satisfy statistical criteria. This ignores much of the economic literature and is a main way in which a “statistics first” mentality creates false findings and obscures true ones.[3] Additional issues surround the research questions and cases that a “statistics first” mentality selects for. Hayek (1943) is again pertinent:
The blind transfer of the striving for quantitative measurements to a field where the specific conditions are not present which give it its basic importance in the natural sciences is the result of an entirely unfounded prejudice. It is probably responsible for the worst aberrations and absurdities produced by scientism in the social sciences. It not only leads frequently to the selection for study of the most irrelevant aspects of the phenomena because they happen to be measurable, but also to “measurements” and assignments of numerical values which are absolutely meaningless. What a distinguished philosopher recently wrote about psychology is at least equally true of the social sciences, namely that it is only too easy “to rush off to measure something without considering what it is we are measuring, or what measurement means.”
Once we acknowledge these measurement and methodological issues, we must also acknowledge that many links to policy are tenuous at best. While “legal capacity” research is in some sense the branch of criminal justice scholarship that explicitly adopts the “best of motives” assumption, experimental and applied microeconomic studies also seem to have a well-intentioned policymaker in mind for their implied policies. (Experimental studies focus more on justice biases and quasi-experimental studies focus more on legal biases, but both evade the question of “who will nudge the nudgers?”) The pure theory of self-interested policymakers is not sufficient to determine policy, but it is necessary for deducing policy implications from criminal justice statistics.
A mentality of complements, we argue, is a promising path forward. The cost–benefit analyses of crime and courts (such as Becker, 1968) are crowning achievements of economics, and a “statistics first” mentality toward criminal justice is in many ways a return to a pre-economic analysis of law. (Criminologists have a long history of field experiments, including the Cambridge–Somerville Youth Study of 1939 and the Kansas City preventive patrol experiment of 1972; see Oakley (2000).) Although there are gains from adopting the credible methods developed by criminologists, the gains are probably larger with specialization and exchange. While public choice scholars have contributed both empirics and theory, economists might have a comparative advantage in theory. In particular, public choice theory has much to contribute in answering two broad questions: What do empirical analyses actually show when criminal justice data are generated by self-interested legal actors and researchers? And what legal policies are feasible and desirable when they are determined by self-interested policymakers?
We call for public choice scholars to analyze legal systems and actors using a variety of methodologies, and to exploit the complementarities between them. For this special issue, we solicited papers that took all legal actors to be rational, and which used experiments, observational data, or applied theory. We fostered complementarity across these papers by requesting at least one reviewer with a different methodological specialty. As might be expected, the papers that appear in this special issue are methodologically diverse.
One topic that features prominently is the importance of civilian views and beliefs with regard to legal actors. Hong and Zhang (2023) theoretically examine the role of bureaucratic beliefs in corruption. Candelo et al. (2023) report an insightful experiment exploring how trust for public officials varies with income. Baumann et al. (2023) provide a theoretical and experimental analysis of the effects of fine revenue going directly to the police or to the public purse. DeAngelo et al. (2023) provide an observational study examining how prosecutors “upcharge” to protect the police from bad publicity.
Another theme emerged on the effect of incentives and institutions surrounding trials on behavior prior to trials. Michaeli and Zohar (2023) provide a theoretical demonstration that the availability of plea bargaining will tend to dramatically reduce the prevalence of trials. Ralston et al. (2023) provide a theoretical and experimental analysis of the effects of incentives that change how prosecutors value plea bargains or trial sentences. Bienenstock and Kopp (2023) provide a theoretical model that helps explain why foreign companies have consistently chosen to settle cases brought against them by the American Foreign Corrupt Practices Act. Guerra et al. (2023) provide a theoretical and experimental analysis of behavior before trials under adversarial and inquisitorial systems.
The special issue also includes Ball et al. (2023), which experimentally evaluates whether the framing of rights affects cooperative behavior. Finally, Di Liddo and Morone (2023) report the results of an interesting experiment that explores the relationship between fiscal disparities and corruption.
Notes
1. As Leeson (2020) notes, it is important not to conflate statistics and economics.
2. This literature extends deep into economics (Koopmans, 1947; Klein, 1960; Roth, 1991; McCloskey & Ziliak, 1996; Levitt & List, 2009; Angrist & Pischke, 2010; Keane, 2010; Rust, 2010; Heckman, 2010; Deaton, 2010) and also wide into other fields (Ward et al., 2010; Mearsheimer & Walt, 2013; Muthukrishna & Henrich, 2019; Wasserstein et al., 2019). See Roe and Just (2009) for an overview of methodological tradeoffs in economics, and Wilson (2014) for advancing the idea of consilience. Even John Tukey, a seminal statistician in the credibility revolution, warned that “neither exploratory nor confirmatory is sufficient alone. To try to replace either by the other is madness. We need them both” (Tukey, 1980).
3. For more macro-historical studies, additional problems arise from the multitude of measurements all labeled “legal capacity”. Nonetheless, studies at any scale often commit at least three of the sins identified by Schrodt (2014): “Pre-scientific explanation in the absence of prediction”, “A linear statistical monoculture that fails to consider alternative structures”, and “Confusing statistical controls and experimental controls”. Another sin, not listed by Schrodt but common in observational studies that seek to establish a causal direction, is confusing “absence of evidence” with “evidence of absence” for whether a variable is endogenous.
References
Adamson, J., & Rentschler, L. (2023). How officer incentives affect crime, measurement, and justice. SSRN Working Paper.
Akerlof, G. A. (2020). Sins of omission and the practice of economics. Journal of Economic Literature, 58(2), 405–418.
Al-Ubaydli, O., Lee, M. S., List, J. A., Mackevicius, C. L., & Suskind, D. (2021). How can experiments play a greater role in public policy? Twelve proposals from an economic model of scaling. Behavioural Public Policy, 5(1), 2–49.
Angrist, J. D., & Pischke, J.-S. (2010). The credibility revolution in empirical economics: How better research design is taking the con out of econometrics. Journal of Economic Perspectives, 24(2), 3–30.
Ball, S., Dave, C., & Dodds, S. (2023). Enumerating rights: More is not always better. Public Choice, pp. 1–23.
Baumann, F., Bienenstock, S., Friehe, T., & Ropaul, M. (2023). Fines as enforcers’ rewards or as a transfer to society at large? Evidence on deterrence and enforcement implications. Public Choice, pp. 1–27.
Becker, G. S. (1968). Crime and punishment: An economic approach. Journal of Political Economy, 76(2), 169–217.
Beiser-McGrath, J., & Beiser-McGrath, L. F. (2023). The consequences of model misspecification for the estimation of nonlinear interaction effects. Political Analysis, 31(2), 278–287.
Bienenstock, S., & Kopp, P. (2023). The extensive reach of the FCPA beyond American borders: Is a bad deal always better than a good trial? Public Choice, pp. 1–21.
Brodeur, A., Cook, N., & Heyes, A. (2020). Methods matter: p-Hacking and publication bias in causal analysis in economics. American Economic Review, 110(11), 3634–3660.
Candelo, N., de Oliveira, A. C., & Eckel, C. (2023). Trust among the poor: African Americans trust their neighbors, but are less trusting of public officials. Public Choice, pp. 1–26.
Card, D. (2022). Design-based research in empirical microeconomics. American Economic Review, 112(6), 1773–1781.
de Chaisemartin, C., & D’Haultfoeuille, X. (2022). Two-way fixed effects and differences-in-differences with heterogeneous treatment effects: A survey. Working Paper 29691, National Bureau of Economic Research.
DeAngelo, G., Gomies, M., & Romaniuc, R. (2023). Do civilian complaints against police get punished? Public Choice, pp. 1–30.
Deaton, A. (2010). Instruments, randomization, and learning about development. Journal of Economic Literature, 48(2), 424–455.
Di Liddo, G., & Morone, A. (2023). Local income inequality, rent-seeking detection, and equalization: A laboratory experiment. Public Choice, pp. 1–19.
Eliaz, K., Spiegler, R., & Weiss, Y. (2021). Cheating with models. American Economic Review: Insights, 3(4), 417–434.
Frankel, A., & Kartik, N. (2022). Improving information from manipulable data. Journal of the European Economic Association, 20(1), 79–115.
Friedman, D. D. (2001). Law’s order: What economics has to do with law and why it matters. Princeton University Press.
Guerra, A., Maraki, M., Massenot, B., & Thöni, C. (2023). Deterrence, settlement, and litigation under adversarial versus inquisitorial systems. Public Choice, pp. 1–26.
Heckman, J. J. (2010). Building bridges between structural and program evaluation approaches to evaluating policy. Journal of Economic Literature, 48(2), 356–398.
Hipp, J. R., & Williams, S. A. (2020). Advances in spatial criminology: The spatial scale of crime. Annual Review of Criminology, 3(1), 75–95.
Hong, F., & Zhang, D. (2023). Bureaucratic beliefs and law enforcement. Public Choice, pp. 1–23.
Huntington-Klein, N., Arenas, A., Beam, E., Bertoni, M., Bloem, J. R., Burli, P., Chen, N., Grieco, P., Ekpe, G., Pugatch, T., Saavedra, M., & Stopnitzky, Y. (2021). The influence of hidden researcher decisions in applied microeconomics. Economic Inquiry, 59(3), 944–960.
Imbens, G. W. (2021). Statistical significance, p-values, and the reporting of uncertainty. Journal of Economic Perspectives, 35(3), 157–174.
Jardim, E. S., Long, M. C., Plotnick, R., van Inwegen, E., Vigdor, J. L., & Wething, H. (2022). Boundary discontinuity methods and policy spillovers. Working Paper 30075, National Bureau of Economic Research.
Keane, M. P. (2010). Structural vs. atheoretic approaches to econometrics. Journal of Econometrics, 156(1), 3–20. https://doi.org/10.1016/j.jeconom.2009.09.003
Klein, L. R. (1960). Single equation vs. equation system methods of estimation in econometrics. Econometrica, 28(4), 866–871.
Koopmans, T. C. (1947). Measurement without theory. The Review of Economics and Statistics, 29(3), 161–172.
Leeson, P. T. (2020). Economics is not statistics (and vice versa). Journal of Institutional Economics, 16(4), 423–425.
Levitt, S. D., & List, J. A. (2009). Field experiments in economics: The past, the present, and the future. European Economic Review, 53(1), 1–18.
McCloskey, D. N., & Ziliak, S. T. (1996). The standard error of regressions. Journal of Economic Literature, 34(1), 97–114.
Mearsheimer, J. J., & Walt, S. M. (2013). Leaving theory behind: Why simplistic hypothesis testing is bad for international relations. European Journal of International Relations, 19(3), 427–457.
Michaeli, M., & Zohar, Y. (2023). The vanishing trial: A dynamic model with adaptive agents. Public Choice, pp. 1–22.
Muthukrishna, M., & Henrich, J. (2019). A problem in theory. Nature Human Behaviour, 3, 221–229.
Oakley, A. (2000). A historical perspective on the use of randomized trials in social science settings. Crime & Delinquency, 46(3), 315–329.
Ralston, J., Aimone, J., Rentschler, L., & North, C. (2023). Prosecutor plea bargaining and conviction rate structure: Evidence from an experiment. Public Choice, pp. 1–27.
Roe, B. E., & Just, D. R. (2009). Internal and external validity in economics research: Tradeoffs between experiments, field experiments, natural experiments, and field data. American Journal of Agricultural Economics, 91(5), 1266–1271.
Roth, A. E. (1991). Game theory as a part of empirical economics. The Economic Journal, 101(404), 107–114.
Rust, J. (2010). Comments on: “Structural vs. atheoretic approaches to econometrics” by Michael Keane. Journal of Econometrics, 156(1), 21–24.
Schrodt, P. A. (2014). Seven deadly sins of contemporary quantitative political analysis. Journal of Peace Research, 51(2), 287–300.
Simonsohn, U. (2022). Interactiongate: Testing and probing interactions with linear models in the real (nonlinear) world is scandalously invalid. Working paper.
Todd, P. E., & Wolpin, K. I. (2023). The best of both worlds: Combining RCTs with structural modeling. Journal of Economic Literature, 61(1), 41–85.
Tukey, J. W. (1980). We need both exploratory and confirmatory. The American Statistician, 34(1), 23–25.
Hayek, F. A. (1943). Scientism and the study of society. Part II. Economica, 10(37), 34–63.
Vivalt, E. (2020). How much can we generalize from impact evaluations? Journal of the European Economic Association, 18(6), 3045–3089.
Ward, M. D., Greenhill, B. D., & Bakke, K. M. (2010). The perils of policy by p-value: Predicting civil conflicts. Journal of Peace Research, 47(4), 363–375.
Wasserstein, R. L., Schirm, A. L., & Lazar, N. A. (2019). Moving to a world beyond “p < 0.05”. The American Statistician, 73(sup1), 1–19.
Weill, J. A., Stigler, M., Deschenes, O., & Springborn, M. R. (2021). Researchers’ degrees-of-flexibility and the credibility of difference-in-differences estimates: Evidence from the pandemic policy evaluations. Working Paper 29550, National Bureau of Economic Research.
Wilson, E. O. (2014). Consilience: The unity of knowledge. Knopf Doubleday Publishing Group.
Young, A. (2022). Consistency without inference: Instrumental variables in practical application. European Economic Review, 147, 104112.
Adamson, J., Rentschler, L. Criminal justice from a public choice perspective: an introduction to the special issue. Public Choice 196, 223–227 (2023). https://doi.org/10.1007/s11127-023-01089-2