Ethical and Political Issues in Data Mining, Especially Unfairness in Automated Decision Making
03 Aug 2024 21:50
Attention conservation notice: I'm not an active researcher in this area, but many of my friends and colleagues are, I sit on thesis committees, etc., and so my recommended readings below are, no doubt, more CMU-centric than an impartial survey of the literature would warrant. I wouldn't bother to mention this, except that some readers appear to be confused between "a personal notebook I put online in case others might find it useful" and "a reference work which makes claims to authority".
I won't be explaining data mining here. But I will say that I think "ethical and political issues in data mining" is a lot more accurate and reasonable name for what people are really worried about than "algorithmic fairness". I hold this opinion partly because I don't think algorithms are really at the core of a lot of the justified and widely-shared concerns. The formal notions of "algorithmic fairness" could also be applied to human decision-makers. (It would be very interesting to see whether, say, unaided human loan officers are closer to, or further from, false positive parity than credit-scoring algorithms; maybe someone's done this experiment.) Indeed, if those formal notions are good ones, we probably ought to be applying them to human decision-makers. That doesn't mean those who design automated decision-making systems shouldn't pay attention, but it does tell me that the real issue here isn't the use of algorithms.
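To make "false positive parity" concrete, here is a minimal sketch, entirely mine and with made-up numbers, of how one would check it from a table of decisions and outcomes; nothing in the arithmetic cares whether the decisions came from a model or from unaided loan officers.

```python
import pandas as pd

# Purely illustrative data: one row per applicant.
# decision = 1 means the loan was denied; outcome = 1 means the applicant
# would in fact have defaulted; group is the protected attribute.
df = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "decision": [1, 0, 0, 1, 1, 1, 0, 0],
    "outcome":  [1, 0, 0, 0, 1, 0, 0, 0],
})

# False positive rate by group: among applicants who would *not* have
# defaulted, what fraction were denied anyway?
fpr_by_group = df[df["outcome"] == 0].groupby("group")["decision"].mean()
print(fpr_by_group)
# "False positive parity" asks that these rates be (roughly) equal across groups,
# whoever, or whatever, made the calls.
```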
Or, again: it's (probably!) a fact that in contemporary English, the word "doctor" is more likely to refer to a man than to a woman, and vice versa for "nurse". If a text-mining model picks up this actual correlation and uses it (for instance in an analogy-completion task), it is accurately reflecting facts about how English is used in our society. It seems obvious to me that those facts are explained by untold generations of sexism. Whether and when we want language models to exploit such facts would seem to depend on the uses we're putting those algorithms to, as well as on contested ethical and political choices about what kind of world we'd like to see. (There are, after all, plenty of people who approve of a world where doctors are more likely to be men and nurses women.) It would also seem to require sociological knowledge, or at least theories, about how modifying the output of text-mining systems might, or might not, contribute to changing society. If the combination of political and ethical contestation with reliance on necessarily-speculative theories about the remote, cumulative impacts of technical choices on social structure seems like a recipe for disputes, well, you wouldn't be wrong. I wouldn't even blame you for wanting to ignore the issue and get back to making the damn things work. But the issue will not ignore you.
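(For concreteness, the analogy-completion task mentioned above looks something like the following sketch; the particular pre-trained embedding is just one that happens to be available through gensim's downloader, and whatever completions come back reflect usage in that training corpus, which is exactly the point about inherited correlations.)

```python
import gensim.downloader as api

# A pre-trained word embedding, used purely for illustration.
vectors = api.load("glove-wiki-gigaword-100")

# The classic analogy query: doctor - man + woman = ?
print(vectors.most_similar(positive=["doctor", "woman"], negative=["man"], topn=5))
```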
(I also dislike talk of "regulating artificial intelligence", not least because artificial intelligence, in the sense people like to think of it, "is the technology of the future, and always will be".)
Do my homework for me: A lot of the work in this area is done by people who more or less presuppose secular, egalitarian values, with some variation in how they feel about liberalism vs. socialism. As a secular, egalitarian liberal socialist, I share these values, but this is also a very narrow range of opinion. Is there no serious work being done by conservatives? Is there no work on algorithmic fairness informed by Catholic social teaching, or by Islamic law? No neo-Confucians? If anyone could send me pointers, I'd appreciate it.
A straightforward, if labor-intensive, project in the sociology of science / science-and-technology-studies: Go over the first, say, five years of conference proceedings in algorithmic fairness. Grab the CVs of all the contributors. How many of them had received any formal training in ethics, political theory, or even any social science? (I have a guess!) Now apply Abbott on the "system of professions", and in particular on claims of jurisdiction by would-be professions. (To be clear, I have no formal training in any of those areas.)
A personal point of incredulity: Randomized decision-making algorithms as a way of achieving fairness. I understand the technical reasons why people write papers about these, but I just can't swallow it. The line I used to use at thesis defenses was to imagine that your brother's case is being decided by such a procedure, and the judge / loan officer / etc. is rolling the dice right in front of you --- would you really feel your brother had been fairly treated? (I no longer use this line at thesis defenses because (a) everyone at CMU's heard it too many times, and (b) it's not fair to take this out on graduate students who are just going along with the literature.)
- See also:
- Clinical and Actuarial Judgment Compared
- Law
- Measurement, Especially in the Social and Behavioral Sciences
- Recommendation Engines
- Recommended, big picture (links on book titles point to my reviews):
- Sam Corbett-Davies and Sharad Goel, "The Measure and Mismeasure of Fairness: A Critical Review of Fair Machine Learning", arxiv:1808.00023
- Henry Farrell and Marion Fourcade, "The Moral Economy of High-Tech Modernism", Daedalus Winter 2023
- Bernard E. Harcourt, Against Prediction: Profiling, Policing, and Punishing in an Actuarial Age [Precis as a 43 pp. PDF working paper]
- Michael Kearns and Aaron Roth, The Ethical Algorithm: The Science of Socially Aware Algorithm Design
- Cathy O'Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy
- Bruce Schneier, Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World
- Recommended, close-ups (very misc. for such a huge topic):
- danah boyd and Kate Crawford, "Six Provocations for Big Data" (2011) [ssrn/1926431]
- Josh Dzieza, "AI Is a Lot of Work", The Verge 20 June 2023 [Comments]
- Henry Farrell, Abraham Newman and Jeremy Wallace, "Spirals of Delusion: How AI Distorts Decision-Making and Makes Dictators More Dangerous", Foreign Affairs September-October 2022
- Alison Gopnik, "What AI Still Doesn't Know How to Do; Artificial intelligence programs that learn to write and speak can sound almost human --- but they can't think creatively like a small child can", Wall Street Journal 15 July 2022 [Comments]
- Cynthia Rudin, "Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead", arxiv:1811.10154
- Zeynep Tufekci, "Engineering the Public: Big Data, Surveillance, and Computational Politics", First Monday 19:7 (2014)
- Recommended, close-ups, algorithmic fairness etc.:
- Ali Alkhatib and Michael Bernstein, "Street-Level Algorithms: A Theory at the Gaps Between Policy and Decisions", paper 530 in CHI Conference on Human Factors in Computing Systems Proceedings [CHI 2019] [PDF reprint via the Stanford HCI group. My comments.]
- Richard A. Berk and Ayya A. Elzarka, "Almost Politically Acceptable Criminal Justice Risk Assessment", Criminology and Public Policy 19 (2020): 1231--1257, arxiv:1910.11410
- Alexandra Chouldechova, "Fair prediction with disparate impact: A study of bias in recidivism prediction instruments", arxiv:1610.07524
- A. Feder Cooper, Katherine Lee, Madiha Zahrah Choksi, Solon Barocas, Christopher De Sa, James Grimmelmann, Jon Kleinberg, Siddhartha Sen, Baobao Zhang, "Is My Prediction Arbitrary? The Confounding Effects of Variance in Fair Classification Benchmarks", arxiv:2301.11562
- Amanda Coston, Principled Machine Learning for Societally Consequential Decision Making [Ph.D. thesis, Heinz College of Public Policy and Machine Learning Department, Carnegie Mellon University, 2023]
- Amanda Coston, Neel Guha, Derek Ouyang, Lisa Lu, Alexandra Chouldechova, Daniel E. Ho, "Leveraging Administrative Data for Bias Audits: Assessing Disparate Coverage with Mobility Data for COVID-19 Policy", arxiv:2011.07194 [I am convinced by their points about differential measurement error across groups, but equally struck by this: "the estimates at the individual polling place location level are quite noisy: root mean squared error is 1375 voters". This seems excessively imprecise to base any decisions on!]
- Amanda Coston, Alan Mishler, Edward H. Kennedy, Alexandra Chouldechova, "Counterfactual Risk Assessments, Evaluation, and Fairness", FAT* '20, arxiv:1909.00066
- Amanda Coston, Ashesh Rambachan, Alexandra Chouldechova, "Characterizing Fairness Over the Set of Good Models Under Selective Labels", arxiv:2101.00352
- Kate Crawford, "The Hidden Biases in Big Data", Harvard Business Review 1 April 2013 [but serious, despite the date!]
- Simon DeDeo, "Wrong side of the tracks: Big Data and Protected Categories", pp. 31--42 in Cassidy R. Sugimoto, Hamid R. Ekbia and Michael Mattioli, Big Data Is Not a Monolith (MIT Press, 2016), arxiv:1412.4643 [This is the idea that, when Simon and I were batting it around, we called "prediction without racism". Basically: you don't want to be racist (or sexist, etc.), so obviously you don't directly base your predictions / decisions on race. But there are lots of other innocuous-seeming features which might be relevant to what you're trying to predict, or what course of action you should recommend, but are also really correlated, especially in bulk, with race. (For instance, your race and sex can be predicted with reasonable accuracy from the websites you visit.) So how can we use the features without just slipping in the racism through the back door? Simon's very ingenious solution was to use information theory to find the distribution which is closest to the real distribution of the data, but where the variable we're trying to predict is independent of the protected variable(s). This sets up an optimization problem which can actually be solved in closed form, and basically tells you how much you have to re-weight each data point in your model fitting. It's a really clever idea, and I wish it was more widely used. (A rough sketch of the reweighting follows this list.)]
- Julia Dressel and Hany Farid, "The accuracy, fairness, and limits of predicting recidivism", Science Advances 4 (2018): eaao5580 [Demonstrating that you can reproduce the error rates of the proprietary COMPAS score (at least on one data set...) using a logistic regression on age and number of priors. This doesn't surprise me, because (before reading this paper!) I'd set that as an exercise in my undergraduate data mining class (a sketch of the exercise follows this list). The Kids also convinced me that very small classification trees, using those two features, do only very slightly worse. (The optimal tree for predicting violent recidivism has just four leaves.) Now, this doesn't necessarily mean that algorithmic risk prediction tools are a bad idea --- we don't have error rates for judges! --- but it does blow up the justification for using complex, proprietary models. (How proprietary models can possibly have any place in a supposedly adversarial legal system, I cannot understand.)]
- Michael Feldman, Sorelle Friedler, John Moeller, Carlos Scheidegger and Suresh Venkatasubramanian, "Certifying and removing disparate impact", arxiv:1412.3756
- Ira Globus-Harris, Michael Kearns, Aaron Roth, "An Algorithmic Framework for Bias Bounties", arxiv:2201.10408 [The basic idea here is (appropriately!) telegraphed by the abstract. If we're using a model \( f \) to make predictions, and someone or something can point to a (measurable) group of cases \( g \) where another model \( h \) does better by the agreed-upon loss function, switch to a new model, which follows \( h \) on group \( g \), and otherwise still follows \( f \). There is more to it than that, because groups might overlap and so they introduce some machinery to try to keep overlapping patches from interfering with each other, and there are interesting learning-theoretic aspects to making sure that we're not data-mining in the bad sense, i.e., over-fitting accidents of the sample data. But this idea --- when we find a group where another model does better, use that model instead on that group --- is the core. (A schematic version of this update follows this list.) It's a very good paper and I will certainly teach it going forward, but there are some limitations which I wish they'd addressed. (1) At the basic level of procedural fairness (i.e., broadly-liberal ideas of justice), this is a recipe for treating different groups according to different criteria. This is the literal definition of "privilege" (or at least its etymology); it might nonetheless be ethically acceptable for liberals, but there's a tension there which needs at least to be explored. (2) Relatedly, I am not a lawyer, but because this constructs a patchwork of different rules and criteria for different groups, it'd seem very easy to attack this under American anti-discrimination law: these are algorithms for coding disparate treatment into the ultimate model! (3) The algorithms only care about reducing expected loss / "risk" conditional on group membership. They do not try to equalize anything. It is entirely possible that the minimum possible conditional risk for different demographic groups is just different. But this would mean that the predictor which minimizes the conditional risk for every group might violate demographic parity, error rate parity, etc., indeed all the usual notions of algorithmic fairness. So we seem to be back to the trade-offs which the paper sought to escape at its beginning: "Be as accurate as possible for everyone" is, at least potentially, in tension with "Be equally accurate for everyone". My impulse, at least at the time I write this, is to say that it isn't really fair to use a model which is deliberately less accurate than possible for some groups, just because it's equally accurate for all groups. (That is, my gut sides with this paper.) But the contrary position, that equal accuracy across groups (in some form) is what justice and/or political prudence demands, isn't self-evidently absurd. At the least there needs to be an ethical and/or political argument here. (4) Since I happened to re-read chapter 2 of Kearns and Roth's The Ethical Algorithm, prior to teaching it, right after reading this paper, I'd point out that my (1)--(3) are basically all issues they raise in that chapter, which makes it a bit weirder that they're not handled here...]
- Sara Hooker, "Moving beyond 'algorithmic bias is a data problem'", Patterns 2 (2021): 100241 [Commentary]
- Abigail Z. Jacobs, Hanna Wallach, "Measurement and Fairness", arxiv:1912.05511
- Wei-Yin Ko, Daniel D'souza, Karina Nguyen, Randall Balestriero, Sara Hooker, "FAIR-Ensemble: When Fairness Naturally Emerges From Deep Ensembling", arxiv:2303.00586
- Momin M. Malik, "A Hierarchy of Limitations in Machine Learning", arxiv:2002.05193
- Ninareh Mehrabi, Fred Morstatter, Nripsuta Saxena, Kristina Lerman, Aram Galstyan, "A Survey on Bias and Fairness in Machine Learning", arxiv:1908.09635
- Alan Mishler, Auditing and Achieving Counterfactual Fairness [Ph.D. thesis, CMU Statistics Dept., 2021]
- Arvind Narayanan, "Translation tutorial: 21 fairness definitions and their politics" [PDF]
- Geoff Pleiss, Manish Raghavan, Felix Wu, Jon Kleinberg, Kilian Q. Weinberger, "On Fairness and Calibration", arxiv:1709.02012
- Sonja B. Starr, "Evidence-Based Sentencing and the Scientific Rationalization of Discrimination", Stanford Law Review 66 (2014) 803--872 [The strongest part of this, to my mind, is the causal-inference critique: predicting the risk that someone will re-offend within \( k \) years, under current conditions, is not at all the same as predicting the risk of their committing another crime as a function of the sentence they receive. I am also very sympathetic to the points about the very modest predictive power of the existing algorithms, the possibility of great unmeasured heterogeneity within groups, and the ethical dubiousness of punishing someone more because of demographic groups they belong to. About the legal-constitutional issues I'm not fit to comment. One point to which Starr doesn't, I think, give enough weight is that even if risk-prediction formulas aren't any fairer or more accurate than what judges do now, they are however more explicit and public, and so both more subject to democratic control and to improvement over time. (Comment written in 2014.)]
- Megha Srivastava, Hoda Heidari, Andreas Krause, "Mathematical Notions vs. Human Perception of Fairness: A Descriptive Approach to Fairness for Machine Learning", arxiv:1902.04783 [The headline is that the simplest possible notion of fairness, namely "demographic parity" (equal rates of positive decisions across groups) best captures lay people's notions of "fairness".]
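Three sketches promised in the annotations above; all of this is my own illustrative code, under assumptions flagged below, not anything taken from the papers.

First, the naive version of the reweighting idea in DeDeo's paper, as I've paraphrased it: compare the empirical joint frequency of the target and the protected attribute to the product of their marginals, and weight each data point accordingly. (The proper derivation, and the information-theoretic justification, are in the paper; the function and its details here are mine.)

```python
import pandas as pd

def independence_weights(df, target, protected):
    """Weight each row by p(target) * p(protected) / p(target, protected),
    estimated from empirical frequencies; under these weights the weighted
    sample has the target variable independent of the protected attribute."""
    p_joint = df.groupby([target, protected]).size() / len(df)
    p_t = df[target].value_counts(normalize=True)
    p_a = df[protected].value_counts(normalize=True)
    return df.apply(
        lambda row: p_t[row[target]] * p_a[row[protected]]
                    / p_joint[(row[target], row[protected])],
        axis=1,
    )

# The weights could then go into model fitting, e.g. as the sample_weight
# argument to an sklearn estimator's fit() method.
```

Second, the Dressel-and-Farid result (and my class exercise) is easy to try if you have ProPublica's COMPAS release to hand; a minimal sketch, assuming the file and column names I remember from that release:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

compas = pd.read_csv("compas-scores-two-years.csv")  # ProPublica's file, if memory serves
X = compas[["age", "priors_count"]]                  # just two features
y = compas["two_year_recid"]

# Cross-validated accuracy; Dressel and Farid report that this sort of
# two-feature model matches COMPAS's accuracy of roughly two-thirds.
print(cross_val_score(LogisticRegression(), X, y, cv=10).mean())
```

Third, the core update in the bias-bounties paper, stripped of the machinery for overlapping groups and for guarding against over-fitting:

```python
def patch(f, g, h):
    """Given an incumbent predictor f, a group-membership test g (a boolean
    function of a case), and a challenger h which has been shown to do better
    than f on group g, return the patched predictor: h on g, f elsewhere.
    The actual algorithm only accepts the patch after verifying the
    improvement on held-out data."""
    def patched(x):
        return h(x) if g(x) else f(x)
    return patched
```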
- Recommended, now-historical close-ups:
- Kling, Scherson and Allen, "Parallel Computing and Information Capitalism," in Metropolis and Rota (eds.), A New Era in Computation (1992) [A batch of UC Irvine comp. sci. professors who write like sociologists. "'Information capitalism' refers to forms of organization in which data-intensive techniques and computerization are key strategic resources for corporate production."]
- Erik Larson, The Naked Consumer: How Our Private Lives Become Public Commodities
- Modesty forbids me to recommend:
- Henry Farrell and CRS, "Artificial Intelligence is a Familiar-Looking Monster", The Economist 21 June 2023 [Commentary]
- The lecture notes on algorithmic fairness from my data mining class (latest iteration)
- To read:
- Daron Acemoglu et al., Redesigning AI [magazine forum version]
- Ajay Agrawal, Joshua Gans and Avi Goldfarb, Prediction Machines: The Simple Economics of Artificial Intelligence
- Ashrya Agrawal, Florian Pfisterer, Bernd Bischl, Francois Buet-Golfouse, Srijan Sood, Jiahao Chen, Sameena Shah, Sebastian Vollmer, "Debiasing classifiers: is reality at variance with expectation?", arxiv:2011.02407
- Wael Alghamdi, Hsiang Hsu, Haewon Jeong, Hao Wang, P. Winston Michalak, Shahab Asoodeh, Flavio P. Calmon, "Beyond Adult and COMPAS: Fairness in Multi-Class Prediction", arxiv:2206.07801
- Ifeoma Ajunwa, "The Paradox of Automation as Anti-Bias Intervention", Cardozo Law Review 41 (2020): 1671 [ssrn/2746078]
- Kristen M. Altenburger and Daniel E. Ho, "When Algorithms Import Private Bias into Public Enforcement: The Promise and Limitations of Statistical Debiasing Solutions", Journal of Institutional and Theoretical Economics 175 (2019): 98--122
- Eugene Bagdasaryan, Vitaly Shmatikov, "Differential Privacy Has Disparate Impact on Model Accuracy", arxiv:1905.12101 [This seems extremely intuitive to me from the definition of differential privacy, but I've not read beyond the abstract --- and anyway who trusts my intuition?]
- Matias Barenstein, "ProPublica's COMPAS Data Revisited", arxiv:1906.04711
- Solon Barocas and Andrew D. Selbst, "Big Data's Disparate Impact" (2014), ssrn/2477899
- Solon Barocas, Moritz Hardt, Arvind Narayanan, Fairness in Machine Learning: Limitations and Opportunities ["Incomplete working draft"]
- Yahav Bechavod, Katrina Ligett, "Penalizing Unfairness in Binary Classification", arxiv:1707.00044
- Fabian Beigang, "Reconciling Algorithmic Fairness Criteria", Philosophy and Public Affairs 51 (2023): 166--190 [Preliminary comments, after a first reading]
- Omer Ben-Porat, Fedor Sandomirskiy, Moshe Tennenholtz, "Protecting the Protected Group: Circumventing Harmful Fairness", arxiv:1905.10546
- Richard A. Berk, "Artificial Intelligence, Predictive Policing, and Risk Assessment for Law Enforcement", Annual Review of Criminology 4 (2021): 209--237
- Richard Berk, Hoda Heidari, Shahin Jabbari, Michael Kearns, Aaron Roth, "Fairness in Criminal Justice Risk Assessments: The State of the Art", Sociological Methods and Research 50 (2021): 3--44, arxiv:1703.09207
- Richard A. Berk, Arun Kumar Kuchibhotla, "Improving Fairness in Criminal Justice Algorithmic Risk Assessments Using Conformal Prediction Sets", arxiv:2008.11664
- Anna Bernasek and D. T. Morgan, All You Can Pay: How Companies Use Our Data to Empty Our Wallets
- Catherine Besteman (ed.), Life by Algorithms: How Roboprocesses Are Remaking Our World
- Reuben Binns, "Fairness in Machine Learning: Lessons from Political Philosophy", arxiv:1712.03586
- Emily Black, John Logan Koepke, Pauline Kim, Solon Barocas and Hingwei Hsu, "Less Discriminatory Algorithms", ssrn/4590481 (2023)
- Sarah Brayne
- "Big Data Surveillance: The Case of Policing", American Sociological Review 82 (2017): 977--1008
- "The Criminal Law and Law Enforcement Implications of Big Data", Annual Review of Law and Social Science 14 (2018): 293--308
- Predict and Surveil: Data, Discretion, and the Future of Policing
- Tapabrata Chakraborti, Arijit Patra, Alison Noble, "Contrastive Fairness in Machine Learning", arxiv:1905.07360
- Fotini Christia, Jessy Xinyi Han, Andrew Miller, Devavrat Shah, S. Craig Watkins, Christopher Winship, "A Causal Framework to Evaluate Racial Bias in Law Enforcement Systems", arxiv:2402.14959
- Kate Crawford, Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence
- Kate Crawford and Jason Schultz, "Big Data and Due Process: Toward a Framework to Redress Predictive Privacy Harms", Boston College Law Review 55:93 (2014), ssrn/2325784
- Elliot Creager, David Madras, Toniann Pitassi, Richard Zemel, "Causal Modeling for Fairness in Dynamical Systems", arxiv:1909.09141
- Brian d'Alessandro, Cathy O'Neil, Tom LaGatta, "Conscientious Classification: A Data Scientist's Guide to Discrimination-Aware Classification", Big Data 5 (2017): 120--134, arxiv:1907.09013
- Thomas Davidson, Debasmita Bhattacharya, Ingmar Weber, "Racial Bias in Hate Speech and Abusive Language Detection Datasets", arxiv:1905.12516
- Robyn M. Dawes, "The Ethics of Using or Not Using Statistical Prediction Rules in Psychological Practice and Related Consulting Activities", Philosophy of Science 69 (2002): S178--S184
- Samuel Deng, Achille Varzi, "Methodological Blind Spots in Machine Learning Fairness: Lessons from the Philosophy of Science and Computer Science", arxiv:1910.14210
- Hyungrok Do, Shinjini Nandi, Preston Putzel, Padhraic Smyth, Judy Zhong, "Joint Fairness Model with Applications to Risk Predictions for Under-represented Populations", arxiv:2105.04648
- Virginia Eubanks, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor
- Michael Feffer, Hoda Heidari, Zachary C. Lipton, "Moral Machine or Tyranny of the Majority?", arxiv:2305.17319
- Andrew Guthrie Ferguson, The Rise of Big Data Policing: Surveillance, Race and the Future of Law Enforcement
- Katherine B. Forrest, When Machines Can Be Judge, Jury, and Executioner: Justice in the Age of Artificial Intelligence [By a former federal judge; review by a current federal judge in the NYRB]
- Sorelle A. Friedler, Carlos Scheidegger, Suresh Venkatasubramanian, "The (Im)possibility of Fairness: Different Value Systems Require Different Mechanisms For Fair Decision Making", Communications of the ACM 64 (2021): 136--143
- Vivek Gupta, Pegah Nokhiz, Chitradeep Dutta Roy, Suresh Venkatasubramanian, "Equalizing Recourse across Groups", arxiv:1909.03166
- César A. Hidalgo, Diana Orghian, Jordi Albo Canals, Filipa de Almeida and Natalia Martin, How Humans Judge Machines [Because economists are the best people to do empirical moral psychology...]
- Torben Iversen and Philipp Rehm, Big Data and the Welfare State: How the Information Revolution Threatens Social Solidarity
- Amir-Hossein Karimi, Krikamol Muandet, Simon Kornblith, Bernhard Schölkopf, Been Kim, "On the Relationship Between Explanation and Prediction: A Causal View", arxiv:2212.06925 [To be clear, they mean the relations between methods for (supposedly) extracting an explanation for a model's decision, not between explaining and predicting phenomena outside the model.]
- Maximilian Kasy and Rediet Abebe, "Fairness, Equality, and Power in Algorithmic Decision-Making", FAccT 21 (2021): 576--586
- Rinat Khaziev, Bryce Casavant, Pearce Washabaugh, Amy A. Winecoff, Matthew Graham, "Recommendation or Discrimination?: Quantifying Distribution Parity in Information Retrieval Systems", arxiv:1909.06429
- Niki Kilbertus, Manuel Gomez-Rodriguez, Bernhard Schölkopf, Krikamol Muandet, Isabel Valera, "Fair Decisions Despite Imperfect Predictions", arxiv:1902.02979
- Niki Kilbertus, Mateo Rojas-Carulla, Giambattista Parascandolo, Moritz Hardt, Dominik Janzing, Bernhard Schölkopf, "Avoiding Discrimination through Causal Reasoning", NIPS 2017 pp. 656--666, arxiv:1706.02744
- Barbara Kiviat, "Which Data Fairly Differentiate? American Views on the Use of Personal Data in Two Market Settings", Sociological Science 8 (2021): 2
- Jon Kleinberg, Jens Ludwig, Sendhil Mullainathan, and Cass R. Sunstein, "Discrimination in the Age of Algorithms", Journal of Legal Analysis 10 (2018): 113--174
- Benjamin Laufer
- "Feedback Effects in Repeat-Use Criminal Risk Assessments", arxiv:2011.14075
- "Compounding Injustice: History and Prediction in Carceral Decision-Making", arxiv:2005.13404
- Kun Lin, Nasim Sonboli, Bamshad Mobasher, Robin Burke, "Crank up the volume: preference bias amplification in collaborative recommendation", arxiv:1909.06362 [Surely there's a question here about the extent to which stated preferences are real preferences.]
- Lydia T. Liu, Sarah Dean, Esther Rolf, Max Simchowitz, Moritz Hardt, "Delayed Impact of Fair Machine Learning", arxiv:1803.04383
- Sandra G. Mayson, "Bias In, Bias Out", Yale Law Journal 128 (2019): 2122--2473, SSRN/3257004
- Colleen McCue, Data Mining and Predictive Analysis: Intelligence Gathering and Crime Analysis [To be shot after a fair trial, you should excuse the expression]
- Shira Mitchell, Eric Potash, Solon Barocas, Alexander D'Amour, and Kristian Lum, "Algorithmic Fairness: Choices, Assumptions, and Definitions", Annual Review of Statistics and Its Application 8 (2021): forthcoming
- Razieh Nabi, Daniel Malinsky and Ilya Shpitser, "Learning Optimal Fair Policies", ICML 2019, pp. 4674--4682
- Ziad Obermeyer, Brian Powers, Christine Vogeli and Sendhil Mullainathan, "Dissecting racial bias in an algorithm used to manage the health of populations", Science 366 (2019): 447--453
- David C. Parkes, Rakesh V. Vohra, et al., "Algorithmic and Economic Perspectives on Fairness", arxiv:1909.05282
- Frank Pasquale, The Black Box Society: The Secret Algorithms That Control Money and Information
- Stephen R. Pfohl, Agata Foryciarz, Nigam H. Shah, "An Empirical Characterization of Fair Machine Learning For Clinical Risk Prediction", Journal of Biomedical Informatics forthcoming (2020), arxiv:2007.10306
- Federica Russo, Eric Schliesser and Jean Wagemans, "Connecting ethics and epistemology of AI", AI and Society forthcoming (2023)
- Nian Si, Karthyek Murthy, Jose Blanchet, Viet Anh Nguyen, "Testing Group Fairness via Optimal Transport Projections", ICML 2021, arxiv:2106.01070
- Josh Simons, Algorithms for the People: Democracy in the Age of AI
- Tom Slee, "The Incompatible Incentives of Private Sector AI", ssrn/3363342 (2019)
- Thea Snow, "From satisficing to artificing: The evolution of administrative decision-making in the age of the algorithm", Data and Policy 3 (2021): E3
- Daniel J. Solove, "Data Mining and the Security-Liberty Debate", SSRN/990030
- David Spiegelhalter, "Should We Trust Algorithms?", Harvard Data Science Review 2:1 (2020)
- Megan T. Stevenson and Jennifer L. Doleac, "Algorithmic Risk Assessment in the Hands of Humans", ssrn/3489440
- Joseph Turow, Niche Envy: Marketing Discrimination in the Digital Age
- Samuel Yeom, Michael Carl Tschantz, "Discriminative but Not Discriminatory: A Comparison of Fairness Definitions under Different Worldviews", arxiv:1808.08619
- Tom Vanderbilt, You May Also Like: Taste in an Age of Endless Choice
- Salome Viljoen, "Democratic Data: A Relational Theory For Data Governance", ssrn/3727562 (2020)
- Davide Viviano, Jelena Bradic, "Fair Policy Targeting", arxiv:2005.12395
- Benjamin Wiggins, Calculating Race: Racial Discrimination in Risk Assessment [Some of the causal arrows in the book description sound obviously backwards to me, but...]
- Sarah Williams, Data Action: Using Data for Public Good
- Richard Zemel, Yu (Ledell) Wu, Kevin Swersky, Toniann Pitassi and Cynthia Dwork, "Learning Fair Representations", ICML 2013 [TODO: Compare carefully to DeDeo's paper.]
- Han Zhao, Amanda Coston, Tameem Adel, Geoffrey J. Gordon, "Conditional Learning of Fair Representations", arxiv:1910.07162
- Han Zhao and Geoffrey Gordon, "Inherent Tradeoffs in Learning Fair Representations", Journal of Machine Learning Research 23 (2022): 57
- To read, historical interest:
- Burnham, Rise of the Computer State
- To write, maybe:
- CRS, "One law for the Lion & the Ox is Oppression"