Artificial Intelligence
20 Nov 2024 14:56
Yet Another Inadequate Placeholder.
I am not best-pleased to see this phrase come back into vogue over the last few years, riding on a combination of absurd, apocalyptic myth-making and real but limited advances in the art of curve-fitting, a.k.a. "deep learning". (Said differently, I remember the last time Geoff Hinton's students were going to take over the world with multi-layer connectionist models.)
- See also:
- Artificial Life
- "Attention" and "Transformers" in Neural Network "Large Language Models"
- Cognitive Science
- Cybernetics
- Ethical and Political Issues in Data Mining, Especially Unfairness in Automated Decision Making
- Learning Theory, Computational and Statistical
- Machine Learning, Statistical Inference and Induction
- Multi-Agent Systems
- Neuroscience
- The Primal Scene of Artificial Intelligence
- Recommended, big picture:
- Valentino Braitenberg, Vehicles: Experiments in Synthetic Psychology [Review: Hume on Wheels, or, One Must Imagine Frankenstein Happy]
- Marvin Minsky, The Society of Mind
- Stuart Russell and Peter Norvig, Artificial Intelligence: A Modern Approach
- Herbert Simon, The Sciences of the Artificial
- Richard S. Sutton and Andrew G. Barto, Reinforcement Learning: An Introduction
- Recommended, close-ups:
- Margaret Boden, The Creative Mind: Myths and Mechanisms
- Maciej Ceglowski, "Superintelligence: The Idea That Eats Smart People" (29 October 2016)
- Daniel Dennett
- Brainstorms
- Brainchildren: Essays on Designing Minds [Review: An Attempt to Introduce the Experimental Mode of Reasoning into Moral Subjects]
- Marco Dorigo and Marco Colombetti, Robot Shaping: An Experiment in Behavior Engineering
- The Genealogy of ELIZA [I am not altogether joking when I say that you should not trust commentary on AI from anyone who has not both interacted with Eliza and stepped through the source code.]
- John Holland, Adaptation in Natural and Artificial Systems
- Gary Marcus
- "Deep Learning: A Critical Appraisal", arxiv:1801.00631
- "Innateness, AlphaZero, and Artificial Intelligence", arxiv:1801.05667
- Drew McDermott, "Artificial Intelligence Meets Natural Stupidity", ACM SIGART Bulletin 57 (April, 1976): 4--9
- Melanie Mitchell
- "Why AI is Harder Than We Think", arxiv:2104.12871
- "Do half of AI researchers believe that there's a 10% chance AI will kill us all?", 23 April 2023 [Shorter and less polite MM: No, because that's preposterous.]
- "How do we know how smart AI systems are?", Science 381 (2023): adj5957
- Adam Sobieszek and Tadeusz Price, "Playing Games with AIs: The Limits of GPT-3 and Similar Large Language Models", Minds and Machines 32 (2022): 341--364 [I have a bunch of quibbles and comments. (0) Gosh, there are a lot of citations to the journal editors. (1) They don't actually use item response theory! They just suggest that we can get information about whether a question-answerer is a human or a computer if different sources have different probabilities of an answer. Which is absolutely true and maybe worth mentioning in this context, but doesn't need IRT. (I say this as someone who thinks that a Rasch model for Turing tests would be awesome.) There is also the issue of why we should think there would be a probability of a given answer for either human beings or machines, stable over time. (2) I think their information-theory-inspired remarks about compression and generalization are a bit over-simplified. But I realize I have (or once had) expert over-sensitivity in this area, and I guess what they say is close enough to right for present purposes. (3) I think the point that if you are going to learn to predict unlabeled text, you are not going to be able to distinguish truth from falsehood, is quite right. (Even true texts are going to contain refutations, hypotheticals, etc.) (4) Similarly, the idea that statistical properties of symbol strings complicate the syntax/semantics distinction is one I remember being fairly widely understood in the late 1990s. (I'd argue that you can find a version of it in Zellig Harris's Language and Information.) Certainly in my own (then) area of research, if in a particular stochastic process the string "01" is followed by "1" 99% of the time, it's very hard to avoid saying things like "'01' usually implies '1' is coming" (cf. "black clouds approaching mean rain soon"). But the semantic field (if I may put it that way) is limited to more of the same process, not the rest of the world. (There is a small illustrative sketch of this point after this list.) (5) It is incredibly striking to me that there is absolutely nothing in this paper about the specific architectures of the neural networks involved, other than their ability to maintain some sort of long-range context. If these arguments were right, we should be able to do the same thing with a sufficiently powered-up implementation of any probabilistic text predictor --- maybe even anything which does universal source coding. If someone is looking for a nice (but expensive) project, then, implementing one of Paul Algoet's old universal prediction schemes at read-the-whole-Web scale suggests itself.]
- Judea Pearl, Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference [Graphical models, before Uncle Judea got seized by the importance of causality]
- Arjun Ramani and Zhengdong Wang, "Why transformative artificial intelligence is really, really hard to achieve", The Gradient 26 June 2023
- Roger Schank, Tell Me a Story: A New Look at Real and Artificial Memory [Comments]
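The following is my own minimal sketch, not anything from Sobieszek and Price, of the "01"-then-"1" point in my comments on their paper above: a purely statistical next-symbol predictor, fit to a toy binary process (assumed here, with made-up parameters), ends up encoding an implication-like rule whose "meaning" reaches no further than more of the same process.

```python
# Minimal, assumed toy example: a conditional-frequency predictor for a
# binary process in which "01" is followed by "1" 99% of the time.
import random
from collections import Counter, defaultdict

random.seed(1)

def generate(n):
    """Generate a binary string from the toy process described above."""
    s = "00"
    for _ in range(n):
        if s[-2:] == "01":
            s += "1" if random.random() < 0.99 else "0"
        else:
            s += random.choice("01")
    return s

# Estimate next-symbol frequencies conditional on the previous two symbols.
text = generate(100_000)
counts = defaultdict(Counter)
for i in range(2, len(text)):
    counts[text[i - 2:i]][text[i]] += 1

for context in sorted(counts):
    total = sum(counts[context].values())
    print(f"P(next = '1' | '{context}') = {counts[context]['1'] / total:.3f}")
# After "01" the estimated probability is ~0.99, i.e., the predictor has
# learned that "'01' usually implies '1' is coming" --- but only about this
# process, not about anything else in the world.
```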
- Recommended, historical:
- Allen Newell and Herbert A. Simon, "Current Developments in Complex Information Processing", RAND Corporation report P-850, 1 May 1956 [Thanks to Chris Wiggins for sharing a copy with me --- it's probably available online somewhere... --- and indeed "somewhere" proves to be CMU!]
- J. McCarthy, M. L. Minsky, N. Rochester and C. E. Shannon, "Proposal for the Dartmouth Summer Research Project on Artificial Intelligence" [1955; PDF scan via Ray Solomonoff (!)]
- Claude E. Shannon and John McCarthy (eds.), Automata Studies (1956)
- Herbert A. Simon
- Models of My Life
- The Shape of Automation for Men and Management = The New Science of Management Decisions
- Modesty forbids me to recommend:
- Henry Farrell and CRS, "Artificial Intelligence is a Familiar-Looking Monster", The Economist 21 June 2023 [Commentary]
- CRS, Revised and Extended Remarks at "The Rise of Intelligent Economies and the Work of the IMF" (2018)
- To read, history:
- Margaret Boden, Mind as Machine: A History of Cognitive Science
- Stephen Cave, Kanta Dihal, and Sarah Dillon (eds.), AI Narratives: A History of Imaginative Thinking about Intelligent Machines
- Hamid R. Ekbia, Artificial Dreams: The Quest for Non-Biological Intelligence
- Stefano Franchi and Guven Guzeldere (eds.), Mechanical Bodies, Computational Minds: Artificial Intelligence from Automata to Cyborgs
- Phil Husbands, Owen Holland and Michael Wheeler (eds.), The Mechanical Mind in History
- Pamela McCorduck, Machines Who Think: A Personal Inquiry into the History and Prospects of Artificial Intelligence
- Nils Nilsson, The Quest for Artificial Intelligence [Participant's history.]
- Alex Roland and Philip Shiman, Strategic Computing: DARPA and the Quest for Machine Intelligence, 1983--1993
- Pierre Steiner, "C. S. Peirce and Artificial Intelligence: Historical Heritage and (New) Theoretical Stakes", pp. 265--276 in Philosophy and Theory of Artificial Intelligence
- To read, popularizations [since this is related to my teaching]:
- Meredith Broussard, Artificial Unintelligence: How Computers Misunderstand the World
- Melanie Mitchell, Artificial Intelligence: A Guide for Thinking Humans
- Janelle Shane, "You Look Like a Thing and I Love You": How AI Works and Why It's Making the World a Weirder Place
- To read, philosophy and critique:
- Harry M. Collins, Artificial Experts: Social Knowledge and Intelligent Machines [Collins is a smart, learned author with some intellectual commitments I think are incredibly wrong-headed. This seems like a logical, yet absurd, outcome of those commitments. But he is smart and well-informed, so...]
- Kenneth M. Ford, Clark Glymour and Patrick J. Hayes (eds.), Android Epistemology
- David Runciman, The Handover: How We Gave Control of Our Lives to Corporations, States and AIs [Review by Gideon Lewis-Kraus in The New Yorker]
- Iris van Rooij, Olivia Guest, Federico G. Adolfi, Ronald de Haan, Antonina Kolokolova and Patricia Rich, "Reclaiming AI as a theoretical tool for cognitive science", psyarxiv/4cbuv (2023) [Superficial and no doubt unfair preliminary comments]
- To read, contributions:
- Philip Agre, Computation and Human Experience
- Philip Agre and Ian Horswill, "Lifeworld Analysis," Journal of Artificial Intelligence Research 6 (1997): 111--145
- James S. Albus and Alexander M. Meystel
- Engineering of Mind
- Intelligent Systems
- Léon Bottou, "From machine learning to machine reasoning", Machine Learning (2014): 133--149
- Justine Cassell, Joseph Sullivan, Scott Prevost, and Elizabeth Churchill (eds.), Embodied Conversational Agents
- John C. Collins, "On the Compatibility Between Physics and Intelligent Organisms," physics/0102024 [Claims to have a truly elegant refutation of Penrose]
- Craig DeLancey, Passionate Engines: What Emotions Reveal about the Mind and Artificial Intelligence
- Keith L. Downing, Intelligence Emerging: Adaptivity and Search in Evolving Neural Systems
- Elena Esposito, Artificial Communication: How Algorithms Produce Social Intelligence
- Dario Floreano and Claudio Mattiussi, Bio-Inspired Artificial Intelligence: Theories, Methods, and Technologies
- Vineet Gupta, Radha Jagadeesan and Prakash Panangaden, "Approximate reasoning for real-time probabilistic processes", cs.LO/0505063
- Joseph Y. Halpern and Riccardo Pucella, "Probabilistic Algorithmic Knowledge", cs.AI/0503018
- Marcus Hutter, "Towards a Universal Theory of Artificial Intelligence based on Algorithmic Probability and Sequential Decision Theory," cs.AI/0012011
- Ken Kansky, Tom Silver, David A. Mély, Mohamed Eldawy, Miguel Lázaro-Gredilla, Xinghua Lou, Nimrod Dorfman, Szymon Sidor, Scott Phoenix, Dileep George, "Schema Networks: Zero-shot Transfer with a Generative Causal Model of Intuitive Physics", arxiv:1706.04317
- Eliza Kosoy, David M. Chan, Adrian Liu, Jasmine Collins, Bryanna Kaufmann, Sandy Han Huang, Jessica B. Hamrick, John Canny, Nan Rosemary Ke, Alison Gopnik, "Towards Understanding How Machines Can Learn Causal Overhypotheses", arxiv:2206.08353
- Brenden M. Lake, Tomer D. Ullman, Joshua B. Tenenbaum, Samuel J. Gershman, "Building Machines That Learn and Think Like People", arxiv:1604.00289
- C. E. Larson and N. Van Cleemput, "Automated conjecturing III: Property-relations conjectures", Annals of Mathematics and Artificial Intelligence 81 (2017): 315--327
- Hector J. Levesque and Gerhard Lakemeyer, The Logic of Knowledge Bases
- R. Levinson, "A General Programming Language for Unified Planning and Control," Artificial Intelligence 76 (1995)
- James Robert Lloyd, David Duvenaud, Roger Grosse, Joshua B. Tenenbaum, Zoubin Ghahramani, "Automatic Construction and Natural-Language Description of Nonparametric Regression Models", arxiv:1402.4304
- Luo Zhaohui, Computation and Reasoning: A Type Theory for Computer Science
- Arthur B. Markman, Knowledge Representation
- Melanie Mitchell, "Abstraction and Analogy-Making in Artificial Intelligence", arxiv:2102.10717
- Arseny Moskvichev, Victor Vikram Odouard, Melanie Mitchell, "The ConceptARC Benchmark: Evaluating Understanding and Generalization in the ARC Domain", arxiv:2305.07141
- N. Muscettola, G. A. Dorais, C. Fry, R. Levinson and C. Plaunt, "A Unified Approach to Model-Based Planning and Execution," in Proceedings of the 6th International Conference on Intelligent Autonomous Systems
- Victor Vikram Odouard, Melanie Mitchell, "Evaluating Understanding on Conceptual Abstraction Benchmarks", arxiv:2206.14187
- Rafael Pérez y Pérez and Mike Sharples, An Introduction to Narrative Generators: How Computers Create Works of Fiction
- John Pollock
- David L. Poole and Alan K. Mackworth, Artificial Intelligence: Foundations of Computational Agents [2nd edition free online]
- Stuart Russell and Eric H. Wefald, Do the Right Thing: Studies in Limited Rationality
- Abulhair Saparov, Tom M. Mitchell, "Towards General Natural Language Understanding with Probabilistic Worldbuilding", arxiv:2105.02486
- Murray Shanahan, Melanie Mitchell, "Abstraction for Deep Reinforcement Learning", arxiv:2202.05839
- Daniel L. Silver, Tom M. Mitchell, "The Roles of Symbols in Neural-based AI: They are Not What You Think!", arxiv:2304.13626
- Tony Veale, Pablo Gervás and Rafael Pérez y Pérez (eds.), "Special Issue on Computational Creativity", Minds and Machines, Volume 20, Number 4 (2010)
- Michael P. Wellman, "Putting the agent in agent-based modeling", Autonomous Agents and Multi-Agent Systems 30 (2016): 1175--1189