Neural Nets, Connectionism, Perceptrons, etc.
17 Jul 2024 11:25
Old notes from c. 2000
I'm mostly interested in them as a means of machine learning or statistical inference. I am particularly interested in their role as models of dynamical systems (via recurrent nets, generally), and as models of transduction. I need to understand better how the analogy to spin glasses works, but then, I need to understand spin glasses better too.
The arguments that connectionist models are superior, for purposes of cognitive science, to more "symbolic" ones I find unconvincing. (Saying that they're more biologically realistic is like saying that cars are better models of animal locomotion than bicycles, because cars have four appendages in contact with the ground and not two.) This is not to say, of course, that some connectionist models of cognition aren't interesting, insightful and valid; but the same is true of many symbolic models, and there seems no compelling reason for abandoning the latter in favor of the former. (For more on this point, see Gary Marcus.)

Of course a cognitive model which cannot be implemented in real brains must be rejected; connecting neurobiology to cognition can hardly be too ardently desired. The point is that the elements in connectionist models called "neurons" bear only the sketchiest resemblance to the real thing, and neural nets are no more than caricatures of real neuronal circuits. Sometimes sketchy resemblances and caricatures are enough to help us learn, which is why Hebb, McCulloch and Neural Computation are important for both connectionism and neurobiology.
Reflections circa 2016
I first learned about neural networks as an undergraduate in the early 1990s, when, judging by the press, Geoff Hinton and his students were going to take over the world. (In "Introduction to Cognitive Science" at Berkeley, we trained a three-layer perceptron to classify fictional characters as "Sharks" or "Jets" using back-propagation; I had no idea what those labels meant because I'd never seen West Side Story.) I then lived through neural nets virtually disappearing from the proceedings of Neural Information Processing Systems, and felt myself very retro for including neural nets the first time I taught data mining in 2006. (I dropped them by 2009.) The recent revival, as "deep learning", is a bit weird for me, especially since none of the public rhetoric has changed. The most interesting thing scientifically about the new wave is that it's led to the discovery of adversarial examples, which I think we still don't understand very well at all. The most interesting thing meta-scientifically is how much the new wave of excitement about neural networks seems to be accompanied by forgetting earlier results, techniques, and baselines.
Reflections in early 2022
I would now actually say there are three scientifically interesting phenomena revealed by the current wave of interest in neural networks:
1. Adversarial examples (as revealed by Szegedy et al.), and the converse phenomenon of extremely high confidence classification of nonsense images that have no humanly-perceptible resemblance to the class (e.g., Nguyen et al.);
2. The ability to generalize to new instances by using humanly-irrelevant features like pixels at the edges of images (e.g., Carter et al.);
3. The ability to generalize to new instances despite having the capacity to memorize random training data (e.g., Zhang et al.).
It's not at all clear how specific any of these are to neural networks. (See Belkin's wonderful "Fit without Fear" for a status report on our progress in understanding my item (3) using other models, going back all the way to margin-based understandings of boosting.) It's also not clear how they inter-relate. But they are all clearly extremely important phenomena in machine learning which we do not yet understand, and really, really ought to understand.
I'd add that I still think there has been a remarkable regression in our field's understanding of its own past, and a forgetting of some hard-won lessons. When I hear people conflating "attention" in neural networks with attention in animals, I start muttering about "wishful mnemonics", and "did Drew McDermott live and fight in vain?" Similarly, when I hear graduate students, and even young professors, explaining that Mikolov et al. 2013 invented the idea of representing words by embedding them in a vector space, with proximity in the space tracking patterns of co-occurrence, as though latent semantic indexing (for instance) didn't date from the 1980s, I get kind of indignant. (Maybe the new embedding methods are better for your particular application than Good Old Fashioned Principal Components, or even than kernelized PCA, but argue that, dammit.)
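(For concreteness, and purely as an illustration of the Good Old Fashioned approach, not of anything in the papers cited here: a minimal sketch, with a made-up toy corpus, of getting word vectors from the leading principal components of a word-by-document count matrix, so that words with similar co-occurrence patterns land near each other. The corpus, the number of components, and the similarity helper are all placeholders.)

```python
# Minimal latent-semantic-indexing-style word embedding: truncated SVD of a
# centered word-by-document count matrix.  Toy corpus and dimensions made up.
import numpy as np

docs = ["the cat sat on the mat",
        "the dog sat on the log",
        "stocks fell as markets tumbled",
        "markets rose as stocks rallied"]
vocab = sorted({w for d in docs for w in d.split()})
counts = np.array([[d.split().count(w) for d in docs] for w in vocab],
                  dtype=float)                   # rows = words, columns = documents

X = counts - counts.mean(axis=1, keepdims=True)  # center each word's counts
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
embeddings = U[:, :k] * s[:k]                    # one k-dimensional vector per word

def similarity(w1, w2):
    """Cosine similarity between two word vectors."""
    a, b = embeddings[vocab.index(w1)], embeddings[vocab.index(w2)]
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print(similarity("cat", "dog"), similarity("cat", "stocks"))
```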
I am quite prepared to believe that part of my reaction here is sour grapes, since deep learning swept all before it right around the time I got tenure, and I am now too inflexible to really jump on the bandwagon.
That is my opinion; and it is further my opinion that you kids should get off of my lawn.
25 July 2022: In the unlikely event you want to read pages and pages of me on neural networks, try my lecture notes. (That URL might change in the future.)
26 September 2022: Things I should learn more about (an incomplete list):
- "Transformer" architectures, specifically looking at them as ways of doing
sequential probability estimation.
(Now [2023] with their own
irritated notebook.)
If someone were to throw large-language-model-sized computing resources at a Good Old Fashioned SCFG learner, and/or a , what kind of performance would one get on the usual benchmarks? Heck, what if one used a truly capacious implementation of Lempel-Ziv? (You'd have to back out the probabilities from the LZ code-lengths, but we know how to do that.) [See same notebook.]
On that note: could one build a GPT-esque program using Lempel-Ziv as the underlying model? Conversely, can we understand transformers as basically doing some sort of source coding? (The latter question is almost certainly addressed in the literature.) [Ditto.] - What's going on with diffusion models for images? (I know, that's really vague.)
While I am proposing brutally stupid experiments: Take a big labeled image data set and do latent semantic analysis on the labels, i.e., PCA on those bags-of-words, and do PCA on the images themselves. Learn a linear mapping from the word embedding space to the image embedding space. Now take a text query/prompt, map it into the word embedding space (i.e., project on to the word PCs), map that to the image space, and generate an image (i.e., take the appropriate linear combination of image PCs). The result will probably be a bit fuzzy but there should be ways to make it prettier... Of course, after that we kernelize the linear steps (in all possible combinations). - I do not understand how "self-supervised" learning is supposed to differ from what we always did in un-supervised learning with (e.g.) mixture models, or for that matter how statisticians have "trained" autoregressions since about 1900.
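Two sketches of the suggestions above. First, backing out next-symbol probabilities from Lempel-Ziv code lengths: the code below uses an idealized LZ78 code-length calculation written purely for illustration (not any standard library), plus the usual correspondence between prefix-free code lengths and probabilities, p(x) proportional to 2^(-L(x)), so that p(c | context) is proportional to 2^(-[L(context + c) - L(context)]).

```python
# Hedged sketch: next-symbol probabilities from (idealized) LZ78 code lengths.
import math

def lz78_code_length(seq, alphabet_size):
    """Idealized LZ78 code length of `seq`, in bits: each new phrase is coded
    as the index of its longest previously-seen prefix plus one new symbol."""
    phrases = {"": 0}          # phrase -> index; the empty phrase has index 0
    bits, w = 0.0, ""
    for ch in seq:
        if w + ch in phrases:
            w += ch            # keep extending the current phrase
        else:
            bits += math.log2(len(phrases)) + math.log2(alphabet_size)
            phrases[w + ch] = len(phrases)
            w = ""
    if w:                      # flag a final, already-seen phrase by its index
        bits += math.log2(len(phrases))
    return bits

def next_symbol_probs(context, alphabet):
    """p(c | context) proportional to 2^{-[L(context + c) - L(context)]}."""
    base = lz78_code_length(context, len(alphabet))
    weights = {c: 2.0 ** -(lz78_code_length(context + c, len(alphabet)) - base)
               for c in alphabet}
    total = sum(weights.values())
    return {c: w / total for c, w in weights.items()}

print(next_symbol_probs("aaaaaaaaaaaa", "ab"))  # puts more mass on continuing the run
```

Second, the PCA-to-PCA text-to-image experiment, sketched with scikit-learn. The paired `images` array (flattened pixels) and `captions` list, the numbers of components, and the use of ridge regression (a stand-in for plain least squares, just to keep the linear map stable) are all placeholders, not anything tested.

```python
# Hedged sketch of the PCA-to-PCA text-to-image experiment described above.
# `images` is assumed to be an (n, n_pixels) array, `captions` a list of n strings.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import Ridge

def fit_text_to_image(images, captions, k_text=100, k_img=300):
    vectorizer = CountVectorizer()                         # captions -> bags of words
    word_counts = vectorizer.fit_transform(captions).toarray()

    text_pca = PCA(n_components=k_text).fit(word_counts)   # word-embedding space
    img_pca = PCA(n_components=k_img).fit(images)          # image-embedding space
    Z_text = text_pca.transform(word_counts)
    Z_img = img_pca.transform(images)

    # Linear map from the text embedding to the image embedding.
    mapping = Ridge(alpha=1.0).fit(Z_text, Z_img)
    return vectorizer, text_pca, img_pca, mapping

def generate(prompt, vectorizer, text_pca, img_pca, mapping):
    z_text = text_pca.transform(vectorizer.transform([prompt]).toarray())
    z_img = mapping.predict(z_text)
    return img_pca.inverse_transform(z_img)                # linear combo of image PCs
```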
Additional stray thought, recorded 27 May 2023: The loss landscape for a neural network, in terms of its weights, is usually very non-convex, so it's surprising that gradient descent (with the gradients computed by backpropagation) works so well. This leads me, unoriginally, to suspect that there is a lot of hidden structure in the optimization problem. Some of this is presumably just symmetries (see the sketch below). But I do wonder if there isn't a way to reformulate it all as a convex program. (Though why gradient descent in the weights would then find it is a bit of a different question...) Alternatively, maybe none of this is true and optimization is just radically easier than we thought; in that case I'd eat some crow, and be willing to embrace a lot more central planning in the future socialist commonwealth.
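To make the remark about symmetries concrete, here is a small numpy check (dimensions and weights made up) that permuting the hidden units of a one-hidden-layer network, and permuting the rows and columns of the weight matrices to match, leaves the computed function, and hence the loss, unchanged; every weight configuration therefore sits in a whole orbit of equally good ones.

```python
# Permutation symmetry of a one-hidden-layer network: relabeling the hidden
# units (and permuting the weights to match) does not change the function.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 5, 7, 3           # made-up dimensions
W1 = rng.normal(size=(n_hidden, n_in))    # input -> hidden weights
b1 = rng.normal(size=n_hidden)            # hidden biases
W2 = rng.normal(size=(n_out, n_hidden))   # hidden -> output weights
x = rng.normal(size=n_in)                 # an arbitrary input

def net(W1, b1, W2, x):
    return W2 @ np.tanh(W1 @ x + b1)

perm = rng.permutation(n_hidden)          # relabel the hidden units
print(np.allclose(net(W1, b1, W2, x),
                  net(W1[perm], b1[perm], W2[:, perm], x)))   # True
```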
I presume there are scads of papers on all of these issues, so pointers are genuinely appreciated.
- See also:
- Adversarial Examples
- Artificial Intelligence
- Cognitive Science
- Interpolation in Statistical Learning
- Neuroscience
- Data Mining
- Symmetries of Neural Networks
- Uncertainty for Neural Networks, and Other Large Complicated Models
- Recommended (big picture):
- Maureen Caudill and Charles Butler, Naturally Intelligent Systems
- Patricia Churchland and Terrence Sejnowski, The Computational Brain
- Chris Eliasmith and Charles Anderson, Neural Engineering: Computation, Representation, and Dynamics in Neurobiological Systems
- Gary F. Marcus, The Algebraic Mind: Integrating Connectionism and Cognitive Science [On the limits of the connectionist approach to cognition, with special reference to language and grammar. Cf. later papers by Marcus below.]
- Brian Ripley, Pattern Recognition and Neural Networks
- Recommended (close-ups; very misc. and not nearly extensive enough):
- Larry Abbott and Terrence Sejnowski (eds.), Neural Codes and Distributed Representations
- Martin Anthony and Peter L. Bartlett, Neural Network Learning: Theoretical Foundations
- Michael A. Arbib (ed.), The Handbook of Brain Theory and Neural Networks
- Dana Ballard, An Introduction to Natural Computation [Review: Not Natural Enough]
- M. J. Barber, J. W. Clark and C. H. Anderson, "Neural Representation of Probabilistic Information", Neural Computation 15 (2003): 1843--1864, arxiv:cond-mat/0108425
- Suzanna Becker, "Unsupervised Learning Procedures for Neural Networks", International Journal of Neural Systems 2 (1991): 17--33
- Mikhail Belkin, "Fit without fear: Remarkable mathematical phenomena of deep learning through the prism of interpolation", arxiv:2105.14368
- Tolga Ergen, Mert Pilanci, "Global Optimality Beyond Two Layers: Training Deep ReLU Networks via Convex Programs", arxiv:2110.05518
- Adam Gaier, David Ha, "Weight Agnostic Neural Networks", arxiv:1906.04358
- Surya Ganguli, Dongsung Huh and Haim Sompolinsky, "Memory traces in dynamical systems", Proceedings of the National Academy of Sciences (USA) 105 (2008): 18970--18975
- Geoffrey Hinton and Terrence Sejnowski (eds.), Unsupervised Learning [A sort of "Neural Computation's Greatest Hits" compilation]
- Anders Krogh and Jesper Vedelsby, "Neural Network Ensembles, Cross Validation, and Active Learning", NIPS 7 (1994): 231--238
- Aaron Mishkin and Mert Pilanci, "Optimal Sets and Solution Paths of ReLU Networks" [PDF preprint via Prof. Pilanci]
- Andrew M. Saxe, Yamini Bansal, Joel Dapello, Madhu Advani, Artemy Kolchinsky, Brendan D. Tracey and David D. Cox, "On the information bottleneck theory of deep learning", Journal of Statistical Mechanics: Theory and Experiment (2019): 124020 [This looks like trouble for an idea I found very promising]
- Yifei Wang, Jonathan Lacotte, Mert Pilanci, "The Hidden Convex Optimization Landscape of Two-Layer ReLU Neural Networks: an Exact Characterization of the Optimal Solutions", arxiv:2006.05900
- Mathukumalli Vidyasagar, A Theory of Learning and Generalization: With Applications to Neural Networks and Control Systems [Extensive discussion of the application of statistical learning theory to neural networks, along with the purely computational difficulties. Mini-review]
- T. L. H. Watkin, A. Rau and M. Biehl, "The Statistical Mechanics of Learning a Rule," Reviews of Modern Physics 65 (1993): 499--556
- Achilleas Zapranis and Apostolos-Paul Refenes, Principles of Neural Model Identification, Selection and Adequacy, with Applications to Financial Econometrics [Their English is less than perfect, but they've got very sound ideas about all the important topics]
- Recommended, "your favorite deep neural network sucks":
- Wieland Brendel, Matthias Bethge, "Approximating CNNs with Bag-of-local-Features models works surprisingly well on ImageNet", International Conference on Learning Representations 2019
- Brandon Carter, Siddhartha Jain, Jonas Mueller, David Gifford, "Overinterpretation reveals image classification model pathologies", arxiv:2003.08907
- Maurizio Ferrari Dacrema, Paolo Cremonesi, Dietmar Jannach, "Are We Really Making Much Progress? A Worrying Analysis of Recent Neural Recommendation Approaches", arxiv:1907.06902
- Robert Geirhos, Patricia Rubisch, Claudio Michaelis, Matthias Bethge, Felix A. Wichmann, Wieland Brendel, "ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness", International Conference on Learning Representations 2019
- Micah Goldblum, Jonas Geiping, Avi Schwarzschild, Michael Moeller, Tom Goldstein, "Truth or Backpropaganda? An Empirical Investigation of Deep Learning Theory", arxiv:1910.00359
- Qian Huang, Horace He, Abhay Singh, Ser-Nam Lim, Austin R. Benson, "Combining Label Propagation and Simple Models Out-performs Graph Neural Networks", arxiv:2010.13993
- Andee Kaplan, Daniel Nordman, Stephen Vardeman, "On the instability and degeneracy of deep learning models", arxiv:1612.01159
- Gary Marcus
- "Deep Learning: A Critical Appraisal", arxiv:1801.00631
- "Innateness, AlphaZero, and Artificial Intelligence", arxiv:1801.05667
- Anh Nguyen, Jason Yosinski, Jeff Clune, "Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images", arxiv:1412.1897
- Filip Piekniewski, "Autopsy of a Deep Learning Paper"
- Adityanarayanan Radhakrishnan, Karren Yang, Mikhail Belkin, Caroline Uhler, "Memorization in Overparameterized Autoencoders", arxiv:1810.10333
- Ali Rahimi and Benjamin Recht
- "Reflections on Random Kitchen Sinks" argmin blog, 5 December 2017
- "An Addendum to Alchemy", argmin blog, 11 December 2017
- Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, Rob Fergus, "Intriguing properties of neural networks", arxiv:1312.6199
- Tan Zhi-Xuan, Nishad Gothoskar, Falk Pollok, Dan Gutfreund, Joshua B. Tenenbaum, Vikash K. Mansinghka, "Solving the Baby Intuitions Benchmark with a Hierarchically Bayesian Theory of Mind", arxiv:2208.02914
- Halbert White, "Learning in Artificial Neural Networks: A Statistical Perspective", Neural Computation 1 (1989): 425--464
- Chengxi Ye, Matthew Evanusa, Hua He, Anton Mitrokhin, Tom Goldstein, James A. Yorke, Cornelia Fermüller, Yiannis Aloimonos, "Network Deconvolution", arxiv:1905.11926 [This is just doing principal components analysis, as invented in 1900]
- John R. Zech, Marcus A. Badgeley, Manway Liu, Anthony B. Costa, Joseph J. Titano, Eric K. Oermann, "Confounding variables can degrade generalization performance of radiological deep learning models", PLoS Medicine 15 (2018): e1002683, arxiv:1807.00431 [Dr. Zech's self-exposition]
- Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, Oriol Vinyals, "Understanding deep learning (still) requires rethinking generalization", Communications of the ACM 64 (2021): 107--115 [previous version: arxiv:1611.03530]
- Recommended, historical:
- Michael A. Arbib, Brains, Machines and Mathematics [1964; a model of clarity in exposition and thought]
- Donald O. Hebb, The Organization of Behavior: A Neuropsychological Theory
- Warren S. McCulloch, Embodiments of Mind
- Modesty forbids me to recommend:
- CRS, "Notes on 'Intriguing Properties of Neural Networks', and two other papers (2014)" [On Szegedy et al., Nguyen et al., and Chalupka et al.]
- CRS, lecture notes on neural networks for CMU's 36-462, "methods of statistical learning" (formerly 36-462, "data mining", and before that, 36-350, "data mining"). Currently (2022), this is lecture 21, but that might change the next time I teach it.
- To read, history and philosophy:
- William Bechtel and Adele Abrahamsen, Connectionism and the Mind: Parallel Processing, Dynamics, and Evolution in Networks
- William Bechtel and Robert C. Richardson, Discovering Complexity: Decomposition and Localization as Strategies in Scientific Research
- Peter Gärdenfors, Conceptual Spaces: The Geometry of Thought
- Orit Halpern, "The Future Will Not Be Calculated: Neural Nets, Neoliberalism, and Reactionary Politics", Critical Inquiry 48 (2022): 334--359
- Andrea Loettgers, "Getting Abstract Mathematical Models in Touch with Nature", Science in Context 20 (2007): 97--124 [Intellectual history of the Hopfield model and its reception]
- To read, now-historical interest:
- Gail A. Carpenter and Stephen Grossberg (eds.), Pattern Recognition by Self-Organizing Neural Networks
- F. A. von Hayek, The Sensory Order
- Jim W. Kay and D. M. Titterington (eds.), Statistics and Neural Networks: Advances at the Interface
- McClelland and Rumelhart (eds.), Parallel Distributed Processing
- Marvin Minsky and Seymour Papert, Perceptrons
- Teuvo Kohonen, Self-Organization and Associative Memory
- To read, not otherwise classified:
- Rishabh Agarwal, Max Schwarzer, Pablo Samuel Castro, Aaron Courville, Marc G. Bellemare, "Deep Reinforcement Learning at the Edge of the Statistical Precipice", arxiv:2108.13264
- Daniel Amit, Modelling Brain Function
- Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W. Hoffman, David Pfau, Tom Schaul, Brendan Shillingford, Nando de Freitas, "Learning to learn by gradient descent by gradient descent", arxiv:1606.04474
- Marco Antonio Armenta, Pierre-Marc Jodoin, "The Representation Theory of Neural Networks", arxiv:2007.12213
- Pierre Baldi, Deep Learning in Science [2021; Baldi has been around for more than a moment, and so I am interested to see what he makes of recent developments...]
- V. M. Becerra, F. R. Garces, S. J. Nasuto and W. Holderbaum, "An Efficient Parameterization of Dynamic Neural Networks for Nonlinear System Identification", IEEE Transactions on Neural Networks 16 (2005): 983--988
- Randall Beer, Intelligence as Adaptive Behavior: An Experiment in Computational Neuroethology
- Hugues Berry and Mathias Quoy, "Structure and Dynamics of Random Recurrent Neural Networks", Adaptive Behavior 14 (2006): 129--137
- Dimitri P. Bertsekas and John N. Tsitsiklis, Neuro-Dynamic Programming
- Michael Biehl, Reimer Kühn, Ion-Olimpiu Stamatescu, "Learning structured data from unspecific reinforcement," cond-mat/0001405
- D. Bollé and P. Kozlowski, "On-line learning and generalisation in coupled perceptrons," cond-mat/0111493
- Christoph Bunzmann, Michael Biehl, and Robert Urbanczik, "Efficient training of multilayer perceptrons using principal component analysis", Physical Review E 72 (2005): 026117
- Nicholas Carlini, Jamie Hayes, Milad Nasr, Matthew Jagielski, Vikash Sehwag, Florian Tramèr, Borja Balle, Daphne Ippolito, Eric Wallace, "Extracting Training Data from Diffusion Models", arxiv:2301.13188
- Axel Cleeremans, Mechanisms of Implicit Learning: Connectionist Models of Sequence Processing
- Salvatore Cuomo, Vincenzo Schiano di Cola, Fabio Giampaolo, Gianluigi Rozza, Maziar Raissi, Francesco Piccialli, "Scientific Machine Learning through Physics-Informed Neural Networks: Where we are and What's next", arxiv:2201.05624
- M. C. P. de Souto, T. B. Ludermir and W. R. de Oliveira, "Equivalence Between RAM-Based Neural Networks and Probabilistic Automata", IEEE Transactions on Neural Networks 16 (2005): 996--999
- Aniket Didolkar, Kshitij Gupta, Anirudh Goyal, Nitesh B. Gundavarapu, Alex Lamb, Nan Rosemary Ke, Yoshua Bengio, "Temporal Latent Bottleneck: Synthesis of Fast and Slow Processing Mechanisms in Sequence Learning", arxiv:2205.14794
- Keith L. Downing, Intelligence Emerging: Adaptivity and Search in Evolving Neural Systems
- Brandon Duderstadt, Hayden S. Helm, Carey E. Priebe, "Comparing Foundation Models using Data Kernels", arxiv:2305.05126
- Liat Ein-Dor and Ido Kanter, "Confidence in prediction by neural networks," Physical Review E 60 (1999): 799--802
- Chris Eliasmith, "A Unified Approach to Building and Controlling Spiking Attractor Networks", Neural Computation 17 (2005): 1276--1314
- Elman et al., Rethinking Innateness
- Frank Emmert-Streib
- "Self-organized annealing in laterally inhibited neural networks shows power law decay", cond-mat/0401633
- "A Heterosynaptic Learning Rule for Neural Networks", cond-mat/0608564
- Magnus Enquist and Stefano Ghirlanda, Neural Networks and Animal Behavior
- Gary William Flake, "The Calculus of Jacobian Adaptation" [Not confined to neural nets]
- Leonardo Franco, "A measure for the complexity of Boolean functions related to their implementation in neural networks," cond-mat/0111169
- Jürgen Franke and Michael H. Neumann, "Bootstrapping Neural Networks," Neural Computation 12 (2000): 1929--1949
- Ian Goodfellow, Yoshua Bengio and Aaron Courville, Deep Learning
- Michiel Hermans and Benjamin Schrauwen, "Recurrent Kernel Machines: Computing with Infinite Echo State Networks", Neural Computation 24 (2012): 104--133
- Sara Hooker, Aaron Courville, Gregory Clark, Yann Dauphin, Andrea Frome, "What Do Compressed Deep Neural Networks Forget?", arxiv:1911.05248
- Jun-ichi Inoue and A. C. C. Coolen, "Dynamics of on-line Hebbian learning with structurally unrealizable restricted training sets," cond-mat/0105004
- Henrik Jacobsson, "Rule Extraction from Recurrent Neural Networks: A Taxonomy and Review", Neural Computation 17 (2005): 1223--1263
- Artem Kaznatcheev, Konrad Paul Kording, "Nothing makes sense in deep learning, except in the light of evolution", arxiv:2205.10320
- Alon Keinan, Ben Sandbank, Claus C. Hilgetag, Isaac Meilijson and Eytan Ruppin, "Fair Attribution of Functional Contribution in Artificial and Biological Networks", Neural Computation 16 (2004): 1887--1915
- Beom Jun Kim, "Performance of networks of artificial neurons: The role of clustering", q-bio.NC/0402045
- Konstantin Klemm, Stefan Bornholdt and Heinz Georg Schuster, "Beyond Hebb: XOR and biological learning," adap-org/9909005
- Michael Kohler, Adam Krzyzak, "Over-parametrized deep neural networks do not generalize well", arxiv:1912.03925
- G. A. Kohring, "Artificial Neurons with Arbitrarily Complex Internal Structures," cs.NE/0108009
- John F. Kolen (ed.), A Field Guide to Dynamical Recurrent Networks
- Brenden M. Lake, Tomer D. Ullman, Joshua B. Tenenbaum, Samuel J. Gershman, "Building Machines That Learn and Think Like People", arxiv:1604.00289
- Jaeho Lee, Maxim Raginsky, "Learning finite-dimensional coding schemes with nonlinear reconstruction maps", arxiv:1812.09658
- Hannes Leitgeb, "Interpreted Dynamical Systems and Qualitative Laws: From Neural Networks to Evolutionary Systems", Synthese 146 (2005): 189--202 ["Interpreted dynamical systems are dynamical systems with an additional interpretation mapping by which propositional formulas are assigned to system states. The dynamics of such systems may be described in terms of qualitative laws for which a satisfaction clause is defined. We show that the systems C and CL of nonmonotonic logic are adequate with respect to the corresponding description of the classes of interpreted ordered and interpreted hierarchical systems, respectively"]
- Yonatan Loewenstein, and H. Sebastian Seung, "Operant matching is a generic outcome of synaptic plasticity based on the covariance between reward and neural activity", Proceedings of the National Academy of Sciences (USA) 103 (2006): 15224--15229 [The abstract promises a result about all possible neural mechanisms having some fairly generic features; this is clearly the right way to do theoretical neuroscience, but rarely done...]
- Wolfgang Maass (ed.), Pulsed Neural Networks
- Wolfgang Maass and Eduardo D. Sontag, "Neural Systems as Nonlinear Filters," Neural Computation 12 (2000): 1743--1772
- M. S. Mainieri and R. Erichsen Jr, "Retrieval and Chaos in Extremely Diluted Non-Monotonic Neural Networks," cond-mat/0202097
- Daniele Marinazzo, Mario Pellicoro, Sebastiano Stramaglia, "Causal interactions and delays in a neuronal ensemble", cond-mat/0609523
- Luke Metz, C. Daniel Freeman, Niru Maheswaranathan, Jascha Sohl-Dickstein, "Training Learned Optimizers with Randomly Initialized Learned Optimizers", arxiv:2101.07367
- Mika Meitz, "Statistical inference for generative adversarial networks", arxiv:2104.10601
- Seiji Miyoshi, Kazuyuki Hara, and Masato Okada, "Analysis of ensemble learning using simple perceptrons based on online learning theory", Physical Review E 71 (2005): 036116
- Javier R. Movellan, Paul Mineiro, and R. J. Williams, "A Monte Carlo EM Approach for Partially Observable Diffusion Processes: Theory and Applications to Neural Networks," Neural Computation 14 (2002): 1507--1544
- Randall C. O'Reilly, "Generalization in Interactive Networks: The Benefits of Inhibitory Competition and Hebbian Learning," Neural Computation 13 (2001): 1199--1241
- Steven Phillips, "Systematic Minds, Unsystematic Models: Learning Transfer in Humans and Networks", Minds and Machines 9 (1999): 383--398
- Guillermo Puebla, Jeffrey S. Bowers, "Can Deep Convolutional Neural Networks Learn Same-Different Relations?", bioRxiv 2021.04.06.438551
- Suman Ravuri, Mélanie Rey, Shakir Mohamed, Marc Deisenroth, "Understanding Deep Generative Models with Generalized Empirical Likelihoods", arxiv:2306.09780
- Tim Räz, "Understanding Deep Learning with Statistical Relevance", Philosophy of Science 89 (2022): 20--41 [Comments]
- Daniel A. Roberts and Sho Yaida, The Principles of Deep Learning Theory: An Effective Theory Approach to Understanding Neural Networks
- Patrick D. Roberts, "Dynamics of Temporal Learning Rules," Physical Review E 62 (2000): 4077--4082
- Fabrice Rossi, Brieuc Conan-Guez, "Functional Multi-Layer Perceptron: a Nonlinear Tool for Functional Data Analysis", arxiv:0709.3642
- Fabrice Rossi, Nicolas Delannay, Brieuc Conan-Guez, Michel Verleysen, "Representation of Functional Data in Neural Networks", arxiv:0709.3641
- Ines Samengo, "Independent neurons representing a finite set of stimuli: dependence of the mutual information on the number of units sampled," Network: Computation in Neural Systems, 12 (2000): 21--31, cond-mat/0202023
- Ines Samengo and Alessandro Treves, "Representational capacity of a set of independent neurons," cond-mat/0201588
- Vitaly Schetinin and Anatoly Brazhnikov, "Diagnostic Rule Extraction Using Neural Networks", cs.NE/0504057
- Philip Seliger, Stephen C. Young, and Lev S. Tsimring, "Plasticity and learning in a network of coupled phase oscillators," nlin.AO/0110044
- Paul Smolensky and Géraldine Legendre, The Harmonic Mind: From Neural Computation to Optimality-Theoretic Grammar
- Dietrich Stauffer and Amnon Aharony, "Efficient Hopfield pattern recognition on a scale-free neural network," cond-mat/0212601
- Yan Sun, Qifan Song and Faming Liang, "Consistent Sparse Deep Learning: Theory and Computation", Journal of the American Statistical Association 117 (2022): 1981--1995
- Marc Toussaint
- "On model selection and the disability of neural networks to decompose tasks," nlin.AO/0202038
- "A neural model for multi-expert architectures," nlin.AO/0202039
- T. Uezu and A. C. C. Coolen, "Hierarchical Self-Programming in Recurrent Neural Networks," cond-mat/0109099
- Leslie G. Valiant
- Circuits of the Mind
- "Memorization and Association on a Realistic Neural Model", Neural Computation 17 (2005): 527--555
- Frank van der Velde and Marc de Kamps, "Neural blackboard architectures of combinatorial structures in cognition", Behavioral and Brain Sciences 29 (2006): 37--70 [+ peer commentary]
- Manuel Vargas Guzmán, Jakub Szymanik, Maciej Malicki, "Testing the limits of logical reasoning in neural and hybrid models", pp. 2267--2279 in Duh, Gomez and Bethard (eds.), Findings of the Association for Computational Linguistics: NAACL 2024
- Hiroshi Wakuya and Jacek M. Zurada, "Bi-directional computing architecture for time series prediction," Neural Networks 14 (2001): 1307--1321
- C. Xiang, S. Ding and T. H. Lee, "Geometrical Interpretation and Architecture Selection of MLP", IEEE Transactions on Neural Networks 16 (2005): 84--96 [MLP = multi-layer perceptron]
- To read, conditional probability density estimation:
- Michael Feindt, "A Neural Bayesian Estimator for Conditional Probability Densities", physics/0402093
- Dirk Husmeier, Neural Networks for Conditional Probability Estimation
- To read, applications of statistical physics to NNs (with thanks to Osame Kinouchi for recommendations):
- Nestor Caticha and Osame Kinouchi, "Time ordering in the evolution of information processing and modulation systems," Philosophical Magazine B 77 (1998): 1565--1574
- A. C. C. Coolen, "Statistical Mechanics of Recurrent Neural Networks": part I, "Statics," cond-mat/0006010 and part II, "Dynamics," cond-mat/0006011
- A. C. C. Coolen, R. Kuehn, and P. Sollich, Theory of Neural Information Processing Systems
- A. C. C. Coolen and D. Saad, "Dynamics of Learning with Restricted Training Sets," Physical Review E 62 (2000): 5444--5487
- Mauro Copelli, Antonio C. Roque, Rodrigo F. Oliveira and Osame Kinouchi, "Enhanced dynamic range in a sensory network of excitable elements," cond-mat/0112395
- Valeria Del Prete and Alessandro Treves, "A theoretical model of neuronal population coding of stimuli with both continuous and discrete dimensions," cond-mat/0103286
- Viktor Dotsenko, Introduction to the Theory of Spin Glasses and Neural Networks
- Ethan Dyer, Guy Gur-Ari, "Asymptotics of Wide Networks from Feynman Diagrams", arxiv:1909.11304
- Andreas Engel and Christian P. L. Van den Broeck, Statistical Mechanics of Learning
- D. Herschkowitz and M. Opper, "Retarded Learning: Rigorous Results from Statistical Mechanics," cond-mat/0103275
- Osame Kinouchi and Nestor Caticha, "Optimal Generalization in Perceptrons," Journal of Physics A 25 (1992): 6243--6250
- W. Kinzel
- "Statistical Physics of Neural Networks," Computer Physics Communications, 122 (1999): 86--93
- "Phase transitions of neural networks," Philosophical Magazine B 77 (1998): 1455--1477
- W. Kinzel, R. Metzler and I. Kanter, "Dynamics of Interacting Neural Networks," Journal of Physics A 33 (2000): L141--L147
- John Hertz, Anders Krogh and Richard G. Palmer, Introduction to the Theory of Neural Computation
- Patrick C. McGuire, Henrik Bohr, John W. Clark, Robert Haschke, Chris Pershing and Johann Rafelski, "Threshold Disorder as a Source of Diverse and Complex Behavior in Random Nets," cond-mat/0202190
- Richard Metzler, Wolfgang Kinzel, Liat Ein-Dor and Ido Kanter, "Generation of anti-predictable time series by a Neural Network," cond-mat/0011302
- R. Metzler, W. Kinzel and I. Kanter, "Interacting Neural Networks," Physical Review E 62 (2000): 2555--2565 [abstract]
- Samy Tindel, "The stochastic calculus method for spin systems", Annals of Probability 33 (2005): 561--581, math.PR/0503652 [Perceptrons being one of the kinds of spin systems considered]
- Robert Urbanczik, "Statistical Physics of Feedforward Neural Networks," cond-mat/0201530
- W. A. van Leeuwen and Bastian Wemmenhove, "Learning by a neural net in a noisy environment --- The pseudo-inverse solution revisited," cond-mat/0205550
- Renato Vicente, Osame Kinouchi and Nestor Caticha, "Statistical mechanics of online learning of drifting concepts: A variational approach," Machine Learning 32 (1998): 179--201 [abstract]
- To read, why are neural networks so easy to fit by back-propagation/gradient descent? [See also Symmetries of Neural Networks]
- Frederik Benzing, "Gradient Descent on Neurons and its Link to Approximate Second-Order Optimization", arxiv:2201.12250
- Sourav Chatterjee, "Convergence of gradient descent for deep neural networks", arxiv:2203.16462
- Michael I. Jordan, Guy Kornowski, Tianyi Lin, Ohad Shamir, Manolis Zampetakis, "Deterministic Nonsmooth Nonconvex Optimization", arxiv:2302.08300
- Matus Telgarsky, "Stochastic linear optimization never overfits with quadratically-bounded losses on general data", arxiv:2202.06915
- Levent Sagun, V. Ugur Guney, Gerard Ben Arous, Yann LeCun, "Explorations on high dimensional landscapes", arxiv:1412.6615
- Gal Vardi, "On the Implicit Bias in Deep-Learning Algorithms", arxiv:2208.12591