Papers are sorted by topic. Related papers are listed as a single item.
Topic: Cellular Automata and Emergent Computation
Universality in Elementary Cellular Automata
Complex Systems, vol. 15 (1), 2004, pp. 1-40.
This result was first described in A New Kind of Science, Wolfram Media, 2002.
A different exposition of the proof, together with details of how the construction can work in polynomial time using results of Neary & Woods, was published later in A Concrete View of Rule 110 Computation, CSP 2008, pp. 31-55.
In this paper I prove a long-standing (since 1985) conjecture of Wolfram, that one of the simplest possible cellular automata (rule 110) is capable of universal computation. The proof involves a lengthy and intricate construction. This construction leads to significantly smaller universal Turing machines than were previously known.
Numerous commentaries have discussed my proof, including:
Nature (Jim Giles, vol. 417, 216-218, 16 May 2002): “of particular interest”
Science (Melanie Mitchell, vol. 298 (5591), 65-68, 4 Oct 2002): “an impressive result”
Quantum Information and Computation (Scott Aaronson, vol. 2 (5), 410-423, 2002): “a remarkable construction”
This work has also been referred to in a comic (showing a part of the construction where a bit is about to be read) and a rap.
Self-Assembled Circuit Patterns
Matthew Cook, Paul W. K. Rothemund, and Erik Winfree
DNA Computers 9, LNCS vol. 2943, 2004, pp. 91-107.
This paper shows how several common digital circuits (including demultiplexers, random access memory, and Walsh transforms) can be built in a bottom-up manner using a form of biologically inspired self-assembly. It also resolves a wager between myself and Paul Rothemund: he claimed that it seemed impossible for a fixed cellular automaton to create an arbitrarily large Walsh transform matrix pattern, while I claimed it must be possible. In this paper I give a construction for such a cellular automaton.
The main reference on self-assembled circuits, this paper became required reading in courses on chemical computation, such as those taught by Goel at Stanford and by Savage at Brown.
Combining Self-Healing and Proofreading in Self-Assembly
David Soloveichik, Matthew Cook, and Erik Winfree
Natural Computing, vol. 7 (2), 2008, pp. 203-218.
This paper presents a detailed construction for converting any directed algorithmic tile set into a tile set that makes the same pattern at a larger scale, but is simultaneously resistant both to wholesale removal of tiles and to the incorporation of incorrect tiles.
Previous work showed the possibility of robustness with respect to accretion errors, and with respect to large punctures, but different constructions and different arguments had been used in the two cases, leaving researchers unsure whether the two types of error-correction could be combined. This paper resolves this quandary in the affirmative.
Temperature 1 self-assembly: Deterministic assembly in 3D and probabilistic assembly in 2D
Matthew Cook, Yunhui Fu, and Robert T. Schweller
SODA (SIAM Symposium on Discrete Algorithms) 2011, proceedings still in production.
arXiv:0912.0027, pp. 1-40.
This paper examines the self-assembly of structures growing at “temperature 1”, meaning that no cooperativity is needed for the bonding of new elements – if a bond matches, the particle can stick. In this case, it is very difficult to combine information, and yet combining information is a necessary part of any kind of meaningful computation. The only solution is to use the geometry of the structure itself as a form of information.
Temperature 1 crystals, while much easier to grow experimentally, have been shunned by experimenters due to the lack of theoretical methods for getting them to do anything interesting. In particular, when restricted to deterministic assembly in the plane, no temperature 1 assembly system has been shown to build a shape with a tile complexity smaller than the diameter of the shape. By going to three dimensions (or probabilistic assembly), this paper is the first to present positive results showing ways in which temperature 1 systems can compute the same things as standard temperature 2 systems.
Still Life Theory
In New Constructions in Cellular Automata, Oxford University Press, 2003, pp. 93-118.
In this paper I develop a new line of inquiry regarding the time complexity of deciding whether a two-dimensional pattern satisfies one of the most basic definitions regarding configurations in Conway's “Game of Life”: that of being a “Still Life” (a single stable object). Among other findings, I present a quadratic-time algorithm for this problem, which had previously been assumed to require exponential time.
The problem for which I found a quadratic time algorithm is one for which many people had previously written code taking exponential time to solve. I was invited to present this work at the MSRI Combinatorial Game Theory Research Workshop, Berkeley, 2000.
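For concreteness, the easy half of the definition, verifying that a finite pattern is stable under the Game of Life rules, can be written directly (this sketch checks stability only, not the “single object” condition; the example patterns are standard ones, not taken from the paper):

```python
from itertools import product

def is_stable(live):
    """True iff a finite Game of Life pattern (a set of live cells) is stable,
    i.e. no cell changes state in one step of the Life rules."""
    # Only live cells and their neighbors can possibly change state.
    candidates = {(x + dx, y + dy) for (x, y) in live
                  for dx, dy in product((-1, 0, 1), repeat=2)}
    for (x, y) in candidates:
        n = sum((x + dx, y + dy) in live
                for dx, dy in product((-1, 0, 1), repeat=2)
                if (dx, dy) != (0, 0))
        alive = (x, y) in live
        # Life rules: a live cell survives with 2 or 3 neighbors;
        # a dead cell is born with exactly 3 neighbors.
        if alive != ((n in (2, 3)) if alive else (n == 3)):
            return False
    return True

block = {(0, 0), (0, 1), (1, 0), (1, 1)}   # the 2x2 block, a classic still life
blinker = {(0, 0), (0, 1), (0, 2)}         # the blinker oscillates, so not stable
print(is_stable(block), is_stable(blinker))   # True False
```

The hard part of the problem, of course, lies beyond this check, in the global “single stable object” structure.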
Optimal Interleaving on Tori
Andrew Jiang, Matthew Cook, and Jehoshua Bruck
SIAM Journal on Discrete Mathematics, vol. 20 (4), 2006, pp. 841-879.
Conference version appeared as Optimal t-Interleaving on Tori in
IEEE International Symposium on Information Theory 2004 (ISIT ’04), p. 22.
Original technical report: Caltech Paradise ETR 059, 14 April 2004, pp. 1-40.
In this paper we present a complicated construction for optimally coloring any sufficiently large toroidal grid so that any two sites of the same color are at least t steps from each other. (This generalization of regular graph coloring is called ‘interleaving’.)
This is a significant contribution in the area of interleaving. The conference version was already being cited by other research groups long before the journal version came out.
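The defining constraint is easy to verify, taking “steps” as Lee (L1) distance on the torus; a small checker with an illustrative coloring (not the paper's optimal construction):

```python
def torus_dist(a, b, m, n):
    """Lee (L1) distance between sites a and b on an m x n torus."""
    dx, dy = abs(a[0] - b[0]), abs(a[1] - b[1])
    return min(dx, m - dx) + min(dy, n - dy)

def is_t_interleaving(color, t):
    """color[i][j] gives each site's color; valid iff any two sites
    sharing a color are at toroidal distance at least t."""
    m, n = len(color), len(color[0])
    sites = [(i, j) for i in range(m) for j in range(n)]
    return all(color[a[0]][a[1]] != color[b[0]][b[1]]
               or torus_dist(a, b, m, n) >= t
               for a in sites for b in sites if a != b)

# A checkerboard 2-colors a 4x4 torus with same-color sites exactly 2 apart:
checker = [[(i + j) % 2 for j in range(4)] for i in range(4)]
print(is_t_interleaving(checker, 2))   # True
print(is_t_interleaving(checker, 3))   # False
```

Finding colorings that achieve the optimal number of colors for every t and every torus size is the hard part addressed by the paper.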
Tournament Sequences and Meeussen Sequences
Matthew Cook and Michael Kleber
Electronic Journal of Combinatorics, vol. 7 (1) R44, September 2000, pp. 1-16.
In this paper we demonstrate that two sequences with unrelated definitions are actually the same sequence, and we present a new method for calculating sequence elements efficiently even though there is no efficient recurrence or generating function for them.
Donald Knuth kindly points readers to our paper in his book Selected Papers on Analysis of Algorithms. Erich Neuwirth (Séminaire Lotharingien de Combinatoire, v. 47, 2002) has since found an even faster algorithm, although it is still O(n^6), like ours.
Topic: Philosophy of Artificial Intelligence
The Reusable Symbol Problem
A Position Paper, in the Proceedings of the Fourth International Workshop on Neural-Symbolic Learning and Reasoning (NeSy'08) at ECAI, 2008, pp. 12-13.
I was attracted by the challenge of the 2-page length limit of this venue, especially since I feel that one of the main challenges for artificial intelligence is to understand the fundamental differences in approach needed between neural computation and traditional symbolic computation.
Examining the major differences between how traditional programs compute and our current understanding of how brains compute, I see one key gap in our ability to carry out logical reasoning within neural networks using standard methods. I refer to this gap as the reusable symbol problem: How can neural systems use multiply-instantiatable symbols to represent arbitrary objects? This problem is fundamental and cannot be readily decomposed into simpler problems. Solving this problem would solve many individual problems such as the problem of representing relations between objects represented as neural activation patterns, the problem of implementing grammars in neural networks, and the well-known binding problem for neural models.
Topic: Computation in Chemical Reaction Networks
Programmability of Chemical Reaction Networks
Matthew Cook, David Soloveichik, Erik Winfree, and Jehoshua Bruck
Algorithmic Bioprocesses, Springer, 2009 (Festschrift: Grzegorz Rozenberg) pp. 543-584.
Original technical report: Caltech Paradise ETR 090, September 2008, pp. 1-45.
More proofs are in Computation with finite stochastic chemical reaction networks (Soloveichik, Cook, Winfree, Bruck), Natural Computing, vol. 7 (4), 2008, pp. 615-633.
Original technical report: Caltech Paradise ETR 085, September 2007, pp. 1-19.
These papers are full of theorems and techniques for many issues regarding computation in the world of simple chemistry: no polymers, no membranes, just a finite list of reactions available to a finite number of molecules. We analyze many questions regarding the computational power of Stochastic Chemical Reaction Networks (SCRNs) and how they relate to more conventional models of computation including Boolean Logic Circuits, Vector Addition Systems, Petri Nets, Gate Implementability, Primitive Recursive Functions, Register Machines, Fractran, and Turing Machines. Among other results, we give a surprising construction that allows SCRNs to simulate Turing machines with only a polynomial slowdown.
SCRNs have been widely used for describing naturally occurring biochemical systems, where the small number of molecules present inside a cell (fewer than 100 for many species) makes stochasticity an important factor in biochemical circuitry. With the advent of synthetic biology, SCRNs have become a promising language for the design of artificial biochemical circuits.
Our book chapter was highlighted in Aaron Sterling's review of the 742-page festschrift as “perhaps the most theoretically satisfying chapter”; the review goes on to say that “the proofs in this chapter are extremely pretty.”
Our results comprise the first serious theoretical foray into the computational power of SCRNs. Before our work, it was not even suspected that these systems were capable of universal computation – it was assumed that polymers or membranes were needed.
Our results also for the first time place tight bounds on the speedups obtainable by researchers in stochastic simulation of chemical systems, whose goal is to create faster approximate simulation algorithms.
In the computational complexity area, our work provides the first example we know of where a randomized version of a process is not just more efficient, but can solve problems unsolvable by the nondeterministic version: without stochasticity, simple chemical computations (even viewed nondeterministically) are limited to primitive recursive functions, but with stochasticity, the full generality of Turing machine computation is attained.
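The stochastic dynamics of an SCRN can be sampled with the standard Gillespie stochastic simulation algorithm; a minimal sketch on a hypothetical toy network (not one of the constructions from the papers):

```python
import random

def gillespie(counts, reactions, rates, t_max, seed=1):
    """Gillespie stochastic simulation of a chemical reaction network.
    counts:    dict species -> molecule count
    reactions: list of (reactants, products), each a dict species -> stoichiometry
    rates:     list of rate constants"""
    rng = random.Random(seed)
    t = 0.0
    while t < t_max:
        # Mass-action propensities (falling-factorial molecule counts).
        props = []
        for (reac, _), k in zip(reactions, rates):
            a = k
            for s, n in reac.items():
                for i in range(n):
                    a *= max(counts[s] - i, 0)
            props.append(a)
        total = sum(props)
        if total == 0:                      # no reaction can fire
            break
        t += rng.expovariate(total)         # exponential time to next reaction
        r = rng.uniform(0, total)           # choose a reaction ~ its propensity
        for (reac, prod), a in zip(reactions, props):
            if r < a:
                for s, n in reac.items():
                    counts[s] -= n
                for s, n in prod.items():
                    counts[s] = counts.get(s, 0) + n
                break
            r -= a
    return counts

# Hypothetical toy network: X + Y -> 2Y at rate 1.0, Y -> Z at rate 0.1.
state = gillespie({"X": 50, "Y": 5, "Z": 0},
                  [({"X": 1, "Y": 1}, {"Y": 2}), ({"Y": 1}, {"Z": 1})],
                  [1.0, 0.1], t_max=100.0)
print(state)   # total molecule count is conserved at 55
```

Each run of such a simulation is one sample path; the computational-power results above concern what the distribution over such paths can compute.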
Topic: Machine Learning and Probabilistic Systems
It Takes Two Neurons To Ride a Bicycle
Presented as a demo at NIPS ’04.
Technical report, pp. 1-8.
In this work I created an environment in which I assumed learning of higher-order concepts would be necessary. I created a general-purpose physics-based hinged rigid body simulator and used it to simulate a bicycle for a learning agent to learn to ride. But as others have found in other contexts, most environments do not require higher-order concepts, and in this case it turned out that a simple two-neuron circuit was already sufficient for controlling the bicycle; indeed, it rode better than a human using the keyboard.
The physics simulator was a significant project in itself: I studied rigid body mechanics (which is more complex than most of us realize) and designed the simulator explicitly to preserve both angular momentum and kinetic energy exactly, as previous simulations of mine (in quantum mechanics!) had shown me that preserving conserved quantities can be crucial for getting accurate results. The simulator works nicely, and I later read in Sam Buss's 2001 paper, Accurate and Efficient Simulation of Rigid Body Rotations, that my precautions were well warranted.
Later I was able to reduce the controller to just one neuron, and then to an even simpler plain linear feedback system, confirming the finding that many real problems are best solved by a hack. The simulator has been useful in other projects since then.
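The flavor of such a plain linear feedback controller can be sketched on a generic inverted pendulum, a crude stand-in for bicycle lean dynamics (this is not the paper's simulator or controller, and the gains are illustrative):

```python
import math

# A generic inverted pendulum as a stand-in for bicycle lean dynamics;
# the gains below are illustrative, not values from the paper.
def simulate(k_theta=30.0, k_omega=8.0, theta0=0.2, dt=0.001, steps=5000):
    g, length = 9.81, 1.0
    theta, omega = theta0, 0.0                    # lean angle (rad), angular velocity
    for _ in range(steps):
        u = -k_theta * theta - k_omega * omega    # plain linear feedback
        alpha = (g / length) * math.sin(theta) + u  # gravity destabilizes, u corrects
        omega += alpha * dt                       # explicit Euler step
        theta += omega * dt
    return theta

print(abs(simulate()))   # the lean angle is driven to (nearly) zero
```

Linearizing around upright, any k_theta exceeding g/length with positive damping k_omega stabilizes the system, which is why such a trivially small controller can suffice.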
Book review: Probabilistic reasoning and decision making in sensory-motor systems
The Neuromorphic Engineer, 23 February, 2009, pp. 1-3.
Manuscript prior to unapproved changes by the journal editor, pp. 1-3.
In this brief article solicited by The Neuromorphic Engineer, I review a book that describes a dozen robotics-related PhD projects that used probabilistic reasoning in their control of the robot. These projects together provide a good overview of the state of the art in solving problems by defining joint probability distributions over both sensor readings and actions.
The book only confirmed my fears that nobody has figured out a good way to phrase robotic control in terms of probabilistic reasoning. Our brain's behavior probably only fits probabilistic models under very limited circumstances.
Networks of Relations for Representation, Learning, and Generalization
Matthew Cook and Jehoshua Bruck
Proc. Fourth Intl. Conference on Intelligent Systems Design and Applications (ISDA'04).
Same content in technical report: Caltech Paradise ETR 071, November 2005, pp. 1-6.
In this paper I give an outline of how a network of relations can be used as an associative memory, and how such a memory can be used to learn control tasks such as riding a bicycle. Some simple proofs are given, along with an initial experiment in a direction that I have continued to pursue. This is a “concept paper”, avoiding difficult proofs.
This paper set the stage for my continued research into using population coded value representations in a belief propagation network structure.
Multifactor Expectation Maximization for Factor Graphs
Jason T. Rolfe and Matthew Cook
International Conference on Artificial Neural Networks (ICANN) 2010 (3), pp. 267-276.
Submitted version (minus non-anonymous material), pp. 1-8.
Factor graphs allow large probability distributions to be stored efficiently and facilitate fast computation of marginal probabilities, and by using hidden variables they can efficiently represent even very high order correlations. However, the difficulty of training them has limited their use. We present a factor graph learning algorithm which on each iteration merges adjacent factors, performs expectation maximization on the resulting modified factor graph, and then splits the joined factors using non-negative matrix factorization. We show that this multifactor expectation maximization algorithm converges to the global maximum of the likelihood for difficult logical learning problems much faster and more reliably than traditional generalized expectation maximization.
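The splitting step relies on non-negative matrix factorization; a minimal sketch using the standard Lee-Seung multiplicative updates (generic NMF, not necessarily the exact procedure in the paper; the matrix is a made-up example):

```python
import numpy as np

def nmf(V, rank, iters=2000, seed=0):
    """Factor a nonnegative matrix V ~ W @ H with W, H nonnegative, using
    Lee-Seung multiplicative updates (minimizing Frobenius error)."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank)) + 1e-3
    H = rng.random((rank, n)) + 1e-3
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-12)   # multiplicative update keeps H >= 0
        W *= (V @ H.T) / (W @ H @ H.T + 1e-12)   # likewise for W
    return W, H

# A made-up merged factor of exact nonnegative rank 2 splits cleanly:
V = np.array([[1.0, 2.0], [2.0, 4.0], [1.0, 1.0]])
W, H = nmf(V, rank=2)
print(np.abs(W @ H - V).max())   # reconstruction error
```

The nonnegativity of W and H is what lets the two pieces be reinterpreted as factors of a probability model after the split.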
Sharpening Projections: Inter-area Projections Can Be Effectively Sharpened by Biologically Plausible Learning Mechanisms
Matthew Cook, Florian Jug, and Christoph Krautz
Proc. 18th Annual Computational Neuroscience Meeting (CNS*), 2009, poster p. P214.
The brain contains topographic projections between areas, but developmental rules in the cortex can provide only roughly reciprocal connections through chemical cues, leaving open the question of whether more precise reciprocal connectivity is possible in these cases. We have discovered a biologically plausible set of learning rules that can adjust the synaptic weights so that precisely reciprocal ones are strengthened while others are weakened, thus effectively increasing the specificity of the projections.
Unsupervised Learning of Relations
Matthew Cook, Florian Jug, Christoph Krautz, and Angelika Steger
International Conference on Artificial Neural Networks (ICANN) 2010 (1), pp. 164-173.
In order to build a machine (like a brain) that computes using relational reasoning, we need to have building blocks which represent arbitrary simple relationships and can learn such relationships from experience. Here we present a biologically plausible learning mechanism for just such a building block, using a population code representation for the data. We require that the relationships are learned not just from the point of view of an omniscient observer, but rather the network itself must be able to make effective use of the learned relationship, within the population code representations. Using a form of Hebbian learning, local winner-take-all, and homeostatic activity regulation away from the periphery, we obtain a learning framework which is able to learn relationships from examples and then use the learned relationships in any direction for a variety of routine nervous system tasks such as inference, de-noising, cue-integration, and decision making.
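The core idea can be illustrated in miniature: two variables, each represented by a population of units, with Hebbian co-activation learning and winner-take-all readout usable in either direction (a toy sketch with a hypothetical relation, far simpler than the framework in the paper):

```python
import numpy as np

# Toy relation learning: variables x and y, each population-coded over N units.
# The hypothetical relation y = (x + 3) mod N is learned from examples as
# Hebbian co-activation weights, then used in either direction by
# winner-take-all readout.
N = 10
W = np.zeros((N, N))                  # weight between x-unit i and y-unit j
rng = np.random.default_rng(0)
for _ in range(500):                  # training examples drawn from the relation
    x = int(rng.integers(N))
    y = (x + 3) % N
    W[x, y] += 1.0                    # Hebbian: strengthen the co-active pair

def infer_y(x): return int(np.argmax(W[x]))       # use the relation forward
def infer_x(y): return int(np.argmax(W[:, y]))    # or backward
```

The same learned weight matrix answers queries in either direction, which is the bidirectionality the paper requires of its building block.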
Topic: Percolation Theory and Disk Coverings
Continuum Percolation with Unreliable and Spread-Out Connections
Massimo Franceschetti, Lorna Booth, Matthew Cook, Ronald Meester, and Jehoshua Bruck
Journal of Statistical Physics, vol. 118 (3-4), February 2005, pp. 721-734.
Conference version: Ad hoc wireless networks with noisy links (Booth, Bruck, Cook, Franceschetti), proc. ISIT ’03, also available as Caltech Paradise ETR 047, pp. 1-5.
Technical report with extra material: Percolation in Multi-hop Wireless Networks, Caltech Paradise ETR 055, September 2003, pp. 1-15.
If we watch raindrops start to fall on the pavement, at the beginning we can trace long dry routes avoiding bounded wet regions, while later we can trace long wet routes avoiding bounded dry regions. Percolation theory is the study of the spot density at which this phase transition (‘percolation’) occurs, either on a grid or in continuous space. This paper considers continuum percolation using spot shapes other than circles. If we measure the average number of neighbors that each spot has, then percolation for circles happens at about 4.5 neighbors. We give both theoretical and experimental evidence that all other shapes percolate with an even lower number of neighbors.
This paper resolves an issue that had been causing a growing rift between theory and practice in the use of percolation models for wireless networks, namely the use of circular discs to model the range of a wireless transmitter. Percolation is highly relevant in this context because it indicates whether long range connections are available in an ad hoc wireless network. In practice, the range of a wireless transmitter is highly irregular and anisotropic, leading many researchers to dismiss the theoretical results as irrelevant. This paper showed how the existing body of theoretical results can indeed be applied to the irregular shapes arising in practice, thus putting an end to this controversy.
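The “average number of neighbors” statistic is easy to estimate by simulation; a minimal Monte Carlo sketch for the circular case (parameters are illustrative, and the ~4.5-neighbor threshold quoted above is the paper's figure, not something this sketch proves):

```python
import random
from collections import Counter

def mean_degree_and_giant(n_points, box, radius, seed=0):
    """Scatter points uniformly in a box-by-box square and connect pairs
    closer than `radius`.  Returns the mean number of neighbors per point
    and the largest cluster's share of all points, a rough percolation
    indicator."""
    rng = random.Random(seed)
    pts = [(rng.uniform(0, box), rng.uniform(0, box)) for _ in range(n_points)]
    parent = list(range(n_points))          # union-find for clusters
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i
    edges = 0
    for i in range(n_points):               # O(n^2) pair check: fine for a sketch
        for j in range(i + 1, n_points):
            dx = pts[i][0] - pts[j][0]
            dy = pts[i][1] - pts[j][1]
            if dx * dx + dy * dy < radius * radius:
                edges += 1
                parent[find(i)] = find(j)
    sizes = Counter(find(i) for i in range(n_points))
    return 2 * edges / n_points, max(sizes.values()) / n_points

# Well above the ~4.5-neighbor threshold, a giant cluster should appear.
deg, giant = mean_degree_and_giant(2000, 30.0, 1.0)
print(round(deg, 2), round(giant, 2))
```

Sweeping the density (or radius) downward and watching the giant-cluster fraction collapse locates the percolation transition experimentally.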
A Geometric Theorem for Network Design
Massimo Franceschetti, Matthew Cook, and Jehoshua Bruck
IEEE Transactions on Computers, vol. 53 (4), April 2004, pp. 483-489.
Also appears in Caltech Paradise ETR 044, pp. 1-8.
Original technical report with extra material: A Geometric Theorem for Approximate Disk Covering Algorithms, Caltech Paradise ETR 035, 2001, pp. 1-20.
These papers present an algorithm for enclosing a given set of points with a small number of circles. Using circles centered at grid points, we obtain an approximation algorithm within a constant factor of the optimal non-grid solution.
As the technical report details, this is a problem which has attracted attention for over 20 years, and our algorithm is the first to achieve a linear running time. Our results also indicate how to compare the cost of a grid-based solution to the cost of an arbitrary solution, a question of active interest in the wireless communications industry regarding optimal placement of base stations for covering a given set of customers or substations.
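The grid-restricted flavor of the problem can be illustrated with a deliberately naive baseline (this is not the paper's linear-time algorithm): snap each point to its nearest grid center and keep the disks actually used, which covers everything whenever the grid is fine enough relative to the radius:

```python
import math

def grid_cover(points, r, grid=1.0):
    """Cover points with disks of radius r centered at grid points.
    A naive baseline (not the paper's algorithm): each point is assigned
    to its nearest grid center, which covers it whenever the grid is fine
    enough (grid * sqrt(2) / 2 <= r)."""
    centers = set()
    for (x, y) in points:
        cx, cy = round(x / grid) * grid, round(y / grid) * grid
        assert math.hypot(x - cx, y - cy) <= r, "grid too coarse for this radius"
        centers.add((cx, cy))
    return centers

pts = [(0.2, 0.1), (0.4, 0.3), (3.1, 2.9), (3.3, 3.2)]
disks = grid_cover(pts, r=1.0)
print(sorted(disks))   # two disks suffice here: [(0.0, 0.0), (3.0, 3.0)]
```

The interesting question, which the papers answer, is how close such grid-centered solutions can come to the optimal unrestricted covering.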
Networks of Relations
Ph.D. Thesis, California Institute of Technology (Caltech), 2005, pp. 1-150.
Technical report focusing just on the implementability hierarchy: Implementability Among Predicates (Cook and Bruck) Caltech Paradise ETR 067, 2003, pp. 1-15.
Relations are everywhere. In particular, we think and reason in terms of mathematical and English sentences that state relations. However, we teach our students much more about how to manipulate functions than about how to manipulate relations. Consider functions: We know how to combine functions to make new functions, how to evaluate functions efficiently, and how to think about compositions of functions. Especially in the area of boolean functions, we have become experts in the theory and art of designing combinations of functions to yield what we want, and this expertise has led to techniques that enable us to implement mind-bogglingly large yet efficient networks of such functions in hardware to help us with calculations. If we are to make progress in getting machines to be able to reason as well as they can calculate, we need to similarly develop our understanding of relations, especially their composition, so we can develop techniques to help us bridge between the large and small scales.
In my thesis I explored how relations can be combined into networks which implement larger relations. While the formalism is similar to that of constraint satisfaction problems, the methodology is inspired by neural considerations, particularly regarding the non-zero cost of duplicating information. Through a series of proofs of varying difficulty I show exactly which small relations are capable of implementing which others, while a complex construction shows that in contrast the general question is undecidable. This line of investigation leads serendipitously to the study of an existing subject, Chemical Reaction Networks, where among other things I show that the question of producibility of a species (equivalent to the question of implementability among functions without fan-out, analogous to the question of implementability among relations) turns out to be decidable.
This thesis laid the foundation for further work in the topics of both Chemical Reaction Networks and Machine Learning.
One of the open problems listed at the end of the thesis has since been solved by me and my colleague David Soloveichik, namely, we have found a construction that shows how a Chemical Reaction Network can simulate a Turing machine with only a polynomial time slowdown.
In the technical report I briefly present the implementability results from my thesis, but in the language of logic, wherein the results are about which predicates can implement which others by a ‘predicate graph’, where vertices represent predicates and edges represent variables.
The results here are comparable to Post's 1921 thesis work, which presented results on implementability among boolean functions. I found out later that my work answered existing open questions in Valiant's theory of holographic algorithms, for example in Quantum Circuits that can be Simulated Classically in Polynomial Time (SIAM Journal of Computing, v. 31(4), pp. 1229-1254, 2002).
Topic: Non-von Neumann Hardware and Robotics
State-Dependent Sensory Processing in Networks of VLSI Spiking Neurons
Emre Neftci, Elisabetta Chicca, Giacomo Indiveri, Matthew Cook, and Rodney J. Douglas
International Symposium on Circuits and Systems (ISCAS), 2010, pp. 2789-2792.
An increasing number of research groups develop dedicated hybrid analog/digital VLSI devices implementing hundreds of spiking neurons with biophysically realistic dynamics. Despite the significant progress in the device technology, there is still little insight into how such neural assemblies can be organized to yield desired (non-trivial) function. In this work, we organize neural circuits to implement soft Winner Take All (WTA) functionality. By showing that such circuits can have persistent activity states, which can be used as a form of working memory, we propose that such circuits can perform state-dependent computation. We demonstrate such a network in a distributed neuromorphic system consisting of two multi-neuron chips implementing soft WTA, stimulated by an event-based vision sensor. The resulting network is able to track and remember the position of a localized stimulus, optionally further restricting the representation to a trajectory known by the system, a form of using known relationships to help interpret noisy sensory data.
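The qualitative behavior, a soft WTA whose activity bump persists after the stimulus is removed, can be sketched with a simple rate-based model (the parameters are illustrative, not the dynamics of the VLSI chips):

```python
import numpy as np

# Rate-based soft WTA ring: each unit excites itself and its two neighbors,
# and all units share a global inhibition.  Parameters are illustrative.
N = 20
def step(r, ext, dt=0.2):
    exc = 1.5 * r + 0.4 * (np.roll(r, 1) + np.roll(r, -1))   # local excitation
    inh = 0.1 * r.sum() + 0.3                 # global inhibition plus threshold
    drive = np.clip(exc + ext - inh, 0.0, 1.0)  # saturating activation
    return r + dt * (drive - r)

r = np.zeros(N)
stim = np.zeros(N); stim[8:11] = 1.0
for _ in range(100): r = step(r, stim)   # present a localized stimulus
for _ in range(300): r = step(r, 0.0)    # remove it: the activity bump persists
print(r.round(2))   # nonzero only around units 8-10: a working-memory state
```

The persistent bump is the “state” in state-dependent computation: the network's response to future input depends on which bump it is currently holding.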
Balancing Pencils using Spike-Based Vision Sensors
Jorg Conradt, Raphael Berner, Patrick Lichtsteiner, Rodney J. Douglas, Tobi Delbruck, and Matthew Cook
Best Live Demonstration Award (demo),
Bernstein Conference on Computational Neuroscience, Frankfurt, Germany, 2009, poster.
A Pencil Balancing Robot using a Pair of AER Dynamic Vision Sensors (Conradt, Cook, Berner, Lichtsteiner, Douglas, Delbruck), International Symposium on Circuits and Systems (ISCAS), Taipei, Taiwan, 2009, pp. 781-785.
An Embedded AER Dynamic Vision Sensor for Low-Latency Pole Balancing (Conradt, Berner, Cook, Delbruck), IEEE Workshop on Embedded Computer Vision (ECV09) at the International Conference on Computer Vision (ICCV), Kyoto, Japan, 2009, pp. 1-6.
High Speed Pole Balancing with Only Spike-based Visual Input (Conradt, Lichtsteiner, Berner, Delbruck, Douglas, Cook), Neural Information Processing Systems Conference (NIPS), Vancouver, Canada, December 2008, poster.
This was a fun project where we used neuromorphic principles to design a pencil balancer, and then traveled around the world amazing people with it.
Animals far outperform current technology when reacting to visual stimuli, with very low power and processing requirements, demonstrating astonishingly fast reaction times to changes. Robotic pole balancing with large objects is a well known exercise in current robotics research, but using vision to balance arbitrary small poles (such as a pencil, which is too small for a human to balance) has never before been achieved due to computational and communication bottlenecks in vision processing.
At the Institute of Neuroinformatics we developed an analog silicon retina, which uses an information encoding directly inspired by the spike-based information transfer from the human eye to visual cortex, to solve this information processing bottleneck.
In this demonstration we show how this previously impossible control problem can be solved on low power embedded fixed-point processors using our neuromorphic approach.