I am the author/maintainer of Nengo DL, a package that integrates deep learning methods with the Nengo neural modelling environment. Nengo DL allows Nengo models to be optimized using deep learning training methods, and improves the simulation speed of Nengo models on CPU or GPU.
Under the hood, Nengo DL is implemented using TensorFlow: it works by translating a Nengo model description into a symbolic TensorFlow computation graph. Nengo DL also allows users to insert TensorFlow code directly into a Nengo model, making it possible to simulate a wide variety of neural network architectures (such as convolutional neural networks).
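As a rough illustration of this build-and-simulate idea, here is a toy numpy sketch (not NengoDL's actual API or internals): a declarative description of connected neuron populations is "compiled" into a single vectorized update step, analogous to how a Nengo model is translated into a computation graph.

```python
import numpy as np

# Toy illustration only (NOT NengoDL's API): a declarative model of
# connected populations is compiled into one vectorized update step.
model = {
    "populations": {"a": 50, "b": 30},   # population name -> n_neurons
    "connections": [("a", "b")],         # signal flow: a -> b
}

def compile_model(model, rng=np.random.default_rng(0)):
    """Build random connection weights and return a simulation step."""
    weights = {
        (pre, post): rng.normal(
            scale=0.1,
            size=(model["populations"][post], model["populations"][pre]))
        for pre, post in model["connections"]
    }

    def step(state):
        # Rectified-linear "neurons": activity = relu(W @ presynaptic).
        new_state = dict(state)
        for (pre, post), w in weights.items():
            new_state[post] = np.maximum(0, w @ state[pre])
        return new_state

    return step

step = compile_model(model)
state = {name: np.ones(n) for name, n in model["populations"].items()}
state = step(state)
```

The point of the compile step is that all per-connection work is resolved once, up front, so the inner simulation loop is just dense array operations — the property that lets a backend like TensorFlow run it efficiently on CPU or GPU.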
I am one of the developers of Nengo, a software suite for the construction and simulation of large-scale neural models. Nengo is designed to aid in the translation between an algorithmic/mathematical description of a model and a detailed neural implementation. It provides a simple interface to the user, while flexibly providing more detailed information and control as the modeller requires.
Another key feature of Nengo is that it allows the same model to be run on a range of different computing platforms. Users can run their model on their home computer, on a GPU cluster, on neuromorphic hardware, or on a BlueGene supercomputer, without any modifications required on their part.
I am the author of the hessianfree software package, which provides tools for training feedforward and recurrent deep networks using Hessian-free optimization. Hessian-free is a powerful second-order optimization method that has produced state-of-the-art results in deep learning (particularly in the case of recurrent networks). However, its main downside is its complexity: it is difficult to implement, and therefore difficult to customize for a given application.
The hessianfree package is designed to make Hessian-free optimization more accessible to anyone who wants to work with it. It provides a simple interface for building and training networks, and makes it easy to modify the system (e.g., using custom nonlinearities, connectivity, or loss functions) without having to get involved in the internals of the optimization algorithm.
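The core of the method can be sketched in a few lines (a minimal toy, not the hessianfree package's interface): a truncated-Newton step solves H d = -g by conjugate gradient, using Hessian-vector products — here a finite-difference stand-in for the exact curvature products used in practice — so the Hessian matrix is never formed explicitly.

```python
import numpy as np

def hessian_free_step(grad, x, eps=1e-6, cg_iters=50, tol=1e-10):
    """One truncated-Newton update: solve H d = -g with conjugate
    gradient, using only Hessian-vector products approximated by finite
    differences of the gradient (the Hessian is never built)."""
    g = grad(x)
    hv = lambda v: (grad(x + eps * v) - g) / eps  # approximates H @ v

    d = np.zeros_like(x)
    r = -g.copy()               # residual of H d = -g at d = 0
    p = r.copy()
    for _ in range(cg_iters):
        hp = hv(p)
        alpha = (r @ r) / (p @ hp)
        d = d + alpha * p
        r_new = r - alpha * hp
        if r_new @ r_new < tol:
            break
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return x + d

# Demo on a quadratic f(x) = 0.5 x^T A x - b^T x (gradient A x - b):
# a single Hessian-free step should land on the minimum, A^{-1} b.
rng = np.random.default_rng(0)
A = rng.normal(size=(5, 5))
A = A @ A.T + 5 * np.eye(5)     # symmetric positive definite Hessian
b = rng.normal(size=5)
x_new = hessian_free_step(lambda x: A @ x - b, np.zeros(5))
```

This is what makes the approach practical for networks with millions of parameters: each conjugate-gradient iteration costs only a couple of gradient evaluations, not an O(n²) Hessian.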
I am a co-founder of Applied Brain Research, Inc. The company was founded in 2014 by a small group of researchers from the Computational Neuroscience Research Group, and is aimed at producing practical applications based on the ideas developed in that lab.
Hierarchical reinforcement learning (HRL) is based on decomposing an overall task (such as making breakfast) into subtasks (such as making toast, boiling eggs, and so on). This decomposition has a number of functional benefits, allowing the learning agent to solve more complex and interesting problems.
My research investigates whether reinforcement learning processes in the brain could be explained in a similar hierarchical fashion. This has involved constructing the first neural model to implement the computational theory of HRL, and I continue to work on extending those ideas in new computational directions.
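To make the decomposition concrete, here is a toy sketch (the task names are hypothetical, and this is not part of my neural model): a hierarchical task is a tree whose leaves are primitive actions, and executing the root expands its subtasks depth-first. An HRL agent can then learn and reuse a policy per subtask instead of one monolithic policy.

```python
# Toy task hierarchy (hypothetical names): leaves are primitive actions.
TASKS = {
    "make_breakfast": ["make_toast", "boil_egg"],
    "make_toast": ["get_bread", "toast_bread"],
    "boil_egg": ["get_egg", "boil_water", "cook_egg"],
}

def execute(task):
    """Recursively expand a task into its sequence of primitive actions."""
    if task not in TASKS:          # a primitive action: just perform it
        return [task]
    actions = []
    for subtask in TASKS[task]:
        actions.extend(execute(subtask))
    return actions

print(execute("make_breakfast"))
# -> ['get_bread', 'toast_bread', 'get_egg', 'boil_water', 'cook_egg']
```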
The Raven’s Progressive Matrices (RPM) intelligence test is one of the most popular tests of general intelligence (the factor quantified by IQ). It is based on finding the patterns governing a sequence of geometric shapes, and using those patterns to induce the next shape in the sequence.
The idea behind this project was to build a neural model of the brain’s inductive, pattern-completion processes, and test that model on the RPM. The model was able to flexibly find the rules governing novel patterns, and because of its neural implementation it could be used to explore the neural basis of general intelligence in humans, such as the changes that occur in aging brains.
This model was also incorporated into the larger SPAUN model, forming the heart of its inductive reasoning ability.
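The flavor of vector-based pattern induction can be sketched as follows (an illustrative toy, far simpler than the actual neural model): sequence items are high-dimensional vectors, the rule relating consecutive items is recovered with circular correlation, and the next item is induced by binding that rule onto the last item with circular convolution.

```python
import numpy as np

d = 1024
rng = np.random.default_rng(1)

def cconv(a, b):
    """Circular convolution: binds a transformation onto an item."""
    return np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b), n=len(a))

def ccorr(a, b):
    """Circular correlation: recovers the transformation taking a to b."""
    return np.fft.irfft(np.conj(np.fft.rfft(a)) * np.fft.rfft(b), n=len(a))

def rand_unitary(d, rng):
    """Random vector with unit-magnitude Fourier coefficients, so that
    ccorr exactly inverts cconv (a common trick in vector models)."""
    f = np.fft.rfft(rng.normal(size=d))
    return np.fft.irfft(f / np.abs(f), n=d)

def cosine(u, v):
    return (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))

rule = rand_unitary(d, rng)      # hidden pattern governing the sequence
item1 = rand_unitary(d, rng)
item2 = cconv(item1, rule)       # the sequence follows the rule
item3 = cconv(item2, rule)

inferred = ccorr(item1, item2)   # recover the rule from one example pair
prediction = cconv(item2, inferred)  # apply it to induce the next item
```

Here `cosine(prediction, item3)` is close to 1. The real model works with noisy neural representations and multiple competing rule types, but the same bind/unbind machinery is at its heart.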
There are several projects underway around the world aimed at developing neuromorphic computers – computing platforms specialized in the simulation of large-scale neural models. These new computing platforms offer many advantages over traditional architectures, such as massive parallelization and low power usage.
One of the key challenges of this new hardware is that its programs must be written in the language of neural networks. That is where my work comes in: I help to develop neural models to run on these platforms, as well as tools to allow others to do the same (such as Nengo).