Theory of Learning Lab

Saxe Lab

Research Area

The interactions of billions of neurons ultimately give rise to our thoughts and actions. Remarkably, much of our behaviour is learned, starting in infancy and continuing throughout our lifespan. We aim to develop a mathematical toolkit suitable for analysing and describing learning in the brain and mind. Our current focus is the theory of deep learning, the study of artificial neural network models that take inspiration from the brain. Alongside this theoretical work, we develop close collaborations with experimentalists to empirically test principles of learning in biological organisms.

Research Topics

Unravelling the influence of learning on neural representations is a fundamental goal in neuroscience, because learning underpins a great diversity of behaviours. Developing better learning algorithms is likewise a fundamental goal in machine learning, because learning has proven essential in real-world applications. Our current research focuses on three directions:

1.    Learning in complex architectures

The brain contains many layers of interconnected areas. We are investigating the dynamics of learning within complex recurrent, lateral, and top-down network structures, yielding a window into the relation between neural structure and learning dynamics. What principles describe how neural representations change across a distributed network of interconnected brain areas during learning?
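
As a concrete, purely illustrative example of this kind of question (this is not our research code), the short Python sketch below trains a small deep linear network by full-batch gradient descent on a toy hierarchical dataset and prints how the similarity structure of its hidden-layer representations changes over learning. The dataset, network sizes, and learning rate are arbitrary assumptions chosen for the example.

```python
# Illustrative sketch only: a small deep linear network learns a toy
# hierarchical task by full-batch gradient descent, and we print how the
# hidden layer's representational similarity between items changes over
# learning. Dataset, sizes, and learning rate are arbitrary assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Four one-hot "items"; output features are hierarchically organised:
# [shared, category A, category B, item 1, item 2, item 3, item 4].
X = np.eye(4)
Y = np.array([[1, 1, 0, 1, 0, 0, 0],
              [1, 1, 0, 0, 1, 0, 0],
              [1, 0, 1, 0, 0, 1, 0],
              [1, 0, 1, 0, 0, 0, 1]], dtype=float)

W1 = 0.01 * rng.standard_normal((4, 8))   # input  -> hidden "area"
W2 = 0.01 * rng.standard_normal((8, 7))   # hidden -> output "area"
lr = 0.05

for epoch in range(1, 2001):
    H = X @ W1                      # hidden-layer representation of each item
    err = H @ W2 - Y                # prediction error
    dW1 = X.T @ (err @ W2.T)        # backpropagated gradients
    dW2 = H.T @ err
    W1 -= lr * dW1
    W2 -= lr * dW2
    if epoch in (100, 500, 2000):   # checkpoints early and late in learning
        loss = 0.5 * np.sum(err ** 2)
        print(f"epoch {epoch:4d}  loss {loss:8.4f}")
        print(np.round(H @ H.T, 2))  # item-by-item similarity in the hidden layer
```

Comparing the printed similarity matrices at early and late checkpoints gives a simple picture of how internal representations reorganise over the course of learning, the kind of dynamics our theoretical work aims to characterise.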

2.    Learning with prior knowledge

What we already know changes how we learn. We are investigating the dynamics of learning new tasks in a neural network that already has pre-existing knowledge. The sensitivity of learning dynamics to prior knowledge is of fundamental importance to a variety of empirical phenomena, including curriculum effects, schema-consistent learning, continual learning, and meta-learning.
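
For illustration only (again, not our research code), the sketch below pretrains the same kind of small deep linear network on one toy task and then measures how many gradient-descent epochs it needs to learn a related task, compared with learning that task from a small random initialisation. The tasks, network sizes, learning rate, and loss threshold are assumptions chosen for the example.

```python
# Illustrative sketch only: pretrain a small deep linear network on one toy
# task, then compare how many gradient-descent epochs it takes to learn a
# related task from the pretrained weights versus from a small random
# initialisation. Tasks, sizes, learning rate, and threshold are assumptions.
import numpy as np

rng = np.random.default_rng(1)
X = np.eye(4)                                   # four one-hot "items"

# Tasks A and B share category structure (first three output features) but
# assign the item-specific features (last four) differently.
shared = np.array([[1, 1, 0], [1, 1, 0], [1, 0, 1], [1, 0, 1]], dtype=float)
Y_A = np.hstack([shared, np.eye(4)])
Y_B = np.hstack([shared, np.eye(4)[::-1]])

def small_init():
    return 0.01 * rng.standard_normal((4, 8)), 0.01 * rng.standard_normal((8, 7))

def train(Y, W1, W2, lr=0.05, tol=1e-2, max_epochs=5000):
    """Full-batch gradient descent; returns (epochs to reach `tol` loss, W1, W2)."""
    for epoch in range(1, max_epochs + 1):
        H = X @ W1
        err = H @ W2 - Y
        if 0.5 * np.sum(err ** 2) < tol:
            return epoch, W1, W2
        dW1 = X.T @ (err @ W2.T)
        dW2 = H.T @ err
        W1 -= lr * dW1
        W2 -= lr * dW2
    return max_epochs, W1, W2

# Pretrain on task A, then learn task B starting from that prior knowledge...
W1, W2 = small_init()
_, W1, W2 = train(Y_A, W1, W2)
epochs_with_prior, _, _ = train(Y_B, W1.copy(), W2.copy())

# ...and learn task B from scratch for comparison.
epochs_from_scratch, _, _ = train(Y_B, *small_init())

print("epochs to learn task B with prior knowledge:", epochs_with_prior)
print("epochs to learn task B from scratch:        ", epochs_from_scratch)
```

Varying how the two tasks relate gives a simple handle on how prior knowledge can speed up, slow down, or otherwise reshape subsequent learning.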

3.    Learning with nonlinearity and context

Living in our world requires responding differently to the same sensory input depending on context. We are investigating learning dynamics in nonlinear neural networks solving rich tasks. Nonlinearity is essential to nearly all real-world tasks, and our approach aims to shed light on the neural representations underlying task switching, multitasking, and abstraction.
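
As a purely illustrative sketch (not our research code), the example below trains a small one-hidden-layer ReLU network by gradient descent to report one of two sensory features depending on a context cue, so that the same sensory input demands different responses in different contexts. The task, architecture, and hyperparameters are arbitrary choices for the example.

```python
# Illustrative sketch only: a small one-hidden-layer ReLU network is trained
# by gradient descent to report sensory feature 1 in context A and feature 2
# in context B, so the same sensory input requires different responses in
# different contexts. Task, architecture, and hyperparameters are assumptions.
import numpy as np

rng = np.random.default_rng(2)

# Inputs: two sensory features plus a one-hot context cue.
X, y = [], []
for x1 in (0.0, 1.0):
    for x2 in (0.0, 1.0):
        X.append([x1, x2, 1.0, 0.0]); y.append(x1)   # context A: report feature 1
        X.append([x1, x2, 0.0, 1.0]); y.append(x2)   # context B: report feature 2
X, y = np.array(X), np.array(y)[:, None]

n_hidden = 32
W1 = rng.standard_normal((4, n_hidden)) * np.sqrt(2.0 / 4)          # input  -> hidden
b1 = np.zeros(n_hidden)
W2 = rng.standard_normal((n_hidden, 1)) * np.sqrt(2.0 / n_hidden)   # hidden -> output
b2 = np.zeros(1)
lr = 0.02

for epoch in range(20000):
    Z = X @ W1 + b1
    H = np.maximum(Z, 0.0)                 # ReLU hidden representation
    err = (H @ W2 + b2) - y                # prediction error
    dout = 2.0 * err / len(X)              # gradient of mean squared error
    dW2, db2 = H.T @ dout, dout.sum(0)
    dZ = (dout @ W2.T) * (Z > 0)           # backpropagate through the ReLU
    dW1, db1 = X.T @ dZ, dZ.sum(0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("final training loss:", float(np.mean(err ** 2)))

def respond(x1, x2, context):
    cue = [1.0, 0.0] if context == "A" else [0.0, 1.0]
    h = np.maximum(np.array([x1, x2, *cue]) @ W1 + b1, 0.0)
    return (h @ W2 + b2).item()

# Query the trained network with the same sensory input under both contexts.
print("input (1, 0) in context A ->", round(respond(1.0, 0.0, "A"), 3))
print("input (1, 0) in context B ->", round(respond(1.0, 0.0, "B"), 3))
```

The final print statements query the trained network with an identical sensory input under the two contexts, a minimal analogue of context-dependent responding.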
 

Andrew Saxe
Joint Group Leader at GCNU and SWC
Tyler Boyd-Meredith
Senior Research Fellow
Erin Grant
Senior Research Fellow
Cristofer Holobetz
PhD Student
Jin Hwa Lee
PhD Student
Sebastian Lee
PhD Student, Imperial College
Nishil Patel
Research Assistant
Victor Pedrosa
Senior Research Fellow
Rachel Swanson
Senior Research Fellow
Selected Publications

If deep learning is the answer, what is the question?

Saxe A, Nelli S, Summerfield C
Published by:
Nature Reviews Neuroscience (22: 55–67) (doi: 10.1038/s41583-020-00395-8)
16 November 2020

High-dimensional dynamics of generalization error in neural networks

Advani MS, Saxe AM, Sompolinsky H
Published by:
Neural Networks (132: 428–446) (doi: 10.1016/j.neunet.2020.08.022)
5 September 2020

A mathematical theory of semantic development in deep neural networks

Saxe AM, McClelland JL, Ganguli S
Published by:
PNAS (116(23): 11537–11546) (doi: 10.1073/pnas.1820226116)
17 May 2019

Exact solutions to the nonlinear dynamics of learning in deep linear neural networks

Saxe AM, McClelland JL, Ganguli S
Published by:
arXiv (doi: 10.48550/arXiv.1312.6120)
20 December 2013