Replaying the past, predicting the future: A new model of the hippocampus

21 March 2025

We spend about one-third of our lives fast asleep, or trying to get to sleep. Without sleep, we would die. But why? What happens in the brain when we close our eyes and drift into slumber? The precise answers remain unknown. As recently as the 1990s, the Harvard sleep researcher Robert Stickgold recalled his former collaborator J. Allan Hobson joking that ‘the only known function of sleep is to cure sleepiness’.

But things are changing. In the last 20 years, studies have shown that memory consolidation, the sifting of our experiences, and learning all happen while we snooze.

One phenomenon, known as replay, has been observed in many species during sleep. It occurs in various brain regions, including the hippocampus – a small, seahorse-shaped structure – and involves the re-occurrence of sequences of cell activity that originally took place during wakefulness.

Dr Daniel Levenstein, a postdoctoral researcher at McGill University in Canada, recently spoke at SWC as part of the Emerging Neuroscientists Seminar Series. Using computational models, he aims to understand how replay of waking experience is generated during sleep, and how it is used to consolidate memories.

He hopes that mimicking this process in artificial neural networks can benefit both neuroscience and AI.

Can you describe your new theory that explains how representation and replay occur in the hippocampus?

The hippocampus is essential for seemingly unrelated functions: spatial navigation, imagination, memory, and planning. A single explanation for how it supports all of them hasn't been clear.

Predictive learning – the process of anticipating future sensory inputs – offers a promising framework to explain how the hippocampus works. Computational models suggest that learning to predict incoming information can lead to the formation of spatially tuned cells, similar to the place cells seen in the hippocampus, which activate when we are in a specific location in the environment.
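
To make that concrete, here is a minimal sketch of the predictive objective in PyTorch: a network takes the current observation and action and is trained to output the next observation. The dimensions and the simple feedforward architecture are illustrative assumptions, not the model from the paper.

    import torch
    import torch.nn as nn

    obs_dim, act_dim, hidden = 49, 4, 128        # e.g. a flattened 7x7 visual field

    # A simple feedforward predictor: (observation, action) -> next observation
    predictor = nn.Sequential(
        nn.Linear(obs_dim + act_dim, hidden),
        nn.Tanh(),
        nn.Linear(hidden, obs_dim),
    )
    optimiser = torch.optim.Adam(predictor.parameters(), lr=1e-3)

    def predictive_step(o_t, a_t, o_next):
        """One gradient step on the next-observation prediction error."""
        pred = predictor(torch.cat([o_t, a_t], dim=-1))
        loss = nn.functional.mse_loss(pred, o_next)
        optimiser.zero_grad()
        loss.backward()
        optimiser.step()
        return loss.item()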

During offline, or sleep-like, states, activity such as replay may then be driven by predictive mechanisms that simulate potential future experiences.

Continuous attractor neural networks (CANNs) provide another perspective. These models propose that the hippocampus maintains a structured representation of space, even without sensory input. However, traditional CANN models do not fully explain how the hippocampus learns spatial maps from experience.
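
For intuition, a toy continuous attractor can be written in a few lines of NumPy: a ring of neurons with local excitation and broad inhibition sustains a localised bump of activity even after input is removed. All parameters here are illustrative, not taken from any specific hippocampal model.

    import numpy as np

    N = 100                                        # neurons arranged on a ring
    theta = np.linspace(0, 2 * np.pi, N, endpoint=False)

    # Distance on the ring, then local excitation with broad inhibition
    d = np.abs(theta[:, None] - theta[None, :])
    d = np.minimum(d, 2 * np.pi - d)
    W = np.exp(-d ** 2 / 0.5) - 0.3

    r = np.exp(-((theta - np.pi) ** 2) / 0.1)      # initial bump of activity
    for _ in range(200):                           # evolve with no external input
        r = np.maximum(0.0, W @ r)                 # recurrent drive, rectified
        r /= r.sum()                               # global normalisation

    print("bump centre:", theta[np.argmax(r)])     # the bump sustains itself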

A third perspective is that the hippocampus is a sequence generator. This view is mostly based on observations of spiking sequences in the rodent hippocampus during theta oscillations, and was really championed by my PhD lab. However, models in this tradition have focused on biophysics and physiology, and have lacked computational depth.

We studied whether sequential predictive learning can unify these views. To do this, we trained artificial neural networks, specifically recurrent neural networks (RNNs), to predict sensory sequences in a simulated environment. 

Our results showed that sequential predictive learning could lead to a map-like representation of space, though only under certain conditions. First, there had to be recurrent connections (where past information influenced future predictions). Second, there had to be information about head direction (which way the animal was facing). And finally, the model had to predict not just the next moment but several steps into the future.
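
As a hedged sketch, all three ingredients could be combined in a recurrent network like the one below (PyTorch). The GRU core, layer sizes, and the multi-step loss are assumptions for illustration rather than the published implementation.

    import torch
    import torch.nn as nn

    class PredictiveRNN(nn.Module):
        def __init__(self, obs_dim=49, act_dim=4, hd_dim=4, hidden=256, horizon=5):
            super().__init__()
            self.horizon, self.obs_dim = horizon, obs_dim
            # Recurrence: past information influences future predictions
            self.rnn = nn.GRU(obs_dim + act_dim + hd_dim, hidden, batch_first=True)
            # One readout per future step: predict several steps ahead, not just one
            self.readout = nn.Linear(hidden, horizon * obs_dim)

        def forward(self, obs, act, head_dir):
            # Head direction enters as an explicit input alongside vision and action
            x = torch.cat([obs, act, head_dir], dim=-1)
            h, _ = self.rnn(x)
            B, T, _ = obs.shape
            return self.readout(h).view(B, T, self.horizon, self.obs_dim)

    def multistep_loss(model, obs, act, head_dir):
        """Prediction error averaged over 1..horizon steps into the future."""
        pred = model(obs, act, head_dir)
        losses = [nn.functional.mse_loss(pred[:, :-k, k - 1], obs[:, k:])
                  for k in range(1, model.horizon + 1)]
        return torch.stack(losses).mean()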

This model could also generate realistic movement paths during rest, including paths the agent had never actually taken, similar to the replay that happens in the hippocampus during sleep.
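
One way to picture this offline mode, continuing the PredictiveRNN sketch above: cut off real sensory input and let the trained network run on its own one-step predictions, driven by noise. Feeding predictions back in this way is an assumption made for illustration; the offline dynamics in the actual model may be generated differently.

    import torch

    @torch.no_grad()
    def offline_rollout(model, seed_obs, steps=50):
        """Run the trained network on its own outputs, with no real input."""
        obs, h = seed_obs.view(1, 1, -1), None
        trajectory = []
        for _ in range(steps):
            act = torch.randn(1, 1, 4)               # spontaneous "action" noise
            hd = torch.randn(1, 1, 4)                # spontaneous heading noise
            x = torch.cat([obs, act, hd], dim=-1)
            out, h = model.rnn(x, h)                 # recurrent state carries over
            pred = model.readout(out).view(1, 1, model.horizon, model.obs_dim)
            obs = pred[:, :, 0]                      # feed the 1-step prediction back
            trajectory.append(obs.squeeze())
        return torch.stack(trajectory)               # a replay-like sequence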

Our findings unify existing theories about these functions for the first time.

(Left) The agent moves around the gridworld environment with a visual field corresponding to the 7x7 grid of tiles in front of it. Grey tiles represent impassable walls, while coloured floor tiles provide visual cues. (Right) A predictive RNN (pRNN) is trained to use the current observation and action to predict the observation in the next timestep.
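
A toy stand-in for such an environment might look like the following NumPy sketch; the grid size, zero-padded walls, centred egocentric view, and action set are assumptions chosen only to loosely match the caption.

    import numpy as np

    class ToyGridWorld:
        def __init__(self, size=15, view=7):
            self.size, self.view = size, view
            self.tiles = np.random.rand(size, size)      # coloured floor cues
            self.pos = np.array([size // 2, size // 2])  # agent position
            self.hd = 0                                  # heading: 0..3 = N, E, S, W

        def observe(self):
            """A flattened 7x7 patch around the agent, rotated into an egocentric
            frame (a simplification of the forward-facing view in the figure)."""
            pad, half = self.view, self.view // 2
            padded = np.zeros((self.size + 2 * pad, self.size + 2 * pad))
            padded[pad:-pad, pad:-pad] = self.tiles      # zero padding acts as walls
            r, c = self.pos + pad
            patch = padded[r - half:r + half + 1, c - half:c + half + 1]
            return np.rot90(patch, self.hd).ravel()      # 49-dim observation

        def step(self, action):
            """Actions 0-3: turn left, turn right, move forward, stay still."""
            if action == 0:
                self.hd = (self.hd - 1) % 4
            elif action == 1:
                self.hd = (self.hd + 1) % 4
            elif action == 2:
                moves = np.array([(-1, 0), (0, 1), (1, 0), (0, -1)])
                nxt = self.pos + moves[self.hd]
                if np.all((nxt >= 0) & (nxt < self.size)):  # outer walls impassable
                    self.pos = nxt
            return self.observe()
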
What were the main challenges that you faced when you were building these networks to simulate the brain’s processes?

Before we landed on the main result, which was that sequence-like learning is really important, we were stuck.

We had a network that was learning to predict, and was developing place cells, but it was not developing replay. My initial thought was that prediction alone would be sufficient – this was based on the work that had inspired us.

We were stuck for quite a while until I went back to look at the physiology. What is the hippocampus actually doing? That's what gave me the idea to start looking at sequence prediction.

At the end of the day, I'm trying to explain something in physiology, and how it relates to computation. When computation isn't working, I say OK, what am I missing from the biology of the brain? You can’t include everything, but it’s a good way to think about what you might be missing from an algorithm.

How do you think your findings about hippocampal function could be applied to real-world problems like improving memory or understanding neurological conditions where the hippocampus is affected?

There's a lot of evidence that many psychiatric conditions involve sleep disruption. Everything from autism to schizophrenia seems to show quite specific sleep disturbances. In a lot of cases, it seems to involve disrupted communication between the hippocampus and the cortex. My hope is that if we were to better understand what the hippocampus is doing during sleep, that would help us understand what's going wrong in these conditions.

How might your research impact the development of AI systems?

There are a couple of things current AI systems are really bad at. One is that they're very data-hungry, and the other is that they have a lot of trouble generalising outside of the data they've seen. These things are related. The reason they need more data is that anything we want them to be able to cover has to be in the training data.

The way the brain seems to solve this problem is by using sleep. So instead of collecting tonnes and tonnes of days' worth of data, we can collect a few days’ worth of data on a task, and then practise it during sleep. 

We use sleep to extrapolate from the specific instances we've encountered to more general cases. And so the hope is that if we can mimic how the brain does this, we can make AI more data efficient: use less data to train the network, then put it in a sleep-like state and let it keep learning.
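
In code, such a "sleep phase" might look like the sketch below, in the spirit of Dyna-style model-based learning, reusing offline_rollout from the earlier sketch: a task network keeps training on sequences the world model generates offline. Everything here, including task_loss, is a hypothetical placeholder rather than a published recipe.

    def sleep_phase(world_model, task_net, task_optim, seed_obs, n_dreams=100):
        """Hypothetical sleep: keep learning from self-generated sequences."""
        for _ in range(n_dreams):
            dream = offline_rollout(world_model, seed_obs)  # no new real data used
            loss = task_loss(task_net, dream)               # task_loss: placeholder
            task_optim.zero_grad()
            loss.backward()
            task_optim.step()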

What are the next steps in your research?

We didn't have any learning during sleep in our models; we just studied how activity was generated in the offline mode. So the next big thing is: if the network can learn during sleep, how does that change the activity?

The other thing is the interaction between the hippocampus and the rest of the brain. My model is only about the hippocampus. But most theories about learning during sleep involve interaction between the hippocampus and other parts of the brain.  

The hippocampus is a unique place. Its relationship to memory, navigation and sleep is something really special.

About Daniel

Dan did his PhD with György Buzsáki and John Rinzel at NYU, and is now a postdoc with Blake Richards and Adrien Peyrache. His work focuses on the generation and use of offline, or “spontaneous”, activity to support learning in biological and artificial neural networks. He uses biologically inspired neural network models and neural data analysis, and works closely with experimental collaborators. He also has a strong interest in the philosophy of science and how it can be applied to scientific practice – most notably the use of computational models in neuroscience, and how mechanistic and normative approaches can work together to understand neural systems.