Predictive Attractor Models

Ramy Mounir, Sudeep Sarkar

NeurIPS 2024

Abstract

We propose PAM, a novel sequence memory architecture with desirable generative properties. PAM is a streaming model that learns a sequence in an online, continuous manner by observing each input only once. Additionally, we find that PAM avoids catastrophic forgetting by uniquely representing past context through lateral inhibition in cortical minicolumns, which prevents new memories from overwriting previously learned knowledge. PAM generates future predictions by sampling from a union set of predicted possibilities; this generative ability is realized through an attractor model trained alongside the predictor. We show that PAM is trained with local computations through Hebbian plasticity rules in a biologically plausible framework. Other desirable traits (e.g., noise tolerance, CPU-based learning, capacity scaling) are discussed throughout the paper.
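
The two mechanisms the abstract names, online Hebbian learning of transitions and prediction as a union of possibilities, can be sketched in a few lines. The sketch below is illustrative only and is not the paper's implementation: the binary sparse codes, the specific learning rule, and all names (hebbian_update, predict_union, theta) are our assumptions.

import numpy as np

# Illustrative Hebbian transition learning between sparse binary states.
# This is an assumed stand-in for exposition, not PAM's actual equations.
def hebbian_update(W, prev, curr, lr=1.0):
    """One-shot, purely local update: strengthen synapses from active
    bits of the previous state to active bits of the current state."""
    W += lr * np.outer(curr, prev)        # post * pre, no global error signal
    return np.clip(W, 0.0, 1.0)

def predict_union(W, state, theta=0.5):
    """Union of predicted next states: every bit whose summed input
    from the current state crosses the threshold theta."""
    return (W @ state >= theta).astype(float)

n = 64
def sdr(bits):                            # toy sparse distributed code
    v = np.zeros(n)
    v[list(bits)] = 1.0
    return v

A, B, C = sdr({1, 2, 3}), sdr({10, 11, 12}), sdr({20, 21, 22})

W = np.zeros((n, n))
for prev, curr in [(A, B), (A, C)]:       # stream A->B, then A->C, seen once each
    W = hebbian_update(W, prev, curr)

union = predict_union(W, A)               # both B and C remain possible after A
assert union[10] == 1.0 and union[20] == 1.0

Because every update touches only a pre/post synapse pair, learning the second transition A->C adds to the prediction for A rather than overwriting A->B, which is the flavor of non-destructive continual learning the abstract describes.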

Approach

Sequence Generation

(Left) Offline generation: a single possibility (i.e., an attractor point) is sampled from a union of predicted possibilities. (Right) Online generation: noise is removed from an observation using prior beliefs about the observed state. A Markov blanket separates the agent's latent variables from the world's observable states.
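
Both generation modes in the caption can be illustrated with a classical Hopfield-style attractor network as a stand-in for PAM's attractor model. This is a hedged sketch under our own assumptions (plus/minus-one patterns, outer-product storage, invented names store and settle), not the paper's method.

import numpy as np

# Stand-in attractor memory: classical Hopfield outer-product storage
# of +/-1 patterns, used only to illustrate the two generation modes.
def store(patterns):
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:                    # Hebbian outer-product storage
        W += np.outer(p, p)
    np.fill_diagonal(W, 0.0)
    return W / n

def settle(W, x, steps=20, rng=None):
    """Iterate sign(Wx) with a tiny random tie-break; from a union state
    this samples one stored attractor, from a noisy state it denoises."""
    rng = rng or np.random.default_rng()
    for _ in range(steps):
        x = np.sign(W @ x + 1e-3 * rng.standard_normal(len(x)))
    return x

rng = np.random.default_rng(1)
n = 64
B = rng.choice([-1.0, 1.0], size=n)       # two stored possibilities
C = rng.choice([-1.0, 1.0], size=n)
W = store(np.stack([B, C]))

union = np.sign(B + C + 1e-6)             # ambiguous "union" of B and C
sample = settle(W, union, rng=rng)        # offline: dynamics commit to B or C

noisy = np.where(rng.random(n) < 0.1, -B, B)  # observation with 10% bits flipped
clean = settle(W, noisy, rng=rng)         # online: settles back to B (denoising)

Starting from the union, the tie-break noise breaks the symmetry between the stored patterns, so repeated runs sample different possibilities; starting from a corrupted observation, the same dynamics pull the state back to the nearest stored pattern.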

Acknowledgements

This research was supported by the US National Science Foundation Grants CNS 1513126 and IIS 1956050.

Citation

@inproceedings{pam,
  title = {Predictive Attractor Models},
  author = {Ramy Mounir and Sudeep Sarkar},
  booktitle = {Advances in Neural Information Processing Systems},
  year = {2024}
}