Entropy-Aware Memory Systems for Continual Learning: Balancing Neuroplasticity and Stability Under Stochastic Workloads
Problem Workspace
Problem Statement
Continual learning systems must stay plastic enough to acquire new tasks yet stable enough to avoid catastrophic forgetting, and this tradeoff is increasingly constrained by memory behavior rather than by compute alone. This project adopts a unified data-access perspective that treats deterministic retrieval and stochastic sampling within a single framework, so that memory support for both adaptation and retention can be analyzed jointly. We will design and evaluate memory mechanisms that explicitly manage stochastic demand, entropy pressure (the drift of access patterns toward high-randomness, cache-unfriendly distributions), and replay/update efficiency across task sequences. The expected outcome is a clearer systems-level blueprint for continual learning stacks that improve retention and adaptation while remaining robust and efficient on realistic hardware.
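To make the unified data-access perspective concrete, the sketch below shows one minimal way a single buffer could expose both a deterministic retrieval path and a stochastic sampling path, while tracking the entropy of its own replay distribution. All names here (`EntropyAwareReplayBuffer`, the inverse-count weighting) are illustrative assumptions, not the project's actual design.

```python
import math
import random
from collections import defaultdict

class EntropyAwareReplayBuffer:
    """Illustrative sketch: one memory structure serving both a
    deterministic retrieval path (get by key) and a stochastic
    sampling path (replay), with sampling biased toward
    under-replayed items to keep the replay distribution near
    maximum entropy."""

    def __init__(self, seed=0):
        self.items = {}                       # key -> payload
        self.sample_counts = defaultdict(int) # key -> times replayed
        self.rng = random.Random(seed)

    def put(self, key, payload):
        self.items[key] = payload

    def get(self, key):
        # Deterministic retrieval path.
        return self.items[key]

    def sample(self):
        # Stochastic path: weight each item inversely to how often it
        # has already been replayed, so rarely-seen items are favoured.
        keys = list(self.items)
        weights = [1.0 / (1 + self.sample_counts[k]) for k in keys]
        key = self.rng.choices(keys, weights=weights, k=1)[0]
        self.sample_counts[key] += 1
        return key, self.items[key]

    def sampling_entropy(self):
        # Shannon entropy (in nats) of the empirical replay
        # distribution; higher means replay is spread more evenly.
        total = sum(self.sample_counts.values())
        if total == 0:
            return 0.0
        return -sum((c / total) * math.log(c / total)
                    for c in self.sample_counts.values() if c > 0)
```

Under this toy policy, repeated sampling over a fixed set of items drives the empirical replay distribution toward uniform, so `sampling_entropy()` approaches `log(n)` for `n` stored items; a real system would additionally weight by task recency or loss.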
Execution Plan