Core repositories in the EverMind ecosystem. EverOS packages the methods, benchmarks, and use cases for building self-evolving agents, while MSA focuses on the model architecture that makes extreme-scale memory practical.
| Repository | Description |
| --- | --- |
| **EverOS** | An open framework for building self-evolving agents with persistent long-term memory. It brings together memory architectures, open benchmarks, and real use cases so teams can build, evaluate, and integrate memory systems in one place. |
| **MSA** | A research repository for Memory Sparse Attention, an end-to-end trainable latent-memory framework that scales long-context reasoning to 100M tokens. It combines sparse routing, KV-cache compression, and memory-parallel inference to keep retrieval and generation in a single differentiable system. |
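The sparse-routing idea behind MSA can be illustrated with a toy read over a latent memory bank. This is a minimal sketch, not the repository's implementation; the function and parameter names are illustrative assumptions:

```python
import math

def sparse_memory_attention(query, mem_keys, mem_values, k=4):
    """Route the query to the top-k most relevant memory slots, then
    attend over only those slots, so the cost of a memory read scales
    with k rather than with the full memory size."""
    d = len(query)
    # Similarity of the query to every latent memory key (scaled dot product).
    scores = [sum(q * x for q, x in zip(query, key)) / math.sqrt(d)
              for key in mem_keys]
    # Sparse routing: keep only the k highest-scoring slots.
    top = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]
    # Softmax over the selected slots only.
    peak = max(scores[i] for i in top)
    weights = [math.exp(scores[i] - peak) for i in top]
    total = sum(weights)
    weights = [w / total for w in weights]
    # Weighted read-out of the selected memory values.
    dim = len(mem_values[0])
    return [sum(w * mem_values[i][j] for w, i in zip(weights, top))
            for j in range(dim)]
```

Because only k slots enter the softmax and read-out, the per-token cost stays flat as the memory grows, which is what makes very long contexts tractable.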
Methods are the memory architectures you can choose from: production-ready implementations that give agents persistent, structured long-term memory. Pick the one that fits your use case, or compose several together.
- A self-organizing memory operating system inspired by biological imprinting. It extracts, structures, and retrieves long-term knowledge from conversations, enabling agents to remember, understand, and continuously evolve. (LoCoMo 93.05% · LongMemEval 83.00%)
- A hypergraph-based hierarchical memory architecture that captures high-order associations through hyperedges. It organizes memory into topic, event, and fact layers for coarse-to-fine retrieval over long conversations. (LoCoMo 92.73%)
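The coarse-to-fine retrieval over topic, event, and fact layers can be sketched as a layered lookup. This sketch uses simple term overlap in place of real relevance scoring and omits the hyperedge associations; all names are illustrative, not the method's actual API:

```python
def score(text, query_terms):
    """Crude relevance signal: number of query terms appearing in the text.
    (A real system would use embeddings; term overlap keeps the sketch small.)"""
    words = set(text.lower().split())
    return sum(1 for term in query_terms if term in words)

def coarse_to_fine(memory, query, top_topics=1, top_events=1):
    """memory maps topic -> event -> list of facts, mirroring the
    topic/event/fact layers. Retrieval narrows one layer at a time."""
    terms = set(query.lower().split())
    # Coarse: keep only the most relevant topics.
    topics = sorted(memory, key=lambda t: score(t, terms),
                    reverse=True)[:top_topics]
    hits = []
    for topic in topics:
        # Medium: keep the most relevant events within each surviving topic.
        events = sorted(memory[topic], key=lambda e: score(e, terms),
                        reverse=True)[:top_events]
        for event in events:
            # Fine: rank the facts under the surviving events.
            hits.extend(sorted(memory[topic][event],
                               key=lambda f: score(f, terms), reverse=True))
    return hits
```

Pruning at the topic and event layers keeps the fine-grained fact ranking cheap even when the conversation history is long.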
Benchmarks are designed as open public standards, so any memory architecture or agent framework can be evaluated against the same yardstick.
- Three-layer memory-quality evaluation: factual recall, applied reasoning, and personalized generalization. Evaluates memory systems and LLMs under one unified standard.
- Agent self-evolution evaluation: longitudinal growth curves rather than static snapshots. Measures transfer efficiency, error avoidance, and skill-hit quality through controlled experiments run with and without evolution.
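The with/without-evolution comparison amounts to running the same task sequence twice and recording per-session accuracy as a growth curve. A minimal harness sketch, assuming a hypothetical `agent_step(task, memory) -> (correct, lesson)` interface that is not the benchmark's actual API:

```python
def evaluate(agent_step, sessions, evolve=False):
    """Return per-session accuracy. Each session is a list of tasks;
    only the evolving run carries lessons forward between sessions."""
    memory, curve = [], []
    for tasks in sessions:
        correct = 0
        for task in tasks:
            ok, lesson = agent_step(task, memory)
            correct += ok
            if evolve and lesson:
                memory.append(lesson)  # self-evolution: retain the lesson
        curve.append(correct / len(tasks))
    return curve
```

Comparing the two curves separates genuine longitudinal growth (transfer across sessions) from a static snapshot of single-session performance.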
