The AI that remembers everything.
Not a chatbot. An agent — it searches the web, executes commands, manages its own memory, and evolves its personality over time based on your interactions.
New in v2.7 — inter-agent memory exchange

⟩ What It Does
Private by default, with no cloud dependency. LLM, Matrix homeserver, search engine, vector database — everything runs on hardware you control, and your data never leaves your machine.
Daily journals with semantic search (RAG) powered by on-device embeddings. A curated core memory plus chronological logs — it actually remembers.
Auto-discovered tools with an iterative reasoning loop. Web search, file management, git, downloads — or drop in your own as a .py file.
Run local models via Ollama or tap into 200+ cloud models through OpenRouter. Switch providers with a single config change.
Identity files that Memtrix reads, understands, and updates over time. Its personality and its knowledge of you grow with every conversation.
Create specialist sub-agents with their own identity, memory, and Matrix presence. Agents can consult each other autonomously.
Communicates through the Matrix protocol. Use Element, FluffyChat, or any Matrix client. Each room gets its own session.
Non-root container, read-only filesystem, all capabilities dropped, isolated workspaces. Human-in-the-loop for destructive operations.
Built-in SearXNG instance for privacy-respecting web searches. Fetch, read, and summarize any URL — all self-hosted.
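SearXNG exposes a JSON search API (`/search?q=…&format=json`, which must be enabled in SearXNG's `settings.yml`). A minimal sketch of querying the bundled instance from Python; the localhost port and helper names here are assumptions for illustration, not Memtrix's actual interface:

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

SEARXNG_URL = "http://localhost:8080"  # assumption: default port of the bundled instance


def build_search_url(query: str) -> str:
    """Build a SearXNG JSON API query URL."""
    return f"{SEARXNG_URL}/search?{urlencode({'q': query, 'format': 'json'})}"


def search(query: str, limit: int = 5) -> list[dict]:
    """Query the self-hosted SearXNG instance and return title/url pairs.

    Requires `format: json` to be allowed in SearXNG's settings.yml.
    """
    with urlopen(build_search_url(query)) as resp:
        results = json.load(resp)["results"]
    return [{"title": r["title"], "url": r["url"]} for r in results[:limit]]
```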
⟩ Quick Start
Clone, configure, launch. Memtrix handles the rest.
Then open Element → connect to localhost:6167 → invite @memtrix:memtrix.local to a room.
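The stack could be wired together with a compose file along these lines; service names, images, and layout are illustrative assumptions, not the project's actual configuration — only the homeserver port matches the Quick Start above:

```yaml
# Hypothetical sketch, not the project's real compose file.
services:
  homeserver:
    # lightweight Matrix homeserver, exposed on the port
    # that Element connects to in the Quick Start
    ports:
      - "6167:6167"
  agent:
    # the Python agent itself
    build: .
    depends_on:
      - homeserver
```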
⟩ Architecture
Every piece runs locally in Docker. No external services required.
Python agent — orchestrates LLM calls, tool execution, memory operations, sessions, and sub-agent management.
Lightweight Matrix homeserver. Local-only, no federation — your messages stay on your machine.
SearXNG — privacy-respecting metasearch engine for web access. Self-hosted, no tracking, no data leaks.
Embedded vector database for semantic memory search. Powers the RAG pipeline with on-device embeddings.
Ollama — local LLM inference. Run models like Llama, Mistral, and Gemma entirely on your hardware.
OpenRouter — cloud LLM gateway. Access 200+ models from OpenAI, Anthropic, Google, and more when you need scale.
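Both Ollama and OpenRouter expose OpenAI-compatible chat endpoints, which is what makes a single-config-change switch possible: only the base URL and API key differ. A minimal sketch, assuming a simple provider table (the config keys are illustrative, not Memtrix's actual schema):

```python
# Provider table: Ollama serves an OpenAI-compatible API on its default
# port (11434); OpenRouter does the same at openrouter.ai/api/v1.
PROVIDERS = {
    "ollama": {
        "base_url": "http://localhost:11434/v1",
        "api_key_env": None,  # local inference needs no key
    },
    "openrouter": {
        "base_url": "https://openrouter.ai/api/v1",
        "api_key_env": "OPENROUTER_API_KEY",
    },
}


def resolve_provider(name: str) -> dict:
    """Return the connection settings for the configured provider."""
    if name not in PROVIDERS:
        raise ValueError(f"unknown provider: {name}")
    return PROVIDERS[name]
```

Switching providers is then a one-line config change: the rest of the agent only ever sees a base URL and a key.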
⟩ Built-in Tools
Every tool is a drop-in Python file. Add your own and Memtrix picks it up on the next restart.
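Directory-based tool discovery can be sketched in a few lines of stdlib Python. The convention below, each tool module exposing a `run` callable and taking its name from the file, is an assumption for illustration rather than Memtrix's documented contract:

```python
import importlib.util
from pathlib import Path


def discover_tools(tools_dir: str) -> dict:
    """Load every .py file in tools_dir and collect its `run` entry point.

    The file stem becomes the tool name, so dropping `weather.py` into
    the directory registers a `weather` tool on the next scan.
    """
    tools = {}
    for path in sorted(Path(tools_dir).glob("*.py")):
        spec = importlib.util.spec_from_file_location(path.stem, path)
        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)
        if callable(getattr(module, "run", None)):
            tools[path.stem] = module.run
    return tools
```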
⟩ Memory System
A curated brain for long-term knowledge, plus daily journals for everything else — all searchable via semantic embeddings.
MEMORY.md — A curated, compact summary of the most important long-term knowledge: key facts, recurring themes, lasting context. Memtrix actively maintains and prunes this file. Think of it as the brain.
memory/yyyy-mm-dd.md — Chronological, append-only logs of each day's conversations. Summaries, learned facts, decisions, tasks, and notes — searchable via RAG.
Self-host your own AI — private, persistent, and entirely under your control.