Memtrix

The AI that remembers everything.

Not a chatbot. An agent — it searches the web, executes commands, manages its own memory, and evolves its personality over time based on your interactions.

New v2.7 — Inter-agent memory exchange

Your personal AI, running on your hardware.

Private by default. No cloud dependency. LLM, homeserver, search engine, vector DB — everything on a machine you control.

🏠

Fully Self-Hosted

LLM, Matrix homeserver, search engine, vector database — everything runs on your hardware. Your data never leaves your machine.

🧠

Persistent Memory

Daily journals with semantic search (RAG) powered by on-device embeddings. A curated core memory plus chronological logs — it actually remembers.

🛠️

Agentic Tool System

Auto-discovered tools with an iterative reasoning loop. Web search, file management, git, downloads — or drop in your own as a .py file.
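A drop-in tool could look like the sketch below. The convention shown — each `.py` file exposing a `TOOL` dict with a name, description, and callable — is an assumption for illustration; the exact interface Memtrix's loader expects may differ.

```python
# tools/word_count.py — hypothetical drop-in tool.
# The TOOL dict convention is assumed, not Memtrix's documented interface.

def run(text: str) -> str:
    """Count the words in a piece of text."""
    return f"{len(text.split())} words"

# Metadata the loader could surface to the LLM when listing tools.
TOOL = {
    "name": "word_count",
    "description": "Counts the words in a given text.",
    "run": run,
}
```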

🔌

Multi-Provider LLMs

Run local models via Ollama or tap into 200+ cloud models through OpenRouter. Switch providers with a single config change.
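Because Ollama and OpenRouter both expose OpenAI-compatible APIs, a provider switch can amount to swapping a base URL. A minimal sketch — the config keys are illustrative, not Memtrix's actual schema, though the two endpoints shown are the real defaults for each service:

```python
# Hypothetical provider selection; the config structure is an assumption.
PROVIDERS = {
    "ollama": "http://localhost:11434/v1",        # local inference
    "openrouter": "https://openrouter.ai/api/v1", # 200+ cloud models
}

def resolve_endpoint(config: dict) -> str:
    """Return the OpenAI-compatible base URL for the configured provider."""
    return PROVIDERS[config["llm"]["provider"]]

endpoint = resolve_endpoint({"llm": {"provider": "ollama", "model": "llama3"}})
```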

👤

Evolving Persona

Identity files that Memtrix reads, understands, and updates over time. Its personality and knowledge of you grow with every conversation.

🤖

Multi-Agent System

Create specialist sub-agents with their own identity, memory, and Matrix presence. Agents can consult each other autonomously.

💬

Matrix Chat Protocol

Communicates through the Matrix protocol. Use Element, FluffyChat, or any Matrix client. Each room gets its own session.
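Per-room sessions boil down to keying conversation state by Matrix room ID. A minimal stdlib sketch — the class and field names are hypothetical, not Memtrix internals:

```python
from collections import defaultdict

class SessionManager:
    """Map each Matrix room ID to its own isolated message history."""

    def __init__(self):
        self._sessions = defaultdict(list)

    def append(self, room_id: str, role: str, content: str) -> None:
        self._sessions[room_id].append({"role": role, "content": content})

    def history(self, room_id: str) -> list:
        return self._sessions[room_id]

mgr = SessionManager()
mgr.append("!work:memtrix.local", "user", "status?")
mgr.append("!home:memtrix.local", "user", "play music")
# Each room's history stays isolated from the others.
```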

🔒

Security Hardened

Non-root container, read-only filesystem, all capabilities dropped, isolated workspaces. Human-in-the-loop for destructive operations.

🔍

Private Web Search

Built-in SearXNG instance for privacy-respecting web searches. Fetch, read, and summarize any URL — all self-hosted.

Up and running in minutes.

Clone, configure, launch. Memtrix handles the rest.

terminal — zsh
# Clone and enter the project
$ git clone https://github.com/nnxmms/Memtrix.git && cd Memtrix

# Create directories, build image, start Conduit
$ ./setup.sh

# Interactive wizard — configure LLM, model, channel
$ ./onboard.sh

# Launch everything
$ docker compose up -d

Then open Element → connect to localhost:6167 → invite @memtrix:memtrix.local to a room.

Built from purpose-built components.

Every piece runs locally in Docker. No external services required.

🧠

Memtrix Agent

Python agent — orchestrates LLM calls, tool execution, memory operations, sessions, and sub-agent management.

🏛️

Conduit

Lightweight Matrix homeserver. Local-only, no federation — your messages stay on your machine.

🔎

SearXNG

Privacy-respecting metasearch engine for web access. Self-hosted, no tracking, no data leaks.

📐

ChromaDB

Embedded vector database for semantic memory search. Powers the RAG pipeline with on-device embeddings.
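The idea behind semantic memory search can be shown with cosine similarity over embedding vectors. In practice the embeddings come from an on-device model and the search runs inside ChromaDB; the 3-dimensional vectors below are made up purely to illustrate the ranking:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy journal entries with made-up embedding vectors.
journal = {
    "Discussed the backup strategy": [0.9, 0.1, 0.0],
    "Planned the weekend hike":      [0.1, 0.8, 0.3],
}
query = [0.85, 0.15, 0.05]  # imagined embedding of "what did we decide about backups?"

# Retrieve the entry whose embedding is closest to the query.
best = max(journal, key=lambda text: cosine(query, journal[text]))
```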

🦙

Ollama

Local LLM inference. Run models like Llama, Mistral, Gemma — entirely on your hardware.

☁️

OpenRouter

Cloud LLM gateway. Access 200+ models from OpenAI, Anthropic, Google, and more when you need scale.

22 tools. Auto-discovered. Extensible.

Every tool is a drop-in Python file. Add your own and Memtrix picks it up on the next restart.
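Auto-discovery of that kind is typically done by scanning a directory and importing each module at startup. A minimal sketch using `importlib` — the directory layout and the `TOOL` attribute convention are assumptions, not Memtrix's actual loader:

```python
import importlib.util
from pathlib import Path

def discover_tools(tools_dir: str) -> dict:
    """Import every .py file in tools_dir and collect its TOOL metadata."""
    tools = {}
    for path in Path(tools_dir).glob("*.py"):
        spec = importlib.util.spec_from_file_location(path.stem, path)
        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)
        if hasattr(module, "TOOL"):  # convention assumed for this sketch
            tools[module.TOOL["name"]] = module.TOOL
    return tools
```

Restart-time pickup then reduces to calling `discover_tools()` once during startup and handing the resulting registry to the reasoning loop.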

Two-tier memory. Total recall.

A curated brain for long-term knowledge, plus daily journals for everything else — all searchable via semantic embeddings.

🧬

Core Memory

MEMORY.md — A curated, compact summary of the most important long-term knowledge. Key facts, recurring themes, lasting context. Memtrix actively maintains and prunes this file. Think of it as the brain.

📖

Daily Journals

memory/YYYY-MM-DD.md — Chronological, append-only logs of each day's conversations. Summaries, learned facts, decisions, tasks, and notes — searchable via RAG.
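The append-only journal pattern is simple to sketch: one Markdown file per day, entries appended with a timestamp. The path layout mirrors the description above; the exact entry format is an assumption:

```python
from datetime import date, datetime
from pathlib import Path

def append_journal(memory_dir: str, note: str) -> Path:
    """Append a timestamped note to today's journal file."""
    path = Path(memory_dir) / f"{date.today():%Y-%m-%d}.md"
    path.parent.mkdir(parents=True, exist_ok=True)
    with path.open("a") as f:  # append-only: existing entries are never rewritten
        f.write(f"- {datetime.now():%H:%M} {note}\n")
    return path
```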

Ready to meet your agent?

Self-host your own AI — private, persistent, and entirely under your control.