Overview

Memtrix

A self-hosted, privacy-first personal AI agent with persistent memory and agentic tool use. Not a chatbot — an agent.

Memtrix runs entirely on your hardware. It communicates through the Matrix protocol, remembers every conversation through semantic search, executes tools autonomously, and evolves its personality based on your interactions.

What is Memtrix?

Memtrix is a personal AI agent that lives in a Docker container on your machine. You chat with it through any Matrix client (Element, FluffyChat, etc.) and it can:

  • Search the web and fetch URLs
  • Manage files, clone repos, download content
  • Remember everything through daily journals and semantic search
  • Create specialist sub-agents with their own identity and memory
  • Update its own personality and knowledge of you over time

Who is it for?

Developers and power users who want a personal AI assistant they control completely — no cloud dependency, no data leaving your machine, no subscriptions.

What makes it different?

  • Self-hosted — LLM, homeserver, search engine, vector DB — everything on your hardware
  • Multi-provider — local models via Ollama or 200+ cloud models via OpenRouter
  • Persistent memory — two-tier memory with semantic search (RAG) powered by on-device embeddings
  • Agentic — iterative reasoning loop with 22 auto-discovered tools
  • Multi-agent — create specialist sub-agents that can consult each other
  • Security hardened — non-root, read-only filesystem, all capabilities dropped

What you need

  • Docker & Docker Compose — runs all services
  • Ollama or an OpenRouter API key — LLM inference
  • Element Desktop (or any Matrix client) — chat interface
💡
Memtrix downloads the embedding model (~100 MB) on first launch. Subsequent starts reuse the cached model in data/cache/.

Quick start

# Clone and enter the project
git clone https://github.com/nnxmms/Memtrix.git && cd Memtrix

# Create directories, build image, start Conduit
./setup.sh

# Interactive wizard — configure LLM, model, channel
./onboard.sh

# Launch everything
docker compose up -d

Then open Element → connect to http://localhost:6167 → log in → invite @memtrix:memtrix.local to a room.


Features

Everything Memtrix brings to the table — from persistent memory to multi-agent orchestration.

Fully Self-Hosted

Every component runs on your hardware. The LLM (via Ollama or OpenRouter), the Matrix homeserver (Conduit), the search engine (SearXNG), the vector database (ChromaDB) — nothing phones home.

Persistent Memory

Memtrix has a two-tier memory system:

  • Core Memory (MEMORY.md) — curated long-term knowledge that Memtrix actively maintains
  • Daily Journals (memory/yyyy-mm-dd.md) — chronological logs searchable via RAG embeddings

Semantic search is powered by nomic-embed-text-v1.5 running entirely on-device. No external API calls.

Agentic Tool System

22 built-in tools auto-discovered at startup. The orchestrator runs an iterative reasoning loop (up to 10 iterations) where the LLM can call tools, observe results, and continue reasoning. New tools are just Python files dropped into src/tools/.

Multi-Provider LLMs

Run local models via Ollama (Llama, Mistral, Gemma, etc.) or tap 200+ cloud models through OpenRouter (OpenAI, Anthropic, Google). Configure multiple providers and switch models per-agent.

Evolving Persona

Memtrix's identity is defined by markdown files (SOUL.md, BEHAVIOR.md, USER.md, MEMORY.md) that it reads, understands, and updates on its own over time. Its personality grows with every conversation.

Multi-Agent System

Create specialist sub-agents with their own identity, memory, Matrix presence, and workspace. Agents communicate via ask_agent — the main agent can consult sub-agents, and sub-agents can consult each other or the main agent.

Matrix Chat Protocol

Communicates through the Matrix protocol via a local Conduit homeserver. Use Element, FluffyChat, or any Matrix client. Each room maintains its own conversation session.

Security Hardened

Defense-in-depth: non-root container, read-only filesystem, all capabilities dropped, no shell access for the LLM, SSRF protection, human-in-the-loop for destructive operations, path traversal prevention, and prompt injection mitigation.

Private Web Search

Built-in SearXNG instance for privacy-respecting web searches. Fetch, read, and summarize any URL — all self-hosted, no tracking.

Message Reactions

Memtrix can react to your messages with emoji in Matrix — just like a human would. The LLM decides naturally when to react and which emoji to use.


Architecture

How the pieces fit together — every component runs locally in Docker.

                          ┌──────────────────┐
                          │  Element Desktop  │
                          │  (Matrix Client)  │
                          └────────┬─────────┘
                                   │
┌──────────────────────────────────┼──────────────────────┐
│  Docker Compose                  │                      │
│                                  │                      │
│  ┌───────────┐    ┌──────────────┴──┐    ┌───────────┐  │
│  │  Memtrix  │◄──►│    Conduit      │    │  SearXNG  │  │
│  │  (Agent)  │    │ (Matrix Server) │    │ (Search)  │  │
│  └─────┬─────┘    └─────────────────┘    └─────▲─────┘  │
│        │                                       │        │
│        ├───> Sub-Agents (background threads)   │        │
│        │     Each with own Matrix user         │        │
│        │                                       │        │
│        ├───────────────────────────────────────┘        │
│        │                                                │
│        ├──► ChromaDB (vector memory, per-agent)         │
│        │                                                │
└────────┼────────────────────────────────────────────────┘
         │
         ▼
    Ollama (LLM)  /  OpenRouter (cloud LLMs)

Components

  • Memtrix — Python agent; orchestrates LLM calls, tool execution, memory, sessions, sub-agents
  • Conduit — lightweight Matrix homeserver (local-only, no federation)
  • SearXNG — privacy-respecting metasearch engine for web access
  • ChromaDB — embedded vector database for semantic memory search
  • Ollama — local LLM inference (runs separately on the host)
  • OpenRouter — cloud LLM gateway; OpenAI, Anthropic, Google, and more

Tech Stack

  • Language — Python 3.13
  • LLM backend — Ollama, OpenRouter
  • Embeddings — nomic-embed-text-v1.5 (local, sentence-transformers)
  • Vector store — ChromaDB (embedded, persistent)
  • Communication — Matrix protocol (matrix-nio)
  • Homeserver — Conduit
  • Web search — SearXNG
  • Container — Docker (security-hardened)
  • TUI — Rich (onboarding wizard)

Message Flow

  1. You send a message in Element (or any Matrix client)
  2. Conduit receives it and delivers to Memtrix via matrix-nio
  3. Memtrix's orchestrator builds the system prompt (injecting persona files)
  4. The LLM is called with conversation history + tool schemas
  5. If the LLM requests tools → tools execute → results returned → loop continues (up to 10 iterations)
  6. Final response is sent back through Conduit to your Matrix client
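The loop in steps 4–5 can be sketched in Python. This is a minimal illustration, not Memtrix's actual code: call_llm and run_tool are hypothetical stand-ins, and only the 10-iteration cap comes from the source.

```python
# Hypothetical sketch of the iterative reasoning loop; call_llm and
# run_tool are placeholder callables, not Memtrix's real API.
MAX_ITERATIONS = 10  # cap stated in the docs

def reasoning_loop(call_llm, run_tool, history, tool_schemas):
    """Call the LLM, execute any requested tools, and loop until a final answer."""
    for _ in range(MAX_ITERATIONS):
        message = call_llm(history, tools=tool_schemas)
        if not message.get("tool_calls"):
            return message["content"]  # no tool requests: final answer
        for call in message["tool_calls"]:
            result = run_tool(call["name"], **call["arguments"])
            # feed the observation back so the LLM can continue reasoning
            history.append({"role": "tool", "name": call["name"], "content": result})
    return "Reached iteration limit without a final answer."
```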
First steps

Getting Started

Install Memtrix, run onboarding, and chat with your AI agent — all in about 5 minutes. By the end you will have a running Matrix homeserver, configured LLM, and a working chat session.

What you need

  • Docker & Docker Compose — version 2.0+ recommended
  • An LLM provider — either Ollama running locally, or an OpenRouter API key
  • A Matrix client — Element Desktop is recommended
💡
Check Docker is installed with docker --version and docker compose version. Linux users not in the docker group should prefix commands with sudo.

Quick setup

Clone the repository

git clone https://github.com/nnxmms/Memtrix.git && cd Memtrix

Run setup

./setup.sh

This creates directories (data/, workspace/, agents/), copies default config and persona files, generates secrets for SearXNG and Conduit, builds the Docker image, and starts the Conduit homeserver.

📘
Setup waits up to 60 seconds for Conduit to be ready. You'll see a "Conduit is ready" message when it's done.

Run onboarding

./onboard.sh

The interactive wizard walks you through:

  1. Naming your agent — give it a custom name (default: "Memtrix")
  2. Configuring an LLM provider — choose Ollama or OpenRouter, enter connection details
  3. Setting up a model — pick which model to use (e.g. llama3, anthropic/claude-sonnet-4-20250514)
  4. Configuring the channel — choose Matrix or CLI, register Matrix accounts
💡
The wizard auto-registers three Matrix accounts on Conduit: an admin, the bot, and your user account. It prints the credentials — save them.

Launch everything

docker compose up -d

This starts Memtrix, Conduit, and SearXNG. Check logs with:

docker compose logs -f memtrix

Connect and chat

Open Element Desktop and:

  1. Add a new homeserver: http://localhost:6167
  2. Log in with the user credentials from onboarding
  3. Create a new room
  4. Invite @memtrix:memtrix.local (or your custom name)
  5. Send a message!
📘
First startup: Memtrix downloads the embedding model (~100 MB) on first launch. This can take a couple of minutes. Subsequent starts reuse data/cache/.


Onboarding

A detailed walkthrough of the interactive setup wizard that configures your Memtrix instance.

How it works

The onboarding wizard (./onboard.sh) runs the Python onboarding module inside a Docker container connected to the Conduit network. It uses Rich for a polished terminal UI.

Step 1: Name your agent

Choose a name for your main agent. This name is used for:

  • Matrix bot username (e.g. @memtrix:memtrix.local)
  • Display name in Matrix rooms
  • System prompt identity
  • Sub-agent naming conventions
  • Persona file templates

Default is "Memtrix" — but you can name it anything.

Step 2: Configure a provider

Providers are dynamically discovered from src/providers/. Built-in options:

Ollama (local)

For running models on your own hardware.

  • Required: base_url — URL of your Ollama instance (e.g. http://host.docker.internal:11434)
  • Make sure Ollama is running and has a model pulled (ollama pull llama3)

OpenRouter (cloud)

For accessing 200+ cloud models.

💡
Secret values (API keys, tokens) are stored as $PLACEHOLDER references in config and resolved from environment variables at runtime. They never appear in plain text in config files.

You can configure multiple providers — the wizard will ask if you want to add more.

Step 3: Set up a model

Select a provider, enter the model name (e.g. llama3, anthropic/claude-sonnet-4-20250514), and give it an instance name for reference.

💡
For best results, use a model that supports tool calling. Recommended: llama3 (Ollama) or anthropic/claude-sonnet-4-20250514 (OpenRouter).

Step 4: Configure the channel

Choose between Matrix (recommended) or CLI.

For Matrix, the wizard automatically:

  1. Registers an admin account on Conduit
  2. Registers the bot account
  3. Registers your user account
  4. Sets display names
  5. Collects the bot access token

Passwords are generated with Python's secrets module (cryptographically secure, 24 characters).
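A sketch of how such a password could be generated with the secrets module; the exact alphabet Memtrix uses is an assumption, only the length and the use of secrets come from the docs.

```python
import secrets
import string

# Alphabet is illustrative; Memtrix's actual character set may differ.
ALPHABET = string.ascii_letters + string.digits

def generate_password(length: int = 24) -> str:
    """Draw each character from a CSPRNG via secrets.choice."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))
```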

What onboarding produces

  • data/config.json — full configuration with all providers, models, channels
  • .env — all secrets (API keys, access tokens, registration token)
  • workspace/AGENT.md — system prompt updated with your agent's name

First Conversation

What to expect when you send your first message — and how Memtrix learns about you.

Starting a chat

After setup, open Element and invite the bot to a room. Memtrix auto-joins room invites. Each room gets its own independent conversation session — multiple rooms mean multiple contexts.

What happens behind the scenes

When you send a message, Memtrix:

  1. Builds the system prompt — injects SOUL.md, BEHAVIOR.md, USER.md, and MEMORY.md into the AGENT.md template
  2. Searches memory — silently checks daily journals for relevant context
  3. Calls the LLM with conversation history and all 22 tool schemas
  4. Executes tools if the LLM requests them (web search, memory lookup, etc.)
  5. Responds with the final answer
  6. Self-updates — silently updates USER.md, daily journal, and MEMORY.md with anything new it learned
📘
The "silent" operations happen through the tool loop — Memtrix calls search_memory, read_core_file, and write_memory_file as part of its reasoning, without announcing it to you.

Verbose & reasoning modes

Want to see what's happening under the hood?

  • /verbose on — shows tool call notifications in real-time
  • /reasoning on — shows the LLM's thinking process

Both are off by default. Turn them on to watch Memtrix's thought process.

First conversation tips

  • Introduce yourself — Memtrix will save your name and details to USER.md
  • Tell it about your preferences — it updates BEHAVIOR.md if you correct its style
  • Ask it to search the web — confirms tools are working
  • Try /help to see available commands
  • The more you talk, the better it gets — memory and persona evolve continuously
Guides

Sub-Agents

Create specialist agents with their own identity, memory, and Matrix presence. Agents can consult each other autonomously.

What sub-agents get

  • Matrix user — a separate bot account (e.g. @dennis:memtrix.local)
  • Isolated workspace — own directory under agents/<name>/ with core files, memory, attachments
  • Own memory — separate daily journals and a separate ChromaDB vector index
  • Inherited behavior — copies the main agent's BEHAVIOR.md, symlinks USER.md (shared)
  • Custom persona — SOUL.md and AGENT.md tailored to the agent's expertise
  • Full tool access — all tools except agent management (create_agent, delete_agent)

Creating a sub-agent

Ask Memtrix to create one. It needs a real human name:

Example conversation
You:    Create me a cooking expert. Call him Dennis.

Memtrix: ⚠️ Create a new sub-agent?

         Name: Dennis
         Expertise: Cooking and recipe specialist

         Allow? (yes/no)

You:    yes

Memtrix: Dennis is ready! His Matrix user is @dennis:memtrix.local.
         Invite him to a room to start chatting.
📘
Agent creation requires your confirmation (human-in-the-loop). The name must be a real human name (2–24 characters, letters/spaces/hyphens). A slug is derived internally for directories and config keys.
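The name rule and slug derivation might look like the following sketch; the regex and the slug format are illustrative assumptions, not Memtrix's actual implementation.

```python
import re

# Hypothetical validation: 2-24 chars, letters/spaces/hyphens, per the docs.
NAME_RE = re.compile(r"^[A-Za-z][A-Za-z \-]{1,23}$")

def derive_slug(name: str) -> str:
    """Validate a human name and derive a directory/config-key slug from it."""
    if not NAME_RE.fullmatch(name):
        raise ValueError("name must be 2-24 chars: letters, spaces, hyphens")
    # Lowercase and collapse spaces/hyphens into single hyphens (assumed format).
    return re.sub(r"[ \-]+", "-", name.strip().lower())
```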

Chatting with a sub-agent

Invite the agent's Matrix user (e.g. @dennis:memtrix.local) to a room and chat like you would with Memtrix. Dennis has his own memory — he'll remember your conversations and preferences independently.

Inter-agent communication

Agents can consult each other using the ask_agent tool:

  • Main agent → sub-agent
  • Sub-agent → main agent
  • Sub-agent → sub-agent
Cross-agent query
You:    I'm planning a dinner party. Ask Dennis for a menu.

Memtrix: (internally uses ask_agent to consult Dennis)

         Dennis suggests a three-course menu: roasted tomato soup
         to start, herb-crusted salmon as the main, and a lemon
         tart for dessert.

Safety guards

  • Depth limit: 2 hops maximum (prevents infinite recursion)
  • Deadlock prevention: non-blocking 5-second lock timeout per agent
  • Isolated sessions: inter-agent conversations use dedicated sessions, not user-facing history
  • No human-in-the-loop: destructive operations are denied during inter-agent calls
  • Context injection: the target agent sees recent user conversation for context (capped at 10 pairs / 4000 chars)

Managing sub-agents

  • list_agents — see all registered sub-agents and their status
  • delete_agent — permanently remove a sub-agent (workspace, memory, sessions all cleaned up)
  • ask_agent — query another agent by name

Memory exchange

As of v2.7, after an inter-agent call, a summary of the exchange is appended to the target agent's active user session. If Agent B asks Agent A something, A can later tell the user what B asked and what it answered.


Persona System

How Memtrix defines its identity — and how it evolves over time.

Persona files

Memtrix's identity is defined by markdown files in workspace/:

  • AGENT.md — system prompt template; wires everything together via {{PLACEHOLDER}} markers
  • BEHAVIOR.md — communication style, tone, and habits
  • SOUL.md — core values and personality
  • USER.md — everything Memtrix knows about you
  • MEMORY.md — distilled long-term memory (the "brain")

How injection works

AGENT.md contains placeholders like {{BEHAVIOR}}, {{SOUL}}, {{USER}}, {{MEMORY}}. At system prompt construction time, the orchestrator reads each file and replaces the placeholder with its contents.
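The substitution step can be sketched in a few lines; the section names match the persona files, but the function itself is illustrative, not Memtrix's actual code.

```python
# Minimal sketch of placeholder injection: replace each {{NAME}} marker
# in the AGENT.md template with the matching persona file's contents.
def build_system_prompt(template: str, sections: dict[str, str]) -> str:
    for name, contents in sections.items():
        template = template.replace("{{" + name + "}}", contents)
    return template
```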

Live updates

These files are live-editable by Memtrix itself. When you tell it to behave differently or share personal details, it updates the appropriate file using write_core_file — with the system prompt rebuilt immediately after.

⚠️
Read-before-write: Memtrix must read a core file before writing to it (enforced at code level, not just in the prompt). This prevents blind overwrites.

Default personality

Out of the box, Memtrix's personality (from SOUL.md) is:

  • A companion, not a product
  • Values privacy
  • Honest and direct
  • Curious, remembers things
  • Not trying to impress
  • Grows with the user

Its behavior (from BEHAVIOR.md) is:

  • Keep it short, casual — like texting a friend
  • No emojis, no "As an AI..." disclaimers
  • Have opinions, be measured
  • Ask questions only when necessary

All of this evolves as you interact. You can also edit the files directly.


Memory & RAG

Two-tier memory architecture — a curated brain plus searchable daily journals powered by on-device embeddings.

Core Memory

MEMORY.md is a curated, compact summary of the most important long-term knowledge. Key facts, recurring themes, lasting context. Memtrix actively maintains and prunes this file — think of it as the brain.

Daily Journals

Files in memory/yyyy-mm-dd.md are chronological, append-only logs of each day's conversations:

# 2026-03-18

## Conversations
- Brief summaries of what was discussed.

## Learned
- New facts about the user.

## Decisions
- Agreements or directions decided.

## Tasks
- Things requested, completed, or pending.

## Notes
- Anything else worth remembering.

Semantic Search (RAG)

Daily journals are embedded using nomic-embed-text-v1.5 via sentence-transformers and stored in ChromaDB. The model runs entirely on-device — no external API calls.

  • Embedding dimensions: 768→256 (Matryoshka truncation for efficiency)
  • Sync interval: every 300 seconds
  • Singleton model: loaded once, shared across all agents
  • Local-only loading: if cached, skips all HuggingFace Hub network calls
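Matryoshka truncation amounts to keeping the first 256 of the 768 dimensions. Re-normalizing afterwards is standard practice and assumed here rather than confirmed from Memtrix's pipeline.

```python
import math

def truncate_embedding(vec: list[float], dim: int = 256) -> list[float]:
    """Keep the first `dim` dimensions and L2-renormalize (assumed step)."""
    truncated = vec[:dim]
    norm = math.sqrt(sum(x * x for x in truncated))
    return [x / norm for x in truncated] if norm > 0 else truncated
```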

How search works

User:     "Remember that cake recipe I told you about?"
  → search_memory("cake recipe")
  → Finds 2026-03-12.md (distance: 0.23)
  → read_memory_file()
  → Returns full context from that day

Automatic memory management

Memtrix's system prompt (AGENT.md) instructs it to silently manage memory after every response:

  • Update USER.md with any new personal information
  • Append to today's daily journal
  • Update MEMORY.md with lasting facts
  • Update BEHAVIOR.md if the user corrects its communication style

This all happens in the tool loop — the user never sees these operations unless /verbose is enabled.


Custom Tools

Extend Memtrix by dropping Python files into src/tools/.

Adding a tool

Create a new .py file in src/tools/ that subclasses BaseTool:

src/tools/my_tool.py
from src.tools.base import BaseTool

class MyTool(BaseTool):

    def __init__(self, workspace_dir: str) -> None:
        super().__init__(
            name="my_tool",
            description="Does something useful.",
            parameters={
                "type": "object",
                "properties": {
                    "input": {
                        "type": "string",
                        "description": "The input."
                    }
                },
                "required": ["input"]
            }
        )

    def execute(self, **kwargs) -> str:
        return "result"

Restart Memtrix and the tool is automatically discovered and available to the LLM.
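Auto-discovery might work roughly like this: import every module in the tools package, then collect BaseTool subclasses. The sketch below uses a stand-in BaseTool and is illustrative, not Memtrix's actual mechanism.

```python
import importlib
import pkgutil

class BaseTool:  # stand-in for src.tools.base.BaseTool
    pass

def discover_tools(package=None, base_class=BaseTool):
    """Import each module in `package` (if given), then collect every
    subclass of BaseTool that is now loaded, keyed by class name."""
    if package is not None:
        for info in pkgutil.iter_modules(package.__path__):
            importlib.import_module(f"{package.__name__}.{info.name}")
    return {cls.__name__: cls for cls in base_class.__subclasses__()}
```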

BaseTool interface

  • name — unique tool name (used by the LLM to call it)
  • description — explains what the tool does (guides the LLM)
  • parameters — JSON Schema defining the tool's input
  • execute(**kwargs) — called when the LLM invokes the tool, returns a string result

Injected parameters

The orchestrator injects these kwargs automatically:

  • _room_id — current room/session ID
  • _ask — human-in-the-loop confirmation callback
  • _react — emoji reaction callback
  • _agent_depth — current inter-agent depth (0 = direct user call)

Custom Providers

Add support for any LLM API by dropping a provider file into src/providers/.

Adding a provider

src/providers/my_provider.py
from src.providers.base import BaseProvider

class MyProvider(BaseProvider):

    def __init__(self, api_key: str) -> None:
        super().__init__(name="myprovider")
        self._api_key = api_key

    def completions(self, model, history, tools=None):
        # Call your LLM API and return a message object
        ...

The onboarding wizard automatically discovers new providers and prompts for their constructor parameters. Secret fields (containing "key", "token", "secret" in the parameter name) are handled automatically.

Reference

Configuration

All configuration lives in data/config.json. Secrets are stored in .env and injected as environment variables.

Config structure

data/config.json
{
    "main-agent": {
        "name": "Memtrix",
        "provider": "my-ollama",
        "model": "my-model",
        "channel": "matrix",
        "sessions": {},
        "verbose": false,
        "reasoning": false
    },
    "agents": {},
    "workspace-directory": "/home/memtrix/workspace",
    "providers": {
        "my-ollama": {
            "type": "ollama",
            "base_url": "http://host.docker.internal:11434"
        },
        "my-openrouter": {
            "type": "openrouter",
            "api_key": "$OPENROUTER_API_KEY"
        }
    },
    "models": {
        "my-model": {
            "provider": "my-ollama",
            "model": "llama3",
            "think": true
        }
    },
    "channels": {
        "matrix": {
            "type": "matrix",
            "homeserver": "http://conduit:6167",
            "user_id": "@memtrix:memtrix.local",
            "access_token": "$MATRIX_ACCESS_TOKEN"
        }
    }
}

Secret management

Values starting with $ are resolved from environment variables at startup. The env var name is MEMTRIX_SECRET_ + the placeholder name.

  • $MATRIX_ACCESS_TOKEN → MEMTRIX_SECRET_MATRIX_ACCESS_TOKEN
  • $OPENROUTER_API_KEY → MEMTRIX_SECRET_OPENROUTER_API_KEY
  • $REGISTRATION_TOKEN → MEMTRIX_SECRET_REGISTRATION_TOKEN

Secrets are resolved once at boot and then cleared from the process environment — they can't leak via /proc or subprocess inspection.
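The resolution rule ($NAME becomes MEMTRIX_SECRET_NAME) could be applied with a small recursive walk over the config; this helper is illustrative, and the post-resolution clearing of the environment is omitted for brevity.

```python
import os

def resolve_secrets(config):
    """Replace every "$NAME" string value with os.environ["MEMTRIX_SECRET_NAME"]."""
    if isinstance(config, dict):
        return {k: resolve_secrets(v) for k, v in config.items()}
    if isinstance(config, str) and config.startswith("$"):
        return os.environ["MEMTRIX_SECRET_" + config[1:]]
    return config
```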

Main agent config

  • name (string) — the agent's display name
  • provider (string) — reference to a provider in providers
  • model (string) — reference to a model in models
  • channel (string) — reference to a channel in channels
  • verbose (boolean) — show tool call notifications
  • reasoning (boolean) — show the LLM's thinking process

Model config

  • provider (string) — which provider runs this model
  • model (string) — model identifier (e.g. llama3, anthropic/claude-sonnet-4-20250514)
  • think (boolean) — enable extended thinking / reasoning mode

Commands

Slash commands available in any chat room. Prefix with /.

  • /clear — start a fresh session in the current room; also clears inter-agent sessions
  • /verbose on|off — toggle real-time tool execution notifications; persists to config
  • /reasoning on|off — toggle display of model reasoning/thinking; persists to config
  • /help — list all available commands
📘
/verbose and /reasoning are per-agent — using them in a sub-agent's room only affects that agent.

Tools Reference

All 22 built-in tools, auto-discovered at startup.

Time

  • get_current_time — returns the current date and time

Persona Files

  • read_core_file — reads a core persona file (BEHAVIOR, SOUL, USER, MEMORY)
  • write_core_file — updates a core persona file (enforces read-before-write)

Memory

  • read_memory_file — reads today's daily memory journal (date derived automatically)
  • write_memory_file — updates today's daily memory journal
  • search_memory — semantic search across all daily memories via embeddings

Web

  • web_search — searches the web via the local SearXNG instance
  • fetch_url — fetches and extracts readable text from a URL

File Management

  • read_file — reads a file from the workspace (text and PDF supported)
  • create_file — creates or overwrites a text file (overwrite requires confirmation)
  • delete_file — permanently deletes a file from the workspace
  • create_directory — creates a directory in the workspace
  • list_directory — lists the contents of a directory
  • delete_directory — permanently deletes a directory and its contents
  • git_clone — clones a public git repository into the workspace
  • download_file — downloads a file from a URL (requires confirmation)
  • send_file — sends a file to the user via Matrix

Reactions

  • react_to_message — react to the user's message with an emoji in Matrix

Agent Management

  • create_agent — create a new specialist sub-agent (main agent only)
  • list_agents — list all registered sub-agents (main agent only)
  • delete_agent — permanently delete a sub-agent (main agent only)
  • ask_agent — ask another agent a question (all agents)
⚠️
Read-before-write: Write operations for persona and memory files are rejected unless the file was read first in the same request. This is enforced at the code level.

Security

Defense-in-depth — multiple independent layers that each limit what the system and the LLM can do.

Container Isolation

  • Non-root user — runs as memtrix (UID 1000), never root
  • Read-only filesystem — immutable root via read_only: true; only workspace/, data/, and /tmp are writable
  • All capabilities droppedcap_drop: ALL with no-new-privileges: true
  • No shell toolscurl, wget, and other network utilities are not installed in the image
  • Internal-only networking — Memtrix, Conduit, and SearXNG on a private Docker network with no published ports for the bot

No Arbitrary Code Execution

The LLM has no shell access. There is no run_command tool — every action goes through a purpose-built tool with its own validation.

SSRF Protection

All outbound tools (fetch_url, download_file, git_clone) validate URLs against:

  • A hostname blocklist of internal Docker service names (conduit, searxng, localhost, etc.)
  • DNS resolution — hostnames are resolved and IPs checked against private, loopback, link-local, and reserved ranges
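A check like the one described could be sketched with Python's ipaddress and socket modules. The blocklist contents and function shape are illustrative; only the two-step rule (blocklist, then resolved-IP range check) comes from the docs.

```python
import ipaddress
import socket
from urllib.parse import urlparse

# Illustrative blocklist of internal service names.
BLOCKED_HOSTS = {"conduit", "searxng", "localhost"}

def is_url_allowed(url: str) -> bool:
    """Reject internal hostnames and any address in a private/reserved range."""
    host = urlparse(url).hostname or ""
    if host.lower() in BLOCKED_HOSTS:
        return False
    try:
        infos = socket.getaddrinfo(host, None)
    except socket.gaierror:
        return False  # unresolvable: refuse rather than guess
    for info in infos:
        ip = ipaddress.ip_address(info[4][0])
        if ip.is_private or ip.is_loopback or ip.is_link_local or ip.is_reserved:
            return False
    return True
```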

Human-in-the-Loop

Sensitive operations require explicit approval:

  • File downloads — user sees URL and destination, must confirm
  • File overwrites — overwriting existing files requires approval
  • Agent creation — user must confirm name and expertise

During inter-agent calls, confirm_with_user() returns false (deny) when no human callback is available — no auto-approval of destructive operations.

File System Protection

  • Path traversal prevention — every path validated with os.path.realpath()
  • Core file protection — system files only accessible through dedicated tools with strict allowlist
  • Memory protectionmemory/ directory off-limits to general file tools
  • Read-before-write — per-room tracking ensures reading before modifying
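The realpath-based traversal check can be illustrated in a few lines; the helper name and error handling are assumptions, only the use of os.path.realpath comes from the docs.

```python
import os

def resolve_inside(workspace: str, relative_path: str) -> str:
    """Resolve a user-supplied path and ensure it stays inside the workspace."""
    root = os.path.realpath(workspace)
    target = os.path.realpath(os.path.join(root, relative_path))
    # Must be the root itself or strictly below it ("../.." escapes are rejected).
    if target != root and not target.startswith(root + os.sep):
        raise ValueError(f"path escapes workspace: {relative_path}")
    return target
```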

Prompt Injection Mitigation

  • Untrusted content tagged — web search results, fetched URLs, downloads, and attachments are prefixed with disclaimers
  • Filename sanitizationos.path.basename() with auto-increment on collision
  • Sender name sanitization — brackets stripped, 50-char limit to prevent prompt injection via Matrix profile names
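Basename sanitization with auto-increment might look like this sketch; the "-1", "-2" suffix format is an assumption, only the use of os.path.basename comes from the docs.

```python
import os

def safe_filename(directory: str, proposed: str) -> str:
    """Strip directory components, then append a counter on name collision."""
    name = os.path.basename(proposed)  # defuses "../evil" style names
    candidate, counter = name, 1
    while os.path.exists(os.path.join(directory, candidate)):
        stem, ext = os.path.splitext(name)
        candidate = f"{stem}-{counter}{ext}"  # suffix format is assumed
        counter += 1
    return candidate
```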

Secret Management

  • Secrets live in .env (chmod 600) and are injected at container startup
  • Resolved once at boot, then cleared from process environment
  • SearXNG gets a randomly generated secret key during setup
  • Conduit registration token is randomly generated — no hardcoded defaults

Docker & Services

Three services in Docker Compose — all on a private internal network.

Services

  • memtrix — built from the Dockerfile; the agent (non-root, read-only root, all caps dropped)
  • conduit — matrixconduit/matrix-conduit:latest; Matrix homeserver (port 6167 exposed)
  • searxng — searxng/searxng:latest; web search engine (internal only)

Volumes

  • ./workspace → /home/memtrix/workspace — core files, memory, attachments
  • ./agents → /home/memtrix/agents — sub-agent workspaces
  • ./data → /home/memtrix/data — config, sessions, vector index
  • ./data/cache → /home/memtrix/.cache — ChromaDB, HuggingFace models

Dockerfile

  • Base: python:3.13-slim
  • System deps: git only (minimal attack surface)
  • User: memtrix (uid/gid 1000)
  • Runtime: python -m src.main
  • PYTHONUNBUFFERED=1 for real-time log output

Useful commands

# Start all services
docker compose up -d

# View logs
docker compose logs -f memtrix

# Restart Memtrix only
docker compose restart memtrix

# Stop everything
docker compose down

# Rebuild after code changes
docker compose build && docker compose up -d

Troubleshooting

Common issues and how to fix them.

Conduit won't start

Check if port 6167 is already in use:

lsof -i :6167

Verify the Conduit container is running:

docker compose ps conduit

Bot doesn't respond

  • Check logs: docker compose logs -f memtrix
  • Verify onboarding completed: the log should show "Starting Memtrix..." at boot
  • Make sure you invited the bot to the room (check the exact username from onboarding)
  • Check that the LLM provider is reachable (Ollama running? OpenRouter key valid?)

Can't connect to Ollama

If Ollama runs on your host machine, the container needs to reach it. Use:

http://host.docker.internal:11434

On Linux, you may need to map host.docker.internal manually (the --add-host=host.docker.internal:host-gateway flag on docker run, or an equivalent extra_hosts entry in docker-compose.yml), or use your machine's LAN IP instead.

Embedding model download fails

Memtrix downloads nomic-embed-text-v1.5 on first launch. If it fails:

  • Check internet connectivity from the container
  • Ensure data/cache/ is writable (should be owned by uid 1000)
  • Try restarting: docker compose restart memtrix

Permission errors

All mounted directories should be owned by uid 1000:

sudo chown -R 1000:1000 workspace/ agents/ data/

Session or memory issues

  • /clear — resets the current session
  • Sessions are stored in data/ as JSON files organized by date
  • Memory files are in workspace/memory/
  • Vector index is in data/cache/ — deleting it triggers a full re-index on next sync

Full reset

⚠️
This deletes all data — config, sessions, memory, persona files, sub-agents. Only do this if you want a fresh start.
docker compose down
rm -rf data/ workspace/ agents/ .env
./setup.sh
./onboard.sh
docker compose up -d