How to build a private, always-on AI assistant for multiple users with proper isolation, scoped API access, and zero cloud dependency.
This guide documents the complete process of setting up OpenClaw (version 2026.2.x) as a personal AI assistant on a Mac Mini M4, serving two independent users via Telegram. The setup uses GPT-5.3-Codex through a ChatGPT subscription (OAuth-based, no separate API billing), with integrations for Notion, Brave Search, Apple Reminders, and Gemini-powered memory search.
The guiding principle throughout is least privilege: every component gets exactly the access it needs and nothing more. The AI agents run under Standard (non-admin) macOS accounts, API integrations are scoped to read-only where possible, shell access is restricted to an explicit allowlist of approved binaries, and the gateway is bound to loopback only.
By the end of this guide, you will have a self-hosted AI assistant that is always on, costs essentially nothing beyond the hardware, and follows security practices you would expect from a production deployment.
Before committing to dedicated hardware, I evaluated cloud hosting — specifically Hetzner, which offers excellent price-to-performance. However, research revealed a significant issue: IP reputation.
Hetzner's IP ranges are frequently flagged by Cloudflare, Microsoft 365, and other services due to historical abuse from other tenants on shared IP blocks. For an AI assistant that needs to reliably interact with web APIs, email providers, and SaaS platforms, this is a deal-breaker. Requests from flagged IPs get rate-limited, CAPTCHAed, or outright blocked.
A Mac Mini on a residential fiber connection (500 Mbit symmetric with a dedicated IP) sidesteps this entirely: the IP carries a normal residential reputation, so the assistant's outbound requests are treated like any home user's traffic.
Each user runs a completely isolated OpenClaw instance: separate gateway process, separate Telegram bot, separate API keys, separate conversation histories and memory. The only shared components are Homebrew and Node.js binaries (installed once, accessible to both users via a shared brew group).
The foundation of the security model is macOS user separation. Three accounts serve distinct roles:
| Account | Type | Purpose |
|---|---|---|
| sysadmin | Administrator | System management, package installation, user management |
| user-a | Standard | Runs OpenClaw instance #1 |
| user-b | Standard | Runs OpenClaw instance #2 |
If your users were previously admins, downgrade them from the admin account:
```shell
# Remove from admin group
sudo dseditgroup -o edit -d username -t user admin

# Verify
dseditgroup -o checkmember -m username admin
# Should return: "no, username is NOT a member"
```
Each home directory must be secured so users cannot read each other's files, while still allowing system services (Screen Sharing, LaunchAgents) to function:
```shell
# 711 = owner full access, others can traverse but not list/read
sudo chmod 711 /Users/user-a
sudo chmod 711 /Users/user-b

# Lock down OpenClaw configs specifically
sudo chmod 700 /Users/user-a/.openclaw
sudo chmod 700 /Users/user-b/.openclaw
```
**Warning:** `chmod 700` on the home directory itself will break Screen Sharing and other macOS services that need to traverse the directory tree. Use 711 for the home directory and 700 for sensitive subdirectories.
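These permission bits can be audited with a small script. A sketch (it handles both GNU and BSD/macOS `stat`, since the two use different flags; the paths are the example users from above):

```shell
# Print a directory's octal permission bits.
# GNU stat uses -c '%a'; BSD/macOS stat uses -f '%Lp'.
mode_of() {
  if stat -c '%a' / >/dev/null 2>&1; then
    stat -c '%a' "$1" 2>/dev/null
  else
    stat -f '%Lp' "$1" 2>/dev/null
  fi
}

# Compare actual mode against the expected one and report.
check_mode() {
  actual="$(mode_of "$1" || true)"
  if [ "$actual" = "$2" ]; then
    echo "ok: $1 is $2"
  else
    echo "FAIL: $1 is $actual (want $2)"
  fi
}

check_mode /Users/user-a 711
check_mode /Users/user-a/.openclaw 700
```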
Standard users need explicit group membership for remote access:
```shell
# Grant SSH access
sudo dseditgroup -o edit -a username -t user com.apple.access_ssh

# Grant Screen Sharing access
sudo dseditgroup -o edit -a username -t user com.apple.access_screensharing

# Verify membership
dseditgroup -o checkmember -m username com.apple.access_ssh
dseditgroup -o checkmember -m username com.apple.access_screensharing
```
Any user who needs to unlock the disk after a reboot must be FileVault-enabled. On a headless server, however, consider disabling FileVault entirely: it requires physical keyboard input at the pre-boot screen after every reboot, which defeats the purpose of a headless setup. If you keep FileVault enabled, `sudo fdesetup authrestart` allows a single reboot without the pre-boot prompt.
Install Homebrew and Node.js once, then share across users via a group:
```shell
# As sysadmin: install Homebrew and Node.js
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
brew install node@22

# Create shared brew group
sudo dseditgroup -o create brew
sudo dseditgroup -o edit -a user-a -t user brew
sudo dseditgroup -o edit -a user-b -t user brew

# Fix permissions
sudo chgrp -R brew /opt/homebrew
sudo chmod -R g+rwX /opt/homebrew
```
Each user needs Homebrew on their PATH. Add to each user's `~/.zshrc`:

```shell
# brew shellenv already prepends /opt/homebrew/bin to PATH,
# so no separate export is needed
eval "$(/opt/homebrew/bin/brew shellenv)"
```
OpenClaw installs globally via npm, but each user needs their own onboarding:
```shell
# Install once (from any user)
npm install -g openclaw@latest

# Onboard each user separately
# As user-a:
openclaw onboard --install-daemon
# As user-b:
openclaw onboard --install-daemon
```
During onboarding, each user configures their own model provider, Telegram bot, API keys, and gateway port. Use different ports for each user (e.g., 18789 and 18790).
The gateway is the core runtime that connects the AI model to channels (Telegram), tools (exec, web search), and skills. The key configuration file is ~/.openclaw/openclaw.json.
```json
{
  "gateway": {
    "port": 18789,
    "mode": "local",
    "bind": "loopback",
    "auth": {
      "mode": "token",
      "token": "your-secret-token"
    }
  }
}
```
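The token itself should be a long random value rather than a memorable string. One way to generate one, assuming `openssl` is available (it ships with macOS):

```shell
# 32 random bytes, hex-encoded: a 64-character token for gateway auth
openssl rand -hex 32
```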
`mode: "local"` — The gateway runs directly on this machine. Do not use `"remote"` unless connecting to a separate gateway server; this is a common onboarding mistake that blocks gateway startup.

`bind: "loopback"` — Listens on 127.0.0.1 only. The gateway is not accessible from the network, even on your local LAN.

`auth.mode: "token"` — Requires a token for all gateway connections. Even local processes must authenticate.

OpenClaw installs as a per-user LaunchAgent (`~/Library/LaunchAgents/ai.openclaw.gateway.plist`). This means:
- The gateway starts automatically at login (`RunAtLoad: true`)
- It is restarted automatically if it crashes (`KeepAlive: true`)

```shell
# Install/update the LaunchAgent
openclaw gateway install

# Start/stop the gateway
openclaw daemon start
openclaw daemon stop

# Check status
openclaw daemon status
```
If you run `openclaw gateway install` from an SSH session for a user who isn't logged into the macOS GUI, it will fail with "Domain does not support specified action." You must Screen Share in, Fast User Switch to that user, and run the command from Terminal in their GUI session.
In OpenClaw 2026.2.x, Telegram is a plugin-based channel, not a built-in one. This means you need both the channel config and the plugin enabled:
```json
{
  "channels": {
    "telegram": {
      "enabled": true,
      "dmPolicy": "pairing",
      "botToken": "your-bot-token",
      "groupPolicy": "allowlist",
      "streaming": "off"
    }
  },
  "plugins": {
    "entries": {
      "telegram": {
        "enabled": true
      }
    }
  }
}
```
Put your bot's `botToken` in the config. The bot can only be connected to one gateway at a time, so make sure any old instance is stopped first.

After configuring, the bot uses a pairing code to link your Telegram account to the agent. Send `/start` to your bot in Telegram to initiate pairing.
The exec tool is how the AI agent runs shell commands on your machine. This is the most security-sensitive component of the entire setup, and it deserves careful configuration.
On a headless server, you cannot approve exec commands via a GUI popup. Setting `"ask": "on-miss"` works when you have a dashboard open, but fails silently on a headless machine. Setting `"ask": "off"` with `"security": "allowlist"` blocks any command not in the allowlist.
```json
{
  "tools": {
    "exec": {
      "host": "gateway",
      "security": "allowlist",
      "ask": "off"
    }
  }
}
```
The allowlist lives at `~/.openclaw/exec-approvals.json` and specifies exactly which binaries the agent can execute. When a command is denied, the agent receives an error and cannot bypass it.
Start with `"ask": "on-miss"` while you have a dashboard session open. As the agent tries commands, approve the ones you want to allow. They get added to the allowlist automatically. Once your list is complete, switch to `"ask": "off"`.
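For orientation only, an approvals file built this way might look roughly like the following. The real schema is version-specific and not documented here, so treat this as an illustration of the idea (a list of approved binary paths) rather than the actual format — let the on-miss approval flow write the file for you:

```json
{
  "approvals": [
    { "bin": "/usr/bin/curl" },
    { "bin": "/opt/homebrew/bin/remindctl" },
    { "bin": "/usr/bin/wc" }
  ]
}
```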
A typical allowlist for a personal assistant might include:
| Binary | Purpose | Risk Level |
|---|---|---|
| `/usr/bin/curl` | API calls (Notion, webhooks) | Medium — can reach external services |
| `/opt/homebrew/bin/gog` | Google Workspace CLI | Low with read-only OAuth scopes |
| `/opt/homebrew/bin/remindctl` | Apple Reminders | Low |
| `/opt/homebrew/bin/openclaw` | Self-status checks | Low |
| `/usr/bin/wc` | Text processing | Low — read-only |
Never add `/bin/bash`, `/bin/sh`, `/usr/bin/python3`, or `/usr/bin/node` to the allowlist. These interpreters can execute arbitrary code, effectively bypassing all restrictions. If the agent needs subshell expansion (e.g., `$(cat file)`), consider using `"security": "full"` with appropriate compensating controls, or restructure the skill to avoid subshell syntax.
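A one-line illustration of why: any allowlisted interpreter becomes a universal escape hatch, so every other entry in the list stops mattering.

```shell
# If /bin/sh were allowlisted, the agent could launch ANY binary through it,
# including ones the allowlist was meant to block:
/bin/sh -c 'id -un'   # runs id, even though id itself is not on the allowlist
```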
Notion uses scoped integration tokens. Each integration only sees pages and databases that are explicitly shared with it:
```shell
mkdir -p ~/.config/notion
printf 'ntn_your_key_here' > ~/.config/notion/api_key
chmod 600 ~/.config/notion/api_key
```
Keys in `~/.openclaw/.env` are loaded by the gateway for its internal features (like memory search), but they are not passed to commands executed via the exec tool. The exec tool spawns a separate shell process that only sees environment variables defined in the LaunchAgent plist. If a skill uses `$NOTION_API_KEY`, it won't find it. Skills should read tokens from files instead.
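A minimal sketch of the file-based pattern, using a temporary directory to stand in for the real home so it runs anywhere; the path and key are placeholders:

```shell
# Demo: skills read secrets from a file at call time instead of
# expecting an inherited environment variable.
demo_home="$(mktemp -d)"
mkdir -p "$demo_home/.config/notion"
printf 'ntn_example_key' > "$demo_home/.config/notion/api_key"
chmod 600 "$demo_home/.config/notion/api_key"

# What a skill would do at the top of its script:
NOTION_KEY="$(cat "$demo_home/.config/notion/api_key")"
echo "loaded key of ${#NOTION_KEY} chars"   # -> loaded key of 15 chars
```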
Brave Search provides web search capabilities. The free tier offers 2,000 searches per month. Add the API key to ~/.openclaw/.env:
```
BRAVE_API_KEY=BSA-your-key-here
```
Configure in openclaw.json:
```json
{
  "tools": {
    "web": {
      "search": { "enabled": true },
      "fetch": { "enabled": true }
    }
  }
}
```
OpenClaw's memory search uses an embedding provider to create vector representations of conversations for semantic recall. The storage is 100% local (SQLite), but embedding generation requires an external API call. Gemini's embedding API works well and is free with a Google Workspace subscription:
```
# In ~/.openclaw/.env
GEMINI_API_KEY=your-gemini-key
```

And in `openclaw.json`:

```json
{
  "agents": {
    "defaults": {
      "memorySearch": { "enabled": true }
    }
  }
}
```
Memory search lets the agent recall information from previous conversations, even across session resets. When you run /reset in Telegram, the current conversation is cleared, but the memory search index retains a semantic fingerprint of what was discussed.
The architecture is privacy-conscious: all stored data (conversation transcripts, vector embeddings, search indices) lives in local SQLite databases under ~/.openclaw/workspace/memory/. The only external call is to the embedding API (Gemini in this case) which receives short text chunks for vectorization.
```shell
# Check memory status
openclaw memory status --deep

# Verify the memory directory exists
ls -la ~/.openclaw/workspace/memory/
```
To test memory is working: tell the bot something memorable, run /reset, then ask about it in a fresh session. If the bot recalls the information, memory search is functioning correctly.
Disable every form of sleep so the server stays available around the clock:

```shell
sudo pmset -a sleep 0
sudo pmset -a disablesleep 1
sudo pmset -a displaysleep 0
sudo pmset -a autorestart 1
```
macOS supports auto-login for one user. Set it to the primary OpenClaw user so their LaunchAgent starts automatically after a reboot:
```shell
sudo defaults write /Library/Preferences/com.apple.loginwindow autoLoginUser user-a
```
The second user's gateway will not start automatically. After a reboot, you need to Screen Share in, Fast User Switch to user-b, and let their session load. Once loaded, the LaunchAgent starts and the gateway runs until the next reboot.
```shell
pmset -g
```
Expected output should include: `SleepDisabled 1`, `sleep 0`, `autorestart 1`, `displaysleep 0`.
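If you want to script that verification, a small check function could look like the following. Since `pmset` only exists on macOS, it is exercised here against inlined sample output; on the server you would point it at a file produced by `pmset -g > /tmp/power.txt`:

```shell
# Verify power settings from saved `pmset -g` output in a file.
check_power() {
  ok=1
  grep -Eq 'SleepDisabled[[:space:]]+1' "$1"            || { echo "FAIL: SleepDisabled"; ok=0; }
  grep -Eq '(^|[[:space:]])sleep[[:space:]]+0' "$1"     || { echo "FAIL: sleep"; ok=0; }
  grep -Eq 'autorestart[[:space:]]+1' "$1"              || { echo "FAIL: autorestart"; ok=0; }
  grep -Eq 'displaysleep[[:space:]]+0' "$1"             || { echo "FAIL: displaysleep"; ok=0; }
  [ "$ok" -eq 1 ] && echo "power settings ok"
}

# Sample pmset -g output, standing in for the real command:
sample="$(mktemp)"
cat > "$sample" <<'EOF'
 SleepDisabled        1
 sleep                0
 autorestart          1
 displaysleep         0
EOF

check_power "$sample"   # -> power settings ok
```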
`replayd` may consume 99% CPU — kill it with `sudo kill -9 $(pgrep replayd)`. `duetexpertd` (Apple Proactive Intelligence) may also spike CPU; it is unnecessary on a headless server.

The onboarding wizard sometimes sets `"mode": "remote"`. For a local installation, this must be `"mode": "local"` in the gateway section of openclaw.json.
In OpenClaw 2026.2.x, the `openclaw plugins install telegram` command has a known bug. Workaround: manually add `"plugins": { "entries": { "telegram": { "enabled": true } } }` to your openclaw.json.
After restarts, orphaned node processes may remain. Check with `ps aux | grep node | grep -v grep` and kill with `killall -9 node` before starting fresh.
LaunchAgents require a GUI login session. If you get "Domain does not support specified action," Screen Share into the Mac Mini and run the command from the user's Terminal GUI session.
The LaunchAgent plist has its own environment, separate from your shell. Keys in `~/.openclaw/.env` are loaded by the gateway process but not passed to exec-spawned commands. Store keys in files (e.g., `~/.config/notion/api_key`) and have skills read from those files.
If Screen Sharing stops working after changing permissions, ensure home directories use `chmod 711` (not 700). Also verify users are in the `com.apple.access_screensharing` group, and restart the service with `sudo killall screensharingd`.
Create a shared brew group, add users to it, then fix permissions: `sudo chgrp -R brew /opt/homebrew && sudo chmod -R g+rwX /opt/homebrew`.
Run through this checklist after completing the setup:
| Check | Command | Expected |
|---|---|---|
| Gateway running (per user) | `openclaw status` | Gateway: running, Telegram: ON/OK |
| Telegram bot responds | Send test message | Bot replies |
| Memory search active | `openclaw memory status --deep` | Embedding provider active, entries > 0 |
| Sleep disabled | `pmset -g \| grep sleep` | `sleep 0`, `SleepDisabled 1` |
| Auto-restart on | `pmset -g \| grep autorestart` | `autorestart 1` |
| FileVault status | `sudo fdesetup status` | Off (for headless) or On (with authrestart plan) |
| Auto-login set | `sudo defaults read /Library/Preferences/com.apple.loginwindow autoLoginUser` | Primary user name |
| CPU healthy | `top -l 1 -o cpu \| head -15` | No rogue processes > 50% |
| User isolation | `ls /Users/other-user/.openclaw` | Permission denied |
| Screen Sharing works | Connect via VNC | Standard mode connects reliably |
If all checks pass, your headless OpenClaw server is production-ready. The system will survive power outages (auto-restart), maintain user isolation, keep API access scoped, and run your AI assistant 24/7 with zero manual intervention.