First Chat

After installing, just run:

```sh
mycellm chat
```

This automatically discovers available models:

  1. Local node — checks localhost:8420 for loaded models
  2. LAN bootstrap — reads MYCELLM_BOOTSTRAP_PEERS from config
  3. Public network — falls back to api.mycellm.dev

No configuration needed for first use.
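The discovery order above can be sketched as follows. This is an illustrative outline, not mycellm's internals: the function name and the exact fallback logic are assumptions; only the endpoints and the `MYCELLM_BOOTSTRAP_PEERS` variable come from the steps above.

```python
import os

def discovery_candidates(env=os.environ):
    """Endpoints in the order described above; the first reachable one wins."""
    candidates = ["http://localhost:8420"]            # 1. local node
    peers = env.get("MYCELLM_BOOTSTRAP_PEERS", "")    # 2. LAN bootstrap peers
    candidates += [p.strip() for p in peers.split(",") if p.strip()]
    candidates.append("https://api.mycellm.dev")      # 3. public fallback (scheme assumed)
    return candidates
```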

```
mycellm_ chat
────────────────────────────────────────
Model: Qwen2.5-3B-Instruct-Q8_0
Node:  http://10.1.1.210:8420
Type /help for commands, /q to exit
╭──
│ What is distributed computing?
╰──
Distributed computing is a model where multiple computers work
together to solve a problem...

Qwen2.5-3B-Instruct-Q8_0 · via node 99e58f4c · 485ms
```

Features:

  • Streaming — tokens appear as they’re generated
  • Markdown rendering — code blocks with syntax highlighting
  • Per-message attribution — model name, node hash, latency
  • Multi-turn — conversation context maintained
  • Slash commands — manage your node inline
| Command | Description |
| --- | --- |
| `/help` | Show all commands |
| `/status` | Node status (name, peers, models, uptime) |
| `/models` | List available models |
| `/credits` | Credit balance (earned/spent) |
| `/fleet` | Fleet nodes with online status |
| `/config` | Runtime configuration |
| `/use <model>` | Switch to a specific model |
| `/clear` | Clear conversation history |
| `/q` | Exit |
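One way a chat REPL might route these commands is a simple prefix dispatcher. This is a hypothetical sketch, not mycellm's implementation; the handler names and return values are made up for illustration.

```python
def dispatch(line, handlers):
    """Route a /command to its handler; plain text returns None (goes to the model)."""
    if not line.startswith("/"):
        return None
    cmd, _, arg = line[1:].partition(" ")
    handler = handlers.get(cmd)
    return handler(arg) if handler else f"Unknown command: /{cmd}"

# Example: a /use handler that switches the active model (hypothetical).
handlers = {"use": lambda arg: f"Switched to {arg}"}
```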

Any tool that speaks the OpenAI protocol works:

```python
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8420/v1",
    api_key="your-key",  # optional
)

response = client.chat.completions.create(
    model="auto",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```
```sh
curl http://localhost:8420/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "auto",
    "messages": [{"role": "user", "content": "Hello"}]
  }'
```
```sh
export OPENAI_BASE_URL=http://localhost:8420/v1
export OPENAI_API_KEY=your-key
export OPENAI_MODEL=auto
```