Free AI, powered by the crowd

Ask anything — served by GPUs worldwide on the mycellm distributed network. No signup.

Prompts are processed by distributed GPU nodes. Don't share passwords or API keys.

Why distributed inference?

No single point of failure

Models run across multiple peers. If one goes down, requests automatically route to another — no downtime.

Your GPU, your credits

Contribute idle compute and earn credits. Spend them to run larger models across the network. A fair exchange.

No vendor lock-in

Drop-in OpenAI-compatible API. Switch from any provider with one env var. Your tools, your choice.

Private & federated

Run your own swarm, or federate with trusted peers and route requests across networks. You control membership, models, and data.

Open source

Apache 2.0 licensed. Audit the code, fork it, extend it. Every layer is transparent — no black boxes, no hidden costs.

Scales with community

Every new seeder makes the network faster and more capable. More contributors means more models and lower latency.

Get started in 60 seconds

pip, Docker, or one-liner — your choice.

curl -fsSL https://mycellm.dev/install.sh | sh

Installs, initializes, prints next steps.
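Once the installer finishes, you can sanity-check that the local node is serving. A minimal sketch in Python, assuming the node listens on the port used elsewhere on this page and exposes the standard OpenAI-style /v1/models endpoint:

```python
import json
from urllib import error, request

def node_models(base_url="http://localhost:8420/v1", timeout=2.0):
    """Return model ids served by a local mycellm node, or None if unreachable.

    Assumes the node returns the standard OpenAI models payload:
    {"data": [{"id": ...}, ...]}.
    """
    try:
        with request.urlopen(f"{base_url}/models", timeout=timeout) as resp:
            return [m["id"] for m in json.load(resp)["data"]]
    except (error.URLError, OSError):
        return None  # node not running (yet)

print(node_models())  # a list of model ids once a node is up; None otherwise
```

If this prints None, the node isn't up yet; re-check the installer's "next steps" output.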

Drop-in OpenAI replacement

Change one env var. Everything else stays the same.

# Point any OpenAI-compatible tool at mycellm
export OPENAI_BASE_URL=http://localhost:8420/v1
# Works with:
Python SDK · LangChain · LlamaIndex · OpenCode · Claude Code · aider · Continue.dev
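To make the "drop-in" claim concrete, here is a minimal sketch using only the Python standard library; any SDK above works the same way once it points at the mycellm base URL. The model name is an assumption, so query /v1/models on your node for real ids:

```python
import json
from urllib import request

BASE_URL = "http://localhost:8420/v1"  # local mycellm gateway

def chat_body(prompt, model="llama-3.1-8b-instruct"):
    """Build a standard OpenAI chat-completions request body.

    The default model id is a placeholder; list /v1/models for real ones.
    """
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}

req = request.Request(
    f"{BASE_URL}/chat/completions",
    data=json.dumps(chat_body("Hello from the swarm!")).encode(),
    headers={"Content-Type": "application/json"},
)
# Uncomment with a running node:
# with request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

Because the request and response shapes match the OpenAI API, swapping providers really is just a base-URL change.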