Language Operator

A Kubernetes operator for running AI agent clusters as native workloads.

What It Does

Language Operator provides a purpose-built set of CRDs for deploying and managing scalable AI agent clusters on Kubernetes:

Resource          Purpose
LanguageCluster   Managed namespace for AI clusters
LanguageAgent     Autonomous, scheduled, and reactive agents
LanguageModel     LLM (proxied through LiteLLM)
LanguageTool      MCP server
LanguagePersona   Behavior, tone, constraints

Installation

Requirements

  • Kubernetes 1.26+
  • NetworkPolicy-capable CNI (Cilium, Calico, Weave, Antrea)

Install the Operator

helm repo add language-operator \
  https://language-operator.github.io/language-operator
helm install language-operator language-operator/language-operator
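
Once the chart is installed, a quick sanity check confirms the operator is running and its CRDs are registered. This sketch assumes the deployment created by the chart is named after the release (language-operator) and was installed into the current namespace:

```shell
# Wait for the operator deployment to become available
kubectl rollout status deployment/language-operator --timeout=120s

# Confirm the operator's CRDs were registered with the API server
kubectl get crds | grep langop.io
```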

Getting Started

These examples deploy openclaw or opencode — self-hosted AI coding assistants — to demonstrate the operator's deployment mechanics. LLM traffic routes through an operator-managed LiteLLM proxy rather than connecting to model APIs directly.

1. Create a cluster

A LanguageCluster is a managed namespace for logically grouped agents, models, and tools.

kubectl apply -f - <<EOF
apiVersion: langop.io/v1alpha1
kind: LanguageCluster
metadata:
  name: language-operator-openclaw
spec:
  domain: openclaw.langop.io
EOF
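
The operator reconciles the LanguageCluster into a namespace of the same name (the later steps target -n language-operator-openclaw on that basis), so you can verify the step succeeded with:

```shell
# The cluster resource should report as reconciled
kubectl get languagecluster language-operator-openclaw

# The managed namespace should now exist
kubectl get namespace language-operator-openclaw
```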

2. Configure an LLM

The LanguageModel holds the real API credential and exposes a LiteLLM proxy inside the cluster.

kubectl create secret generic anthropic-credentials \
  -n language-operator-openclaw \
  --from-literal=api-key=sk-ant-...

kubectl apply -n language-operator-openclaw -f - <<EOF
apiVersion: langop.io/v1alpha1
kind: LanguageModel
metadata:
  name: claude-sonnet
spec:
  provider: anthropic
  modelName: claude-sonnet-4-5
  apiKeySecretRef:
    name: anthropic-credentials
    key: api-key
EOF
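
To confirm the model reconciled and the operator stood up its LiteLLM proxy, inspect the cluster namespace. Exact pod and service names are operator-managed, so this just lists what was created:

```shell
# Check that the LanguageModel resource reconciled
kubectl get languagemodel claude-sonnet -n language-operator-openclaw

# List the proxy workloads the operator created alongside it
kubectl get pods,svc -n language-operator-openclaw
```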

3. Deploy an agent

Choose one of the following agents:

openclaw

The openclaw-adapter init container receives the resolved LiteLLM proxy URL via MODEL_ENDPOINTS (injected by the operator) and seeds openclaw.json so openclaw routes through the proxy on first run.

kubectl create secret generic openclaw-gateway \
  -n language-operator-openclaw \
  --from-literal=OPENCLAW_GATEWAY_TOKEN=$(openssl rand -hex 32)

kubectl apply -n language-operator-openclaw -f - <<EOF
apiVersion: langop.io/v1alpha1
kind: LanguageAgent
metadata:
  name: openclaw
spec:
  image: ghcr.io/openclaw/openclaw:latest
  port: 18789
  models:
    - name: claude-sonnet
  workspace:
    size: 10Gi
  deployment:
    initContainers:
      - name: openclaw-adapter
        image: ghcr.io/language-operator/openclaw-adapter:latest
        env:
          - name: OPENCLAW_STATE_DIR
            value: /workspace/.openclaw
        volumeMounts:
          - name: workspace
            mountPath: /workspace
    env:
      - name: OPENCLAW_HOME
        value: /workspace
    envFrom:
      - secretRef:
          name: openclaw-gateway
EOF

See examples/openclaw.yaml for the full annotated example.
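
To connect to the agent, port-forward on the port declared in the spec and retrieve the gateway token created above. This assumes the operator exposes the agent as a Service named after the LanguageAgent, as it does for opencode below:

```shell
# In one terminal — port-forward the agent's service
kubectl port-forward -n language-operator-openclaw svc/openclaw 18789:18789

# In another terminal — decode the gateway token for authenticating to openclaw
kubectl get secret openclaw-gateway \
  -n language-operator-openclaw \
  -o jsonpath='{.data.OPENCLAW_GATEWAY_TOKEN}' | base64 -d
```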

opencode

The opencode-adapter init container reads MODEL_ENDPOINTS and LLM_MODEL (injected by the operator) and writes /etc/opencode/opencode.jsonc so opencode routes LLM traffic through the gateway. opencode's image has no default CMD, so args must supply the serve subcommand explicitly.

kubectl create secret generic opencode-server \
  -n language-operator-openclaw \
  --from-literal=OPENCODE_SERVER_PASSWORD=$(openssl rand -hex 32)

kubectl apply -n language-operator-openclaw -f - <<EOF
apiVersion: langop.io/v1alpha1
kind: LanguageAgent
metadata:
  name: opencode
spec:
  image: ghcr.io/anomalyco/opencode:latest
  port: 3000
  models:
    - name: claude-sonnet
  workspace:
    size: 10Gi
  deployment:
    args: ["serve", "--hostname", "0.0.0.0", "--port", "3000"]
    initContainers:
      - name: opencode-adapter
        image: ghcr.io/language-operator/opencode-adapter:latest
        volumeMounts:
          - name: opencode-config
            mountPath: /etc/opencode
    env:
      - name: HOME
        value: /workspace
      - name: XDG_DATA_HOME
        value: /workspace/.local/share
      - name: XDG_CACHE_HOME
        value: /workspace/.cache
    envFrom:
      - secretRef:
          name: opencode-server
    volumes:
      - name: opencode-config
        emptyDir: {}
    volumeMounts:
      - name: opencode-config
        mountPath: /etc/opencode
EOF

See examples/opencode.yaml for the full annotated example.

Connect:

# In one terminal — port-forward the service
kubectl port-forward -n language-operator-openclaw svc/opencode 3000:3000

# In another terminal — launch the opencode TUI pointed at the forwarded port
OPENCODE_SERVER_PASSWORD=$(kubectl get secret opencode-server \
  -n language-operator-openclaw \
  -o jsonpath='{.data.OPENCODE_SERVER_PASSWORD}' | base64 -d) \
  opencode --hostname localhost --port 3000

4. Check status

kubectl get languageagents -n language-operator-openclaw
kubectl get pods -n language-operator-openclaw
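
If an agent is not Ready, describing the resource and checking recent namespace events usually surfaces the reconciliation error:

```shell
# Inspect the agent's status conditions
kubectl describe languageagent openclaw -n language-operator-openclaw

# Review the most recent events in the cluster namespace
kubectl get events -n language-operator-openclaw \
  --sort-by=.lastTimestamp | tail -20
```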

Development

# Install git hooks
./scripts/setup-hooks

# Build
cd src && make build

# Test
cd src && make test

# Regenerate CRDs and deepcopy after type changes
cd src && make generate && make helm-crds

Status

Pre-release — not ready for production.

License

MIT