devflow

Branch your code.
Branch everything else too.

devflow gives every branch its own databases, caches, and services — cloned instantly via Copy-on-Write. Switch branches and your entire stack switches with you, automatically.

Automatic
Git hooks sync your entire stack on checkout
Instant
Copy-on-Write — clone a 50 GB database in seconds
Multi-Service
Postgres, ClickHouse, MySQL, Redis — local, cloud, or plugin
CLI, TUI & GUI
Terminal dashboard, desktop app, or plain command line
AI-Ready
Agent skills for Claude Code, Cursor & OpenCode
Hooks & Plugins
MiniJinja lifecycle hooks, custom providers, reverse proxy
# Install and get started in 30 seconds
cargo install --git https://github.com/clement-tourriere/devflow.git
cd ~/my-project
devflow init
devflow switch -c feature/my-feature

01 Getting Started

Get from zero to isolated workspace environments in minutes. This section covers installation, your first workspace, and automatic Git integration.

Prefer a graphical interface? devflow ships with a Desktop GUI (Tauri + React) that lets you manage projects, workspaces, services, and hooks without the command line. Install it with mise run gui:build, or run mise run gui for dev mode. You can also use the TUI dashboard (devflow tui) for a keyboard-driven terminal UI.

Quick Start (CLI)

Step 1: Initialize

Navigate to your Git repository and run devflow init. The interactive wizard sets up your config and installs Git hooks automatically.

cd ~/my-project
devflow init

This creates a .devflow.yml config file and installs Git hooks for automatic workspace switching.
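The generated config starts minimal. As a sketch (the exact contents depend on the services you choose in the wizard), it looks like:

```yaml
# .devflow.yml — as generated for a single local Postgres service
services:
  - name: app-db
    type: local
    service_type: postgres
    default: true
    local:
      image: postgres:17
```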

Step 2: Create a workspace

Create your first isolated workspace. devflow clones your services instantly using Copy-on-Write storage.

devflow switch -c feature/my-feature

You can also use Git directly — git checkout -b feature/my-feature triggers the same flow via the hooks that devflow init installed. If you skipped hook installation during init (or used --non-interactive), run devflow install-hooks first.

Step 3: Develop, switch, repeat

Make changes in isolation. When you switch branches, your services switch with you automatically.

# Work on your feature...
npm run migrate
npm test

# Switch back to main — your services switch automatically
git checkout main

# See all your workspaces
devflow list

What is a workspace?

A workspace is a named environment (usually matching a Git branch) that bundles its own isolated copies of every service you've configured — databases, caches, queues, or any Docker container. When you git checkout feature/auth, devflow pauses the current services and starts the ones for feature/auth, so your data never leaks between branches. Workspaces are created instantly via Copy-on-Write — even a 50 GB database is cloned in under a second.

Installation

Requirements

macOS

No special setup needed. APFS cloning is used automatically for Copy-on-Write storage.

# Build and install devflow
git clone https://github.com/clement-tourriere/devflow.git
cd devflow
mise trust && mise install   # installs the Rust toolchain
cargo install --path .

Ubuntu / Debian

# Install Docker
sudo apt-get update
sudo apt-get install -y docker.io
sudo usermod -aG docker $USER
newgrp docker

# Build and install devflow
git clone https://github.com/clement-tourriere/devflow.git
cd devflow
mise trust && mise install   # installs the Rust toolchain
cargo install --path .

Optional: ZFS on Linux (for instant branching)

If you're on ext4 (the default) and want near-instant Copy-on-Write branching:

# Install ZFS tools
sudo apt-get install -y zfsutils-linux

# Let devflow create a file-backed pool (recommended)
devflow setup-zfs                    # 10G pool named "devflow"
devflow setup-zfs --size 20G         # Custom size
devflow setup-zfs --pool-name mypool # Custom name

Verify installation

devflow --help
devflow doctor

Adding Services (optional)

By default, devflow init asks you to add services. You can also add or change them at any time with devflow service add.

Add a database

Use the interactive wizard, or pass flags directly:

# Interactive — walks you through service type, provider, and name
devflow service add app-db

# Or specify everything on the command line
devflow service add app-db --provider local --service-type postgres

Seed from existing data (optional)

If you have existing schema or data you want every workspace to inherit:

# Seed from a running PostgreSQL instance
devflow service add app-db --provider local --service-type postgres \
  --from postgresql://user:pass@localhost:5432/myapp

# Or seed from a SQL dump file
devflow service add app-db --provider local --service-type postgres \
  --from ./backup.sql

Every workspace created from main automatically inherits this data via Copy-on-Write — no extra copies needed.

Check connection info

# Get connection string
devflow connection main
# → postgresql://postgres:postgres@localhost:55432/myapp

# Or export as environment variable
eval $(devflow connection main --format env)
# → sets DATABASE_URL=postgresql://...

Shell Integration

devflow emits DEVFLOW_CD=<path> on context-changing commands (for example switch, init <dir>, and TUI open with o). To auto-cd, add the shell wrapper to your profile:

# Bash (~/.bashrc)
eval "$(devflow shell-init bash)"

# Zsh (~/.zshrc)
eval "$(devflow shell-init zsh)"

# Fish (~/.config/fish/config.fish)
devflow shell-init fish | source

# Auto-detect shell
eval "$(devflow shell-init)"

The wrapper detects DEVFLOW_CD=<path> directives and automatically changes your working directory in the parent shell.
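Conceptually, the wrapper behaves roughly like the sketch below. This is an illustration only — the real script emitted by devflow shell-init is more robust — but it shows the idea: run the real binary, pass normal output through, and cd when a DEVFLOW_CD line appears.

```shell
# Illustrative sketch only; the actual wrapper from `devflow shell-init` differs.
# Pull the last DEVFLOW_CD=<path> directive out of captured output, if any.
_devflow_cd_target() {
  printf '%s\n' "$1" | sed -n 's/^DEVFLOW_CD=//p' | tail -n 1
}

devflow() {
  out=$(command devflow "$@") || return $?
  printf '%s\n' "$out" | grep -v '^DEVFLOW_CD='   # pass normal output through
  dir=$(_devflow_cd_target "$out")
  if [ -n "$dir" ]; then
    cd "$dir"                                     # auto-cd in this shell
  fi
}
```

Because the function runs in your interactive shell (not a child process), the final cd actually moves you.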

Adding devflow to an Existing Project

Already have a project with a running database? devflow can adopt it. Here's how to add workspace-isolated services to an existing codebase.

Initialize in your existing repo

Navigate to your project's Git repository root and run devflow init. Then add a service with devflow service add. The interactive wizard walks you through service type, provider, and name selection.

cd ~/my-existing-project
devflow init
devflow service add app-db --provider local --service-type postgres

This creates a .devflow.yml config file and starts a fresh "main" service workspace. If you already have a .devflow.yml, use --force to overwrite:

devflow init --force

Seed from your existing database

If you have existing schema/data you want to keep, seed when adding the service:

# Seed from a running PostgreSQL instance
devflow service add app-db --provider local --service-type postgres --from postgresql://user:pass@localhost:5432/myapp

# Or seed from a SQL dump file
devflow service add app-db --provider local --service-type postgres --from ./backup.sql

# Or seed from an S3 backup
devflow service add app-db --provider local --service-type postgres --from s3://my-backups/postgres/latest.dump

Every workspace created from main will automatically inherit this data via Copy-on-Write — no extra copies needed.

Add local overrides (optional)

If team members need different settings (ports, images, credentials), create a .devflow.local.yml next to .devflow.yml. This file is merged on top of the main config and should be gitignored.

# .devflow.local.yml — personal overrides, not committed
services:
  - name: app-db
    local:
      port_range_start: 60000    # Avoid conflicts with your local PG

# Add to .gitignore
echo ".devflow.local.yml" >> .gitignore

Verify your setup

devflow doctor

devflow doctor checks your config, Docker, Git hooks, storage driver, and connectivity. Fix any issues it reports before continuing.

Update your app's connection string

Point your application at the devflow-managed database instead of your manually-managed one. Use devflow connection to get the new connection info:

# Get connection string
devflow connection main
# → postgresql://postgres:postgres@localhost:55432/myapp

# Or export as environment variable
eval $(devflow connection main --format env)
# → sets DATABASE_URL=postgresql://...

Update your .env, settings.py, database.yml, or equivalent to use this URL. From now on, devflow manages your database lifecycle.

Start branching

You're set. Create a feature workspace and devflow handles the rest:

git checkout -b feature/new-schema
# → devflow automatically creates an isolated database clone

# Run your migrations safely — they only affect this workspace
npm run migrate   # or: python manage.py migrate / rails db:migrate

# Switch back — main database is untouched
git checkout main

Framework examples: See the Examples & Recipes section for framework-specific walkthroughs (Django, Rails, Node.js/Prisma) including migration hooks and Docker Compose integration.

Using mise as a Task Runner

mise is a polyglot dev tool manager and task runner. devflow ships with a mise.toml that provides pre-configured tasks for building, testing, and serving documentation. If you use mise, it also manages your Rust and Cargo toolchain automatically.

Install mise

# macOS
brew install mise

# Linux
curl https://mise.jdx.dev/install.sh | sh

# Then activate (add to your shell profile)
eval "$(mise activate bash)"   # or zsh/fish

Available tasks

Command                  Description
mise run build           Build devflow (debug mode)
mise run build:release   Build devflow (optimized release)
mise run test            Run all tests
mise run lint            Run clippy lints
mise run fmt             Format code
mise run docs            Serve documentation site locally at localhost:8787
mise run ci              Run the full CI pipeline (fmt + clippy + test)

With mise, new contributors can get started with just mise install && mise run build — no need to manually install Rust or configure toolchains.

02 Core Concepts

Understand how devflow branches your development environment alongside your code, and the architecture behind instant, isolated environment branching.

How Branching Works

The Desktop GUI (Tauri + React), TUI (ratatui), and CLI (clap) are thin front-ends over devflow-core, which stacks these layers:

  - Hook Engine: MiniJinja templates, conditions, approvals
  - VCS Layer: Git workspace + worktree handling, with jj support
  - Service Layer: the ServiceProvider trait (create/delete/switch/connect/start/stop), implemented for PostgreSQL (local, neon, dblab, xata), ClickHouse (local), and MySQL/Generic/Plugin
  - Configuration Layer (.devflow.yml / env vars) and Reverse Proxy (HTTPS *.localhost)
  - State Management (~/.config/devflow/local_state.yml)

The core loop is simple:

  1. Git checkout triggers the post-checkout hook
  2. The hook calls devflow git-hook, which detects the workspace change
  3. devflow orchestrates across all configured providers — creating or switching service workspaces
  4. Lifecycle hooks fire (e.g., update .env, run migrations)
  5. If worktrees are enabled, the shell wrapper cds you into the right directory

Copy-on-Write Storage

When you create a workspace with the local Docker provider, devflow clones the entire data directory from the parent. With Copy-on-Write filesystems, this clone is near-instant and uses almost no extra disk space. Only blocks that change after the clone are duplicated.

Filesystem     Platform   CoW Method            Setup Required
APFS           macOS      cp -c clone           None (automatic)
ZFS            Linux      Snapshots + clones    devflow setup-zfs
Btrfs          Linux      Reflink copy          None (if filesystem is Btrfs)
XFS            Linux      Reflink copy          None (if created with reflink support)
ext4 / other   Any        Full copy (fallback)  None (works, just slower)

devflow auto-detects the best storage method available. Run devflow doctor to see which method is being used. On macOS, APFS cloning is always available with no setup.
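As a mental model, the selection can be pictured as a priority function mirroring the table above. This is hypothetical logic for illustration — the real detection (surfaced by devflow doctor) inspects the filesystem directly:

```python
def pick_storage(platform: str, filesystem: str) -> str:
    """Hypothetical sketch of CoW method selection, mirroring the table above."""
    if platform == "macos" and filesystem == "apfs":
        return "apfs"          # cp -c clone, no setup
    if filesystem == "zfs":
        return "zfs"           # snapshots + clones
    if filesystem in ("btrfs", "xfs"):
        return "reflink"       # reflink copy (XFS needs reflink support)
    return "copy"              # full-copy fallback (ext4 and others)

print(pick_storage("linux", "ext4"))  # → copy
```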

ZFS Deep Dive

ZFS provides the most powerful Copy-on-Write support with true snapshots and clones:

# devflow creates a ZFS dataset per project
# zfs list shows something like:
devflow/myapp           # project dataset
devflow/myapp@main      # snapshot of main workspace
devflow/myapp/feature   # clone from snapshot (instant, zero-copy)

The devflow setup-zfs command creates a file-backed ZFS pool — no spare disk required. During devflow init on Linux, devflow auto-detects ZFS tools and offers to create the pool for you.

Service Lifecycle

Each local service workspace goes through these states:

Provisioning ──▶ Running ──▶ Stopped ──▶ (deleted)
      │             │           │
      └──────────▶ Failed ◀────┘

State          Description                                  Commands
Provisioning   Container being created, data being cloned   create
Running        Container active and accepting connections   start, switch
Stopped        Container paused, data preserved             stop
Failed         Container crashed or failed to start         logs, reset

03 Configuration

devflow is configured via YAML files and environment variables. From a minimal single-line setup to a complex multi-service orchestration, the config system scales with your needs.

Config Hierarchy

Configuration is merged from three sources (highest precedence first):

Priority   Source                  Purpose
1          Environment Variables   Quick toggles, CI/CD overrides, secrets
2          .devflow.local.yml      Machine-specific overrides (add to .gitignore)
3          .devflow.yml            Team shared configuration (committed to Git)

Services are stored in local state (~/.config/devflow/local_state.yml), not in the committed config file. This keeps API keys and machine-specific settings out of your repo.

Use devflow config -v to see the effective config with precedence details (which value came from which source).
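The merge itself amounts to an ordinary recursive dictionary overlay. The sketch below illustrates the semantics (it is not devflow's actual code): values from .devflow.local.yml win over .devflow.yml, and environment variables win over both.

```python
def merge(base: dict, override: dict) -> dict:
    """Recursively overlay `override` on `base`; override values win."""
    out = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(out.get(key), dict):
            out[key] = merge(out[key], value)  # merge nested sections
        else:
            out[key] = value                   # scalar/list: replace outright
    return out

team = {"git": {"auto_switch_on_workspace": True, "main_workspace": "main"}}
local = {"git": {"auto_switch_on_workspace": False}}   # .devflow.local.yml
effective = merge(team, local)
print(effective["git"])
# → {'auto_switch_on_workspace': False, 'main_workspace': 'main'}
```

Note that untouched keys (main_workspace) survive the overlay; only the overridden key changes.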

Full Configuration Schema

All sections are optional. An empty .devflow.yml file is valid.

Minimal Config

# This is all you need — devflow init generates this
services:
  - name: app-db
    type: local
    service_type: postgres
    default: true
    local:
      image: postgres:17

Complete Reference

# ============================================
# Git Integration
# ============================================
git:
  auto_create_on_workspace: true          # Create service workspaces on git checkout
  auto_switch_on_workspace: true          # Switch services on git checkout
  main_workspace: main                    # Main git workspace (auto-detected on init)
  workspace_filter_regex: "^feature/.*"  # Regex — only create workspaces for matching names
  exclude_workspaces:                    # Never create service workspaces for these
    - main
    - master
    - develop

# ============================================
# Behavior
# ============================================
behavior:
  max_workspaces: 10                     # Max workspaces before cleanup triggers

# ============================================
# Services
# ============================================
services:
  - name: app-db                       # Unique name for this service
    type: local                        # Provider type
    service_type: postgres             # Service type
    auto_workspace: true               # Branch this service with git (default: true)
    default: true                      # Default target for -s flag
    local:
      image: postgres:17

  - name: analytics
    type: local
    service_type: clickhouse
    auto_workspace: true
    clickhouse:
      image: clickhouse/clickhouse-server:latest
      port_range_start: 59000          # HTTP port (native = HTTP + 877)
      data_root: ~/.local/share/devflow
      user: default                    # ClickHouse user
      password: ""                     # ClickHouse password

  - name: app-mysql
    type: local
    service_type: mysql
    auto_workspace: true
    mysql:
      image: mysql:8                   # Docker image
      port_range_start: 53306          # First port to try
      data_root: ~/.local/share/devflow
      root_password: dev               # MySQL root password
      database: myapp                  # Default database
      user: dev                        # Application user
      password: dev                    # Application password

  - name: cache
    type: local
    service_type: generic
    auto_workspace: false                 # Shared across all workspaces
    generic:
      image: redis:7-alpine            # Any Docker image
      port_mapping: "6379:6379"        # host:container port mapping
      port_range_start: 56000          # Used if port_mapping is omitted
      environment:                     # Environment variables for container
        REDIS_MAXMEMORY: "100mb"
      volumes:                         # Additional volume mounts
        - "/data/redis:/data"
      command: "redis-server --save 60 1"  # Override container command
      healthcheck: "redis-cli ping"    # Healthcheck command

  - name: cloud-db
    type: neon
    service_type: postgres
    auto_workspace: true
    neon:
      api_key: ${NEON_API_KEY}         # Supports ${ENV_VAR} interpolation
      project_id: ${NEON_PROJECT_ID}
      base_url: https://console.neon.tech/api/v2  # Default

  - name: staging-db
    type: dblab
    service_type: postgres
    auto_workspace: true
    dblab:
      api_url: https://dblab.example.com
      auth_token: ${DBLAB_TOKEN}

  - name: xata-db
    type: xata
    service_type: postgres
    auto_workspace: true
    xata:
      api_key: ${XATA_API_KEY}
      organization_id: my-org
      project_id: my-project
      base_url: https://api.xata.tech  # Default

  - name: custom-service
    type: local
    service_type: plugin
    auto_workspace: true
    plugin:
      name: my-plugin                  # Resolved as devflow-plugin-my-plugin on PATH
      # path: /usr/local/bin/my-plugin # Or use an explicit path
      timeout: 30                      # Timeout per invocation (seconds)
      config:                          # Opaque JSON forwarded to the plugin
        region: us-east-1
        tier: development

# ============================================
# Worktree Configuration
# ============================================
worktree:
  enabled: true                        # Create Git worktrees per workspace
  path_template: "../{repo}.{workspace}"  # Where worktrees are created
  copy_files:                          # Files to copy into new worktrees
    - .env.local
    - .env
  copy_ignored: true                   # Copy files even if gitignored

# ============================================
# Lifecycle Hooks (MiniJinja templates)
# ============================================
hooks:
  post-create:
    install: "npm ci"
    env-setup:
      command: "echo DATABASE_URL={{ service['app-db'].url }} > .env.local"
      working_dir: "."
      continue_on_error: false
      condition: "file_exists:package.json"
      environment:
        NODE_ENV: development
      background: false
    migrate: "npm run migrate"

  post-switch:
    update-env:
      command: "echo DATABASE_URL={{ service['app-db'].url }} > .env.local"

  pre-merge:
    test:
      command: "npm test"
      continue_on_error: false

# ============================================
# AI Commit Messages
# ============================================
commit:
  generation:
    command: "claude -p --model haiku"  # External CLI (preferred)
    # Falls back to OpenAI-compatible API if no command set:
    # api_key: ${DEVFLOW_LLM_API_KEY}
    # api_url: https://api.openai.com/v1
    # model: gpt-4o-mini

# ============================================
# AI Agent Integration
# ============================================
agent:
  command: claude                       # Default agent tool
  workspace_prefix: "agent/"              # Prefix for agent workspaces

Use the services array syntax to configure one or more service providers.

Config Value Interpolation

API keys and secrets in service configs support ${ENV_VAR} syntax for environment variable interpolation:

neon:
  api_key: ${NEON_API_KEY}        # Resolved at runtime from environment
  project_id: ${NEON_PROJECT_ID}
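The substitution semantics can be sketched in a few lines (an illustration; devflow's real resolver may handle missing variables differently):

```python
import os
import re

def interpolate(value: str, env=os.environ) -> str:
    """Replace each ${VAR} with its environment value (sketch of the semantics)."""
    return re.sub(r"\$\{([A-Za-z_][A-Za-z0-9_]*)\}",
                  lambda m: env.get(m.group(1), ""), value)

print(interpolate("key-${NEON_API_KEY}", {"NEON_API_KEY": "abc123"}))  # → key-abc123
```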

Workspace Name Sanitization

devflow automatically sanitizes Git workspace names for use as database identifiers:

Example: feature/Auth-System becomes devflow_feature_auth_system (with prefix naming strategy).
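A rough reimplementation of the rule (hypothetical — the exact rules live in devflow itself): lowercase the name, collapse runs of non-alphanumeric characters into underscores, and add a devflow_ prefix under the prefix naming strategy.

```python
import re

def sanitize(workspace: str, prefix: str = "devflow") -> str:
    """Sketch of workspace-name sanitization for database identifiers."""
    ident = re.sub(r"[^a-z0-9]+", "_", workspace.lower()).strip("_")
    return f"{prefix}_{ident}"

print(sanitize("feature/Auth-System"))  # → devflow_feature_auth_system
```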

Environment Variables

Variable                          Description                                          Default
DEVFLOW_DISABLED                  Completely disable devflow                           false
DEVFLOW_SKIP_HOOKS                Skip Git hook execution                              false
DEVFLOW_AUTO_CREATE               Override auto_create_on_workspace                    config value
DEVFLOW_AUTO_SWITCH               Override auto_switch_on_workspace                    config value
DEVFLOW_BRANCH_FILTER_REGEX       Override workspace filter regex                      config value
DEVFLOW_DISABLED_BRANCHES         Comma-separated workspaces to skip                   (unset)
DEVFLOW_CURRENT_BRANCH_DISABLED   Disable for current workspace only                   false
DEVFLOW_ZFS_DATASET               Force a specific ZFS dataset                         auto-detected
DEVFLOW_LLM_API_KEY               API key for AI commit messages                       (unset)
DEVFLOW_LLM_API_URL               LLM endpoint URL                                     OpenAI
DEVFLOW_LLM_MODEL                 LLM model name                                       gpt-4o-mini
DEVFLOW_COMMIT_COMMAND            External CLI for commit messages (e.g., claude -p)   (unset)
DEVFLOW_AGENT_COMMAND             Default agent command (e.g., claude, codex)          claude
DEVFLOW_CONTEXT_BRANCH            Override current workspace context (for CI)          auto-detected

04 Providers

Providers define where and how your service workspaces run. From local Docker containers to cloud APIs to custom plugins, devflow supports a range of deployment targets.

local Local Docker

Docker containers with Copy-on-Write storage. Supports PostgreSQL, ClickHouse, MySQL, and any Docker image.

neon Neon

Neon's serverless Postgres with instant branching via their API.

dblab DBLab

Database Lab Engine for thin-clone development databases.

xata Xata

Xata's managed Postgres with workspace support.

plugin Plugin

Custom providers via JSON-over-stdio protocol. Build your own in any language.

Local Docker Provider

The default and most feature-rich provider. Each workspace gets its own Docker container with bind-mounted data. Copy-on-Write storage makes branching near-instant.

PostgreSQL (Local)

services:
  - name: app-db
    type: local
    service_type: postgres
    default: true
    local:
      image: postgres:17           # Any PostgreSQL Docker image
      port_range_start: 55432      # Ports are auto-assigned starting here
      postgres_user: postgres      # Superuser name
      postgres_password: postgres  # Superuser password
      postgres_db: myapp           # Default database
      data_root: ~/.local/share/devflow  # Data directory root
      storage: auto                # auto | zfs | apfs | reflink | copy

Containers are named devflow-{project}-{workspace}. Each gets a unique port, so multiple workspaces can run simultaneously.
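Port assignment can be pictured as a linear scan upward from port_range_start until a bindable port is found. This is a sketch of the idea, not devflow's implementation:

```python
import socket

def first_free_port(start: int, attempts: int = 200) -> int:
    """Return the first bindable TCP port at or after `start` (illustrative)."""
    for port in range(start, start + attempts):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            try:
                s.bind(("127.0.0.1", port))
                return port          # bind succeeded → port is free
            except OSError:
                continue             # in use; try the next one
    raise RuntimeError(f"no free port in [{start}, {start + attempts})")

port = first_free_port(55432)
print(port)  # first free port at or above 55432
```

Because each workspace scans from the same starting point, concurrent workspaces naturally land on consecutive ports.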

ClickHouse (Local)

services:
  - name: analytics
    type: local
    service_type: clickhouse
    auto_workspace: true
    clickhouse:
      image: clickhouse/clickhouse-server:latest
      port_range_start: 59000    # HTTP port; native port = HTTP + 877
      data_root: ~/.local/share/devflow
      user: default              # ClickHouse user
      password: ""               # ClickHouse password

Containers are named devflow-ch-{service}-{workspace}. Two consecutive ports are allocated (HTTP + native protocol).

MySQL / MariaDB (Local)

services:
  - name: app-mysql
    type: local
    service_type: mysql
    auto_workspace: true
    mysql:
      image: mysql:8             # Or mariadb:11
      port_range_start: 53306
      data_root: ~/.local/share/devflow
      root_password: dev         # MySQL root password
      database: myapp            # Default database
      user: dev                  # Application user
      password: dev              # Application password

Containers are named devflow-mysql-{service}-{workspace}.

Generic Docker (Any Image)

services:
  - name: cache
    type: local
    service_type: generic
    auto_workspace: false           # Shared across workspaces (no data cloning)
    generic:
      image: redis:7-alpine
      port_mapping: "6379:6379"  # Explicit host:container mapping
      environment:
        REDIS_MAXMEMORY: "100mb"
      volumes:
        - "/tmp/redis-data:/data"
      command: "redis-server --save 60 1"
      healthcheck: "redis-cli ping"

Containers are named devflow-{service}-{workspace}. The generic provider does not support data cloning from parent workspaces — set auto_workspace: false to share a single instance across workspaces, or true for separate empty containers per workspace.

If port_mapping is omitted, devflow auto-assigns ports starting from port_range_start (default: 56000).

Neon

Neon provides serverless PostgreSQL with instant, copy-on-write branching built into their cloud platform.

services:
  - name: cloud-db
    type: neon
    service_type: postgres
    auto_workspace: true
    neon:
      api_key: ${NEON_API_KEY}
      project_id: ${NEON_PROJECT_ID}
      base_url: https://console.neon.tech/api/v2  # Default

Neon supports point-in-time branching — you can create a workspace from a specific timestamp, not just the current state.

DBLab

Database Lab Engine creates thin clones of production-size databases for development and testing.

services:
  - name: staging-db
    type: dblab
    service_type: postgres
    auto_workspace: true
    dblab:
      api_url: https://dblab.example.com
      auth_token: ${DBLAB_TOKEN}

Xata

Xata provides managed PostgreSQL with workspace support through their platform API.

services:
  - name: xata-db
    type: xata
    service_type: postgres
    auto_workspace: true
    xata:
      api_key: ${XATA_API_KEY}
      organization_id: my-org
      project_id: my-project
      base_url: https://api.xata.tech  # Default

Plugin Provider

Build custom providers in any language. Plugins are standalone executables that communicate with devflow via a JSON-over-stdio protocol.

Configuration

services:
  - name: custom-service
    type: local
    service_type: plugin
    auto_workspace: true
    plugin:
      name: my-plugin              # Resolved as devflow-plugin-my-plugin on PATH
      # path: /usr/local/bin/my-plugin  # Or explicit path
      timeout: 30                  # Per-invocation timeout (seconds)
      config:                      # Opaque JSON forwarded to the plugin
        region: us-east-1
        tier: development

Protocol

Requests are JSON objects written to the plugin's stdin, one per line. The plugin writes a JSON response to stdout.

Request format:

{
  "method": "create_workspace",
  "params": {
    "workspace_name": "feature-auth",
    "from_workspace": "main"
  },
  "config": {
    "region": "us-east-1",
    "tier": "development"
  },
  "service_name": "custom-service"
}

Response format (success):

{
  "ok": true,
  "result": {
    "host": "localhost",
    "port": 5432,
    "database": "mydb",
    "user": "dev",
    "password": "secret"
  }
}

Response format (error):

{
  "ok": false,
  "error": "Failed to create workspace: quota exceeded"
}

Required Methods

Method                Params                           Expected Result
create_workspace      workspace_name, from_workspace   ConnectionInfo
delete_workspace      workspace_name                   {}
list_workspaces       (none)                           [{name, created_at}]
workspace_exists      workspace_name                   {exists: bool}
switch_to_branch      workspace_name                   ConnectionInfo
get_connection_info   workspace_name                   ConnectionInfo
doctor                (none)                           {healthy: bool, messages: [...]}
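A minimal plugin skeleton in Python might look like the sketch below. Only create_workspace and doctor are stubbed, and the connection details are placeholders — a real plugin would provision actual resources and implement every required method:

```python
import json
import sys

def handle(request: dict) -> dict:
    """Dispatch one protocol request; unknown methods return an error response."""
    method = request.get("method")
    params = request.get("params", {})
    if method == "create_workspace":
        # A real plugin would provision something for params["workspace_name"]
        # using request["config"]; these values are placeholders.
        return {"ok": True, "result": {
            "host": "localhost", "port": 5432, "database": "mydb",
            "user": "dev", "password": "secret",
        }}
    if method == "doctor":
        return {"ok": True, "result": {"healthy": True, "messages": []}}
    return {"ok": False, "error": f"unsupported method: {method}"}

if __name__ == "__main__":
    for line in sys.stdin:                     # one JSON request per line
        print(json.dumps(handle(json.loads(line))), flush=True)
```

Save it as devflow-plugin-my-plugin on your PATH (or point `path:` at it) and verify with devflow plugin check.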

Scaffolding a Plugin

# Print a Bash plugin scaffold script (stdout)
devflow plugin init my-plugin --lang bash > devflow-plugin-my-plugin
chmod +x devflow-plugin-my-plugin

# Print a Python plugin scaffold script (stdout)
devflow plugin init my-plugin --lang python > devflow-plugin-my-plugin.py

# Verify a plugin is working
devflow plugin check my-plugin

05 Services

Services are the stateful resources (databases, caches, queues) that devflow branches alongside your Git workflow. One project can manage many services simultaneously.

Single vs. Multi-Service

For simple projects with one database:

# Single service — simple and clean
services:
  - name: app-db
    type: local
    service_type: postgres
    default: true
    local:
      image: postgres:17

For projects with multiple databases or services:

# Multiple services — each branched independently
services:
  - name: app-db
    type: local
    service_type: postgres
    auto_workspace: true
    default: true
    local:
      image: postgres:17

  - name: cache
    type: local
    service_type: generic
    auto_workspace: false         # Shared — not branched per git workspace
    generic:
      image: redis:7-alpine
      port_mapping: "6379:6379"

auto_workspace and default

Flag                   Description
auto_workspace: true   This service is automatically branched when you create/switch/delete Git workspaces. Set to false to share a single instance across all workspaces (e.g., Redis cache).
default: true          When you run devflow connection <workspace> without -s, this is the service whose connection info is shown. Only one service should be marked as default.

Targeting Specific Services

Use the -s flag to target a specific named service:

# Get connection info for the analytics service
devflow connection feature/auth -s analytics

# Start a specific service
devflow service start feature/auth -s app-db

# Seed a specific service
devflow service seed feature/auth -s app-db --from dump.sql

Multi-Provider Orchestration

When you run devflow service create, switch, or service delete, devflow orchestrates across all services with auto_workspace: true. Operations are performed sequentially with partial failure tolerance — if one provider fails, the others continue.

Connection Info Formats

# URI format (default)
devflow connection feature/auth
# → postgresql://postgres:postgres@localhost:55433/myapp

# Environment variable format
devflow connection feature/auth --format env
# → DATABASE_URL=postgresql://postgres:postgres@localhost:55433/myapp
# → DATABASE_HOST=localhost
# → DATABASE_PORT=55433
# → DATABASE_USER=postgres
# → DATABASE_PASSWORD=postgres
# → DATABASE_NAME=myapp

# JSON format
devflow connection feature/auth --format json
# → {"host":"localhost","port":55433,"database":"myapp",...}
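From a script, the env format is easy to consume without a JSON parser. A small sketch (using sample lines like those shown above):

```python
def parse_env_output(text: str) -> dict:
    """Turn KEY=value lines (as printed by --format env) into a dict."""
    pairs = (line.split("=", 1) for line in text.splitlines() if "=" in line)
    return {key: value for key, value in pairs}

sample = "DATABASE_HOST=localhost\nDATABASE_PORT=55433\nDATABASE_NAME=myapp"
info = parse_env_output(sample)
print(info["DATABASE_PORT"])  # → 55433
```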

06 Worktrees

Git worktrees let you have multiple workspaces checked out simultaneously in different directories. devflow manages worktrees automatically and integrates them with service branching.

Why Worktrees?

Plain git checkout swaps files in place: switching to review a PR means stashing your work and losing local build state. With worktrees, each workspace lives in its own directory, so you can keep a feature in progress, check out another workspace next to it, and run both side by side. devflow pairs each worktree with its own isolated service workspace and can cd you between them automatically.

Configuration

worktree:
  enabled: true
  path_template: "../{repo}.{workspace}"  # Where worktrees are created
  copy_files:                          # Files to copy into new worktrees
    - .env.local
    - .env
    - .env.development
  copy_ignored: true                   # Copy even if files are gitignored

Path Template Variables

Variable      Description                              Example
{repo}        Repository directory name                my-project
{workspace}   Workspace name (slashes become dashes)   feature-auth

With the default template ../{repo}.{workspace}, workspace feature/auth of repo my-project creates a worktree at ../my-project.feature-auth/.
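The expansion is plain string substitution, as this sketch of the rule shows:

```python
def worktree_path(template: str, repo: str, workspace: str) -> str:
    """Expand {repo} and {workspace}; slashes in the workspace become dashes."""
    return (template
            .replace("{repo}", repo)
            .replace("{workspace}", workspace.replace("/", "-")))

print(worktree_path("../{repo}.{workspace}", "my-project", "feature/auth"))
# → ../my-project.feature-auth
```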

The Switch Command

devflow switch is the primary command for working with worktrees. It handles Git checkout, worktree creation, service switching, and hook execution in one step.

# Interactive picker — fuzzy search across all workspaces
devflow switch

# Switch to an existing workspace
devflow switch feature/auth

# Create a new workspace and switch to it
devflow switch -c feature/payment

# Create from a specific base workspace
devflow switch -c feature/payment -b develop

# Switch without touching services
devflow switch feature/auth --no-services

# Switch and run a command in the worktree
devflow switch feature/auth -x "npm install"

# Dry run — show what would happen
devflow switch feature/auth --dry-run

Shell Integration (Required for auto-cd)

For devflow switch to automatically change your shell's working directory to the worktree, you need the shell wrapper:

# Add to your shell profile
eval "$(devflow shell-init)"

Without this, devflow switch will print the path but won't cd into it (because a child process can't change the parent shell's directory).
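The wrapper pattern itself is simple: the binary prints a marker line naming the target directory, and a shell function intercepts it and cds. A simplified sketch (the real devflow shell-init output differs; the DEVFLOW_CD marker name is taken from the CLI reference):

```shell
# Simplified sketch of the wrapper pattern — not devflow's actual script.
devflow_wrapper() {
  local out
  out=$("$@")                                  # stand-in for `command devflow "$@"`
  case "$out" in
    DEVFLOW_CD=*) cd "${out#DEVFLOW_CD=}" ;;   # follow the marker in the parent shell
    *)            printf '%s\n' "$out" ;;      # otherwise pass output through
  esac
}

devflow_wrapper printf 'DEVFLOW_CD=/tmp'
pwd   # → /tmp
```

Because the function runs in your interactive shell (not a child process), the cd sticks.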

Worktree Setup in Existing Worktrees

If you manually created a worktree (e.g., via git worktree add), you can retroactively set up devflow:

# In the worktree directory
devflow worktree-setup

This copies config files from the main worktree and creates the corresponding service workspace.

Example: Full Worktree Workflow

# Enable shell integration (one-time setup in ~/.zshrc)
eval "$(devflow shell-init)"

# Start a new feature — creates worktree + isolated service environment
devflow switch -c feature/auth
# → Created worktree at ../my-project.feature-auth
# → Created service workspace feature/auth
# → cd ../my-project.feature-auth

# Work on the feature...
npm run migrate
npm test

# Jump to another workspace for a PR review (without losing your work)
devflow switch -c fix/login-bug
# → Created worktree at ../my-project.fix-login-bug
# → cd ../my-project.fix-login-bug

# Switch back to your feature — instant, no rebuild needed
devflow switch feature/auth
# → cd ../my-project.feature-auth

# Interactive picker for when you forget workspace names
devflow switch
# → Shows fuzzy search across all workspaces

07 Hooks

Lifecycle hooks are MiniJinja-templated commands that run at specific phases of the workspace lifecycle. Use them to run migrations, update .env files, restart services, or run tests.

All Hook Phases

| Phase | Fires when... | Blocking? |
|---|---|---|
| pre-switch | Before switching to a workspace | Yes |
| post-create | After creating a new workspace | Yes |
| post-start | After starting a stopped workspace | No |
| post-switch | After switching to a workspace | No |
| pre-remove | Before removing a workspace | Yes |
| post-remove | After removing a workspace | No |
| pre-commit | Before committing (Git pre-commit hook) | Yes |
| pre-merge | Before merging workspaces | Yes |
| post-merge | After merging (Git post-merge hook) | No |
| post-rewrite | After rebase/amend (Git post-rewrite hook) | No |
| pre-service-create | Before creating a service workspace | Yes |
| post-service-create | After creating a service workspace | No |
| pre-service-delete | Before deleting a service workspace | Yes |
| post-service-delete | After deleting a service workspace | No |
| post-service-switch | After switching a service workspace | No |

Blocking hooks run synchronously and must complete before the operation continues. Non-blocking hooks run in the background via tokio::spawn.

Simple vs. Extended Hooks

Simple (string command)

hooks:
  post-create:
    migrate: "npm run migrate"
    seed: "npm run seed"

Extended (full options)

hooks:
  post-create:
    migrate:
      command: "npm run migrate"
      working_dir: "./backend"       # Run in a subdirectory
      continue_on_error: false       # Fail the operation if hook fails
      condition: "file_exists:package.json"  # Only run if condition is true
      background: false              # Run in foreground (blocking)
      environment:                   # Extra environment variables
        NODE_ENV: development
        DATABASE_URL: "{{ service['app-db'].url }}"

Template Variables

Hook commands are rendered with MiniJinja (Jinja2-compatible). All template variables are available in both commands and environment values.

| Variable | Description | Example |
|---|---|---|
| {{ workspace }} | Current Git workspace name | feature/auth |
| {{ repo }} | Repository directory name | my-project |
| {{ worktree_path }} | Worktree path (if enabled) | ../my-project.feature-auth |
| {{ default_workspace }} | Default workspace | main |
| {{ commit }} | HEAD commit SHA | a1b2c3d |
| {{ target }} | Target workspace (merge hooks) | main |
| {{ base }} | Base/parent workspace (create hooks) | main |
| {{ service.<name>.host }} | Service host | localhost |
| {{ service.<name>.port }} | Service port | 55433 |
| {{ service.<name>.database }} | Database name | myapp |
| {{ service.<name>.user }} | Service user | postgres |
| {{ service.<name>.password }} | Service password | postgres |
| {{ service.<name>.url }} | Full connection URL | postgresql://... |

Use named service access in templates, especially for names containing hyphens (for example: {{ service['app-db'].url }}).

Custom Filters

| Filter | Description | Example |
|---|---|---|
| sanitize | Replace / and \ with - | {{ workspace \| sanitize }} → feature-auth |
| sanitize_db | Database-safe identifier (max 63 chars, hash suffix) | {{ workspace \| sanitize_db }} → feature_auth |
| hash_port | Deterministic port in range 10000–19999 | {{ workspace \| hash_port }} → 14523 |
| lower | Lowercase | {{ workspace \| lower }} |
| upper | Uppercase | {{ workspace \| upper }} |
| replace | String replacement | {{ workspace \| replace("/", "-") }} |
| truncate | Truncate to N characters | {{ workspace \| truncate(20) }} |
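The idea behind hash_port can be approximated in shell: hash a stable name, then map it into the fixed range. This is only a sketch of the "same name, same port" property — devflow's actual hash is implemented internally and produces different numbers:

```shell
# Approximation of hash_port: a stable hash mapped into 10000–19999.
# (devflow's real filter uses a different hash; same idea, different values.)
workspace="feature/auth"
hash=$(printf '%s' "$workspace" | cksum | cut -d' ' -f1)
port=$(( 10000 + hash % 10000 ))
echo "$port"   # same value every run for the same workspace name
```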

Conditions

Hook conditions determine whether a hook should run. Conditions are themselves template-rendered.

| Condition | Description |
|---|---|
| file_exists:path | Run only if file exists |
| dir_exists:path | Run only if directory exists |
| always / true | Always run |
| never / false | Never run (disabled) |
| Any other string | Executed as a shell command; exit 0 = true |

hooks:
  post-create:
    # Only run if package.json exists
    npm-install:
      command: "npm ci"
      condition: "file_exists:package.json"

    # Only run if Python project
    pip-install:
      command: "pip install -r requirements.txt"
      condition: "file_exists:requirements.txt"

    # Custom shell condition
    run-if-staging:
      command: "npm run seed:staging"
      condition: "test '{{ workspace }}' = 'staging'"

Hook Approval System

For security, hooks from .devflow.yml require user approval before running for the first time. This prevents unexpected commands from executing via Git hooks. In --non-interactive mode, unapproved hooks fail instead of prompting.

When a new hook is detected, devflow prompts you to approve or reject it. Approvals are stored in ~/.config/devflow/hook_approvals.yml, keyed by project path and command hash.

# View approved hooks
devflow hook approvals

# Clear all approvals (will re-prompt)
devflow hook approvals clear

Running Hooks Manually

# Show all configured hooks
devflow hook show

# Show hooks for a specific phase
devflow hook show post-create

# Run all hooks for a phase
devflow hook run post-create

# Run a specific named hook
devflow hook run post-create migrate

# Run hooks for a different workspace
devflow hook run post-create --workspace feature/auth

Complete Hook Example

hooks:
  post-create:
    # Install dependencies
    install:
      command: "npm ci"
      condition: "file_exists:package.json"
      continue_on_error: false

    # Generate .env file with all service connection strings
    env-setup:
      command: |
        cat > .env.local << EOF
        DATABASE_URL={{ service['app-db'].url }}
        CLICKHOUSE_URL=http://{{ service.analytics.host }}:{{ service.analytics.port }}
        REDIS_URL=redis://{{ service.cache.host }}:{{ service.cache.port }}
        EOF
      working_dir: "."

    # Run database migrations
    migrate:
      command: "npm run migrate"
      environment:
        DATABASE_URL: "{{ service['app-db'].url }}"
      continue_on_error: false

  post-switch:
    # Update .env on every workspace switch
    update-env:
      command: |
        cat > .env.local << EOF
        DATABASE_URL={{ service['app-db'].url }}
        CLICKHOUSE_URL=http://{{ service.analytics.host }}:{{ service.analytics.port }}
        REDIS_URL=redis://{{ service.cache.host }}:{{ service.cache.port }}
        EOF

  pre-merge:
    # Run tests before merging
    test:
      command: "npm test"
      continue_on_error: false
    lint:
      command: "npm run lint"
      continue_on_error: false

  post-start:
    # Start dev server on a deterministic port per workspace
    dev-server:
      command: "npm run dev -- --port {{ workspace | hash_port }}"
      background: true

  post-remove:
    # Custom cleanup
    cleanup:
      command: "docker stop {{ repo }}-{{ workspace | sanitize }}-* 2>/dev/null || true"
      continue_on_error: true

08 Seeding

Seed your workspaces with data from production dumps, live databases, or S3 backups. Seeding works during initialization or at any time after workspace creation.

Seed Sources

PostgreSQL URL

Live pg_dump from a running PostgreSQL server. Supports any postgresql:// or postgres:// URL.

Local File

Restore from a .sql (plain text) or .dump (custom format) file on disk.

S3 Object

Download and restore from an S3 bucket. Uses AWS_DEFAULT_REGION / AWS_REGION env vars.

Seeding During Service Setup

# Seed main workspace from a local dump
devflow service add app-db --provider local --service-type postgres --from /path/to/dump.sql

# Seed from a live database
devflow service add app-db --provider local --service-type postgres --from postgresql://readonly:pass@replica:5432/mydb

# Seed from S3
devflow service add app-db --provider local --service-type postgres --from s3://my-bucket/backups/latest.dump

Seeding an Existing Workspace

# Seed from a local file
devflow service seed main --from dump.sql

# Seed from a live database
devflow service seed feature/auth --from postgresql://readonly:pass@replica:5432/mydb

# Seed from S3
devflow service seed main --from s3://my-bucket/backups/latest.dump

# Seed a specific service (multi-service)
devflow service seed main -s app-db --from dump.sql

How Seeding Works

From a PostgreSQL URL

  1. devflow runs pg_dump in an ephemeral Docker container, connecting to the source database
  2. The dump is downloaded to a temporary file
  3. The dump is uploaded to the target container and restored with pg_restore

localhost URLs are automatically rewritten to host.docker.internal so Docker containers can reach databases running on the host machine.
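The rewrite itself is mechanical. A sketch of the equivalent transformation (devflow performs this internally; the sed patterns here are only illustrative):

```shell
# Sketch of the localhost → host.docker.internal URL rewrite.
src="postgresql://readonly:pass@localhost:5432/mydb"
rewritten=$(printf '%s' "$src" \
  | sed -e 's|@localhost:|@host.docker.internal:|' \
        -e 's|@127\.0\.0\.1:|@host.docker.internal:|')
echo "$rewritten"   # → postgresql://readonly:pass@host.docker.internal:5432/mydb
```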

From a Local File

From S3

  1. devflow downloads the object from S3 to a temporary file using the s3 crate
  2. AWS credentials are read from the standard environment variables
  3. The file is then restored using the local file method

09 Reverse Proxy

The devflow reverse proxy auto-discovers running Docker containers and serves them over HTTPS on *.localhost domains. It handles TLS termination with auto-generated certificates, HTTP-to-HTTPS redirection, and container-to-container DNS resolution via a shared Docker network.

How It Works

The proxy monitors Docker events in real time. When a container starts, devflow automatically maps it to a *.localhost domain, generates a TLS certificate signed by a local CA, and begins forwarding HTTPS traffic to the container.

Run devflow proxy trust install once to trust the local CA system-wide. After that, all *.localhost domains work with HTTPS — no browser warnings or -k flags needed.

Quick Start

# Start the proxy (needs root for ports 80/443, or use custom ports)
devflow proxy start --daemon

# Install the CA certificate so browsers trust *.localhost
devflow proxy trust install

# Start any container — it's automatically proxied
docker run -d --name myapp nginx

# Access it over HTTPS
curl https://myapp.localhost

Domain Resolution

Domains are resolved in priority order. The first match wins.

| Priority | Source | Format | Example |
|---|---|---|---|
| 1 | devproxy.domains label | Comma-separated, fully qualified | app.localhost, api.localhost |
| 2 | devproxy.domain label | Comma-separated, fully qualified | myapp.test |
| 3 | VIRTUAL_HOST env var | Comma-separated (nginx-proxy compatible) | myapp.localhost |
| 4 | devflow labels | {service}.{workspace}.{project}.{suffix} | postgres.feat-1.myapp.localhost |
| 5 | Compose labels | {service}.{project}.{suffix} | web.myapp.localhost |
| 6 | Container name | {name}.{suffix} | myapp.localhost |

Levels 4–6 are auto-generated from container metadata. Level 4 uses devflow.service, devflow.workspace, and devflow.project labels. Level 5 uses com.docker.compose.service and com.docker.compose.project labels. Level 6 is the fallback for standalone containers. All domains are lowercased.
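As an illustration, the level-5 Compose mapping boils down to joining the label values with the suffix and lowercasing (a sketch of the rule, not devflow's code):

```shell
# Sketch of a level-5 domain: {service}.{project}.{suffix}, lowercased.
service="Web"        # from the com.docker.compose.service label
project="MyApp"      # from the com.docker.compose.project label
suffix="localhost"
domain=$(printf '%s.%s.%s' "$service" "$project" "$suffix" \
  | tr '[:upper:]' '[:lower:]')
echo "$domain"   # → web.myapp.localhost
```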

Custom domain via label

# docker-compose.yml
services:
  web:
    image: nginx
    labels:
      devproxy.domains: "app.localhost, api.localhost"

nginx-proxy compatible

services:
  web:
    image: nginx
    environment:
      VIRTUAL_HOST: myapp.localhost

Automatic Compose domain (no config needed)

# docker-compose.yml with project "myapp" and service "web"
# → automatically available at https://web.myapp.localhost
docker compose -p myapp up -d

Port Detection

The upstream port is resolved in priority order:

| Priority | Source | Example |
|---|---|---|
| 1 | devproxy.port label | devproxy.port=8080 |
| 2 | DEVPROXY_PORT env var | DEVPROXY_PORT=3000 |
| 3 | VIRTUAL_PORT env var | VIRTUAL_PORT=8080 |
| 4 | Container's exposed ports | EXPOSE 3000 in Dockerfile |
| 5 | Fallback | 80 |

Example

services:
  api:
    build: .
    labels:
      devproxy.port: "3000"

Container Filtering

By default, all running containers are proxied. To exclude a specific container, set the devproxy.enabled=false label.

Containers with explicit domain labels (devproxy.domains, devproxy.domain) or the VIRTUAL_HOST env var are always included, even if they match an auto-skip pattern. Non-running containers are always skipped.

Container-to-Container DNS

When auto_network is enabled (the default), the proxy creates a devflow bridge network and connects every discovered container to it with DNS aliases matching their domain names.

How it works

Each container gets two aliases: the full domain (e.g. web.myapp.localhost) and a suffix-stripped form (e.g. web.myapp). The short form exists because glibc resolves .localhost to 127.0.0.1 per RFC 6761 before consulting Docker DNS.
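Deriving the two aliases from a full domain is a one-line suffix strip (a sketch of the rule):

```shell
# The two DNS aliases for a discovered container (sketch of the rule).
domain="web.myapp.localhost"
suffix="localhost"
full="$domain"               # web.myapp.localhost (resolves to 127.0.0.1 via glibc)
short="${domain%.$suffix}"   # web.myapp (usable from other containers)
echo "$full $short"
```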

Testing container-to-container resolution

# Start two containers
docker run -d --name web1 nginx
docker run -d --name web2 nginx

# From web2, reach web1 by name
docker exec web2 curl -s http://web1.localhost

# Verify DNS resolution
docker exec web2 nslookup web1.localhost 127.0.0.11

# With Compose services
docker compose -p myapp up -d
docker exec myapp-api-1 curl -s http://web.myapp

Verify network membership

docker network inspect devflow --format '{{range .Containers}}{{.Name}} {{end}}'

The devflow network persists across proxy restarts. Remove it manually with docker network rm devflow if needed.

Disable auto-networking with --no-auto-network or set auto_network: false in global config.

HTTPS & Certificates

On first start, the proxy generates a local Certificate Authority.

The CA signs per-domain certificates (valid for one year) on demand via SNI. Certificates are cached in memory for the lifetime of the proxy process. HTTP requests to known domains are redirected to HTTPS with a 301 Moved Permanently.

Trust Management

devflow proxy trust install   # Install CA to system trust store
devflow proxy trust verify    # Check if CA is trusted
devflow proxy trust remove    # Remove CA from system trust store
devflow proxy trust info      # Show manual installation instructions

Platform details

| Platform | Trust store |
|---|---|
| macOS | Login keychain (~/Library/Keychains/login.keychain-db) |
| Debian/Ubuntu | /usr/local/share/ca-certificates/devflow.crt + update-ca-certificates |
| Fedora/RHEL | /etc/pki/ca-trust/source/anchors/devflow.crt + update-ca-trust |
| Alpine Linux | /usr/local/share/ca-certificates/devflow.crt + update-ca-certificates |

On Linux, sudo is used when a TTY is available; pkexec is tried otherwise. If neither works, manual instructions are printed.

Configuration

CLI Flags

devflow proxy start [OPTIONS]

Options:
  --daemon              Run as a background daemon
  --https-port <PORT>   HTTPS listen port [default: 443]
  --http-port <PORT>    HTTP listen port [default: 80]
  --api-port <PORT>     API listen port [default: 2019]
  --domain-suffix <S>   Domain suffix for auto-discovered containers [default: localhost]
  --no-auto-network     Disable auto-connecting containers to shared devflow network

Global Config

Stored at ~/.config/devflow/config.yml:

proxy:
  domain_suffix: localhost
  https_port: 443
  http_port: 80
  api_port: 2019

Precedence

CLI flags override global config, which overrides built-in defaults.

Other Commands

devflow proxy stop     # Stop the daemon (sends SIGTERM)
devflow proxy status   # Show running state, target count, CA status
devflow proxy list     # List all proxied containers with domains and upstreams

All commands support --json for machine-readable output.

API Endpoints

The API server listens on 127.0.0.1:2019 (localhost only).

GET /api/status

Returns proxy running state and summary.

{
  "running": true,
  "targets": 3,
  "https_port": 443,
  "http_port": 80,
  "ca_installed": true
}

GET /api/targets

Returns all currently proxied targets.

[
  {
    "domain": "web.myapp.localhost",
    "container_ip": "172.18.0.2",
    "port": 80,
    "container_id": "abc123...",
    "container_name": "myapp-web-1",
    "project": "myapp",
    "service": "web",
    "workspace": null
  }
]

GET /api/ca

Returns CA certificate path and trust status.

{
  "cert_path": "/home/user/.devflow/proxy/ca.crt",
  "installed": true,
  "info": "CA certificate: /home/user/.devflow/proxy/ca.crt\n..."
}

Label & Environment Variable Reference

| Name | Type | Purpose |
|---|---|---|
| devproxy.domains | Label | Custom domain(s), comma-separated. Highest priority. |
| devproxy.domain | Label | Custom domain(s), comma-separated. Alias for devproxy.domains. |
| devproxy.port | Label | Override upstream port. |
| devproxy.enabled | Label | Set to false to exclude a container. |
| devflow.project | Label | Project name for auto-generated domains (level 4). |
| devflow.workspace | Label | Workspace name for auto-generated domains (level 4). |
| devflow.service | Label | Service name for auto-generated domains (level 4). |
| VIRTUAL_HOST | Env var | Custom domain(s), comma-separated. nginx-proxy compatible. |
| VIRTUAL_PORT | Env var | Override upstream port. nginx-proxy compatible. |
| DEVPROXY_PORT | Env var | Override upstream port. |

10 Desktop GUI

A native desktop application (Tauri 2 + React) for graphical management of projects, workspaces, services, hooks, proxy, and configuration.

Getting Started

# Development mode (hot-reload)
mise run gui

# Production build
mise run gui:build

# Install frontend dependencies only
mise run gui:install

Requires bun (or Node.js 18+) and the Tauri CLI.

Features

Dashboard

Project overview with configuration status, workspace counts, service counts, and proxy status at a glance.

Workspace Management

Create, switch, and delete workspaces. View worktree paths, parent relationships, and connection info. Choose between branch or worktree creation mode.

Service Control

Start, stop, and reset service instances. View logs, check health diagnostics, and manage the full service lifecycle per workspace.

Hook Editor

Three-panel hook management: phase list, hook entries with run/edit/delete, and live MiniJinja template preview with variable browser.

Proxy Dashboard

Proxy status, container discovery table with HTTPS links, and one-click CA trust management. Filter containers by domain, name, or project.

Configuration Editor

Section-based form editor (General, Git, Worktree, Services, Hooks, Agent, Commit) — no raw YAML editing needed. Validation before save.

System Tray

The GUI runs in the system tray for quick access to common actions.

11 TUI Dashboard

An interactive terminal dashboard for managing workspaces, services, and system diagnostics — without leaving your terminal.

Launch

devflow tui

Tabs

| Tab | What it shows | Key actions |
|---|---|---|
| Workspaces | Workspace tree with parent/child relationships, service states, worktree paths | Switch workspace, start/stop services, press o to open workspace and exit |
| Services | Configured services, provider state, and capability information | Inspect service inventory and provider support without leaving the terminal |
| Proxy | Proxy status, trusted CA state, and discovered localhost endpoints | Check reverse proxy health and discover running container routes |
| System | Configuration overview, hook list with template variable reference and scaffold snippets, doctor diagnostics | Browse hooks, view template context, check system health |
| Logs | Service container logs with workspace and service picker | Select workspace/service, filter logs, keyboard navigation |

Navigation

With shell integration enabled (eval "$(devflow shell-init)"), pressing o in the TUI automatically cds your shell into the selected workspace's worktree directory.

12 AI & Automation

devflow is designed for both human developers and autonomous agents. It provides AI-powered commit messages, structured JSON output for CI/CD pipelines, agent context and skills, optional sandboxed workspaces, and isolated service environments for parallel tasks.

Why devflow for AI agents? Each agent task gets its own database — fully isolated, instantly cloned via Copy-on-Write. No shared state between parallel agents, no conflicts, no cleanup headaches. The --json --non-interactive flags make every command machine-safe, and llms.txt gives agents all the context they need.

Sandboxed Workspaces

For higher-risk automation or experimental tasks, create a workspace in sandboxed mode:

devflow switch -c agent/fix-login --sandboxed

Sandbox support is platform-aware: where the current system supports it, the workspace runs with restricted filesystem and command access.

Smart Merge

devflow also includes an advanced merge workflow with readiness checks, rebase helpers, and merge train support. This feature set is controlled by the Smart Merge setting in the desktop app or global config.

devflow merge --check-only
devflow rebase develop
devflow train add
devflow train status
devflow train run --cleanup

Agent Integration

devflow follows the Agent Skills open standard and the llms.txt convention, giving AI coding agents everything they need to manage isolated workspaces autonomously.

Agent Skills

Run devflow agent skill to generate workspace management skills that AI tools discover automatically:

devflow agent skill

This creates the following files:

| Skill | What It Teaches the Agent |
|---|---|
| devflow-workspace-list | List all workspaces with status (devflow --json list) |
| devflow-workspace-switch | Switch to an existing workspace (devflow switch <name>) |
| devflow-workspace-create | Create a new workspace with isolated services (devflow switch -c <name>) |
| devflow | Overview: configured services, hook phases, automation flags, suggested workflow |

Skills are written directly to .claude/skills/, a de facto standard location; Claude Code, Cursor, and OpenCode all discover these skills automatically.

Agent Context Files

AGENTS.md

Agent-first onboarding guide: recommended flags, bootstrap scripts, suggested agent loop, and automation contract. Most AI coding tools automatically read this file.

llms.txt

Curated index following the llms.txt convention — a concise map of all agent-relevant files, docs, and source code entry points.

llms-full.txt

Comprehensive flat context dump: every CLI command, the full config schema, all hook phases, and template variables. Designed to be ingested in a single prompt.

examples/agent-*.sh

Ready-to-use scripts: agent-bootstrap.sh for idempotent repo setup and agent-task.sh for task-scoped workspace environments.

Agent Lifecycle

The typical agent workflow follows this pattern:

  1. Bootstrap (once, idempotent): devflow init, then devflow agent skill
  2. Create workspace (per task): devflow switch -c, then get connection info
  3. Work (per task): the agent runs migrations, tests, and code against an isolated DB
  4. Cleanup (per task): devflow delete, or reset to retry

Quick start for agents

# One-time bootstrap (idempotent — safe to re-run)
./examples/agent-bootstrap.sh

# Per-task: create isolated environment and get connection string
./examples/agent-task.sh issue-123

Integration with AI coding tools

Popular AI coding assistants automatically discover devflow's agent context:

| Tool | Context File | Skills | How It Works |
|---|---|---|---|
| Claude Code | CLAUDE.md | .claude/skills/ | Skills auto-loaded as slash commands; workspace ops available out of the box |
| Cursor | AGENTS.md | .claude/skills/ | Skills discoverable via the .claude/skills/ standard |
| OpenCode | AGENTS.md | .claude/skills/ | Skills discoverable via the .claude/skills/ standard |
| GitHub Copilot | AGENTS.md | | Available as workspace context; agent scripts run in Copilot Workspace |
| Custom agents | llms.txt / llms-full.txt | | Machine-readable context; ingest llms-full.txt for complete reference |

Run devflow agent skill after devflow init to give your AI tools workspace management capabilities. The generated skills teach agents how to create, switch, and list workspaces — so they can isolate their own work automatically.

Agent Commands

devflow includes built-in commands for AI-friendly context, skill installation, and automation-safe workspace management:

# Install workspace skills for AI coding tools
devflow agent skill

# Check tracked agent-oriented workspaces
devflow agent status

# Get project context for the current workspace
devflow agent context
devflow agent context --format json

# Create an isolated workspace for an agent task
devflow --json --non-interactive switch -c agent/fix-login

AI Commit Messages

Generate conventional commit messages from your staged changes using any OpenAI-compatible LLM:

# Generate and commit with AI message
devflow commit --ai

# Generate message but open in editor for review
devflow commit --ai --edit

# Dry run — see the generated message without committing
devflow commit --ai --dry-run

# Manual commit message (no AI)
devflow commit -m "feat: add user authentication"

# Open editor for manual message
devflow commit

LLM Configuration

Configure via environment variables:

| Variable | Description | Default |
|---|---|---|
| DEVFLOW_LLM_API_KEY | API key for the LLM provider | Required (unless local) |
| DEVFLOW_LLM_API_URL | API endpoint URL | OpenAI (https://api.openai.com/v1) |
| DEVFLOW_LLM_MODEL | Model name | gpt-4o-mini |

Provider Examples

# OpenAI (default)
export DEVFLOW_LLM_API_KEY="sk-..."

# Anthropic (via proxy)
export DEVFLOW_LLM_API_KEY="sk-ant-..."
export DEVFLOW_LLM_API_URL="https://api.anthropic.com/v1"
export DEVFLOW_LLM_MODEL="claude-3-haiku-20240307"

# Ollama (local — no API key needed)
export DEVFLOW_LLM_API_URL="http://localhost:11434/v1"
export DEVFLOW_LLM_MODEL="llama3.1"

When the API URL contains localhost or 127.0.0.1, devflow skips the API key requirement — perfect for local models with Ollama.

The system prompt enforces Conventional Commits format. Diffs larger than 32KB are truncated with a notice to the model.
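Both rules are easy to express. A sketch of the equivalent logic (devflow implements this internally; the exact matching and truncation details may differ):

```shell
# Rule 1: local API URLs skip the API-key requirement (sketch).
url="http://localhost:11434/v1"
case "$url" in
  *localhost*|*127.0.0.1*) need_key=false ;;
  *)                       need_key=true  ;;
esac
echo "need_key=$need_key"   # → need_key=false

# Rule 2: diffs over 32 KB are truncated before being sent to the model (sketch).
limit=$((32 * 1024))
diff=$(head -c 40000 /dev/zero | tr '\0' 'a')   # 40 KB dummy diff
note=""
if [ "${#diff}" -gt "$limit" ]; then
  diff=$(printf '%s' "$diff" | head -c "$limit")
  note="[diff truncated]"
fi
echo "${#diff} ${note}"   # → 32768 [diff truncated]
```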

JSON Output Mode

All core commands support --json for structured output, making devflow trivially scriptable from any language or agent framework.

# Create a workspace and get structured output
devflow --json switch -c feature/auth
# → {"name": "feature/auth", "state": "running", ...}

# Get connection info as JSON
devflow --json connection feature/auth
# → {"host": "localhost", "port": 55433, "database": "myapp", ...}

# List all workspaces as JSON
devflow --json list
# → [{"name": "main", "status": "running"}, ...]

# Status as JSON
devflow --json status
# → {"project": "myapp", "current_workspace": "main", "services": [...]}

Automation Capabilities

Use this command to detect machine-facing guarantees at runtime:

devflow --json capabilities
# → {"schema_version":"1.0", "non_interactive": {...}, ...}

Non-Interactive Mode

Skip all prompts and use sensible defaults. Essential for CI/CD, scripts, and AI agents:

# All commands in non-interactive mode
devflow --non-interactive switch -c feature/auth
devflow --non-interactive switch feature/auth        # Hooks run (require pre-approval)
devflow --non-interactive switch feature/auth --no-verify  # Skip ALL hooks entirely
devflow --non-interactive destroy --force

Automation Contract

AI Agent Workflow

Give each AI agent task an isolated environment with its own database, test data, and connection strings. Here's the complete pattern:

#!/bin/bash
# === AI Agent Sandbox Script ===

# 0. One-time bootstrap (idempotent)
./examples/agent-bootstrap.sh

# 1. Create isolated environment for the agent task
BRANCH="agent-task-$TASK_ID"
OUTPUT=$(devflow --json --non-interactive switch -c "$BRANCH")

# 2. Switch to worktree directory if worktrees are enabled
WORKTREE=$(echo "$OUTPUT" | jq -r '.worktree_path // empty')
[ -n "$WORKTREE" ] && cd "$WORKTREE"

CONN=$(devflow --json connection "$BRANCH" | jq -r '.connection_string')

# 3. Seed with test data if needed
devflow --non-interactive service seed "$BRANCH" --from test-data.sql

# 4. Agent works against an isolated development environment
export DATABASE_URL="$CONN"
python agent.py --task "$TASK_ID"

# 5. Reset to clean state if agent needs to retry
devflow --non-interactive service reset "$BRANCH"

# 6. Re-run agent
python agent.py --task "$TASK_ID"

# 7. Check container logs on failure
devflow service logs "$BRANCH"

# 8. Clean up
devflow --non-interactive remove "$BRANCH" --force

CI/CD Pipeline Integration

# GitHub Actions example
name: PR Preview Database
on:
  pull_request:
    types: [opened, synchronize]

jobs:
  preview:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Install devflow
        run: cargo install --path .

      - name: Create preview environment
        run: |
          devflow --json --non-interactive init myapp
          # --no-verify skips hooks (appropriate for CI where hooks aren't pre-approved)
          devflow --json --non-interactive switch -c pr-${{ github.event.number }} --no-verify

      - name: Run migrations
        run: |
          CONN=$(devflow --json connection pr-${{ github.event.number }} | jq -r '.connection_string')
          DATABASE_URL="$CONN" npm run migrate

      - name: Run tests
        run: |
          CONN=$(devflow --json connection pr-${{ github.event.number }} | jq -r '.connection_string')
          DATABASE_URL="$CONN" npm test

      - name: Cleanup
        if: always()
        run: devflow --non-interactive remove pr-${{ github.event.number }} --force

13 VCS Support

devflow supports both Git and Jujutsu (jj) version control systems through a unified provider abstraction.

Git Integration

Git is the primary and most feature-complete VCS provider, powered by the git2 library (libgit2 bindings).

Installed Git Hooks

devflow install-hooks installs four hooks:

| Hook | Purpose |
|---|---|
| post-checkout | Auto-create/switch service workspaces on git checkout |
| post-merge | Run post-merge lifecycle hooks after git merge |
| pre-commit | Run pre-commit lifecycle hooks before git commit |
| post-rewrite | Run post-rewrite hooks after git rebase / git commit --amend |

All hook scripts include a devflow auto-generated hook marker, so devflow uninstall-hooks can safely remove only devflow hooks without affecting other hooks.

Worktree-Aware Hooks

The post-checkout hook is worktree-aware. It detects whether the checkout happened in the main worktree or a linked worktree (via git rev-parse --git-dir vs --git-common-dir) and passes the appropriate flags to devflow.

Main Workspace Detection

devflow auto-detects the main workspace using this priority:

  1. Remote HEAD reference (origin/HEAD)
  2. Common names: main, master, develop
  3. Tracking workspace of the current workspace
  4. Current workspace (fallback)
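Steps 1–2 can be approximated with plain git commands against a throwaway repository (a sketch only; devflow uses libgit2 internally, so the exact logic may differ):

```shell
# Sketch of detection steps 1–2 against a throwaway repo.
tmp=$(mktemp -d) && cd "$tmp"
git init -q -b main .
git -c user.email=dev@example.com -c user.name=dev commit -q --allow-empty -m init

# 1. Remote HEAD (origin/HEAD) — no remote here, so this yields nothing
main=$(git symbolic-ref -q --short refs/remotes/origin/HEAD 2>/dev/null | sed 's|^origin/||')

# 2. Fall back to common branch names
if [ -z "$main" ]; then
  for cand in main master develop; do
    git show-ref -q --verify "refs/heads/$cand" && { main="$cand"; break; }
  done
fi
echo "$main"   # → main
```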

Jujutsu (jj) Support

devflow supports Jujutsu, a modern VCS that can operate alongside Git. Key mappings:

| Git Concept | jj Equivalent |
|---|---|
| Branches | Bookmarks |
| Worktrees | Workspaces |
| git checkout | jj workspace add |

Auto-Detection

devflow walks up the directory tree looking for .jj/ and .git/ directories. When both exist (colocated jj repo), jj is preferred. You can have a Git repository managed by jj and devflow will use the jj provider.

Jujutsu support is functional but less mature than Git. Hook installation and some worktree features may have limitations compared to the Git provider.

14 CLI Reference

Complete reference for all devflow commands, organized by category. Core automation commands support --json and --non-interactive global flags.

Global Flags

devflow [--json] [--non-interactive] [-s <service-name>] <command>

| Flag | Description |
|---|---|
| --json | Output structured JSON for core automation commands |
| --non-interactive | Skip all prompts, use defaults |
| -s <name> | Target a specific named service |

Setup & Configuration

devflow init [path]

Initialize devflow in the current directory (or create/init a path). Creates .devflow.yml.

devflow init                            # Initialize current directory
devflow init myapp                      # Create ./myapp and initialize it
devflow init myapp --name app           # Explicit project name
devflow init myapp --force              # Overwrite existing config

devflow service add <name>

Add and configure a service provider. Use --from to seed main-workspace service data.

devflow service add app-db --provider local --service-type postgres
devflow service add app-db --provider local --service-type postgres --from ./backup.sql
devflow service add app-db --provider local --service-type postgres --from postgresql://user:pass@host:5432/db

devflow config

Show the current configuration.

devflow config                           # Show config
devflow config -v                        # Verbose: show precedence details

devflow install-hooks

Install devflow Git hooks (post-checkout, post-merge, pre-commit, post-rewrite).

devflow uninstall-hooks

Remove devflow Git hooks. Only removes hooks with the devflow marker.

devflow shell-init [shell]

Print a shell wrapper function that auto-cds when devflow emits DEVFLOW_CD.

eval "$(devflow shell-init)"             # Auto-detect shell
eval "$(devflow shell-init bash)"        # Bash
eval "$(devflow shell-init zsh)"         # Zsh
devflow shell-init fish | source         # Fish
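The generated function wraps the devflow binary and performs the cd on its behalf. A rough bash sketch of the idea, assuming a line-based "DEVFLOW_CD <path>" marker on stdout (the real generated code and marker format may differ; run devflow shell-init to see them):

```shell
# Illustrative wrapper only; not the actual generated function.
devflow() {
  local out
  out=$(command devflow "$@") || return
  case "$out" in
    "DEVFLOW_CD "*) cd "${out#DEVFLOW_CD }" ;;
    *) if [ -n "$out" ]; then printf '%s\n' "$out"; fi ;;
  esac
}
```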

devflow tui

Launch the interactive terminal UI dashboard. Provides a rich overview of workspaces, services, and worktrees with keyboard-driven navigation.

devflow setup-zfs

Create a file-backed ZFS pool for Copy-on-Write storage (Linux only).

devflow setup-zfs                        # 10G pool named "devflow"
devflow setup-zfs --size 20G             # Custom size
devflow setup-zfs --pool-name mypool     # Custom pool name

devflow doctor

Run system diagnostics. Checks config, Docker, VCS, hooks, storage, and connectivity.


Workspace Management

devflow switch [workspace]

Switch to an existing workspace/worktree. Use -c to create before switching.

devflow switch                           # Interactive fuzzy picker
devflow switch feature/auth              # Switch to existing workspace/worktree
devflow switch -c feature/new            # Create and switch
devflow switch -c feature/new --from develop  # Create from specific parent
devflow switch feature/auth -x "npm ci"  # Run command after switch
devflow switch feature/auth --no-services # Skip service operations
devflow switch feature/auth --no-verify  # Skip hooks
devflow switch --template                # Switch to main/template
devflow switch feature/auth --dry-run    # Preview what would happen

devflow link <workspace>

Link an existing VCS workspace into the devflow registry (and materialize service workspaces if configured).

devflow link feature/auth
devflow link feature/auth --from main

devflow service create <workspace>

Create service workspace(s) only, without switching the VCS workspace or worktree.

devflow service create feature/auth
devflow service create feature/auth --from develop

devflow service delete <workspace>

Delete service workspace(s) only. Keeps the Git workspace and worktree.

devflow service delete feature/auth

devflow remove <workspace>

Full cleanup: deletes the Git workspace, worktree, and all service workspaces.

devflow remove feature/auth
devflow remove feature/auth --force       # Skip confirmation
devflow remove feature/auth --keep-services  # Keep service workspaces

devflow list

List all workspaces with enriched status (Git workspace + worktree + service status across all providers).

devflow graph

Render the full environment graph: workspace tree with service status, worktree paths, and provider info.

devflow graph
devflow --json graph

devflow merge [target]

Merge the current workspace into the target (defaults to main).

devflow merge                            # Merge into main
devflow merge develop                    # Merge into develop
devflow merge --cleanup                  # Delete source workspace after merge
devflow merge --dry-run                  # Preview the merge

devflow cleanup

Alias for devflow service cleanup. Remove old service workspaces, keeping the most recent N.

devflow cleanup                          # Use max_workspaces from config
devflow cleanup --max-count 5            # Keep only 5 most recent

devflow service cleanup

Clean old workspaces for a specific service target.

devflow service cleanup
devflow service cleanup --max-count 5

Workspace Lifecycle (Local Provider)

devflow service start <workspace>

Start a stopped container.

devflow service stop <workspace>

Stop a running container (preserves data).

devflow service reset <workspace>

Reset workspace data to the state of the parent workspace.

devflow service destroy

Destroy all workspaces and data for a service.

devflow service destroy                   # With confirmation prompt
devflow service destroy --force           # Skip confirmation

In --json or --non-interactive mode, --force is required.

devflow service seed <workspace> --from <source>

Seed a workspace from a PostgreSQL URL, file, or S3 object.

devflow service seed main --from dump.sql
devflow service seed main --from postgresql://user:pass@host/db
devflow service seed main --from s3://bucket/path/to/dump.sql

devflow service logs <workspace>

Show Docker container logs.

devflow service logs feature/auth                # Last 100 lines (default)
devflow service logs feature/auth --tail 50      # Last 50 lines

Info & Diagnostics

devflow status

Show project and service status with multi-provider aggregation.

devflow capabilities

Show machine-readable automation guarantees for agents and CI.

devflow capabilities
devflow --json capabilities

devflow service capabilities

Show the service provider capability matrix (which operations each configured provider supports).

devflow service capabilities
devflow --json service capabilities

devflow connection <workspace>

Show connection info for a workspace.

devflow connection feature/auth              # URI (default)
devflow connection feature/auth --format env # Env vars (DATABASE_URL=...)
devflow connection feature/auth --format json # JSON object
devflow connection feature/auth -s analytics # Specific service

VCS

devflow commit

Commit staged changes with optional AI-generated messages.

devflow commit                           # Open editor
devflow commit -m "feat: add auth"       # Inline message
devflow commit --ai                      # AI-generated message
devflow commit --ai --edit               # AI message + editor review
devflow commit --ai --dry-run            # Preview AI message


Hooks

devflow hook show [phase]

Show configured hooks.

devflow hook run <phase> [name]

Manually run hooks for a specific phase.

devflow hook run post-create              # All post-create hooks
devflow hook run post-create migrate      # Just the "migrate" hook
devflow hook run post-create --workspace feat/x  # For a different workspace

devflow hook explain <phase>

Explain what a hook phase does and show configured hooks for it.

devflow hook vars

Show all template variables available for the current workspace.

devflow hook render <template>

Render a MiniJinja template string against the current workspace context.

devflow hook render "DATABASE_URL={{ service['app-db'].url }}"

devflow hook approvals

Manage the hook approval store.

devflow hook approvals list               # List approved hooks
devflow hook approvals add "npm test"     # Approve a command
devflow hook approvals clear              # Clear all approvals

devflow hook triggers

Show the mapping from VCS events to hook phases.

devflow hook actions

List built-in action types such as shell, replace, write-file, write-env, copy, docker-exec, http, and notify.
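Structured actions take an action type plus type-specific fields. As a hedged sketch only, the field names below (action, path, content) are assumptions rather than the verified schema; check devflow hook explain or the hooks documentation for the real shape:

```yaml
# Hypothetical write-file action; field names are illustrative,
# not confirmed devflow schema.
hooks:
  post-create:
    env-note:
      action: write-file
      path: .env.local
      content: "DATABASE_URL={{ service['app-db'].url }}"
```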

devflow hook recipes

List built-in hook recipe bundles you can install into .devflow.yml.

devflow hook install <recipe>

Install a built-in hook recipe without overwriting existing entries.

devflow hook recipes
devflow hook install sync-ai-configs

AI Agents

devflow agent status

Show agent status across all workspaces.

devflow agent context

Output project context (workspace info, services, connections) for AI agents.

devflow agent context                    # Markdown format
devflow agent context --format json
devflow agent context --workspace feat/x

devflow agent skill

Generate project-specific skills/rules for AI tools.

devflow agent skill
devflow --json agent skill

devflow sync-ai-configs

Copy AI tool configuration additions from the current worktree back to the main worktree.


Reverse Proxy

devflow proxy start

Start the native HTTPS reverse proxy.

devflow proxy start                      # Foreground
devflow proxy start --daemon             # Background
devflow proxy start --https-port 8443    # Custom HTTPS port

devflow proxy stop

Stop the proxy daemon.

devflow proxy status

Show proxy status.

devflow proxy list

List proxied containers with HTTPS URLs.

devflow proxy trust

Manage CA certificate trust.

devflow proxy trust install              # Install CA to system trust
devflow proxy trust verify               # Check if CA is trusted
devflow proxy trust remove               # Remove CA from system trust
devflow proxy trust info                 # Platform-specific instructions

Plugins

devflow plugin list

List all configured plugin providers.

devflow plugin check <name>

Verify a plugin provider is reachable and responds correctly.

devflow plugin init <name>

Generate a plugin scaffold.

devflow plugin init my-plugin --lang bash    # Bash scaffold
devflow plugin init my-plugin --lang python  # Python scaffold

Worktree

devflow worktree-setup

Set up devflow in an existing Git worktree (copy files, create service workspaces).


Other

devflow destroy

Tear down the entire devflow project (inverse of init). Requires --force in non-interactive mode.

devflow service discover

Auto-discover running Docker containers and suggest adding them as services.

devflow service discover
devflow service discover --service-type postgres

devflow gc

Garbage collection — detect and clean up orphaned projects.

devflow gc                               # Interactive cleanup
devflow gc --list                        # List orphans only
devflow gc --all                         # Clean all orphans
devflow gc --force                       # Skip confirmation


15 Examples & Recipes

Real-world configuration examples for different frameworks, stacks, and workflows. Copy, paste, and adapt.

Node.js / Express with Prisma

# .devflow.yml
services:
  - name: app-db
    type: local
    service_type: postgres
    default: true
    local:
      image: postgres:17

hooks:
  post-create:
    env:
      command: "echo DATABASE_URL={{ service['app-db'].url }} > .env.local"
    migrate: "npx prisma migrate deploy"
    generate: "npx prisma generate"

  post-switch:
    env:
      command: "echo DATABASE_URL={{ service['app-db'].url }} > .env.local"

  pre-merge:
    test: "npm test"
    lint: "npx eslint ."

Django Project

# .devflow.yml
git:
  auto_create_on_workspace: true
  auto_switch_on_workspace: true
  main_workspace: main
  exclude_workspaces: [main, master, develop]

behavior:
  max_workspaces: 5

services:
  - name: app-db
    type: local
    service_type: postgres
    default: true
    local:
      image: postgres:17

hooks:
  post-create:
    update-env:
      command: "echo DATABASE_URL={{ service['app-db'].url }} > .env.local"
      condition: "file_exists:manage.py"
    migrate:
      command: "python manage.py migrate"
      condition: "file_exists:manage.py"
    restart-services:
      command: "docker compose restart"
      continue_on_error: true

  post-switch:
    update-env:
      command: "echo DATABASE_URL={{ service['app-db'].url }} > .env.local"
      condition: "file_exists:manage.py"
    restart-services:
      command: "docker compose restart"
      continue_on_error: true

Ruby on Rails

# .devflow.yml
services:
  - name: app-db
    type: local
    service_type: postgres
    default: true
    local:
      image: postgres:17

hooks:
  post-create:
    env:
      command: |
        cat > .env.local << EOF
        DATABASE_URL={{ service['app-db'].url }}
        RAILS_ENV=development
        EOF
    migrate: "bundle exec rails db:migrate"

  post-switch:
    env:
      command: |
        cat > .env.local << EOF
        DATABASE_URL={{ service['app-db'].url }}
        RAILS_ENV=development
        EOF

  pre-merge:
    test: "bundle exec rspec"

Multi-Service: PostgreSQL + ClickHouse + Redis

# .devflow.yml — Full-stack application
git:
  auto_create_on_workspace: true
  auto_switch_on_workspace: true
  main_workspace: main
  exclude_workspaces: [main, master, develop]

behavior:
  max_workspaces: 10

services:
  - name: app-db
    type: local
    service_type: postgres
    auto_workspace: true
    default: true
    local:
      image: postgres:17

  - name: analytics
    type: local
    service_type: clickhouse
    auto_workspace: true
    clickhouse:
      image: clickhouse/clickhouse-server:latest

  - name: cache
    type: local
    service_type: generic
    auto_workspace: false               # Shared across workspaces
    generic:
      image: redis:7-alpine
      port_mapping: "6379:6379"

worktree:
  enabled: true
  path_template: "../{repo}.{workspace}"
  copy_files: [.env.local, .env]
  copy_ignored: true

hooks:
  post-create:
    install: "npm ci"
    env-setup:
      command: |
        cat > .env.local << EOF
        DATABASE_URL={{ service['app-db'].url }}
        CLICKHOUSE_URL=http://{{ service.analytics.host }}:{{ service.analytics.port }}
        REDIS_URL=redis://localhost:6379
        EOF
      working_dir: "."
    migrate: "npm run migrate"

  post-switch:
    update-env:
      command: |
        cat > .env.local << EOF
        DATABASE_URL={{ service['app-db'].url }}
        CLICKHOUSE_URL=http://{{ service.analytics.host }}:{{ service.analytics.port }}
        REDIS_URL=redis://localhost:6379
        EOF

  pre-merge:
    test: "npm test"
    lint: "npm run lint"

Neon Cloud Setup

# .devflow.yml — Zero Docker, fully managed
services:
  - name: cloud-db
    type: neon
    service_type: postgres
    auto_workspace: true
    default: true
    neon:
      api_key: ${NEON_API_KEY}
      project_id: ${NEON_PROJECT_ID}

hooks:
  post-create:
    migrate: "npx prisma migrate deploy"
  post-switch:
    env:
      command: "echo DATABASE_URL={{ service['cloud-db'].url }} > .env.local"
# Set environment variables
export NEON_API_KEY="neon_..."
export NEON_PROJECT_ID="proj_..."

# Initialize
devflow init myapp
devflow service add mydb --provider neon --service-type postgres
devflow install-hooks

# Now every git checkout creates a Neon workspace
git checkout -b feature/auth

Plugin Provider: Custom Bash Plugin

#!/bin/bash
# devflow-plugin-my-custom (saved as executable on PATH)
# Minimal devflow plugin that manages SQLite databases

set -euo pipefail

# Read JSON request from stdin
REQUEST=$(cat)
METHOD=$(echo "$REQUEST" | jq -r '.method')
PARAMS=$(echo "$REQUEST" | jq -r '.params // {}')
BRANCH=$(echo "$PARAMS" | jq -r '.workspace_name // ""')

case "$METHOD" in
  create_workspace)
    cp "main.db" "${BRANCH}.db" 2>/dev/null || touch "${BRANCH}.db"
    echo '{"ok": true, "result": {"host": "localhost", "database": "'"${BRANCH}.db"'"}}'
    ;;
  delete_workspace)
    rm -f "${BRANCH}.db"
    echo '{"ok": true, "result": {}}'
    ;;
  list_workspaces)
    BRANCHES=$(ls *.db 2>/dev/null | sed 's/\.db$//' | jq -R . | jq -s 'map({name: .})')
    echo '{"ok": true, "result": '"$BRANCHES"'}'
    ;;
  workspace_exists)
    if [ -f "${BRANCH}.db" ]; then
      echo '{"ok": true, "result": {"exists": true}}'
    else
      echo '{"ok": true, "result": {"exists": false}}'
    fi
    ;;
  get_connection_info)
    echo '{"ok": true, "result": {"host": "localhost", "database": "'"${BRANCH}.db"'"}}'
    ;;
  switch_to_branch)
    echo '{"ok": true, "result": {"host": "localhost", "database": "'"${BRANCH}.db"'"}}'
    ;;
  doctor)
    echo '{"ok": true, "result": {"healthy": true, "messages": ["SQLite plugin OK"]}}'
    ;;
  *)
    echo '{"ok": false, "error": "Unknown method: '"$METHOD"'"}'
    ;;
esac
# .devflow.yml — Using the custom plugin
services:
  - name: custom-db
    type: local
    service_type: plugin
    auto_workspace: true
    plugin:
      name: my-custom               # Finds devflow-plugin-my-custom on PATH
      timeout: 10
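Before wiring the plugin into devflow, you can exercise the protocol by hand: pipe a JSON request into the executable and inspect the JSON reply. The request parsing itself is easy to sanity-check with jq expressions equivalent to the script's:

```shell
# Parse a plugin request the way the script above does
REQUEST='{"method":"workspace_exists","params":{"workspace_name":"feature/auth"}}'
METHOD=$(echo "$REQUEST" | jq -r '.method')
BRANCH=$(echo "$REQUEST" | jq -r '.params.workspace_name // ""')
echo "$METHOD $BRANCH"
# → workspace_exists feature/auth
```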

Seeding from Production

# Initial setup with production seed
devflow init myapp
devflow service add app-db --provider local --service-type postgres --from postgresql://readonly:pass@replica.prod.internal:5432/myapp

# Or seed later
devflow service seed main --from s3://my-company-backups/postgres/nightly-latest.dump

# Every workspace created from main inherits the seeded data via CoW
git checkout -b feature/auth
# → Database is an instant copy of the seeded main workspace

AI Agent Sandboxing

#!/bin/bash
# === Batch AI Agent Runner ===
# Creates isolated environments for multiple agent tasks in parallel

TASKS=("fix-login-bug" "add-user-search" "optimize-queries")

for TASK in "${TASKS[@]}"; do
  (
    # Create isolated workspace
    OUTPUT=$(devflow --json --non-interactive switch -c "agent/$TASK")

    # Switch to worktree if enabled
    WORKTREE=$(echo "$OUTPUT" | jq -r '.worktree_path // empty')
    [ -n "$WORKTREE" ] && cd "$WORKTREE"

    # Get connection string
    CONN=$(devflow --json connection "agent/$TASK" | jq -r '.connection_string')

    # Run agent with an isolated development environment
    DATABASE_URL="$CONN" python run_agent.py --task "$TASK"

    # Cleanup
    devflow --non-interactive remove "agent/$TASK" --force
  ) &
done

# Wait for all agents to complete
wait
echo "All agent tasks completed."

Typical Development Flow

# === Day-to-day workflow ===

# Morning: start a new feature
devflow switch -c feature/user-profiles
# → Creates worktree at ../my-app.feature-user-profiles
# → Creates an isolated development workspace environment (CoW clone)
# → Copies .env.local, runs npm ci, runs migrations

# Make schema changes, add data, iterate...
npm run migrate
npm test

# Afternoon: review a teammate's PR
devflow switch feature/payment-v2
# → Switches to their worktree + service environment
# → Your feature workspace's database is untouched
npm test
devflow service logs feature/payment-v2  # Check if anything went wrong

# Back to your feature
devflow switch feature/user-profiles
# → Instant — everything is exactly as you left it

# Done? Merge and clean up
devflow merge --cleanup
# → Merges into main
# → Deletes the feature workspace, worktree, and service workspaces

16 Troubleshooting

Common issues and how to resolve them. Start with devflow doctor for automated diagnostics.

devflow doctor

The doctor command checks your entire setup:

devflow doctor

It verifies your config file, Docker availability, VCS setup, installed Git hooks, Copy-on-Write storage, and service connectivity.

Common Issues

"Permission denied" when running Docker commands

# Add yourself to the docker group
sudo usermod -aG docker $USER
newgrp docker

# Or on macOS, make sure Docker Desktop is running

Port conflicts

If devflow can't find a free port, increase the range or change the start port:

local:
  port_range_start: 60000  # Start from a higher range

Service workspace already exists

# Delete the service workspace and recreate
devflow service delete feature/auth
devflow service create feature/auth

Container won't start

# Check container logs
devflow service logs feature/auth

# Reset the workspace data
devflow service reset feature/auth

# Nuclear option: destroy everything and reinitialize
devflow destroy --force
devflow init myapp

"Cannot remove current workspace" or "Cannot remove main workspace"

Switch to a different workspace first:

git checkout main
devflow remove feature/auth

Hooks not firing on git checkout

# Verify hooks are installed
devflow doctor

# Reinstall hooks
devflow install-hooks

# Check if hooks are disabled
echo $DEVFLOW_DISABLED      # Should not be "true"
echo $DEVFLOW_SKIP_HOOKS    # Should not be "true"

Slow branching on Linux (ext4)

ext4 doesn't support Copy-on-Write, so devflow does a full copy. Install ZFS for instant branching:

sudo apt-get install -y zfsutils-linux
devflow setup-zfs
devflow doctor  # Verify ZFS is detected

devflow switch doesn't change directory

You need the shell wrapper. Without it, devflow switch prints the path but can't change the parent shell's directory.

# Add to your shell profile:
eval "$(devflow shell-init)"

Temporarily disabling devflow

# Disable for one command
DEVFLOW_DISABLED=true git checkout feature/auth

# Disable for the current session
export DEVFLOW_DISABLED=true

# Disable for specific workspaces
export DEVFLOW_DISABLED_BRANCHES=main,release/*

# Disable just the current workspace
export DEVFLOW_CURRENT_BRANCH_DISABLED=true

# Skip hooks only
export DEVFLOW_SKIP_HOOKS=true
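DEVFLOW_DISABLED_BRANCHES accepts shell-style wildcards, so release/* covers every release branch. One way to reason about the pattern semantics (a sketch of glob matching, not devflow's internal matcher):

```shell
# Shell-glob matching as in a `case` pattern: `*` spans any characters
matches() {
  case "$2" in
    $1) return 0 ;;
    *)  return 1 ;;
  esac
}
matches 'release/*' 'release/1.4' && echo "release/1.4 is disabled"
matches 'release/*' 'feature/x'   || echo "feature/x stays enabled"
```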

State & Data Locations

Path                                   Contents
~/.config/devflow/local_state.yml      Project state (services, current workspace per project)
~/.config/devflow/hook_approvals.yml   Hook approval records
~/.local/share/devflow/                Default data root for container volumes
.devflow.yml                           Project config (committed)
.devflow.local.yml                     Local config overrides (gitignored)

Resetting Everything

# Remove all containers and data for this project
devflow destroy --force

# Remove devflow hooks
devflow uninstall-hooks

# Remove local state (affects all projects)
rm ~/.config/devflow/local_state.yml

# Remove hook approvals
rm ~/.config/devflow/hook_approvals.yml

# Remove all container data
rm -rf ~/.local/share/devflow/