URL Schemes & Shell Autocompletion

DBCrust provides a comprehensive URL scheme system for connecting to databases with intelligent shell autocompletion support. This guide covers all supported connection methods and how to set up enhanced shell completion.

DBCrust supports over a dozen URL schemes — covering traditional databases, file formats, containers, saved sessions, and secrets managers — each optimized for a specific use case:

PostgreSQL

Scheme: postgres://

Terminal window
# Standard connection
dbcrust postgres://username:password@localhost:5432/database_name
# With SSL (recommended)
dbcrust postgres://username:password@localhost:5432/database_name?sslmode=require
# Alternative scheme (auto-converted to postgres://)
dbcrust postgresql://username:password@localhost:5432/database_name

MySQL

Scheme: mysql://

Terminal window
# Standard connection
dbcrust mysql://username:password@localhost:3306/database_name
# With SSL
dbcrust mysql://username:password@localhost:3306/database_name?ssl-mode=REQUIRED
# Custom port
dbcrust mysql://root:secret@mysql-server:3307/production

SQLite

Scheme: sqlite://

Terminal window
# Absolute path
dbcrust sqlite:///path/to/database.db
# Relative path
dbcrust sqlite://./myapp.db
# Memory database
dbcrust sqlite://:memory:
# Current directory
dbcrust sqlite://database.db
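
The path forms above differ only in what follows the scheme prefix. A minimal shell sketch (illustrative string handling, not dbcrust's actual parsing) of how each URL maps to a filesystem path:

```shell
# Strip the sqlite:// prefix to recover the filesystem path.
# sqlite:///abs/path keeps its leading slash (absolute path);
# sqlite://./file.db and sqlite://file.db become relative paths.
for url in 'sqlite:///path/to/database.db' 'sqlite://./myapp.db' 'sqlite://database.db'; do
  echo "$url -> ${url#sqlite://}"
done
```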

ClickHouse

Scheme: clickhouse://

Terminal window
# Standard connection (HTTP interface on port 8123)
dbcrust clickhouse://localhost:8123/default
# With credentials
dbcrust clickhouse://username:password@localhost:8123/database_name
# Remote ClickHouse server
dbcrust clickhouse://user:pass@clickhouse.company.com:8123/analytics
# Without authentication (when CLICKHOUSE_SKIP_USER_SETUP=1)
dbcrust clickhouse://localhost:8123/default

Features:

  • Uses ClickHouse HTTP interface (port 8123) for optimal performance
  • Support for ClickHouse-specific data types (UInt32, DateTime, etc.)
  • Compatible with ClickHouse system tables and functions
  • Automatic FORMAT handling for dynamic query results

MongoDB

Schemes: mongodb:// or mongodb+srv://

Terminal window
# Standard MongoDB connection
dbcrust mongodb://user:password@localhost:27017/database_name
# MongoDB Atlas (SRV record)
dbcrust mongodb+srv://user:password@cluster.mongodb.net/database_name
# Without authentication (local development)
dbcrust mongodb://localhost:27017/myapp
# With connection options
dbcrust mongodb://user:pass@host:27017/db?authSource=admin&ssl=true
# Replica set connection
dbcrust mongodb://user:pass@host1:27017,host2:27017/db?replicaSet=myReplSet

Features:

  • SQL-like Query Interface: Use familiar SELECT syntax that translates to MongoDB queries
  • Advanced Filtering: LIKE patterns, IN operators, BETWEEN ranges, IS NULL checks, OR conditions
  • Database Management: CREATE/DROP databases and collections via SQL syntax
  • Native MongoDB Commands: Direct access to find, aggregate, and admin operations
  • Schema Inference: Dynamic column detection from document structure
  • Text Search: Full-text search capabilities with \search command
  • Index Management: Create, drop, and list MongoDB indexes

Elasticsearch

Schemes: elasticsearch://, elastic://, es://

Terminal window
# Standard connection (HTTP interface on port 9200)
dbcrust elasticsearch://localhost:9200
# With authentication
dbcrust elasticsearch://elastic:password@localhost:9200
# Alternative schemes (all equivalent)
dbcrust elastic://localhost:9200
dbcrust es://localhost:9200
# With SSL (skip certificate verification for development)
dbcrust "elasticsearch://elastic:password@localhost:9200?ssl=true&verify_certs=false"
# Remote Elasticsearch cluster
dbcrust elasticsearch://user:pass@elasticsearch.company.com:9200
# Docker container
dbcrust docker://my-elasticsearch-container

Features:

  • Intelligent Index Handling: Automatic quoting for index names with special characters (logs-2024.01.01)
  • Smart SELECT * Queries: Auto-excludes array fields that cause SQL API failures
  • Comprehensive Field Mapping: Shows nested fields (author.login) and multi-fields (field.keyword)
  • Field Capabilities Analysis: Displays what operations each field supports (filter, search, agg, etc.)
  • SQL API Integration: Uses Elasticsearch SQL for familiar query syntax
  • Container Auto-Discovery: Detects Elasticsearch containers and excludes Kibana
  • Advanced Query Rewriting: Handles complex queries with proper escaping

DBCrust can query file formats directly using Apache DataFusion, a powerful SQL query engine that operates on Parquet, CSV, and JSON files.

Parquet

Scheme: parquet://

Terminal window
# Single Parquet file
dbcrust parquet:///data/sales_2024.parquet
# Multiple files with glob pattern
dbcrust 'parquet:///data/sales_*.parquet'
# Directory-based table name
dbcrust 'parquet:///data/warehouse/'
> SELECT * FROM warehouse LIMIT 10;

Features:

  • Columnar Storage Format: Optimized for analytical queries with efficient compression
  • Full Schema Preservation: Maintains complete type information including nested structures
  • Nested Field Support: Query deeply nested Struct, Array, and Map types
  • Predicate Pushdown: Filters applied at file level for optimal performance
  • Multi-Level Autocomplete: Navigate nested fields like data.exact_paths.auth/token/create
  • Glob Pattern Support: Query multiple files matching patterns

Multi-Level Nested Field Navigation:

-- Autocomplete shows all nested levels
SELECT data.[TAB] FROM sales
-- → data, data.customer, data.customer.address, data.customer.address.city
-- Navigate to any depth
SELECT data.customer.address.[TAB] FROM sales
-- → Shows: city, state, zip, country
-- Works with complex field names
SELECT data.exact_paths.[TAB] FROM policies
-- → Shows: auth/token/create, aws_okta/creds/management-ecr, ...

CSV

Scheme: csv://

Terminal window
# CSV with header row (default)
dbcrust csv:///data/users.csv
# Custom delimiter (tab-separated)
dbcrust 'csv:///data/data.tsv?delimiter=\t'
# No header row
dbcrust 'csv:///data/logs.csv?header=false'
# Multiple CSV files with glob pattern
dbcrust 'csv:///logs/*.csv?header=true'
> SELECT date, COUNT(*) FROM logs GROUP BY date;

Query Parameters:

  • ?header=true|false - CSV has header row (default: true)
  • ?delimiter=, - Field delimiter character (default: ',')

Features:

  • Schema Inference: Automatic column type detection
  • Flexible Delimiters: Support for CSV, TSV, and custom-delimited files
  • Glob Patterns: Query multiple files matching *.csv patterns
  • Header Detection: Configurable header row handling
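
The ?header and ?delimiter options ride on the URL itself. A small POSIX-shell sketch (illustrative only, not dbcrust's actual parser) of splitting such a URL into its path and query parts:

```shell
# Split a csv:// URL into filesystem path and query parameters.
url='csv:///data/logs.csv?header=false&delimiter=;'
rest="${url#csv://}"      # drop the scheme prefix
path="${rest%%\?*}"       # everything before the first '?'
query="${rest#*\?}"       # everything after the first '?'
echo "path:  $path"
echo "query: $query"
```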

JSON

Scheme: json://

Terminal window
# Standard JSON file
dbcrust json:///api_responses.json
# NDJSON (newline-delimited JSON)
dbcrust json:///events.ndjson
# JSON with nested structures
dbcrust json:///vault_policies.json
> SELECT data.exact_paths FROM res;

Features:

  • Automatic NDJSON Detection: Recognizes and converts newline-delimited JSON
  • Deep Nested Field Support: Navigate multi-level nested objects with autocomplete
  • Schema Inference: Automatic type detection from JSON structure
  • Multi-Level Field Access: Use dot notation like data.field.subfield.value
  • Complex Type Display: Shows Struct<N fields> with expandable details
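
One plausible way to picture the NDJSON heuristic (dbcrust's actual detection logic may differ): a file where every non-empty line is its own JSON value is treated as newline-delimited.

```shell
# Write a two-line NDJSON sample, then apply the heuristic:
# every non-empty line opens its own JSON object.
cat > /tmp/events.ndjson <<'EOF'
{"event":"login","user":"a"}
{"event":"logout","user":"a"}
EOF
lines=$(grep -c . /tmp/events.ndjson)      # non-empty lines
objects=$(grep -c '^{' /tmp/events.ndjson) # lines opening a JSON object
if [ "$lines" -eq "$objects" ]; then
  echo "ndjson"
else
  echo "json"
fi
```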

Nested Field Schema Display:

\d res
Table: res
Column    | Type
----------+------------------
data      | Struct<4 fields>
timestamp | Utf8
id        | Int64

Nested field details:
data (Struct):
  - chroot_namespace: Utf8
  - exact_paths: Struct<25 fields>
  - glob_paths: Struct<10 fields>
  - root: Utf8
data.exact_paths (Struct):
  - auth/token/create: Struct<1 fields>
  - auth/token/lookup-self: Struct<1 fields>
  ... and 23 more fields

DataFusion SQL Features:

All file formats support the full DataFusion SQL dialect:

  • Aggregate Functions: COUNT, SUM, AVG, MIN, MAX, STDDEV, VAR, MEDIAN, APPROX_DISTINCT
  • Window Functions: ROW_NUMBER, RANK, DENSE_RANK, LAG, LEAD, FIRST_VALUE, LAST_VALUE
  • String Functions: CONCAT, UPPER, LOWER, SUBSTRING, TRIM, REPLACE, SPLIT_PART, LENGTH
  • Date/Time Functions: NOW, CURRENT_DATE, DATE_TRUNC, EXTRACT, TO_TIMESTAMP
  • Array Functions: ARRAY_AGG, ARRAY_LENGTH, ARRAY_CONTAINS, ARRAY_CONCAT
  • Type Conversion: CAST, TRY_CAST

Cross-Format Queries:

You can JOIN data across different file formats:

-- Join Parquet and CSV data
SELECT
  u.name,
  o.total
FROM 'users.parquet' u
JOIN 'orders.csv' o ON u.id = o.user_id
WHERE o.total > 100;

-- Combine JSON and Parquet
SELECT
  p.data.customer.name,
  s.amount
FROM 'policies.json' p
JOIN 'sales.parquet' s ON p.id = s.policy_id;

Docker Containers

Scheme: docker://

Terminal window
# Interactive container selection
dbcrust docker://
# → Shows list of running database containers
# Direct container connection
dbcrust docker://postgres-container
dbcrust docker://my-mysql-db
# With credentials and database
dbcrust docker://user:pass@container-name/database
# Examples
dbcrust docker://postgres-dev
dbcrust docker://app_user:secret@mysql-prod/app_db

Features:

  • Automatic discovery of running database containers
  • Support for PostgreSQL, MySQL, SQLite, ClickHouse, and MongoDB containers
  • OrbStack integration on macOS
  • Intelligent port mapping and network resolution
  • Special handling for ClickHouse containers with CLICKHOUSE_SKIP_USER_SETUP=1
  • MongoDB container detection for mongo, mongodb, and bitnami/mongodb images

Saved Sessions

Scheme: session://

Terminal window
# Interactive session selection
dbcrust session://
# → Shows list of saved sessions
# Direct session connection
dbcrust session://production_db
dbcrust session://staging_postgres
dbcrust session://local_dev

Session Management:

Terminal window
# Save current connection as a session
\ss production_db
# List all saved sessions
\s
# Delete a session
\sd old_session
# Connect to specific session
\s production_db

Recent Connections

Scheme: recent://

Terminal window
# Interactive recent connection selection
dbcrust recent://
# → Shows numbered list of recent connections
# 1. postgres://user@localhost:5432/mydb
# 2. docker://postgres-dev/testdb
# 3. mysql://root@mysql-server:3306/app

Recent Connection Management:

Terminal window
# List recent connections
\r
# Clear recent connection history
\rc

HashiCorp Vault

Schemes: vault:// or vaultdb:// (deprecated)

Terminal window
# Full vault URL
dbcrust vault://role-name@mount-path/database-path
# Interactive vault connection
dbcrust vault://
# → Prompts for role, mount, and database
# Alternative scheme (deprecated)
dbcrust vaultdb://app-role@database/postgres-prod

Configuration:

[vault]
addr = "https://vault.company.com"
token_file = "~/.vault-token"

DBCrust provides intelligent shell autocompletion that understands URL schemes and provides contextual suggestions.

Bash

Terminal window
# Generate completion script
dbcrust --completions bash > ~/.local/share/bash-completion/completions/dbcrust
# Or install system-wide
dbcrust --completions bash | sudo tee /etc/bash_completion.d/dbcrust > /dev/null
# Reload your shell
source ~/.bashrc

Zsh

Terminal window
# Create completions directory if it doesn't exist
mkdir -p ~/.local/share/zsh/site-functions
# Generate completion script
dbcrust --completions zsh > ~/.local/share/zsh/site-functions/_dbcrust
# Add to your .zshrc if not already present
echo 'fpath=(~/.local/share/zsh/site-functions $fpath)' >> ~/.zshrc
echo 'autoload -Uz compinit && compinit' >> ~/.zshrc
# Reload your shell
source ~/.zshrc

Fish

Terminal window
# Generate completion script
dbcrust --completions fish > ~/.config/fish/completions/dbcrust.fish
# Reload fish completions
fish -c "complete --erase --command dbcrust; source ~/.config/fish/completions/dbcrust.fish"

PowerShell

Terminal window
# Generate completion script (create the Completions folder first)
$CompletionDir = Join-Path (Split-Path $PROFILE.CurrentUserAllHosts) "Completions"
New-Item -ItemType Directory -Force -Path $CompletionDir | Out-Null
dbcrust --completions powershell > (Join-Path $CompletionDir "dbcrust.ps1")
# Add to your PowerShell profile
Add-Content $PROFILE.CurrentUserAllHosts '. $PSScriptRoot/Completions/dbcrust.ps1'

Type a partial scheme and press TAB to see available options:

Terminal window
dbc pos[TAB] postgres://
dbc my[TAB] mysql://
dbc sq[TAB] sqlite://
dbc cl[TAB] clickhouse://
dbc mo[TAB] mongodb://
dbc par[TAB] parquet://
dbc cs[TAB] csv://
dbc js[TAB] json://
dbc doc[TAB] docker://
dbc ses[TAB] session://
dbc rec[TAB] recent://
dbc va[TAB] vault://

DBCrust provides smart contextual completions based on the URL scheme:

Docker Containers

Terminal window
# Shows running database containers
dbc docker://[TAB]
# → postgres-dev mysql-test clickhouse-analytics mongodb-cache
dbc docker://post[TAB] docker://postgres-dev
dbc docker://my[TAB] docker://mysql-test
dbc docker://cl[TAB] docker://clickhouse-analytics
dbc docker://mo[TAB] docker://mongodb-cache

How it works:

  • Queries Docker API for running containers
  • Filters for database containers (PostgreSQL, MySQL, SQLite, ClickHouse, MongoDB images)
  • Only shows containers that are currently running
  • Matches container names that start with your input
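
The final matching step can be pictured with a fixed sample of container names (in reality the list comes from the Docker API):

```shell
# Keep only names that start with the typed prefix.
containers='postgres-dev
mysql-test
clickhouse-analytics
mongodb-cache'
prefix='post'
printf '%s\n' "$containers" | grep "^$prefix"
# → postgres-dev
```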

Saved Sessions

Terminal window
# Shows your saved sessions
dbc session://[TAB]
# → production_db staging_postgres local_dev
dbc session://prod[TAB] session://production_db
dbc session://loc[TAB] session://local_dev

How it works:

  • Reads from ~/.config/dbcrust/sessions.toml
  • Shows all saved session names
  • Matches sessions that start with your input
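
As a rough sketch of that lookup, session names can be pulled out of a sessions.toml-style file with sed — note that the [sessions.NAME] header format used here is an assumption for illustration, not dbcrust's documented on-disk layout:

```shell
# Create a sample file, then extract the NAME part of each
# [sessions.NAME] table header.
cat > /tmp/sessions.toml <<'EOF'
[sessions.production_db]
host = "prod.db.company.com"

[sessions.local_dev]
host = "localhost"
EOF
sed -n 's/^\[sessions\.\(.*\)\]$/\1/p' /tmp/sessions.toml
```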

SQLite Files

Terminal window
# Delegates to shell file completion
dbc sqlite://[TAB]
# → Uses your shell's built-in file completion
dbc sqlite://./[TAB] sqlite://./myapp.db sqlite://./test.db

Terminal window
# Scheme completion
dbc [TAB]
# → postgres:// mysql:// sqlite:// clickhouse:// mongodb:// docker:// session:// recent:// vault://
# Docker container completion
dbc docker://[TAB]
# → postgres-dev mysql-prod clickhouse-analytics mongodb-cache
# Session completion
dbc session://[TAB]
# → production staging development local
# Recent connection (interactive)
dbc recent://[ENTER]
# → 1. postgres://user@localhost:5432/mydb
# 2. docker://postgres-dev/testdb
# 3. mysql://root@server:3306/app
# Select connection [1-3]:

Configure automatic SSH tunnels based on hostname patterns:

[ssh_tunnel_patterns]
"^db\\.internal\\..*\\.com$" = "user@jumphost.example.com:2222"
"^.*\\.prod\\.company\\.com$" = "deploy@bastion.company.com"

When connecting to a matching hostname, DBCrust automatically establishes an SSH tunnel:

Terminal window
# This automatically uses the SSH tunnel
dbcrust postgres://app@db.internal.example.com:5432/prod
# → Tunnels through user@jumphost.example.com:2222
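
To check which hosts a configured pattern would catch before relying on it, grep -E can stand in for dbcrust's regex engine (note the single-backslash escaping outside TOML strings):

```shell
# Same pattern as the config above, shell-quoted instead of TOML-escaped.
pattern='^db\.internal\..*\.com$'
for host in db.internal.example.com api.example.com; do
  if printf '%s\n' "$host" | grep -qE "$pattern"; then
    echo "$host -> tunneled"
  else
    echo "$host -> direct"
  fi
done
```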

URLs without schemes default to PostgreSQL:

Terminal window
# These are equivalent
dbcrust localhost:5432/mydb
dbcrust postgres://localhost:5432/mydb
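
That defaulting rule can be sketched in a few lines of shell (illustrative only; the real normalization happens inside dbcrust):

```shell
# Prepend postgres:// when the argument carries no scheme.
normalize_url() {
  case "$1" in
    *://*) printf '%s\n' "$1" ;;
    *)     printf 'postgres://%s\n' "$1" ;;
  esac
}
normalize_url localhost:5432/mydb   # → postgres://localhost:5432/mydb
normalize_url mysql://host:3306/app # → mysql://host:3306/app
```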

All successful connections are automatically saved to recent connection history:

[[recent_connections]]
connection_url = "postgres://user@localhost:5432/testdb"
display_name = "user@localhost:5432/testdb"
timestamp = "2024-01-15T14:22:33Z"
database_type = "PostgreSQL"
success = true
[[recent_connections]]
connection_url = "docker://postgres-dev/myapp"
display_name = "docker://postgres-dev/myapp"
timestamp = "2024-01-15T14:20:15Z"
database_type = "PostgreSQL"
success = true
  1. Save frequently used connections as sessions:

    Terminal window
    # Connect to production
    dbcrust postgres://readonly@prod.db.company.com:5432/analytics
    # Save as session
    \ss prod_analytics
    # Later, reconnect easily
    dbcrust session://prod_analytics
  2. Use meaningful session names:

    Terminal window
    \ss prod_readonly # Good: describes environment and access
    \ss staging_full # Good: describes environment and permissions
    \ss db1 # Bad: not descriptive
  1. Use interactive mode for exploration:

    Terminal window
    # Explore available containers
    dbcrust docker://
  2. Use direct connection for automation:

    Terminal window
    # Script-friendly (no interaction)
    dbcrust docker://postgres-prod/analytics -c "SELECT COUNT(*) FROM users"

Choose the right scheme for your use case:

Use Case              | Recommended Scheme                           | Example
----------------------+----------------------------------------------+------------------------------
Local development     | postgres://, mysql://, sqlite://, mongodb:// | postgres://localhost:5432/dev
Production access     | session:// or vault://                       | session://prod_readonly
Container development | docker://                                    | docker://postgres-dev
Quick reconnection    | recent://                                    | recent://
Team sharing          | session:// with shared config                | session://shared_staging
Document databases    | mongodb:// or mongodb+srv://                 | mongodb://localhost:27017/app

Completions not working?

Terminal window
# Check if completion script is installed
ls ~/.local/share/bash-completion/completions/dbcrust # Bash
ls ~/.local/share/zsh/site-functions/_dbcrust # Zsh
ls ~/.config/fish/completions/dbcrust.fish # Fish
# Regenerate completion scripts
dbcrust --completions bash > ~/.local/share/bash-completion/completions/dbcrust
dbcrust --completions zsh > ~/.local/share/zsh/site-functions/_dbcrust # Zsh
dbc --completions zsh > ~/.local/share/zsh/site-functions/_dbc # Zsh (dbc binary)
# Test basic completion
type _dbcrust # Should show function definition

Docker completions not showing containers?

Terminal window
# Check Docker connectivity
docker ps --format '{{.Names}}' | grep -E 'postgres|mysql|mariadb|sqlite|mongo|clickhouse'
# Check Docker permissions
docker info # Should not require sudo

Session not found?

Terminal window
# Check available sessions
\s
# Check session file
cat ~/.config/dbcrust/sessions.toml

Docker connection failed?

Terminal window
# Check if container is running
docker ps | grep container-name
# Check container database type
docker inspect container-name | grep -i image

Autocompletion feels slow?

DBCrust caches autocompletion data for performance. If you notice slow completions:

  1. Docker completions cache running containers for 30 seconds
  2. Session completions read from disk but are very fast
  3. Scheme completions are instant (hardcoded)

Once connected to a database, DBCrust provides advanced SQL autocompletion within the interactive session. This is separate from shell completion and works by understanding SQL syntax context.

DBCrust analyzes your SQL statement in real-time to provide relevant suggestions based on the SQL clause you’re in:

-- After SELECT keyword, suggests aggregates and wildcards
SELECT [TAB]
-- Suggestions: *, COUNT(, SUM(, AVG(, MAX(, MIN(, DISTINCT
-- After WHERE, suggests ONLY column names (no functions or tables)
SELECT * FROM users WHERE [TAB]
-- Suggestions: id, name, email, created_at, status, active
-- NOT suggested: users, orders, *, COUNT(
-- Understands multiple tables in FROM clause
SELECT * FROM users u JOIN orders o ON u.id = o.user_id WHERE [TAB]
-- Suggestions: columns from BOTH users and orders tables
-- After FROM, suggests table names (preserves existing behavior)
SELECT * FROM [TAB]
-- Suggestions: users, orders, products, categories
-- NOT suggested: column names, functions, or wildcards
-- Suggests column names from tables in FROM clause
SELECT * FROM users ORDER BY [TAB]
-- Suggestions: id, name, email, created_at, status
SELECT COUNT(*) FROM orders GROUP BY [TAB]
-- Suggestions: status, user_id, product_id, created_at
-- Suggests both aggregate functions AND column names
SELECT status, COUNT(*) FROM users GROUP BY status HAVING [TAB]
-- Suggestions: COUNT(, SUM(, AVG(, MAX(, MIN(, status
-- Handles complex multi-table scenarios
SELECT u.name, o.total
FROM users u
LEFT JOIN orders o ON u.id = o.user_id
JOIN products p ON o.product_id = p.id
WHERE [TAB]
-- Suggests: u.id, u.name, u.email, o.id, o.total, o.status, p.name, p.price
-- Prefix matching for table names
SELECT * FROM us[TAB]
-- Suggestions: users, user_sessions, user_preferences
-- After table.dot, suggests columns from that specific table
SELECT users.[TAB] FROM users
-- Suggestions: id, name, email, created_at, status, active
-- Works with table aliases
SELECT u.[TAB] FROM users u
-- Suggestions: id, name, email, created_at, status, active
-- Keyword expansion
SEL[TAB] → SELECT
FR[TAB] → FROM
WH[TAB] → WHERE
-- Full statement completion
SEL[TAB] name FR[TAB] users WH[TAB] active = true
-- Expands to: SELECT name FROM users WHERE active = true

DBCrust also provides completion for backslash commands:

-- After \n, suggests named query names
\n [TAB]
-- Suggestions: active_users, monthly_report, user_orders
-- After \nd, suggests named queries to delete
\nd [TAB]
-- Suggestions: old_report, unused_query
-- After \s, suggests saved session names
\s [TAB]
-- Suggestions: production, staging, development, local
-- After \sd, suggests sessions to delete
\sd [TAB]
-- Suggestions: old_staging, temp_connection

  • Real-time Context Analysis: SQL parsing happens instantly with no noticeable delay
  • Schema Caching: Table and column metadata is cached for fast repeated access
  • Smart Invalidation: Cache automatically refreshes when database schema changes
  • Background Loading: Schema data loads proactively for better responsiveness
  • Multi-Database Support: Works with PostgreSQL, MySQL, SQLite, ClickHouse, and MongoDB
  • Case-Insensitive Matching: Completions work regardless of case
  • Backwards Compatible: All existing completion behavior is preserved
  • Error Recovery: Completion works even with partial or incomplete SQL statements