URL Schemes & Shell Autocompletion

DBCrust provides a comprehensive URL scheme system for connecting to databases with intelligent shell autocompletion support. This guide covers all supported connection methods and how to set up enhanced shell completion.
🔗 Supported URL Schemes
DBCrust supports more than a dozen URL schemes, each optimized for specific use cases:
Standard Database Schemes
PostgreSQL

Scheme: postgres://

```bash
# Standard connection
dbcrust postgres://username:password@localhost:5432/database_name

# With SSL (recommended)
dbcrust postgres://username:password@localhost:5432/database_name?sslmode=require

# Alternative scheme (auto-converted to postgres://)
dbcrust postgresql://username:password@localhost:5432/database_name
```

MySQL
Scheme: mysql://

```bash
# Standard connection
dbcrust mysql://username:password@localhost:3306/database_name

# With SSL
dbcrust mysql://username:password@localhost:3306/database_name?ssl-mode=REQUIRED

# Custom port
dbcrust mysql://root:secret@mysql-server:3307/production
```

SQLite
Scheme: sqlite://

```bash
# Absolute path
dbcrust sqlite:///path/to/database.db

# Relative path
dbcrust sqlite://./myapp.db

# Memory database
dbcrust sqlite://:memory:

# Current directory
dbcrust sqlite://database.db
```

ClickHouse
Scheme: clickhouse://

```bash
# Standard connection (HTTP interface on port 8123)
dbcrust clickhouse://localhost:8123/default

# With credentials
dbcrust clickhouse://username:password@localhost:8123/database_name

# Remote ClickHouse server
dbcrust clickhouse://user:pass@clickhouse.company.com:8123/analytics

# Without authentication (when CLICKHOUSE_SKIP_USER_SETUP=1)
dbcrust clickhouse://localhost:8123/default
```

Features:
- Uses ClickHouse HTTP interface (port 8123) for optimal performance
- Support for ClickHouse-specific data types (UInt32, DateTime, etc.)
- Compatible with ClickHouse system tables and functions
- Automatic FORMAT handling for dynamic query results
MongoDB
Schemes: mongodb:// or mongodb+srv://

```bash
# Standard MongoDB connection
dbcrust mongodb://user:password@localhost:27017/database_name

# MongoDB Atlas (SRV record)
dbcrust mongodb+srv://user:password@cluster.mongodb.net/database_name

# Without authentication (local development)
dbcrust mongodb://localhost:27017/myapp

# With connection options (quote URLs containing & in the shell)
dbcrust "mongodb://user:pass@host:27017/db?authSource=admin&ssl=true"

# Replica set connection
dbcrust "mongodb://user:pass@host1:27017,host2:27017/db?replicaSet=myReplSet"
```

Features:
- SQL-like Query Interface: Use familiar SELECT syntax that translates to MongoDB queries
- Advanced Filtering: LIKE patterns, IN operators, BETWEEN ranges, IS NULL checks, OR conditions
- Database Management: CREATE/DROP databases and collections via SQL syntax
- Native MongoDB Commands: Direct access to find, aggregate, and admin operations
- Schema Inference: Dynamic column detection from document structure
- Text Search: Full-text search via the \search command
- Index Management: Create, drop, and list MongoDB indexes
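As a rough illustration of how SQL-style predicates can map onto MongoDB filter documents (a sketch only; `predicate_to_filter` is a hypothetical helper, not dbcrust's actual translator):

```python
import re

def predicate_to_filter(column, op, value):
    """Translate one SQL-style predicate into a MongoDB filter fragment."""
    if op == "=":
        return {column: value}
    if op == "LIKE":
        # SQL '%' wildcard becomes an anchored regex
        pattern = "^" + re.escape(value).replace("%", ".*") + "$"
        return {column: {"$regex": pattern}}
    if op == "IN":
        return {column: {"$in": list(value)}}
    if op == "BETWEEN":
        low, high = value
        return {column: {"$gte": low, "$lte": high}}
    if op == "IS NULL":
        return {column: None}
    raise ValueError(f"unsupported operator: {op}")

# WHERE name LIKE 'Al%'
print(predicate_to_filter("name", "LIKE", "Al%"))
# → {'name': {'$regex': '^Al.*$'}}
```

The real translator supports the full set of features listed above, but each one reduces to building a filter document along these lines.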
Elasticsearch
Schemes: elasticsearch://, elastic://, es://
```bash
# Standard connection (HTTP interface on port 9200)
dbcrust elasticsearch://localhost:9200

# With authentication
dbcrust elasticsearch://elastic:password@localhost:9200

# Alternative schemes (all equivalent)
dbcrust elastic://localhost:9200
dbcrust es://localhost:9200

# With SSL (skip certificate verification for development)
dbcrust "elasticsearch://elastic:password@localhost:9200?ssl=true&verify_certs=false"

# Remote Elasticsearch cluster
dbcrust elasticsearch://user:pass@elasticsearch.company.com:9200

# Docker container
dbcrust docker://my-elasticsearch-container
```

Features:
- Intelligent Index Handling: Automatic quoting for index names with special characters (e.g., logs-2024.01.01)
- Smart SELECT * Queries: Auto-excludes array fields that cause SQL API failures
- Comprehensive Field Mapping: Shows nested fields (author.login) and multi-fields (field.keyword)
- Field Capabilities Analysis: Displays what operations each field supports (filter, search, agg, etc.)
- SQL API Integration: Uses Elasticsearch SQL for familiar query syntax
- Container Auto-Discovery: Detects Elasticsearch containers and excludes Kibana
- Advanced Query Rewriting: Handles complex queries with proper escaping
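The index-quoting rule can be pictured like this (an illustrative sketch, not dbcrust's code; the bare-identifier pattern is an assumption):

```python
import re

# Assumed rule: names that are not plain identifiers (letters, digits,
# underscores, not starting with a digit) must be double-quoted in
# Elasticsearch SQL.
_BARE_IDENT = re.compile(r"^[A-Za-z_][A-Za-z0-9_]*$")

def quote_index(name: str) -> str:
    if _BARE_IDENT.match(name):
        return name
    # Escape embedded double quotes, then wrap the whole name
    return '"' + name.replace('"', '\\"') + '"'

print(quote_index("myindex"))          # myindex
print(quote_index("logs-2024.01.01"))  # "logs-2024.01.01"
```

Names like `logs-2024.01.01` contain `-` and `.`, which would otherwise be parsed as operators, so quoting happens automatically.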
File Format Schemes
DBCrust can query file formats directly using Apache DataFusion, a powerful SQL query engine that operates on Parquet, CSV, and JSON files.
Parquet
Scheme: parquet://

```bash
# Single Parquet file
dbcrust parquet:///data/sales_2024.parquet

# Multiple files with glob pattern
dbcrust 'parquet:///data/sales_*.parquet'

# Directory-based table name
dbcrust 'parquet:///data/warehouse/'
> SELECT * FROM warehouse LIMIT 10;
```

Features:
- Columnar Storage Format: Optimized for analytical queries with efficient compression
- Full Schema Preservation: Maintains complete type information including nested structures
- Nested Field Support: Query deeply nested Struct, Array, and Map types
- Predicate Pushdown: Filters applied at file level for optimal performance
- Multi-Level Autocomplete: Navigate nested fields like data.exact_paths.auth/token/create
- Glob Pattern Support: Query multiple files matching patterns
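Glob expansion itself behaves like ordinary filesystem globbing. A small self-contained sketch (using Python's glob module rather than dbcrust) of how a pattern like `sales_*.parquet` selects files:

```python
import glob
import os
import tempfile

# Create a throwaway directory with a few files to match against
tmpdir = tempfile.mkdtemp()
for name in ["sales_2023.parquet", "sales_2024.parquet", "users.parquet"]:
    open(os.path.join(tmpdir, name), "w").close()

# The pattern from parquet:///data/sales_*.parquet, applied to tmpdir
matches = sorted(glob.glob(os.path.join(tmpdir, "sales_*.parquet")))
print([os.path.basename(m) for m in matches])
# → ['sales_2023.parquet', 'sales_2024.parquet']
```

All matched files are registered together as a single queryable table, so the glob must resolve to files with compatible schemas.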
Multi-Level Nested Field Navigation:

```sql
-- Autocomplete shows all nested levels
SELECT data.[TAB] FROM sales
-- → data, data.customer, data.customer.address, data.customer.address.city

-- Navigate to any depth
SELECT data.customer.address.[TAB] FROM sales
-- → Shows: city, state, zip, country

-- Works with complex field names
SELECT data.exact_paths.[TAB] FROM policies
-- → Shows: auth/token/create, aws_okta/creds/management-ecr, ...
```

CSV
Scheme: csv://
```bash
# CSV with header row (default)
dbcrust csv:///data/users.csv

# Custom delimiter (tab-separated)
dbcrust 'csv:///data/data.tsv?delimiter=\t'

# No header row
dbcrust 'csv:///data/logs.csv?header=false'

# Multiple CSV files with glob pattern
dbcrust 'csv:///logs/*.csv?header=true'
> SELECT date, COUNT(*) FROM logs GROUP BY date;
```

Query Parameters:

- ?header=true|false: CSV has header row (default: true)
- ?delimiter=,: field delimiter character (default: ',')
Features:
- Schema Inference: Automatic column type detection
- Flexible Delimiters: Support for CSV, TSV, and custom-delimited files
- Glob Patterns: Query multiple files matching *.csv patterns
- Header Detection: Configurable header row handling
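Schema and delimiter inference of this kind can be approximated with Python's standard `csv.Sniffer` (shown purely to illustrate the idea; dbcrust's own inference runs through DataFusion):

```python
import csv
import io

# A tab-separated sample with a header row
sample = "id\tname\tsignup\n1\talice\t2024-01-02\n2\tbob\t2024-02-03\n"

sniffer = csv.Sniffer()
dialect = sniffer.sniff(sample, delimiters=",\t;")   # detect the delimiter
has_header = sniffer.has_header(sample)              # detect a header row
rows = list(csv.reader(io.StringIO(sample), dialect))

print(repr(dialect.delimiter), has_header, rows[0])
# → '\t' True ['id', 'name', 'signup']
```

The `?header=` and `?delimiter=` parameters exist precisely for the cases where such heuristics guess wrong and you need to override them.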
JSON
Scheme: json://
```bash
# Standard JSON file
dbcrust json:///api_responses.json

# NDJSON (newline-delimited JSON)
dbcrust json:///events.ndjson

# JSON with nested structures
dbcrust json:///vault_policies.json
> SELECT data.exact_paths FROM res;
```

Features:
- Automatic NDJSON Detection: Recognizes and converts newline-delimited JSON
- Deep Nested Field Support: Navigate multi-level nested objects with autocomplete
- Schema Inference: Automatic type detection from JSON structure
- Multi-Level Field Access: Use dot notation like data.field.subfield.value
- Complex Type Display: Shows Struct<N fields> with expandable details
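NDJSON detection can be thought of as: every non-empty line must parse as its own JSON object. A sketch of such a heuristic (dbcrust's actual detection logic may differ):

```python
import json

def looks_like_ndjson(text: str) -> bool:
    """True if every non-empty line is a standalone JSON object."""
    lines = [ln for ln in text.splitlines() if ln.strip()]
    if len(lines) < 2:
        return False
    try:
        return all(isinstance(json.loads(ln), dict) for ln in lines)
    except json.JSONDecodeError:
        return False

print(looks_like_ndjson('{"a": 1}\n{"a": 2}\n'))  # True
print(looks_like_ndjson('{\n  "a": 1\n}'))        # False (one pretty-printed object)
```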
Nested Field Schema Display:

```
\d res

Table: res
Column    | Type
----------+------------------
data      | Struct<4 fields>
timestamp | Utf8
id        | Int64

Nested field details:
  data (Struct):
    - chroot_namespace: Utf8
    - exact_paths: Struct<25 fields>
    - glob_paths: Struct<10 fields>
    - root: Utf8

  data.exact_paths (Struct):
    - auth/token/create: Struct<1 fields>
    - auth/token/lookup-self: Struct<1 fields>
    ... and 23 more fields
```

DataFusion SQL Features:
All file formats support the full DataFusion SQL dialect:
- Aggregate Functions: COUNT, SUM, AVG, MIN, MAX, STDDEV, VAR, MEDIAN, APPROX_DISTINCT
- Window Functions: ROW_NUMBER, RANK, DENSE_RANK, LAG, LEAD, FIRST_VALUE, LAST_VALUE
- String Functions: CONCAT, UPPER, LOWER, SUBSTRING, TRIM, REPLACE, SPLIT_PART, LENGTH
- Date/Time Functions: NOW, CURRENT_DATE, DATE_TRUNC, EXTRACT, TO_TIMESTAMP
- Array Functions: ARRAY_AGG, ARRAY_LENGTH, ARRAY_CONTAINS, ARRAY_CONCAT
- Type Conversion: CAST, TRY_CAST
Cross-Format Queries:
You can JOIN data across different file formats:
```sql
-- Join Parquet and CSV data
SELECT u.name, o.total
FROM 'users.parquet' u
JOIN 'orders.csv' o ON u.id = o.user_id
WHERE o.total > 100;

-- Combine JSON and Parquet
SELECT p.data.customer.name, s.amount
FROM 'policies.json' p
JOIN 'sales.parquet' s ON p.id = s.policy_id;
```

Advanced Connection Schemes
Docker Containers
Scheme: docker://
```bash
# Interactive container selection
dbcrust docker://
# → Shows list of running database containers

# Direct container connection
dbcrust docker://postgres-container
dbcrust docker://my-mysql-db

# With credentials and database
dbcrust docker://user:pass@container-name/database

# Examples
dbcrust docker://postgres-dev
dbcrust docker://app_user:secret@mysql-prod/app_db
```

Features:
- Automatic discovery of running database containers
- Support for PostgreSQL, MySQL, SQLite, ClickHouse, and MongoDB containers
- OrbStack integration on macOS
- Intelligent port mapping and network resolution
- Special handling for ClickHouse containers with CLICKHOUSE_SKIP_USER_SETUP=1
- MongoDB container detection for mongo, mongodb, and bitnami/mongodb images
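The discovery step boils down to filtering `docker ps` output by image name. A sketch of that filtering (the hint list here is an assumption for illustration; dbcrust's real detection also handles OrbStack and port mapping):

```python
# Image-name substrings that suggest a supported database container
DB_IMAGE_HINTS = ("postgres", "mysql", "mariadb", "clickhouse", "mongo", "sqlite")

def database_containers(running):
    """Keep names from (name, image) pairs whose image looks like a database."""
    return [name for name, image in running
            if any(hint in image.lower() for hint in DB_IMAGE_HINTS)]

# Simulated `docker ps` output as (container name, image) pairs
ps_output = [
    ("postgres-dev", "postgres:16"),
    ("cache", "redis:7"),
    ("analytics", "clickhouse/clickhouse-server:24.1"),
    ("docs-db", "bitnami/mongodb:7.0"),
]
print(database_containers(ps_output))
# → ['postgres-dev', 'analytics', 'docs-db']
```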
Saved Sessions
Scheme: session://
```bash
# Interactive session selection
dbcrust session://
# → Shows list of saved sessions

# Direct session connection
dbcrust session://production_db
dbcrust session://staging_postgres
dbcrust session://local_dev
```

Session Management:

```bash
# Save current connection as a session
\ss production_db

# List all saved sessions
\s

# Delete a session
\sd old_session

# Connect to specific session
\s production_db
```

Recent Connections
Scheme: recent://
```bash
# Interactive recent connection selection
dbcrust recent://
# → Shows numbered list of recent connections
# 2. docker://postgres-dev/testdb
# 3. mysql://root@mysql-server:3306/app
```

Recent Connection Management:

```bash
# List recent connections
\r

# Clear recent connection history
\rc
```

HashiCorp Vault
Schemes: vault:// or vaultdb:// (deprecated)
```bash
# Full vault URL
dbcrust vault://role-name@mount-path/database-path

# Interactive vault connection
dbcrust vault://
# → Prompts for role, mount, and database

# Alternative scheme (deprecated)
dbcrust vaultdb://app-role@database/postgres-prod
```

Configuration:

```toml
[vault]
addr = "https://vault.company.com"
token_file = "~/.vault-token"
```

🚀 Shell Autocompletion
DBCrust provides intelligent shell autocompletion that understands URL schemes and provides contextual suggestions.
Installation
Bash

```bash
# Generate completion script
dbcrust --completions bash > ~/.local/share/bash-completion/completions/dbcrust

# Or install system-wide (sudo tee, since a plain redirect runs as your user)
dbcrust --completions bash | sudo tee /etc/bash_completion.d/dbcrust > /dev/null

# Reload your shell
source ~/.bashrc
```

Zsh

```bash
# Create completions directory if it doesn't exist
mkdir -p ~/.local/share/zsh/site-functions

# Generate completion script
dbcrust --completions zsh > ~/.local/share/zsh/site-functions/_dbcrust

# Add to your .zshrc if not already present
echo 'fpath=(~/.local/share/zsh/site-functions $fpath)' >> ~/.zshrc
echo 'autoload -Uz compinit && compinit' >> ~/.zshrc

# Reload your shell
source ~/.zshrc
```

Fish

```bash
# Generate completion script
dbcrust --completions fish > ~/.config/fish/completions/dbcrust.fish

# Reload fish completions
fish -c "complete --erase --command dbcrust; source ~/.config/fish/completions/dbcrust.fish"
```

PowerShell

```powershell
# Generate completion script
dbcrust --completions powershell > $PROFILE.CurrentUserAllHosts.Replace("profile.ps1", "Completions/dbcrust.ps1")

# Add to your PowerShell profile
Add-Content $PROFILE.CurrentUserAllHosts '. $PSScriptRoot/Completions/dbcrust.ps1'
```

Autocompletion Features
URL Scheme Completion

Type a partial scheme and press TAB to see available options:

```bash
dbc pos[TAB] → postgres://
dbc my[TAB]  → mysql://
dbc sq[TAB]  → sqlite://
dbc cl[TAB]  → clickhouse://
dbc mo[TAB]  → mongodb://
dbc par[TAB] → parquet://
dbc cs[TAB]  → csv://
dbc js[TAB]  → json://
dbc doc[TAB] → docker://
dbc ses[TAB] → session://
dbc rec[TAB] → recent://
dbc va[TAB]  → vault://
```

Contextual Completions
DBCrust provides smart contextual completions based on the URL scheme:
Docker Containers
```bash
# Shows running database containers
dbc docker://[TAB]
# → postgres-dev mysql-test clickhouse-analytics mongodb-cache

dbc docker://post[TAB] → docker://postgres-dev
dbc docker://my[TAB]   → docker://mysql-test
dbc docker://cl[TAB]   → docker://clickhouse-analytics
dbc docker://mo[TAB]   → docker://mongodb-cache
```

How it works:
- Queries Docker API for running containers
- Filters for database containers (PostgreSQL, MySQL, SQLite, ClickHouse, MongoDB images)
- Only shows containers that are currently running
- Matches container names that start with your input
Saved Sessions
```bash
# Shows your saved sessions
dbc session://[TAB]
# → production_db staging_postgres local_dev

dbc session://prod[TAB] → session://production_db
dbc session://loc[TAB]  → session://local_dev
```

How it works:

- Reads from ~/.config/dbcrust/sessions.toml
- Shows all saved session names
- Matches sessions that start with your input
SQLite Files
```bash
# Delegates to shell file completion
dbc sqlite://[TAB]
# → Uses your shell's built-in file completion

dbc sqlite://./[TAB] → sqlite://./myapp.db  sqlite://./test.db
```

Complete Examples

```bash
# Scheme completion
dbc [TAB]
# → postgres:// mysql:// sqlite:// clickhouse:// mongodb:// docker:// session:// recent:// vault://

# Docker container completion
dbc docker://[TAB]
# → postgres-dev mysql-prod clickhouse-analytics mongodb-cache

# Session completion
dbc session://[TAB]
# → production staging development local

# Recent connection (interactive)
dbc recent://[ENTER]
# → 1. postgres://user@localhost:5432/mydb
#   2. docker://postgres-dev/testdb
#   3. mysql://root@server:3306/app
# Select connection [1-3]:
```

🔧 Advanced Configuration
SSH Tunnel Patterns
Configure automatic SSH tunnels based on hostname patterns:

```toml
[ssh_tunnel_patterns]
"^db\\.internal\\..*\\.com$" = "user@jumphost.example.com:2222"
"^.*\\.prod\\.company\\.com$" = "deploy@bastion.company.com"
```

When connecting to a matching hostname, DBCrust automatically establishes an SSH tunnel:

```bash
# This automatically uses the SSH tunnel
dbcrust postgres://app@db.internal.example.com:5432/prod
# → Tunnels through user@jumphost.example.com:2222
```
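Pattern matching here is plain regex-against-hostname. A minimal sketch of how the first matching pattern selects a jump host (illustrative only, using the same patterns as the configuration example):

```python
import re

# Patterns from the [ssh_tunnel_patterns] configuration
tunnel_patterns = {
    r"^db\.internal\..*\.com$": "user@jumphost.example.com:2222",
    r"^.*\.prod\.company\.com$": "deploy@bastion.company.com",
}

def tunnel_for(hostname):
    """Return the jump host for the first matching pattern, else None."""
    for pattern, jump in tunnel_patterns.items():
        if re.match(pattern, hostname):
            return jump
    return None

print(tunnel_for("db.internal.example.com"))  # user@jumphost.example.com:2222
print(tunnel_for("localhost"))                # None
```

Note the TOML form doubles the backslashes (`\\.`) because TOML basic strings process escapes; the regex that actually runs is the single-backslash form shown here.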
Default URL Handling

URLs without schemes default to PostgreSQL:
```bash
# These are equivalent
dbcrust localhost:5432/mydb
dbcrust postgres://localhost:5432/mydb
```

Connection History
All successful connections are automatically saved to recent connection history:
```toml
[[recent_connections]]
connection_url = "postgres://user@localhost:5432/testdb"
display_name = "user@localhost:5432/testdb"
timestamp = "2024-01-15T14:22:33Z"
database_type = "PostgreSQL"
success = true

[[recent_connections]]
connection_url = "docker://postgres-dev/myapp"
display_name = "docker://postgres-dev/myapp"
timestamp = "2024-01-15T14:20:15Z"
database_type = "PostgreSQL"
success = true
```

🎯 Best Practices
Session Management
Section titled “Session Management”-
Save frequently used connections as sessions:
Terminal window # Connect to productiondbcrust postgres://readonly@prod.db.company.com:5432/analytics# Save as session\ss prod_analytics# Later, reconnect easilydbcrust session://prod_analytics -
Use meaningful session names:
Terminal window \ss prod_readonly # Good: describes environment and access\ss staging_full # Good: describes environment and permissions\ss db1 # Bad: not descriptive
Docker Workflows
- Use interactive mode for exploration:

  ```bash
  # Explore available containers
  dbcrust docker://
  ```

- Use direct connection for automation:

  ```bash
  # Script-friendly (no interaction)
  dbcrust docker://postgres-prod/analytics -c "SELECT COUNT(*) FROM users"
  ```
URL Scheme Selection
Choose the right scheme for your use case:
| Use Case | Recommended Scheme | Example |
|---|---|---|
| Local development | postgres://, mysql://, sqlite://, mongodb:// | postgres://localhost:5432/dev |
| Production access | session:// or vault:// | session://prod_readonly |
| Container development | docker:// | docker://postgres-dev |
| Quick reconnection | recent:// | recent:// |
| Team sharing | session:// with shared config | session://shared_staging |
| Document databases | mongodb:// or mongodb+srv:// | mongodb://localhost:27017/app |
🔍 Troubleshooting
Autocompletion Issues

Completions not working?

```bash
# Check if completion script is installed
ls ~/.local/share/bash-completion/completions/dbcrust   # Bash
ls ~/.zfunc/_dbcrust                                    # Zsh
ls ~/.config/fish/completions/dbcrust.fish              # Fish

# Regenerate completion scripts
dbcrust --completions bash > ~/.local/share/bash-completion/completions/dbcrust
dbcrust --completions zsh > ~/.zfunc/_dbcrust   # Zsh
dbc --completions zsh > ~/.zfunc/_dbc           # Zsh (dbc binary)

# Test basic completion
type _dbcrust   # Should show function definition
```

Docker completions not showing containers?

```bash
# Check Docker connectivity
docker ps --format '{{.Names}}' | grep -E 'postgres|mysql|mariadb|sqlite|mongo|clickhouse'

# Check Docker permissions
docker info   # Should not require sudo
```

Connection Issues
Session not found?

```bash
# Check available sessions
\s

# Check session file
cat ~/.config/dbcrust/sessions.toml
```

Docker connection failed?

```bash
# Check if container is running
docker ps | grep container-name

# Check container database type
docker inspect container-name | grep -i image
```

Performance
Autocompletion feels slow?
DBCrust caches autocompletion data for performance. If you notice slow completions:
- Docker completions cache running containers for 30 seconds
- Session completions read from disk but are very fast
- Scheme completions are instant (hardcoded)
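The 30-second container cache amounts to memoizing an expensive lookup with a timestamp. A sketch of that pattern (illustrative only; dbcrust's internals are in Rust and may differ in detail):

```python
import time

class TTLCache:
    """Memoize expensive completion lookups for a fixed time window."""

    def __init__(self, ttl_seconds, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock
        self._entries = {}  # key -> (timestamp, value)

    def get_or_compute(self, key, compute):
        now = self.clock()
        hit = self._entries.get(key)
        if hit is not None and now - hit[0] < self.ttl:
            return hit[1]          # fresh enough: reuse cached value
        value = compute()          # stale or missing: recompute
        self._entries[key] = (now, value)
        return value

calls = []
def list_containers():
    calls.append(1)                # stands in for a slow `docker ps`
    return ["postgres-dev", "mysql-test"]

cache = TTLCache(30)
cache.get_or_compute("containers", list_containers)
result = cache.get_or_compute("containers", list_containers)
print(result, len(calls))
# → ['postgres-dev', 'mysql-test'] 1
```

Within the TTL the second lookup never touches Docker, which is why repeated TAB presses stay fast.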
🧠 Interactive SQL Autocompletion
Once connected to a database, DBCrust provides advanced SQL autocompletion within the interactive session. This is separate from shell completion and works by understanding SQL syntax context.
Context-Aware SQL Completion
DBCrust analyzes your SQL statement in real-time to provide relevant suggestions based on the SQL clause you're in:
SELECT Context Intelligence
```sql
-- After SELECT keyword, suggests aggregates and wildcards
SELECT [TAB]
-- Suggestions: *, COUNT(, SUM(, AVG(, MAX(, MIN(, DISTINCT

-- Column suggestions work when table is visible before cursor
SELECT * FROM users WHERE [TAB]
-- Suggestions: id, name, email, created_at, status
```

WHERE Clause Precision
```sql
-- After WHERE, suggests ONLY column names (no functions or tables)
SELECT * FROM users WHERE [TAB]
-- Suggestions: id, name, email, created_at, status, active
-- NOT suggested: users, orders, *, COUNT(

-- Understands multiple tables in FROM clause
SELECT * FROM users u JOIN orders o ON u.id = o.user_id WHERE [TAB]
-- Suggestions: columns from BOTH users and orders tables
```

FROM Clause Behavior
```sql
-- After FROM, suggests table names (preserves existing behavior)
SELECT * FROM [TAB]
-- Suggestions: users, orders, products, categories
-- NOT suggested: column names, functions, or wildcards
```

Advanced SQL Context Support
Section titled “Advanced SQL Context Support”ORDER BY and GROUP BY
```sql
-- Suggests column names from tables in FROM clause
SELECT * FROM users ORDER BY [TAB]
-- Suggestions: id, name, email, created_at, status

SELECT COUNT(*) FROM orders GROUP BY [TAB]
-- Suggestions: status, user_id, product_id, created_at
```

HAVING Clause
```sql
-- Suggests both aggregate functions AND column names
SELECT status, COUNT(*) FROM users GROUP BY status HAVING [TAB]
-- Suggestions: COUNT(, SUM(, AVG(, MAX(, MIN(, status
```

Complex Query Support
Section titled “Complex Query Support”-- Handles complex multi-table scenariosSELECT u.name, o.totalFROM users u LEFT JOIN orders o ON u.id = o.user_id JOIN products p ON o.product_id = p.idWHERE [TAB]-- Suggests: u.id, u.name, u.email, o.id, o.total, o.status, p.name, p.priceSchema-Based Completions
Table Names
```sql
-- Prefix matching for table names
SELECT * FROM us[TAB]
-- Suggestions: users, user_sessions, user_preferences
```

Column Names with Dot Notation
Section titled “Column Names with Dot Notation”-- After table.dot, suggests columns from that specific tableSELECT users.[TAB] FROM users-- Suggestions: id, name, email, created_at, status, active
-- Works with table aliasesSELECT u.[TAB] FROM users u-- Suggestions: id, name, email, created_at, status, activeSQL Keywords
Section titled “SQL Keywords”-- Keyword expansionSEL[TAB] → SELECTFR[TAB] → FROMWH[TAB] → WHERE
-- Full statement completionSEL[TAB] name FR[TAB] users WH[TAB] active = true-- Expands to: SELECT name FROM users WHERE active = trueBackslash Command Completion
DBCrust also provides completion for backslash commands:
Named Queries
Section titled “Named Queries”-- After \n, suggests named query names\n [TAB]-- Suggestions: active_users, monthly_report, user_orders
-- After \nd, suggests named queries to delete\nd [TAB]-- Suggestions: old_report, unused_querySession Management
Section titled “Session Management”-- After \s, suggests saved session names\s [TAB]-- Suggestions: production, staging, development, local
-- After \sd, suggests sessions to delete\sd [TAB]-- Suggestions: old_staging, temp_connectionPerformance & Caching
- Real-time Context Analysis: SQL parsing happens instantly with no noticeable delay
- Schema Caching: Table and column metadata is cached for fast repeated access
- Smart Invalidation: Cache automatically refreshes when database schema changes
- Background Loading: Schema data loads proactively for better responsiveness
Technical Features
- Multi-Database Support: Works with PostgreSQL, MySQL, SQLite, ClickHouse, and MongoDB
- Case-Insensitive Matching: Completions work regardless of case
- Backwards Compatible: All existing completion behavior is preserved
- Error Recovery: Completion works even with partial or incomplete SQL statements