Not yet another AI CLI. Built for failures, not for code.
Most AI CLIs generate code. Shello debugs production systems: cloud ☁️, Kubernetes ☸️, Docker 🐳, and log failures.
The Problem: Other AI CLIs fail when logs explode. They either refuse to run commands, flood your terminal with 50K lines, or burn thousands of tokens trying to process everything.
Shello's Solution: Execute real shell commands, cache full output (100MB), and show you what matters (errors, warnings, and critical context) using semantic truncation that keeps failures visible.
Logs too big? Errors hidden? Shello handles it pretty well.
One-line installation:
# Windows (PowerShell - Recommended, no admin needed)
Invoke-WebRequest -Uri "https://github.com/om-mapari/shello-cli/releases/latest/download/shello.exe" -OutFile "$env:LOCALAPPDATA\Microsoft\WindowsApps\shello.exe"

# Linux
curl -L https://github.com/om-mapari/shello-cli/releases/latest/download/shello -o /tmp/shello && sudo mv /tmp/shello /usr/local/bin/shello && sudo chmod +x /usr/local/bin/shello
# macOS
curl -L https://github.com/om-mapari/shello-cli/releases/latest/download/shello-macos -o /tmp/shello && sudo mv /tmp/shello /usr/local/bin/shello && sudo chmod +x /usr/local/bin/shello

Verify installation:
shello --version

Configure (first time):
shello setup

The interactive wizard will guide you through API key and model configuration.
Start chatting:
shello

💡 Start by describing what you'd like to do...
── Starting new conversation ──
# Direct commands execute instantly (no AI call)
👤 user [~/projects]
└───➩ ls -la
# Executes immediately, output cached as cmd_001
👤 user [~/projects]
└───➩ cd myapp
👤 user [~/projects/myapp]
└───➩ pwd
/home/user/projects/myapp
# Natural language queries route to AI
👤 user [~/projects/myapp]
└───➩ find all python files with TODO comments
🤖 Shello
┌─[💻 user@hostname]─[~/projects/myapp]
└─$ grep -r "TODO" --include="*.py" .
./main.py:# TODO: Add error handling
./utils.py:# TODO: Optimize this function
./tests/test_main.py:# TODO: Add more test cases
Found 3 TODO comments in your Python files.
# AI understands context and can chain commands
👤 user [~/projects/myapp]
└───➩ now check if there are any FIXME comments too
🤖 Shello
┌─[💻 user@hostname]─[~/projects/myapp]
└─$ grep -r "FIXME" --include="*.py" .
./api/client.py:# FIXME: Handle timeout errors
Found 1 FIXME comment.
# Large output gets smart truncation
👤 user [~/projects/myapp]
└───➩ npm install
# Direct execution, output streams in real-time
added 1247 packages, and audited 1248 packages in 45s
... (installation output) ...
───────────────────────────────────────────────────────────
📊 OUTPUT SUMMARY
───────────────────────────────────────────────────────────
Total: 45,000 chars (850 lines) | Shown: 8,000 chars (150 lines)
Strategy: FIRST_LAST (20% first + 80% last)
Optimizations: Progress bars compressed (saved 200 lines)
💾 Cache ID: cmd_002
💡 Use get_cached_output(cache_id="cmd_002", lines="-100") to see last 100 lines
───────────────────────────────────────────────────────────
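The FIRST_LAST strategy in the summary above can be sketched roughly as follows. This is an illustrative simplification, not Shello's actual implementation; the function name and the omission marker are assumptions.

```python
def first_last_truncate(text: str, limit: int = 8000,
                        first_ratio: float = 0.20) -> str:
    """Keep ~20% of the character budget from the start and ~80%
    from the end, since errors in logs usually cluster near the end."""
    if len(text) <= limit:
        return text
    head = int(limit * first_ratio)
    tail = limit - head
    omitted = len(text) - head - tail
    return (text[:head]
            + f"\n... [{omitted} chars omitted] ...\n"
            + text[-tail:])
```

Cutting the middle rather than the tail is what keeps a failing build's final error lines on screen.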
# Retrieve cached output anytime
👤 user [~/projects/myapp]
└───➩ show me the last 50 lines from that install
🤖 Shello
Using get_cached_output to retrieve lines from cmd_002...
[Last 50 lines of npm install output]
# AI analyzes failures and helps debug
👤 user [~/projects/myapp]
└───➩ run the tests
🤖 Shello
┌─[💻 user@hostname]─[~/projects/myapp]
└─$ npm test
FAIL tests/api.test.js
✕ should handle API errors (15ms)
Expected status 200, received 500
Tests: 1 failed, 12 passed, 13 total
I see one test is failing. The API error handling test is receiving a 500
status code instead of the expected 200. This looks like the error handler
isn't catching the exception properly. Want me to check the API client code?
# Production debugging example
👤 user [~/k8s-cluster]
└───➩ why is my pod crashing?
🤖 Shello
┌─[💻 user@hostname]─[~/k8s-cluster]
└─$ kubectl get pods
NAME READY STATUS RESTARTS AGE
api-deployment-abc123 0/1 CrashLoopBackOff 5 3m
Let me check the logs...
┌─[💻 user@hostname]─[~/k8s-cluster]
└─$ kubectl logs api-deployment-abc123
[... 2000 lines of startup logs ...]
Error: ECONNREFUSED connect to database:5432
at TCPConnectWrap.afterConnect [as oncomplete]
Found it. Your pod can't connect to the database. The connection is being
refused on port 5432. Check if your database service is running and if the
connection string in your deployment config is correct.
Unlike code-generation AI CLIs, Shello is engineered for production debugging:
- ⚡ Executes Real Commands - Runs shell commands instantly: no refusal, no suggestions, actual execution
- 🧠 Smart Output Management - Semantic truncation keeps errors visible even in 50K-line logs without token waste
- 💾 Persistent Output Cache - A 100MB cache stores full command output; retrieve any section anytime during debugging
- 📊 JSON Intelligence - Auto-analyzes massive JSON with jq paths instead of flooding your terminal
- 🎯 Failure-First Truncation - Logs show the end (where errors are), builds show both ends, lists show the start
- 🔍 Semantic Error Detection - Critical errors stay visible regardless of their position in the output
- ⚙️ Progress Bar Compression - npm install with 500 progress lines? Compressed to its final state
- ✔️ Production-Ready - Built for cloud, Kubernetes, and Docker debugging with comprehensive test coverage
- Executes real commands - No refusal, no suggestions; runs kubectl, docker, aws, and gcloud commands instantly
- Failure-first output - Semantic truncation ensures errors are always visible, even in massive logs
- 100MB output cache - Full command output stored; retrieve any section during a debugging session
- JSON analysis - Large JSON responses auto-analyzed with jq paths instead of terminal flooding
- Multi-platform - Windows, Linux, macOS with automatic shell detection (bash/PowerShell/cmd)
- Character-based limits - 5K-20K chars depending on command type (not arbitrary line counts)
- Context-aware truncation - Logs show end (where errors are), builds show both ends, lists show start
- Semantic error detection - Errors, warnings, stack traces always visible regardless of position
- Progress bar compression - npm install with 500 progress lines? Compressed to final state
- Token optimization - 2-3x reduction in token usage compared to naive log processing
- Real-time streaming - See output as it happens, AI gets processed summary
- Zero data loss - Full output always cached, retrieve any section on demand
- Context preservation - Working directory persists across commands
- Flexible AI providers - OpenAI, AWS Bedrock, OpenRouter, or local models (LM Studio, Ollama)
- Project configs - Team-specific settings via .shello/settings.json
- Custom instructions - Add project context in .shello/SHELLO.md
- Smart allowlist/denylist - Configure which commands execute automatically vs require approval
- AI safety integration - AI can flag dangerous commands for review
- YOLO mode - Bypass approval checks for automation and CI/CD debugging
- Critical warnings - Denylist commands show prominent warnings before execution
- Flexible approval modes - Choose between AI-driven or user-driven approval workflows
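As a rough sketch of the progress-bar compression mentioned in the features above: consecutive progress-frame lines are buffered and only the final frame is kept. This is illustrative only; the real detector presumably handles many formats, and the regex here is an assumption.

```python
import re

# Matches simple progress frames like "[###   ] 40%" (assumed format).
PROGRESS = re.compile(r"^\s*\[#*\s*\]\s*\d+%")

def compress_progress(lines):
    """Collapse each run of progress-bar lines to its last frame."""
    out, run = [], []
    for line in lines:
        if PROGRESS.match(line):
            run.append(line)          # buffer the progress run
        else:
            if run:
                out.append(run[-1])   # keep only the final frame
                run = []
            out.append(line)
    if run:
        out.append(run[-1])
    return out
```

Five hundred npm progress lines then cost one line of context instead of five hundred.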
Run the interactive setup wizard:
shello setup

This will guide you through:
- AI provider selection (OpenAI-compatible API or AWS Bedrock)
- Provider-specific configuration (API keys or AWS credentials)
- Default model selection
The setup wizard generates a well-documented ~/.shello_cli/user-settings.yml file with all available options as comments, making it easy to customize later.
Using AWS Bedrock? See the AWS Bedrock Setup Guide for detailed instructions on configuring AWS credentials and accessing Claude, Nova, and other foundation models.
View current settings:
shello config

Edit settings in your default editor:
shello config --edit

Get/set specific values:
shello config get provider
shello config set provider bedrock
shello config set openai_config.default_model gpt-4o-mini

Reset to defaults:
shello config reset

See DEVELOPMENT_SETUP.md for detailed configuration options.
While chatting:
- /new - Start a fresh conversation
- /switch - Switch between AI providers (OpenAI, Bedrock, etc.)
- /help - Show available commands
- /quit - Exit
CLI commands:
- shello setup - Interactive configuration wizard
- shello config - Show current settings
- shello --version - Display version
Shello supports multiple AI providers for debugging flexibility:
OpenAI-compatible APIs:
- OpenAI (GPT-4o, GPT-4 Turbo, GPT-3.5)
- OpenRouter (access to Claude, Gemini, and 200+ models)
- Custom endpoints (LM Studio, Ollama, vLLM, etc.)
AWS Bedrock:
- Anthropic Claude (3.5 Sonnet, 3 Opus, 3 Sonnet)
- Amazon Nova (Pro, Lite, Micro)
- Other Bedrock foundation models
Choose your provider during setup or switch between providers at runtime:
# Initial setup - choose your provider
shello setup
# Switch providers during a chat session
👤 user [~/projects]
└───➩ /switch
🔄 Switch Provider:
1. [●] OpenAI-compatible API
2. [ ] AWS Bedrock
Select provider (or 'c' to cancel): 2
✓ Switched to bedrock
Model: anthropic.claude-3-5-sonnet-20241022-v2:0
Conversation history preserved

Switch between configured providers without losing your conversation:
- Use the /switch command during any chat session
- Conversation history is preserved across providers
- Compare responses from different models
- Seamlessly switch if one provider is unavailable
Example workflow:
# Start with OpenAI
shello
👤 user [~/projects]
└───➩ analyze this codebase structure
🤖 Shello (gpt-4o)
[Analysis from GPT-4o...]
# Switch to Claude via Bedrock
👤 user [~/projects]
└───➩ /switch
[Select AWS Bedrock]
👤 user [~/projects]
└───➩ now give me a second opinion on the architecture
🤖 Shello (claude-3-5-sonnet)
[Analysis from Claude...]

All provider credentials support environment variable overrides:
OpenAI-compatible APIs:
export OPENAI_API_KEY="your-api-key"

AWS Bedrock:
export AWS_REGION="us-east-1"
export AWS_PROFILE="default"
# Or explicit credentials:
export AWS_ACCESS_KEY_ID="your-access-key"
export AWS_SECRET_ACCESS_KEY="your-secret-key"

Environment variables take precedence over configuration files, making it easy to switch credentials per session or use different credentials in CI/CD.
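That precedence order (environment variable, then project settings, then global settings, then a built-in default) can be sketched as follows. An illustrative simplification; the function and the demo variable names are not part of Shello.

```python
import os

def resolve(key, env_var, project_cfg, global_cfg, default=None):
    """Precedence sketch: env var > project .shello/settings.json
    > global user-settings.yml > built-in default."""
    if env_var in os.environ:
        return os.environ[env_var]
    for cfg in (project_cfg, global_cfg):  # project overrides global
        if key in cfg:
            return cfg[key]
    return default
```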
Global settings: ~/.shello_cli/user-settings.yml
The settings file uses YAML format with helpful comments and documentation. After running shello setup, you'll have a file like this:
OpenAI-compatible API configuration:
# =============================================================================
# SHELLO CLI USER SETTINGS
# =============================================================================
# Edit this file to customize your settings.
# Only specify values you want to override - defaults are used for the rest.
# =============================================================================
# PROVIDER CONFIGURATION
# =============================================================================
provider: openai

openai_config:
  provider_type: openai
  api_key: sk-proj-abc123...  # Or use OPENAI_API_KEY env var
  base_url: https://api.openai.com/v1
  default_model: gpt-4o
  models:
    - gpt-4o
    - gpt-4o-mini
    - gpt-4-turbo

# =============================================================================
# OUTPUT MANAGEMENT (optional - uses defaults if not specified)
# =============================================================================
# Uncomment and modify to customize:
# output_management:
#   enabled: true
#   limits:
#     list: 5000
#     search: 10000
#     default: 8000

# =============================================================================
# COMMAND TRUST (optional - uses defaults if not specified)
# =============================================================================
# Uncomment and modify to customize:
# command_trust:
#   enabled: true
#   yolo_mode: false
#   allowlist:
#     - ls
#     - pwd

AWS Bedrock configuration:
provider: bedrock

bedrock_config:
  provider_type: bedrock
  aws_region: us-east-1
  aws_profile: default
  default_model: anthropic.claude-3-5-sonnet-20241022-v2:0
  models:
    - anthropic.claude-3-5-sonnet-20241022-v2:0
    - anthropic.claude-3-opus-20240229-v1:0

Key features:
- Only configured values are saved (everything else uses defaults)
- All optional settings are shown as comments with examples
- Inline documentation explains each setting
- Environment variables can override any credential
Project settings: .shello/settings.json (overrides global)
{
"model": "gpt-4o-mini"
}

Environment variables:
OpenAI-compatible:
export OPENAI_API_KEY="your-api-key"

AWS Bedrock:
export AWS_REGION="us-east-1"
export AWS_PROFILE="default"
# Or use explicit credentials:
export AWS_ACCESS_KEY_ID="your-access-key"
export AWS_SECRET_ACCESS_KEY="your-secret-key"

Shello includes a comprehensive trust and safety system to protect you from accidentally executing dangerous commands while maintaining a smooth workflow for safe operations.
The trust system evaluates every command before execution using this flow:
- Denylist Check - Critical warnings for dangerous commands (highest priority)
- YOLO Mode - Bypass checks for automation (if enabled)
- Allowlist Check - Auto-execute safe commands without approval
- AI Safety Flag - AI can indicate if a command is safe (in ai_driven mode)
- Approval Dialog - Interactive prompt for commands requiring review
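A minimal sketch of that evaluation order, with illustrative return labels; names and signatures are assumptions, not Shello's internals, and wildcard matching uses fnmatch for brevity:

```python
from fnmatch import fnmatch

def evaluate(cmd, allowlist, denylist, yolo=False, ai_safe=None):
    """Sketch of the trust-evaluation flow described above.
    Returns 'warn', 'execute', or 'ask'."""
    if any(fnmatch(cmd, p) for p in denylist):
        return "warn"       # 1. denylist: critical warning, highest priority
    if yolo:
        return "execute"    # 2. YOLO mode bypasses approval checks
    if any(fnmatch(cmd, p) for p in allowlist):
        return "execute"    # 3. allowlist: auto-execute without approval
    if ai_safe is True:
        return "execute"    # 4. AI safety flag (ai_driven mode)
    return "ask"            # 5. fall through to the approval dialog
```

Note the denylist check runs first, which is why even YOLO mode still surfaces critical warnings.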
Add a command_trust section to your ~/.shello_cli/user-settings.json:
{
"api_key": "your-api-key",
"default_model": "gpt-4o",
"command_trust": {
"enabled": true,
"yolo_mode": false,
"approval_mode": "user_driven",
"allowlist": [
"ls",
"ls *",
"pwd",
"cd *",
"git status",
"git log*",
"git diff*",
"npm test"
],
"denylist": [
"sudo rm -rf *",
"git push --force",
"docker system prune -a"
]
}
}

enabled (boolean, default: true)
- Enable or disable the trust system entirely
- When disabled, all commands execute without checks
yolo_mode (boolean, default: false)
- Bypass approval checks for automation and CI/CD
- Still shows critical warnings for denylist commands
- Can also be enabled per-session with the --yolo flag
approval_mode (string, default: "user_driven")
"user_driven"- Always prompt for non-allowlist commands"ai_driven"- Trust AI safety flags; only prompt when AI flags as unsafe
allowlist (array of strings)
- Commands that execute without approval
- User-defined allowlist replaces defaults
- Supports exact match, wildcards (git *), and regex (^git (status|log)$)
denylist (array of strings)
- Commands that show critical warnings before execution
- User patterns are added to default denylist (additive for safety)
- Default denylist includes: rm -rf /, dd if=/dev/zero*, mkfs*, etc.
- Supports exact match, wildcards, and regex
Exact match:
"allowlist": ["git status", "npm test"]Wildcard patterns:
"allowlist": [
"git *", // Matches: git status, git log, git diff, etc.
"npm run *", // Matches: npm run test, npm run build, etc.
"ls *" // Matches: ls -la, ls -lh, etc.
]

Regex patterns:
"allowlist": [
"^git (status|log|diff)$", // Matches only: git status, git log, git diff
"^npm (test|run test)$" // Matches only: npm test, npm run test
]

For automation and CI/CD environments where you trust all commands:
Enable via config:
{
"command_trust": {
"yolo_mode": true
}
}

Enable per-session:
shello --yolo

Important: YOLO mode still respects the denylist and shows critical warnings for dangerous commands.
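The three pattern styles the trust lists accept (exact match, * wildcards, and ^-anchored regex) could be dispatched roughly like this. A sketch only; Shello's real matcher may differ.

```python
import re
from fnmatch import fnmatch

def matches(command: str, pattern: str) -> bool:
    """Patterns starting with '^' are treated as regex, patterns
    containing '*' as shell-style wildcards, all others as exact."""
    if pattern.startswith("^"):
        return re.fullmatch(pattern, command) is not None
    if "*" in pattern:
        return fnmatch(command, pattern)
    return command == pattern
```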
When approval_mode is set to "ai_driven", the AI can indicate whether commands are safe:
- AI says safe (is_safe: true) → Execute without approval (after allowlist check)
- AI says unsafe (is_safe: false) → Show approval dialog with warning
- AI doesn't specify → Show approval dialog
The AI can also override the allowlist in ai_driven mode if it detects danger.
When a command requires approval, you'll see an interactive dialog:
┌───────────────────────────────────────────────────────────┐
│  ⚠️  COMMAND APPROVAL REQUIRED                            │
├───────────────────────────────────────────────────────────┤
│                                                           │
│  ⚠️  CRITICAL: This command is in DENYLIST!               │
│                                                           │
│  Command: rm -rf node_modules                             │
│  Directory: /home/user/project                            │
│                                                           │
│  [A] Approve   [D] Deny                                   │
│                                                           │
└───────────────────────────────────────────────────────────┘
Press A to approve or D to deny execution.
Default Allowlist (safe commands that execute automatically):
- Navigation: ls, pwd, cd
- Git read-only: git status, git log, git diff, git show, git branch
- File viewing: cat, less, more, head, tail
- Search: grep, find, rg, ag
- Process inspection: ps, top, htop
- Network inspection: ping, curl -I, wget --spider
- Package inspection: npm list, pip list, pip show
Default Denylist (dangerous commands that always show warnings):
- Destructive filesystem: rm -rf /, rm -rf /*, rm -rf ~
- Disk operations: dd if=/dev/zero*, mkfs*, format*
- System modifications: chmod -R 777 /, chown -R * /
- Dangerous redirects: > /dev/sda
- Start with defaults - The default allowlist covers most safe operations
- Add project-specific commands - Extend the allowlist for your workflow (e.g., npm run dev)
- Use wildcards carefully - git * is safe, but rm * is not
- Never remove denylist defaults - User denylist patterns are additive for safety
- Use YOLO mode sparingly - Only in trusted automation environments
- Review AI warnings - When AI flags a command as unsafe, take it seriously
If you prefer to disable all safety checks:
{
"command_trust": {
"enabled": false
}
}

Warning: This removes all protections. Use with caution.
For developers debugging production systems:
- Hybrid execution model - Direct shell execution for instant commands, AI routing for analysis and complex queries
- Formal correctness properties - 8 properties validated via property-based testing (Hypothesis)
- Intelligent truncation - Type detector, semantic classifier, and progress bar compressor; errors are never hidden
- Persistent LRU cache - Sequential cache IDs (cmd_001, cmd_002...), 100MB limit, conversation-scoped
- Streaming architecture - Real-time output for you, a processed summary for the AI; no token waste
- Zero data loss - Full output always cached, retrieve any section on demand for deeper debugging
- Modular design - Clean separation: cache β detect β compress β truncate β analyze
- Token optimization - Strips column padding and compresses progress bars for a 2-3x reduction in token usage
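A persistent LRU cache with sequential IDs, as described above, might look roughly like this. Illustrative only: the 100MB default mirrors the docs, but class and method names are assumptions.

```python
from collections import OrderedDict

class OutputCache:
    """Sketch of a size-bounded LRU cache issuing sequential IDs
    (cmd_001, cmd_002, ...)."""
    def __init__(self, max_bytes=100 * 1024 * 1024):
        self.max_bytes = max_bytes
        self._store = OrderedDict()   # insertion order doubles as recency order
        self._size = 0
        self._seq = 0

    def put(self, output):
        self._seq += 1
        cache_id = f"cmd_{self._seq:03d}"
        self._store[cache_id] = output
        self._size += len(output)
        while self._size > self.max_bytes:
            _, evicted = self._store.popitem(last=False)  # evict least recent
            self._size -= len(evicted)
        return cache_id

    def get(self, cache_id):
        output = self._store.get(cache_id)
        if output is not None:
            self._store.move_to_end(cache_id)  # mark as recently used
        return output
```

Full output stays retrievable until size pressure evicts the least recently used entries, which matches the "zero data loss within the session" behavior described above.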
See design.md for architecture details.
git clone https://github.com/om-mapari/shello-cli.git
cd shello-cli
pip install -r requirements.txt
python main.py

Optional: AWS Bedrock Support
If you plan to use AWS Bedrock as your AI provider, boto3 is included in requirements.txt. If you only need OpenAI-compatible APIs, you can skip boto3:
# Install without boto3 (OpenAI-compatible APIs only)
pip install python-dotenv pydantic rich requests urllib3 click prompt_toolkit keyring pyperclip openai hypothesis pytest
# Or install boto3 separately when needed
pip install boto3

# Windows
build.bat
# Linux/macOS
chmod +x build.sh && ./build.sh

Output lands in the dist/ folder. See BUILD_INSTRUCTIONS.md for details.
Contributions welcome! Fork, create a feature branch, and submit a PR.
See CONTRIBUTING.md for guidelines.
- 🔧 Development Setup
- ☁️ AWS Bedrock Setup Guide
- 📝 Changelog
- 🐛 Report Issues
- 🚀 Latest Release
MIT License - see LICENSE