
Multi-Provider Support: Choose Your AI Backend

CCJK supports multiple AI providers. Learn how to configure and switch between Claude, GPT-4, and other models.

CCJK Team · January 1, 2025

CCJK is designed to be provider-agnostic. While optimized for Claude, it supports multiple AI providers, giving you flexibility in choosing the best model for your needs.

Supported Providers

| Provider  | Models                           | Best For                           |
|-----------|----------------------------------|------------------------------------|
| Anthropic | Claude 3.5 Sonnet, Claude 3 Opus | Complex reasoning, code generation |
| OpenAI    | GPT-4, GPT-4 Turbo, GPT-3.5      | General tasks, wide compatibility  |
| Google    | Gemini Pro, Gemini Ultra         | Multi-modal tasks                  |
| Local     | Ollama, LM Studio                | Privacy, offline use               |
| Azure     | Azure OpenAI                     | Enterprise compliance              |

Configuration

Basic Provider Setup

```yaml
# .claude/config.yaml
providers:
  default: anthropic
  anthropic:
    api_key: ${ANTHROPIC_API_KEY}
    model: claude-sonnet-4-20250514
    max_tokens: 8192
  openai:
    api_key: ${OPENAI_API_KEY}
    model: gpt-4-turbo-preview
    max_tokens: 4096
  google:
    api_key: ${GOOGLE_API_KEY}
    model: gemini-pro
  local:
    endpoint: http://localhost:11434
    model: codellama:13b
```

Environment Variables

```bash
# .env or shell profile
export ANTHROPIC_API_KEY="sk-ant-..."
export OPENAI_API_KEY="sk-..."
export GOOGLE_API_KEY="..."
```

Switching Providers

Command Line

```bash
# Use default provider
ccjk

# Specify provider
ccjk --provider openai

# Specify model
ccjk --provider anthropic --model claude-3-opus-20240229
```

In-Session Switching

```
You: /provider openai
Switched to OpenAI (gpt-4-turbo-preview)

You: /provider anthropic
Switched to Anthropic (claude-sonnet-4-20250514)

You: /model claude-3-opus-20240229
Switched to claude-3-opus-20240229
```

Per-Task Provider

```yaml
# .claude/skills/complex-analysis.yaml
name: complex-analysis
provider: anthropic
model: claude-3-opus-20240229  # Use Opus for complex tasks
prompt: |
  Perform deep analysis of...
```

Provider-Specific Features

Anthropic (Claude)

Best for:

  • Complex code generation
  • Multi-file refactoring
  • Nuanced code review

Configuration:

```yaml
anthropic:
  model: claude-sonnet-4-20250514
  max_tokens: 8192
  features:
    extended_thinking: true  # For complex problems
    artifacts: true          # For structured output
```

OpenAI (GPT-4)

Best for:

  • Quick tasks
  • Wide language support
  • Function calling

Configuration:

```yaml
openai:
  model: gpt-4-turbo-preview
  max_tokens: 4096
  features:
    json_mode: true
    function_calling: true
    vision: true  # For GPT-4V
```

Google (Gemini)

Best for:

  • Multi-modal tasks
  • Long context windows
  • Google Cloud integration

Configuration:

```yaml
google:
  model: gemini-pro
  max_tokens: 8192
  features:
    multi_modal: true
    long_context: true
```

Local Models (Ollama)

Best for:

  • Privacy-sensitive code
  • Offline development
  • Cost savings

Configuration:

```yaml
local:
  endpoint: http://localhost:11434
  model: codellama:13b
  options:
    num_ctx: 4096
    temperature: 0.7
```

Fallback Configuration

Automatic Fallback

```yaml
providers:
  default: anthropic
  fallback:
    - provider: openai
      condition: rate_limit
    - provider: local
      condition: api_error
  anthropic:
    model: claude-sonnet-4-20250514
    rate_limit_fallback: openai
  openai:
    model: gpt-4-turbo-preview
```
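Conceptually, the fallback chain tries providers in order and moves on whenever one fails. The sketch below illustrates that behavior in Python; the class and function names are illustrative, not CCJK's internal API.

```python
# Hypothetical sketch of a provider fallback chain like the config above.
# CCJK's real internals may differ; the error types are illustrative.
class RateLimitError(Exception):
    pass

class APIError(Exception):
    pass

def call_with_fallback(call, providers):
    """Try each provider in order; return the first successful response."""
    last_error = None
    for name in providers:
        try:
            return name, call(name)
        except (RateLimitError, APIError) as exc:
            last_error = exc
    raise last_error

# Simulate the chain: Anthropic is rate-limited, so OpenAI handles the request.
def fake_call(name):
    if name == "anthropic":
        raise RateLimitError("429 from anthropic")
    return f"response from {name}"

provider, result = call_with_fallback(fake_call, ["anthropic", "openai", "local"])
```

Because each provider is only tried after the previous one raises, a healthy default provider adds no overhead to the common case.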

Cost-Based Routing

```yaml
routing:
  # Use cheaper models for simple tasks
  simple_tasks:
    provider: openai
    model: gpt-3.5-turbo
  # Use powerful models for complex tasks
  complex_tasks:
    provider: anthropic
    model: claude-3-opus-20240229
  # Use local for sensitive code
  sensitive:
    provider: local
    model: codellama:13b
```
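At its core, cost-based routing is a lookup from a task category to a (provider, model) pair. A minimal sketch of that mapping, mirroring the categories above (the function and defaults are illustrative, not CCJK's API):

```python
# Illustrative routing table: task category -> (provider, model).
ROUTES = {
    "simple_tasks":  ("openai", "gpt-3.5-turbo"),
    "complex_tasks": ("anthropic", "claude-3-opus-20240229"),
    "sensitive":     ("local", "codellama:13b"),
}

def route(task_category, default=("anthropic", "claude-sonnet-4-20250514")):
    """Return the (provider, model) pair for a task, or fall back to the default."""
    return ROUTES.get(task_category, default)
```

An unrecognized category falls through to the default pair, so new task types degrade gracefully rather than failing.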

Setting Up Local Models

Ollama Setup

```bash
# Install Ollama
curl -fsSL https://ollama.com/install.sh | sh

# Pull a coding model
ollama pull codellama:13b

# Or a general model
ollama pull llama2:13b

# Start the server
ollama serve
```
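Once `ollama serve` is running, it exposes a plain HTTP API on port 11434. As a sketch of what a client such as CCJK's `local` provider would send, here is a request built against Ollama's `/api/generate` endpoint; the helper function is illustrative, and the `options` keys mirror the `local` provider config shown earlier.

```python
import json
import urllib.request

def build_ollama_request(prompt, model="codellama:13b",
                         endpoint="http://localhost:11434"):
    """Build (but do not send) a request to Ollama's /api/generate endpoint."""
    payload = {
        "model": model,
        "prompt": prompt,
        "stream": False,  # return one JSON object instead of a token stream
        "options": {"num_ctx": 4096, "temperature": 0.7},
    }
    return urllib.request.Request(
        f"{endpoint}/api/generate",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

req = build_ollama_request("Explain this function...")
# Actually sending it requires a running Ollama server:
#   resp = urllib.request.urlopen(req)
```

Because everything stays on localhost, no code or prompts ever leave the machine, which is the point of the local provider.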

LM Studio Setup

  1. Download LM Studio from lmstudio.ai
  2. Download a model (e.g., CodeLlama, Mistral)
  3. Start the local server
  4. Configure CCJK:
```yaml
local:
  endpoint: http://localhost:1234/v1
  model: local-model
  api_type: openai_compatible
```

Enterprise Configuration

Azure OpenAI

```yaml
azure:
  endpoint: https://your-resource.openai.azure.com
  api_key: ${AZURE_OPENAI_KEY}
  api_version: "2024-02-15-preview"
  deployment: your-gpt4-deployment
```

AWS Bedrock

```yaml
bedrock:
  region: us-east-1
  model: anthropic.claude-3-sonnet-20240229-v1:0
  credentials:
    access_key: ${AWS_ACCESS_KEY}
    secret_key: ${AWS_SECRET_KEY}
```

Private Deployment

```yaml
private:
  endpoint: https://ai.internal.company.com
  api_key: ${INTERNAL_API_KEY}
  model: company-model-v2
  tls:
    ca_cert: /path/to/ca.crt
    client_cert: /path/to/client.crt
```

Comparing Providers

Performance Comparison

Run benchmarks:

```bash
ccjk benchmark --providers anthropic,openai,local --task code-review
```

Output:

```
Provider Benchmark Results
==========================

Task: Code Review (500 lines)

| Provider  | Model             | Time  | Quality | Cost   |
|-----------|-------------------|-------|---------|--------|
| Anthropic | claude-3.5-sonnet | 12.3s | 9.2/10  | $0.045 |
| OpenAI    | gpt-4-turbo       | 15.1s | 8.8/10  | $0.062 |
| Local     | codellama:13b     | 28.4s | 7.1/10  | $0.00  |
```

Cost Analysis

```bash
ccjk cost --period month --by-provider
```

```
Monthly Cost Analysis
=====================

Anthropic:  $45.20 (1,200 requests)
OpenAI:     $12.50 (450 requests)
Local:      $0.00  (800 requests)

Total:      $57.70
Estimated savings from local: $28.00
```

Best Practices

1. Match Model to Task

```yaml
task_routing:
  # Quick questions → fast, cheap model
  quick:
    provider: openai
    model: gpt-3.5-turbo
  # Code generation → balanced model
  code:
    provider: anthropic
    model: claude-sonnet-4-20250514
  # Architecture decisions → powerful model
  architecture:
    provider: anthropic
    model: claude-3-opus-20240229
  # Sensitive code → local model
  sensitive:
    provider: local
    model: codellama:13b
```

2. Set Spending Limits

```yaml
limits:
  daily_spend: 10.00
  monthly_spend: 200.00
  alert_threshold: 0.8  # Alert at 80%
  per_request:
    max_tokens: 4096
    max_cost: 0.50
```
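The `alert_threshold` is a fraction of the spending cap: with a $10 daily limit and a 0.8 threshold, an alert fires once spend reaches $8. A one-line sketch of that check (the function name is illustrative, not CCJK's API):

```python
def should_alert(spend, cap, threshold=0.8):
    """True once spend has reached the threshold fraction of the cap."""
    return spend >= cap * threshold
```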

3. Monitor Usage

```bash
# View usage statistics
ccjk stats --period week

# Export for analysis
ccjk stats --export csv --output usage.csv
```

4. Test Before Switching

```bash
# Test a provider before making it default
ccjk test-provider openai --task "Review this code..."

# Compare outputs
ccjk compare --providers anthropic,openai --task "Implement..."
```

Troubleshooting

Provider Connection Issues

```bash
# Test connectivity
ccjk diagnose --provider anthropic

# Check API key
ccjk verify-key --provider openai
```

Model Not Available

```yaml
# Configure fallback for unavailable models
anthropic:
  model: claude-3-opus-20240229
  fallback_model: claude-sonnet-4-20250514
```

Rate Limiting

```yaml
rate_limiting:
  retry_attempts: 3
  retry_delay: 1000  # ms
  exponential_backoff: true
  fallback_on_limit: true
```
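With exponential backoff, each retry doubles the previous delay, so the settings above imply waits of 1s, 2s, and 4s. A small sketch of that schedule (illustrative, not CCJK's internal code):

```python
def backoff_delays(attempts=3, base_ms=1000, exponential=True):
    """Delay in ms before each retry: base, 2x base, 4x base, ..."""
    return [base_ms * (2 ** i if exponential else 1) for i in range(attempts)]
```

Doubling the wait spreads retries out quickly, which gives a rate-limited API time to recover instead of hammering it at a fixed interval.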

Conclusion

Multi-provider support gives you flexibility to:

  • Choose the best model for each task
  • Manage costs effectively
  • Maintain privacy with local models
  • Ensure availability with fallbacks

Start with a single provider, then expand as you understand your needs.

Next: Return to Getting Started to review the basics, or explore our Case Studies for real-world examples.

Tags

#providers #configuration #openai #claude #flexibility
