CCJK

The official toolkit that supercharges Claude Code with zero configuration: permission presets, specialist agents, hot-reload skills, and multi-provider connectivity.

Legal

  • MIT License

© 2026 CCJK Maintainers. All rights reserved.

Upstream Supply Network

An internal procurement and routing view for our unified AI API service.

Found 2 providers

Groq

🇺🇸
Free & open · Free

Groq is an ultra-fast AI inference platform that leverages custom-designed LPU (Language Processing Unit) hardware to deliver exceptional inference speeds for open-source LLMs. The platform provides free access to popular models like Llama, Mixtral, and Gemma through an OpenAI-compatible API, making it easy for developers to integrate blazing-fast AI capabilities into their applications. Groq's custom hardware enables token generation speeds up to 10x faster than traditional GPUs, with a generous free tier and competitive pay-per-use pricing for production workloads requiring maximum performance.
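Because the description above says Groq exposes an OpenAI-compatible API, a request is just a standard chat-completions call against a Groq base URL. A minimal sketch that only assembles the request rather than sending it; the base URL and the model id (taken from this card's "Supported models" list) are assumptions and should be verified against Groq's official docs:

```python
import json
import os

# Assumed OpenAI-compatible endpoint for Groq (verify against official docs).
GROQ_BASE_URL = "https://api.groq.com/openai/v1"

def build_chat_request(model: str, prompt: str, api_key: str) -> dict:
    """Assemble the URL, headers, and JSON body an OpenAI-compatible client would send."""
    return {
        "url": f"{GROQ_BASE_URL}/chat/completions",
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        }),
    }

req = build_chat_request(
    model="llama-3.3-70b",  # short id from this listing; real API ids may differ
    prompt="Hello from CCJK",
    api_key=os.environ.get("GROQ_API_KEY", "sk-placeholder"),
)
print(req["url"])
```

Any HTTP client (or the official OpenAI SDK with `base_url` overridden) can then deliver the request; only the base URL and key differ from a stock OpenAI call.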

Cloud platform access · Procurement guarded · Recommended · Blocked
Reviewed: Mar 13
Sources: 5
Confidence: 54%
Next review: Apr 3
Operating model: Cloud platform access
Procurement status: Procurement guarded
Adoption verdict: Good production candidate when low-latency managed inference on GroqCloud matters more than direct control over every open model host. Live verification is currently partial because some required official source types are blocked from this environment: documentation, pricing, support, terms. (Overdue · 21-day cadence)
Live verification: Required official source types exist, but live verification is currently blocked from this environment or region. 0 verified · 4 blocked · 0 broken.
Baseline: complete (official baseline)
Supported models: llama-3.3-70b · mixtral-8x7b · gemma-7b
Quick Start

$ npx ccjk -p groq
View details

SiliconFlow (Silicon Cloud)

🇨🇳
Free & open · Free

SiliconFlow (Silicon Cloud) is a Chinese AI infrastructure platform specializing in fast inference for open-source large language models. The platform provides optimized access to popular Chinese and international models including Qwen, ChatGLM, Baichuan, Yi, and DeepSeek with latency-optimized inference endpoints. SiliconFlow offers competitive pricing with a free tier for testing, making advanced AI models accessible to Chinese developers and businesses. The service features high-performance inference infrastructure, Chinese language optimization, and seamless integration capabilities for enterprises requiring reliable AI model deployment.
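The two cards on this page can be read as a tiny routing table for the "Upstream Supply Network": given a model id, pick the base URL of the provider that lists it. A sketch of that idea; both base URLs are assumptions, and the model ids are the short labels shown on the cards, not verified API identifiers:

```python
# Minimal routing sketch over the two providers on this page.
# Both base URLs are assumptions; confirm against each provider's official docs.
PROVIDERS = {
    "groq": {
        "base_url": "https://api.groq.com/openai/v1",
        "models": ["llama-3.3-70b", "mixtral-8x7b", "gemma-7b"],
    },
    "siliconflow": {
        "base_url": "https://api.siliconflow.cn/v1",
        "models": ["qwen", "chatglm", "baichuan"],
    },
}

def route(model: str) -> str:
    """Return the base URL of the first provider that lists the model."""
    for cfg in PROVIDERS.values():
        if model in cfg["models"]:
            return cfg["base_url"]
    raise KeyError(f"no listed provider offers {model}")

print(route("qwen"))  # → https://api.siliconflow.cn/v1
```

A real multi-provider setup would add per-provider credentials and fallback order, but the lookup shape stays the same.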

Cloud platform access · Procurement guarded · Use with guardrails · Partially verified
Reviewed: Mar 13
Sources: 4
Confidence: 54%
Next review: Mar 27
Operating model: Cloud platform access
Procurement status: Procurement guarded
Adoption verdict: Useful when you need China-friendly access to many mainstream models from one managed platform, but enterprise rollout should still verify support and billing maturity. Live verification is currently partial because some required official source types are blocked from this environment: documentation. (Overdue · 14-day cadence)
Live verification: Some required official source types are live-verified, while others are blocked or broken and need follow-up. 3 verified · 1 blocked · 0 broken.
Baseline: complete (official baseline)
Supported models: qwen · chatglm · baichuan · +3
Quick Start

$ npx ccjk -p siliconflow
View details