CCJK

Rankings · MCP Servers · Providers · Marketplace
Docs · Articles · Solutions · Download · GitHub

Claude Code๋ฅผ ์ œ๋กœ ์„ค์ •์œผ๋กœ ๊ฐ•ํ™”ํ•˜๋Š” ๊ณต์‹ ํˆดํ‚ท์ž…๋‹ˆ๋‹ค. ๊ถŒํ•œ ํ”„๋ฆฌ์…‹, ์ „๋ฌธ ์—์ด์ „ํŠธ, ํ•ซ ๋ฆฌ๋กœ๋“œ ์Šคํ‚ฌ, ๋ฉ€ํ‹ฐ ํ”„๋กœ๋ฐ”์ด๋” ์—ฐ๊ฒฐ์„ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค.

์ œํ’ˆ

  • ๊ธฐ๋Šฅ
  • AI ์—์ด์ „ํŠธ
  • ์Šคํ‚ฌ
  • ๋žญํ‚น
  • ํ”„๋กœ๋ฐ”์ด๋”
  • ๋งˆ์ผ“ํ”Œ๋ ˆ์ด์Šค

๋„๊ตฌ

  • ๋‹ค์šด๋กœ๋“œ
  • ๋„๊ตฌ ์ง„๋‹จ
  • ๋„๊ตฌ ๋น„๊ต
  • ๋ฐฉ๋ฒ•๋ก 
  • ๊ณต๊ธ‰์—…์ฒด ํฌํ„ธ

๋ฆฌ์†Œ์Šค

  • ์•„ํ‹ฐํด
  • ๋ฌธ์„œ
  • API ๋ ˆํผ๋Ÿฐ์Šค
  • ์˜ˆ์ œ
  • ๋ณ€๊ฒฝ ๋กœ๊ทธ

๋ฒ•์  ์ •๋ณด

  • MIT ๋ผ์ด์„ ์Šค

ยฉ 2026 CCJK Maintainers. ๋ชจ๋“  ๊ถŒ๋ฆฌ ๋ณด์œ .

ํ™ˆ์ˆœ์œ„Groq
G

Groq

๐Ÿ‡บ๐Ÿ‡ธ
๋ฌด๋ฃŒAPI

Groq is an ultra-fast AI inference platform that leverages custom-designed LPU (Language Processing Unit) hardware to deliver unprecedented inference speeds for open-source LLMs. The platform provides free access to popular models like Llama 2, Mixtral, and Gemma through an OpenAI-compatible API, making it easy for developers to integrate blazing-fast AI capabilities into their applications. Groq's custom hardware enables token generation speeds up to 10x faster than traditional GPUs, with a generous free tier and competitive pay-per-use pricing for production workloads requiring maximum performance.

Overall Score: 87
Views: 73
Votes: 0

Performance Metrics

Speed: 75/100
Stability: 85/100
Value for Money: 90/100
Features: 95/100
User Experience: 80/100

์„ค๋ช…

Groq is an ultra-fast AI inference platform that leverages custom-designed LPU (Language Processing Unit) hardware to deliver unprecedented inference speeds for open-source LLMs. The platform provides free access to popular models like Llama 2, Mixtral, and Gemma through an OpenAI-compatible API, making it easy for developers to integrate blazing-fast AI capabilities into their applications. Groq's custom hardware enables token generation speeds up to 10x faster than traditional GPUs, with a generous free tier and competitive pay-per-use pricing for production workloads requiring maximum performance.

์ฃผ์š” ๊ธฐ๋Šฅ

  • ๅฎšๅˆถLPU็กฌไปถ๏ผŒๆŽจ็†้€Ÿๅบฆ่ถ…ๅฟซๆ”ฏๆŒLlama 2ใ€Mixtralใ€Gemma็ญ‰ๅผ€ๆบๆจกๅž‹OpenAIๅ…ผๅฎนAPIๆ…ทๆ…จ็š„ๅ…่ดนๅฑ‚็บงtoken็”Ÿๆˆ้€Ÿๅบฆ้ซ˜่พพไผ ็ปŸGPU็š„10ๅ€ไฝŽๅปถ่ฟŸๆŽจ็†้€‚ๅˆๅฎžๆ—ถๅบ”็”จ

๊ด€๋ จ ๋„๊ตฌ

๋‹ค์Œ ๋„๊ตฌ๋„ ํ•จ๊ป˜ ์‚ดํŽด๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค


Replicate

๐Ÿ‡บ๐Ÿ‡ธ
100/100

Platform for running machine learning models in the cloud


OpenAI

๐Ÿ‡บ๐Ÿ‡ธ
100/100

Leading AI research company providing GPT models and other AI services


Hugging Face

๐Ÿ‡บ๐Ÿ‡ธ
100/100

Leading platform for machine learning models, datasets, and applications


Google AI

๐Ÿ‡บ๐Ÿ‡ธ
100/100

Google's AI platform providing access to Gemini models and other AI services