Upstream Supply Network
An internal view of the upstream sourcing and routing that powers our unified AI API service
AI service platform with API relay capabilities
Relay / aggregation layer · Procurement blocked · Needs verification · Blocked
Operating model: Relay / aggregation layer
Procurement: Blocked
Recommendation: Reasonable when you need broad model access and China-friendly delivery, but do not treat it as identical to direct first-party procurement. Live verification is currently partial because some required official source types (documentation, pricing, support, terms) are blocked from this environment.
Verification overdue · 7-day cadence
Live Verification
Required official source types exist, but live verification is currently blocked from this environment or region.
0 verified · 4 blocked · 0 broken
Blocking factors: the required official source types (documentation, pricing, support, terms) are currently blocked from this verification environment.
Official baseline
A leading AI research company offering the GPT-4, GPT-3.5, DALL-E, and Whisper APIs
Direct model provider · Procurement guarded · First-party preferred · Blocked
Operating model: Direct model provider
Procurement: Guarded
Recommendation: Use OpenAI directly when model quality, roadmap alignment, and first-party support matter more than multi-vendor convenience. Live verification is currently partial because some required official source types (documentation, pricing, support) are blocked from this environment.
Verification overdue · 30-day cadence
Live Verification
Required official source types exist, but live verification is currently blocked from this environment or region.
0 verified · 3 blocked · 0 broken
Official baseline
Models: gpt-4, gpt-4-turbo, gpt-3.5-turbo (+2 more)
Groq is an ultra-fast AI inference platform built on custom-designed LPU (Language Processing Unit) hardware. It provides free access to popular open-source models such as Llama 2, Mixtral, and Gemma through an OpenAI-compatible API, making it easy for developers to integrate low-latency inference into their applications. Groq claims token-generation speeds up to 10x faster than traditional GPUs, with a generous free tier and competitive pay-per-use pricing for production workloads that require maximum performance.
Cloud platform access · Procurement guarded · Recommended · Blocked
Operating model: Cloud platform access
Procurement: Guarded
Recommendation: Good production candidate when low-latency managed inference on GroqCloud matters more than direct control over every open-model host. Live verification is currently partial because some required official source types (documentation, pricing, support, terms) are blocked from this environment.
Verification overdue · 21-day cadence
Live Verification
Required official source types exist, but live verification is currently blocked from this environment or region.
0 verified · 4 blocked · 0 broken
Official baseline
Models: llama-3.3-70b, mixtral-8x7b, gemma-7b
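Because Groq (like the relay layer above) exposes an OpenAI-compatible API, the same request shape works against any of these providers by swapping the base URL. A minimal sketch of building such a request; the base URL matches Groq's documented OpenAI-compatible endpoint, while the API key and exact model ID are placeholders to confirm against the official documentation, which is currently blocked from this environment:

```python
import json

GROQ_BASE_URL = "https://api.groq.com/openai/v1"  # Groq's OpenAI-compatible base

def build_chat_request(base_url, api_key, model, prompt):
    """Return (url, headers, body) for a POST to the /chat/completions route."""
    url = f"{base_url.rstrip('/')}/chat/completions"
    headers = {
        "Authorization": f"Bearer {api_key}",   # key is a placeholder here
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,                          # confirm exact model ID per provider
        "messages": [{"role": "user", "content": prompt}],
    })
    return url, headers, body

# Same call targets OpenAI or the relay layer by changing base_url only.
url, headers, body = build_chat_request(
    GROQ_BASE_URL, "YOUR_API_KEY", "llama-3.3-70b", "Hello"
)
```

Routing through the relay layer would follow the identical pattern with the relay's base URL substituted, which is what makes the aggregation model workable without per-provider client code.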