FAQ

Common questions about xAI.

What is xAI?

xAI is an API provider that hosts large language models. At a glance: 14 active models; input pricing from $0.20 per 1M tokens; average throughput of 92 tokens/s; average latency of 0.84 s; maximum context window of 2.0M tokens.

How many models does xAI offer?

xAI currently serves 14 active models, out of 20 offerings tracked historically on LLM Stats.

What is xAI's API pricing?

xAI input pricing starts from $0.20 per 1M tokens, with the most expensive offering at $3 per 1M tokens. See the Pricing tab above for the full per-model breakdown.
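To make the pricing arithmetic concrete, here is a minimal sketch of a cost calculation. The helper name and the example prompt size are illustrative; the two rates are simply the endpoints of the range quoted above, so check the Pricing tab for the actual per-model rates.

```python
# Illustrative helper for per-token pricing arithmetic.
# Rates are the endpoints quoted above ($0.20-$3 per 1M input tokens);
# real per-model rates are listed in the Pricing tab.

def input_cost_usd(tokens: int, price_per_million: float) -> float:
    """Cost in USD for a given number of input tokens."""
    return tokens * price_per_million / 1_000_000

# e.g. a 50,000-token prompt:
cheapest = input_cost_usd(50_000, 0.20)  # about $0.01
priciest = input_cost_usd(50_000, 3.00)  # about $0.15
```

The same formula applies to output tokens, which are typically billed at a higher per-million rate.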

How fast is xAI?

xAI averages 92 output tokens per second across its catalog, with average latency of 0.84s. Per-model performance is shown in the Performance tab.
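As a rough back-of-the-envelope estimate, end-to-end response time is time-to-first-token plus generation time. The sketch below uses the catalog-wide averages quoted above as defaults; any individual model will differ, so use the Performance tab's per-model numbers for real planning.

```python
# Rough latency estimate using the catalog-wide averages quoted above
# (0.84 s latency, 92 tokens/s throughput); per-model figures vary.

def estimated_seconds(output_tokens: int,
                      latency_s: float = 0.84,
                      tok_per_s: float = 92.0) -> float:
    """Time-to-first-token plus generation time, in seconds."""
    return latency_s + output_tokens / tok_per_s

# e.g. a 500-token completion takes roughly 6.3 s on average.
eta = estimated_seconds(500)
```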

Does xAI support multimodal models?

Yes. xAI's catalog includes 12 vision-capable, 3 image generation, and 6 video models. See the Models and Capabilities tabs for the full per-model breakdown.

Whose models does xAI host?

xAI hosts only its own first-party models. See the Models tab for the full catalog grouped by creator.

How do I start using xAI?

Sign up at https://docs.x.ai to get an API key, then call xAI's API directly from your application. Use the Pricing and Performance tabs above to pick the right model for your latency, cost, and context-window requirements.
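As a starting point, here is a minimal sketch of a chat-completions request against xAI's OpenAI-compatible endpoint, using only the Python standard library. The model name "grok-3" is a placeholder (pick a real one from the Models tab), and nothing is sent unless an XAI_API_KEY environment variable is set.

```python
import json
import os
import urllib.request

# Sketch of a single-turn chat-completions call against xAI's
# OpenAI-compatible API. "grok-3" is a placeholder model name.
API_URL = "https://api.x.ai/v1/chat/completions"

def build_request(prompt: str, model: str = "grok-3") -> dict:
    """Assemble the JSON body for a single-turn chat completion."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_request("Say hello in one sentence.")

api_key = os.environ.get("XAI_API_KEY")
if api_key:  # only send when a key is actually configured
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
        print(body["choices"][0]["message"]["content"])
```

Because the endpoint follows the OpenAI chat-completions shape, existing OpenAI client libraries can also be pointed at it by overriding the base URL; see https://docs.x.ai for the authoritative reference.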