FAQ
Common questions about OpenAI.
What is OpenAI?
OpenAI is an API provider that hosts large language models. At a glance: 37 active models; input pricing from $0.10 per 1M tokens; average throughput of 100 tokens/s; average latency of 2.75 s; maximum context window of 1.1M tokens.
How many models does OpenAI offer?
OpenAI currently serves 37 active models out of 54 historical offerings on LLM Stats.
What is OpenAI's API pricing?
OpenAI input pricing starts from $0.10 per 1M tokens, with the most expensive offering at $30 per 1M tokens. See the Pricing tab above for the full per-model breakdown.
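Per-1M-token prices translate into a per-request cost by scaling the input and output token counts. A minimal sketch of that arithmetic, using the catalog's cheapest input rate ($0.10 / 1M tokens) and a hypothetical $0.40 / 1M output rate (real output pricing varies by model; see the Pricing tab):

```python
def request_cost(input_tokens: int, output_tokens: int,
                 input_per_1m: float, output_per_1m: float) -> float:
    """Estimate the dollar cost of one API request.

    Prices are quoted per 1M tokens, so divide token counts by 1e6
    before multiplying by the rate.
    """
    return (input_tokens / 1_000_000) * input_per_1m \
         + (output_tokens / 1_000_000) * output_per_1m

# 50k input tokens at $0.10/1M plus 2k output tokens at a
# hypothetical $0.40/1M output rate:
cost = request_cost(input_tokens=50_000, output_tokens=2_000,
                    input_per_1m=0.10, output_per_1m=0.40)
print(f"${cost:.4f}")  # → $0.0058
```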
How fast is OpenAI?
OpenAI averages 100 output tokens per second across its catalog, with average latency of 2.75s. Per-model performance is shown in the Performance tab.
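The two averages combine into a rough wall-clock estimate for a response: time to first token plus generation time at the throughput rate. A back-of-the-envelope sketch using the catalog-wide averages (any individual model's numbers in the Performance tab will differ):

```python
def estimated_response_seconds(output_tokens: int,
                               latency_s: float = 2.75,
                               throughput_tok_s: float = 100.0) -> float:
    """Rough total time: time-to-first-token + tokens / (tokens per second).

    Defaults are the catalog-wide averages quoted above, not any
    specific model's measured performance.
    """
    return latency_s + output_tokens / throughput_tok_s

# A 500-token answer at the average rates: 2.75 s + 500/100 s
print(estimated_response_seconds(500))  # → 7.75
```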
Does OpenAI support multimodal models?
Yes. OpenAI's catalog includes 33 vision-capable, 12 image generation, 2 audio, and 4 video models. See the Models and Capabilities tabs for the full per-model breakdown.
Whose models does OpenAI host?
OpenAI is a first-party provider: it hosts only its own models. See the Models tab for the full catalog grouped by creator.
How do I start using OpenAI?
Sign up at https://openai.com to get an API key, then call OpenAI's API directly from your application. Use the Pricing and Performance tabs above to pick the right model for your latency, cost, and context-window requirements.
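As a concrete starting point, here is a minimal sketch of a chat completion request against OpenAI's HTTP API using only the Python standard library. The model name is a placeholder (pick one from the Models tab), and the live call only fires when an OPENAI_API_KEY environment variable is set:

```python
import json
import os
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"

def build_request(prompt: str, model: str = "gpt-4o-mini") -> dict:
    """Assemble the JSON body for a chat completion call.

    The model name here is a placeholder, not a recommendation.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def complete(prompt: str, api_key: str) -> str:
    """Send one chat completion request and return the reply text."""
    body = json.dumps(build_request(prompt)).encode()
    req = urllib.request.Request(
        API_URL,
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return data["choices"][0]["message"]["content"]

if __name__ == "__main__":
    key = os.environ.get("OPENAI_API_KEY")
    if key:  # only hit the live API when a key is configured
        print(complete("Say hello in one word.", key))
```

Official SDKs wrap the same endpoint; this sketch just makes the request shape explicit so you can see what the headers and JSON body contain.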