Compare LLM Models

Select up to 4 models and compare their costs side-by-side. Find the most cost-effective solution for your specific use case.

Comparison Tips

Get the most out of your model comparison with these helpful tips.

Realistic Ratios

Use an input/output token ratio that reflects your actual workload - the ratio strongly affects which model comes out cheapest, since input and output tokens are priced differently.
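To see why the ratio matters, here is a minimal sketch comparing two made-up models with hypothetical prices (the model names and rates are illustrative, not real provider pricing):

```python
# Hypothetical pricing for two illustrative models, in USD per 1M tokens.
MODELS = {
    "model_a": {"input": 1.00, "output": 20.00},  # cheap input, pricey output
    "model_b": {"input": 5.00, "output": 10.00},  # pricier input, cheaper output
}

def cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost of one request in USD for the given token counts."""
    p = MODELS[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Input-heavy workload (10:1 ratio): model_a is cheaper.
print(cost("model_a", 1_000, 100))  # 0.003
print(cost("model_b", 1_000, 100))  # 0.006

# Output-heavy workload (1:10 ratio): the ranking flips, model_b is cheaper.
print(cost("model_a", 100, 1_000))  # 0.0201
print(cost("model_b", 100, 1_000))  # 0.0105
```

The same two models trade places as the workload shifts from input-heavy to output-heavy, which is why a comparison run at an unrealistic ratio can point you at the wrong model.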

Scale Testing

Test with different volume levels - costs can vary significantly at scale.

Performance vs Cost

Consider quality and speed alongside cost - the cheapest option isn't always the best value.

Context Limits

Check context window limits - some tasks require models with larger context windows.

Understanding the Numbers

All costs are calculated based on official provider pricing and are subject to change.

Input Token Costs

Charged for tokens you send to the model (prompts, context, examples). Generally lower cost per token than output.

Output Token Costs

Charged for tokens the model generates in response. Typically priced higher per token than input, because generating tokens one at a time is more compute-intensive than processing a prompt.
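Putting the two rates together, a request's cost is just each token count scaled by its per-million-token price. A minimal sketch, using hypothetical prices (actual rates vary by provider and model):

```python
# Hypothetical per-million-token prices in USD; substitute your provider's rates.
INPUT_PRICE = 3.00    # USD per 1M input tokens
OUTPUT_PRICE = 15.00  # USD per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Total cost of one request in USD."""
    return (input_tokens / 1_000_000) * INPUT_PRICE \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE

# A 2,000-token prompt producing a 500-token reply:
print(request_cost(2_000, 500))  # 0.0135
```

Multiply the per-request figure by your expected monthly request volume to estimate a monthly bill at scale.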

Pro Tip: Optimize your prompts to reduce unnecessary input tokens while maintaining quality output. This can significantly reduce costs, especially for high-volume applications.