OpenAI vs Meta Llama Pricing
Per-million-token pricing for OpenAI and Meta Llama, comparing flagship models, cheapest tiers, and context windows side by side. Pricing data syncs weekly from the open-source litellm catalog; last updated May 4, 2026.
Who wins on what
| Category | Winner | Detail |
|---|---|---|
| Cheapest input tokens | Meta Llama ($0.02/1M) | Llama 3.1 8B Instruct, $0.02/1M input |
| Cheapest output tokens | Meta Llama ($0.02/1M) | Llama 3.2 3B Instruct, $0.02/1M output |
| Longest context window | OpenAI (2.0M) | gpt-5.4 (>272K context length), 2.0M input tokens |
| Lowest average output cost | Meta Llama ($0.67/1M) | Provider-wide average across 16 models |
| Largest model catalog | OpenAI (153 models) | More options to match cost vs capability |
Side-by-side
OpenAI
| Metric | Value | Model |
|---|---|---|
| Cheapest input | $0.030 | gpt-oss-20b |
| Cheapest output | $0.140 | gpt-oss-20b |
| Longest context | 2.0M | gpt-5.4 (>272K context length) |
| Avg output / 1M | $23.87 | Across catalog |
| Model | In/1M | Out/1M | Ctx |
|---|---|---|---|
| o1-pro | $150.00 | $600.00 | 200K |
| gpt-5.4-pro (>272K context length) | $60.00 | $270.00 | 2.0M |
| gpt-5.5-pro (>272K context length) | $60.00 | $270.00 | 2.0M |
| gpt-5.4-pro (<272K context length) | $30.00 | $180.00 | 272K |
| gpt-5.5-pro (<272K context length) | $30.00 | $180.00 | 272K |
| gpt-oss-20b | $0.030 | $0.140 | 131K |
Meta Llama
| Metric | Value | Model |
|---|---|---|
| Cheapest input | $0.020 | Llama 3.1 8B Instruct |
| Cheapest output | $0.020 | Llama 3.2 3B Instruct |
| Longest context | 1.0M | Llama 4 Maverick |
| Avg output / 1M | $0.674 | Across catalog |
| Model | In/1M | Out/1M | Ctx |
|---|---|---|---|
| Llama 3.1 405B (base) | $4.00 | $4.00 | 33K |
| Llama 3.1 405B Instruct | $3.50 | $3.50 | 131K |
| Llama 4 Maverick | $0.150 | $0.600 | 1.0M |
| Llama 3 70B Instruct | $0.300 | $0.400 | 8K |
| Llama 3.1 70B Instruct | $0.400 | $0.400 | 131K |
| Llama 3.2 3B Instruct | $0.020 | $0.020 | 131K |
All prices in USD per 1 million tokens. Showing the top 6 models per provider, sorted by output cost from highest to lowest.
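Per-1M pricing converts to a per-request cost by scaling token counts: cost = (tokens / 1,000,000) × price. A minimal sketch, using the Llama 4 Maverick prices from the table above; the function name and token counts are illustrative, not part of any API:

```python
def request_cost(input_tokens: int, output_tokens: int,
                 in_price_per_1m: float, out_price_per_1m: float) -> float:
    """USD cost of a single request under per-1M-token pricing."""
    return (input_tokens / 1_000_000) * in_price_per_1m \
         + (output_tokens / 1_000_000) * out_price_per_1m

# Example: 10K input + 2K output tokens on Llama 4 Maverick ($0.150 in, $0.600 out/1M)
cost = request_cost(10_000, 2_000, 0.150, 0.600)
print(f"${cost:.4f}")  # 0.0015 + 0.0012 = $0.0027
```

The same function works for any row in either table; only the two per-1M prices change.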
Run the numbers for your workload
Calcaas multiplies per-token costs by your real usage patterns — inputs, outputs, retries, and conversation history — across both providers in one model.
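The multiplication described above can be sketched as follows. This is not Calcaas's actual model or API; it is a hypothetical estimator under two assumptions: retries resend the full input and regenerate the output, and conversation history adds a fixed average token count to each request's input side. Prices are taken from the tables above:

```python
def monthly_cost(requests: int, input_tokens: int, output_tokens: int,
                 history_tokens: int, retry_rate: float,
                 in_price: float, out_price: float) -> float:
    """Estimate monthly USD cost for one model (hypothetical workload model).

    history_tokens: avg. prior-conversation tokens resent with each request
    retry_rate:     fraction of requests retried (assumed to resend input
                    and regenerate output in full)
    in_price / out_price: USD per 1M tokens
    """
    attempts = requests * (1 + retry_rate)
    total_in = attempts * (input_tokens + history_tokens)
    total_out = attempts * output_tokens
    return total_in / 1e6 * in_price + total_out / 1e6 * out_price

# Compare two models from the tables above for a sample workload:
# 100K requests/month, 1.5K in + 0.5K out tokens, 2K history, 5% retries.
for name, in_p, out_p in [("gpt-5.4-pro (<272K context length)", 30.00, 180.00),
                          ("Llama 4 Maverick", 0.150, 0.600)]:
    print(f"{name}: ${monthly_cost(100_000, 1_500, 500, 2_000, 0.05, in_p, out_p):,.2f}")
```

Note how history and retries inflate the input side: each attempt sends 3.5K input tokens here, more than double the raw prompt, which is why per-request arithmetic alone understates real workload cost.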