# AI Pricing Hub by OptimNow

> Compare LLM pricing and cloud compute pricing with business metrics, technical filters, and live provider data.

Primary site: https://aipricinghub.optimnow.io/
Compute page: https://aipricinghub.optimnow.io/compute
Detailed reference: https://aipricinghub.optimnow.io/llms-full.txt

## Summary

AI Pricing Hub helps teams compare large language model pricing and cloud infrastructure pricing in one web app. The root page focuses on LLMs; the compute page focuses on cloud instances.

## LLM pricing

- Compare 160+ models when live OpenRouter data is available, with static fallback coverage during local development or API outages
- Providers include OpenAI, Anthropic, Google, Meta, DeepSeek, Mistral, xAI, Cohere, Amazon, Alibaba, AI21 Labs, and others
- Business view shows an efficiency score, unit cost per request, monthly budget estimates, and a FinOps Friendly badge
- Technical view shows input and output price per 1M tokens, context window, parameter count, release date, and capabilities
- Use case presets: Support Ticket, Knowledge Q&A, Meeting Summary, Marketing Content, Coding Task, Invoice Processing, Call Summary, Agent Workflow
- FinOps Friendly requires all three criteria: Arena ELO >= 1250, efficiency in the top 30%, and a stable release

## Cloud compute pricing

- Compare AWS, Azure, GCP, DigitalOcean, OCI, OVH, and Alibaba Cloud
- Pricing tiers include on-demand, spot/preemptible, savings plans or committed use discounts (CUDs), and reserved pricing where the provider supports them
- Filters cover provider, category, use case, processor family, operating system, region, vCPUs, and memory

## Data sources

- LLM pricing: OpenRouter API with curated static fallback data
- LLM quality: Chatbot Arena ELO scores
- Cloud pricing: provider APIs where supported, with static fallback data for gaps and outages
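## Example: cost and badge arithmetic

The unit-cost, monthly-budget, and FinOps Friendly metrics above can be sketched as follows. This is a minimal illustration, not the app's actual code: the function names, the token counts in the example, and the percentile convention for "top 30%" are assumptions; the ELO threshold and badge criteria are as stated on the site.

```python
def cost_per_request(input_price_per_1m: float, output_price_per_1m: float,
                     input_tokens: int, output_tokens: int) -> float:
    """Unit cost of one request (USD) from per-1M-token prices."""
    return (input_tokens * input_price_per_1m
            + output_tokens * output_price_per_1m) / 1_000_000


def monthly_budget(unit_cost: float, requests_per_month: int) -> float:
    """Monthly budget estimate: unit cost times request volume."""
    return unit_cost * requests_per_month


def finops_friendly(arena_elo: float, efficiency_percentile: float,
                    stable_release: bool) -> bool:
    """FinOps Friendly badge: Arena ELO >= 1250, efficiency in the
    top 30% (here: percentile >= 70), and a stable release."""
    return arena_elo >= 1250 and efficiency_percentile >= 70 and stable_release


# Example: $3 / 1M input tokens, $15 / 1M output tokens,
# a 2,000-token prompt producing a 500-token answer.
unit = cost_per_request(3.0, 15.0, 2_000, 500)
print(round(unit, 4))                  # 0.0135
print(monthly_budget(unit, 100_000))  # 1350.0
print(finops_friendly(1280, 85, True))  # True
```

At 100,000 requests per month this model costs about $1,350, which is the kind of figure the business view's monthly budget estimate surfaces.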