Index Methodology

Version: v1.0
Last updated: 2026-03-16

Index Definitions

Global H100 Compute Index (h100-global-compute)

A composite benchmark of H100 on-demand compute pricing across major cloud providers and GPU marketplaces. Represents the typical cost of 1 GPU-hour of H100 compute.

Eligibility Criteria

H100 PCIe and SXM on-demand pricing from verified providers. Excludes spot, reserved, and ambiguous pricing. Excludes observations older than the configured staleness TTL (default 24 hours).

Calculation Method

Weighted trimmed mean. All eligible observations are sorted by normalized per-GPU-hour price, the top and bottom 10% are trimmed, and a weighted mean of the remaining observations is taken.

Weighting Scheme

Each observation is weighted by (confidence_score * recency_factor). Recency factor ranges from 1.0 (most recent) to 0.5 (oldest within TTL window), decaying linearly.
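A minimal sketch of the calculation, assuming the eligibility thresholds stated above (0.1 minimum confidence, 24-hour TTL); the `Observation` type and function names are illustrative, not the production implementation:

```python
from dataclasses import dataclass

@dataclass
class Observation:
    price_per_gpu_hour: float  # normalized USD per GPU-hour
    confidence: float          # parser confidence score in [0, 1]
    age_hours: float           # hours since observed_at

def index_value(observations, ttl_hours=24.0, trim_frac=0.10):
    """Weighted trimmed mean: sort by price, trim the top and bottom
    10%, then average the rest weighted by confidence * recency."""
    # Eligibility: drop stale and low-confidence observations
    eligible = [o for o in observations
                if o.age_hours <= ttl_hours and o.confidence >= 0.1]
    eligible.sort(key=lambda o: o.price_per_gpu_hour)
    k = int(len(eligible) * trim_frac)
    kept = eligible[k:len(eligible) - k]
    total = weight_sum = 0.0
    for o in kept:
        # Recency decays linearly from 1.0 (fresh) to 0.5 (at the TTL)
        recency = 1.0 - 0.5 * (o.age_hours / ttl_hours)
        w = o.confidence * recency
        total += o.price_per_gpu_hour * w
        weight_sum += w
    return total / weight_sum if weight_sum else None
```

With ten equal-confidence, equally fresh observations priced 1 through 10, the top and bottom observations are trimmed and the index evaluates to the mean of 2 through 9.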

H100 PCIe Index (h100-pcie)

Tracks the cost of H100 PCIe on-demand compute across providers. PCIe cards are typically used in inference workloads and smaller-scale training.

Eligibility Criteria

Only H100 PCIe on-demand observations. No SXM, no spot, no reserved.

Calculation Method

Same trimmed weighted mean methodology as the Global Index.

Weighting Scheme

Same confidence × recency weighting.

H100 SXM Index (h100-sxm)

Tracks the cost of H100 SXM on-demand compute. SXM cards with NVLink/NVSwitch are the standard for large-scale training and high-performance inference.

Eligibility Criteria

Only H100 SXM on-demand observations, including 8-GPU cluster offerings (normalized to per-GPU-hour). No PCIe, no spot.

Calculation Method

Same trimmed weighted mean methodology as the Global Index.

Weighting Scheme

Same confidence × recency weighting.

H100 Spot Index (h100-spot)

Tracks spot/preemptible H100 pricing across providers. Spot instances can be interrupted and are significantly cheaper than on-demand.

Eligibility Criteria

Only H100 spot/preemptible observations from any form factor.

Calculation Method

Same trimmed weighted mean methodology as the Global Index.

Weighting Scheme

Same confidence × recency weighting.

H100 Hardware Index (h100-hardware)

Placeholder for tracking H100 GPU hardware resale prices. Not yet implemented.

Eligibility Criteria

Not yet defined. Hardware resale ingestion is not implemented in V1.

Calculation Method

Not yet defined.

Weighting Scheme

Not yet defined.

Global H200 Compute Index (h200-global-compute)

A composite benchmark of H200 on-demand compute pricing. The H200 features 141GB HBM3e memory, making it ideal for large-model inference.

Eligibility Criteria

H200 on-demand pricing from verified providers. Excludes spot, reserved, and stale observations.

Calculation Method

Same trimmed weighted mean methodology as the Hopper indices.

Weighting Scheme

Same confidence × recency weighting.

H200 Spot Index (h200-spot)

Tracks spot/preemptible H200 pricing across providers.

Eligibility Criteria

Only H200 spot/preemptible observations.

Calculation Method

Same trimmed weighted mean methodology.

Weighting Scheme

Same confidence × recency weighting.

Global Blackwell Compute Index (blackwell-global-compute)

A composite benchmark of Blackwell-generation (B200, GB200) on-demand compute pricing. Uses the same weighted trimmed mean methodology as the Hopper index.

Eligibility Criteria

B200 and GB200 on-demand pricing from verified providers. Excludes spot, reserved, and stale observations.

Calculation Method

Weighted trimmed mean, identical to the Hopper methodology.

Weighting Scheme

Same confidence × recency weighting as Hopper indices.

B200 SXM Index (b200-sxm)

Tracks B200 SXM on-demand compute pricing across providers.

Eligibility Criteria

Only B200 SXM on-demand observations.

Calculation Method

Same trimmed weighted mean methodology.

Weighting Scheme

Same confidence × recency weighting.

GB200 Index (gb200)

Tracks GB200 (NVL72) on-demand compute pricing across providers.

Eligibility Criteria

Only GB200 on-demand observations.

Calculation Method

Same trimmed weighted mean methodology.

Weighting Scheme

Same confidence × recency weighting.

Blackwell Spot Index (blackwell-spot)

Tracks spot/preemptible Blackwell GPU pricing across providers.

Eligibility Criteria

Only Blackwell spot/preemptible observations.

Calculation Method

Same trimmed weighted mean methodology.

Weighting Scheme

Same confidence × recency weighting.

Normalization Rules

1. All prices are normalized to USD per GPU-hour: normalized_price = total_hourly_price / gpu_count
2. PCIe and SXM observations are NEVER mixed within a specific index (they are combined only in the Global Composite)
3. Spot and on-demand observations are NEVER mixed within a specific index
4. Multi-GPU offerings (e.g., 8x H100 nodes) are normalized by dividing total hourly cost by GPU count
5. Observations with confidence < 0.1 are excluded from index calculation
6. Observations older than the staleness TTL (default 24h) are excluded
7. Token pricing: price_per_1M_tokens = (price_per_hour × 1,000,000) / (tokens_per_second × 3,600)
8. Token pricing throughput presets: Conservative (100 tps), Base (150 tps), Aggressive (180 tps)
9. Published API token prices (from OpenAI, Anthropic, Google, xAI) are NEVER mixed with derived infrastructure costs
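Rules 7 and 8 can be checked with a quick worked example; the $2.00/GPU-hour figure below is illustrative only, not an index value:

```python
def price_per_million_tokens(price_per_hour: float, tokens_per_second: float) -> float:
    """Rule 7: convert a per-GPU-hour price into USD per 1M tokens."""
    return (price_per_hour * 1_000_000) / (tokens_per_second * 3_600)

# Rule 8: throughput presets (tokens per second)
PRESETS = {"conservative": 100, "base": 150, "aggressive": 180}

# A hypothetical $2.00/GPU-hour rate at the Base preset (150 tps):
# 150 tps * 3,600 s = 540,000 tokens/hour, so about $3.70 per 1M tokens
base_cost = price_per_million_tokens(2.00, PRESETS["base"])
```

Note how the Conservative preset, with fewer tokens produced per hour, yields a higher per-token cost from the same hourly price.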

Data Source Policy

Every displayed price includes full source provenance: provider name, source URL, source type (official_api, official_price_page, marketplace_listing, etc.), observation timestamp, confidence score, and the identifier of the parser that produced it. We prioritize official APIs and pricing pages. Marketplace prices (e.g., Vast.ai) represent median or typical listings and are flagged with lower confidence scores due to variability.

Freshness Policy

Prices are refreshed on each API call with a minimum refresh interval of 60 seconds. Observations are considered stale after 24 hours and excluded from index computation. The 'observed_at' timestamp on each price reflects when the data was fetched, not when the provider last updated their page.
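The two freshness thresholds can be sketched as simple predicates; the function names here are hypothetical, but the 60-second and 24-hour values match the policy above:

```python
MIN_REFRESH_SECONDS = 60            # minimum interval between refetches
STALENESS_TTL_SECONDS = 24 * 3600   # observations older than this are excluded

def needs_refresh(last_fetch_ts: float, now: float) -> bool:
    """Throttle: refetch only if at least 60s have passed since the last fetch."""
    return now - last_fetch_ts >= MIN_REFRESH_SECONDS

def is_stale(observed_at_ts: float, now: float) -> bool:
    """Exclude observations whose observed_at is more than 24h old."""
    return now - observed_at_ts > STALENESS_TTL_SECONDS
```

Both checks compare against observed_at, i.e., when the data was fetched, consistent with the caveat that providers may update their pages less often.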

Source Types

official_api

Direct from the provider's pricing API. Highest confidence. Examples: Azure Retail Prices API, AWS Pricing API.

official_price_page

Parsed from the provider's official pricing webpage. May be manually maintained if the page is JS-rendered.

official_calculator

From the provider's pricing calculator tool. Similar confidence to price page.

marketplace_listing

From GPU marketplaces like Vast.ai. Prices are set by hosts and fluctuate. Lower confidence due to variability.

Data Source Transparency

Live API Sources

Azure GPU pricing is fetched live from the Azure Retail Prices API on every refresh cycle (no authentication required). This provides real-time price tracking with high confidence.

Published Price Page Sources

AWS, CoreWeave, Lambda, RunPod, Vast.ai, GCP, and Oracle prices are sourced from their official pricing pages and manually verified. These prices are updated when providers change their published rates. Each observation includes a "last verified" timestamp.

Model Provider Token Pricing

OpenAI, Anthropic, Google, and xAI token prices are sourced from their official pricing pages and manually verified. These are retail API prices, not infrastructure costs. Each model's price includes a source link.

Blackwell Coverage

Blackwell (B200, GB200) GPU pricing is still emerging. Coverage is limited compared to Hopper (H100). The same methodology applies but fewer providers are tracked. Coverage will expand as providers list Blackwell pricing publicly.

API Endpoints

GET /api/prices/latest

All latest price observations. Supports ?provider= and ?bucket= filters.

GET /api/indices/latest

All computed index values.

GET /api/indices/[slug]

Specific index by slug (e.g., h100-global-compute, h100-pcie, h100-sxm, h100-spot).

GET /api/sources/latest

Source registry and data state metadata.

GET /api/methodology

Full methodology description as JSON.