LookML, Cube, MetricFlow: the landscape
In December 2024 a data platform lead at Meesho had to choose between three semantic layers. The team already used dbt. They were evaluating Looker as the BI tool for analyst self-serve. They had a small in-house product team that wanted to embed metric-driven charts inside the seller dashboard. Three meetings, three vendors, three pitches that all sounded the same: "one source of truth for metrics". The pitches sounded the same; the architectures did not. LookML put the metric definition inside the BI tool. Cube put it inside a service that any BI tool could call. MetricFlow put it inside the transformation layer where dbt already lived. The choice was not about features — features overlap. The choice was about which boundary the metric crosses, and that boundary determined what the next two years of the data stack would look like.
LookML, Cube, and MetricFlow all encode the same seven metric fields, but they place the metric definition at three different boundaries: inside the BI tool, inside a service that fronts the warehouse, and inside the transformation layer. That placement decides who can query the metric, how the SQL gets cached, and whether you are locked to one BI tool. Pick the placement, then live with it.
The three placements, drawn straight
Before features, look at where each layer sits in the request flow from a chart to the warehouse. The whole architecture follows from this.
Why the boundary matters more than the syntax: a metric definition is only as useful as the consumers that can reach it. If the boundary is inside Looker, your data app and your LLM agent are outside the boundary and have to re-author the metric. If the boundary is a service (Cube) or a wire protocol (MetricFlow's Semantic Layer API), every consumer is inside.
LookML — the original, in the BI tool
Looker shipped LookML in 2012, a year before Snowflake had paying customers. The premise was that a BI tool could not be only a chart-builder; it had to know the dimensions and measures, because charts that all said "revenue" needed to mean the same thing. LookML is a configuration language — view files describe a table's dimensions and measures; explore files describe how views join; model files describe what is exposed. Every chart in Looker is generated SQL, never authored SQL.
```lookml
# views/payments.view.lkml
view: payments {
  sql_table_name: analytics.fct_payments ;;

  dimension: payment_id {
    primary_key: yes
    sql: ${TABLE}.payment_id ;;
  }

  dimension: gst_state {
    sql: ${TABLE}.gst_state ;;
  }

  dimension: merchant_tier {
    sql: ${TABLE}.merchant_tier ;;
  }

  dimension_group: created {
    type: time
    timeframes: [date, week, month, quarter]
    sql: ${TABLE}.created_at ;;
  }

  measure: gmv {
    label: "GMV (₹)"
    type: sum
    sql: CASE WHEN ${TABLE}.status = 'captured'
         THEN ${TABLE}.amount_paise / 100.0 ELSE 0 END ;;
    drill_fields: [payment_id, gst_state, merchant_tier]
  }

  measure: txn_count {
    type: count
    drill_fields: [payment_id]
  }
}

# models/cred.model.lkml
explore: payments {
  always_filter: {
    filters: [payments.created_date: "30 days"]
  }
}
```
The strengths of this placement are real: drill-downs are first-class (a chart of gmv by gst_state lets the analyst click a state and see the underlying rows, because Looker knows the drill_fields). Permissioning is centralised — access_grant blocks restrict who can even see a measure, not just query it. The failure mode is also real: every other consumer — Hex notebook, Slack bot, mobile app, LLM agent — is outside the boundary. If the team wants the same GMV in a Streamlit app, they re-author the SQL, and the divergence-in-three-places problem returns.
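For concreteness, a sketch of the `access_grant` gating mentioned above; the user attribute name and its allowed values are assumptions, not from the source:

```lookml
# models/cred.model.lkml -- declare the grant once per model
access_grant: can_see_gmv {
  user_attribute: department            # hypothetical user attribute
  allowed_values: ["finance", "leadership"]
}

# views/payments.view.lkml -- the measure is invisible, not merely
# unqueryable, for users who do not satisfy the grant
measure: gmv {
  type: sum
  sql: ${TABLE}.amount_paise / 100.0 ;;   # definition abbreviated
  required_access_grants: [can_see_gmv]
}
```

This is the sense in which the permissioning is centralised: the grant travels with the field definition, so every Looker dashboard and Explore inherits it without per-chart configuration.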
Cube — the standalone service
Cube was launched in 2019 by Artyom Keydunov on the bet that the semantic layer should be a service, independent of any BI tool. A Cube deployment is a containerised process that holds the metric definitions in JS or YAML files, exposes SQL (Postgres protocol), REST, and GraphQL endpoints, and proxies to whatever warehouse you point it at. Tableau connects via the SQL endpoint, an iOS app connects via REST, an LLM tool calls GraphQL — every consumer talks to Cube, Cube talks to the warehouse.
```javascript
// cubes/Payments.js
cube('Payments', {
  sql_table: 'analytics.fct_payments',
  data_source: 'snowflake',

  measures: {
    gmv: {
      type: 'sum',
      sql: `CASE WHEN ${CUBE}.status = 'captured'
            THEN ${CUBE}.amount_paise / 100.0 ELSE 0 END`,
      title: 'GMV (₹)',
    },
    txnCount: { type: 'count', title: 'Transactions' },
  },

  dimensions: {
    paymentId: { sql: 'payment_id', type: 'string', primary_key: true },
    gstState: { sql: 'gst_state', type: 'string' },
    createdAt: { sql: 'created_at', type: 'time' },
  },

  pre_aggregations: {
    daily_state: {
      measures: [Payments.gmv, Payments.txnCount],
      dimensions: [Payments.gstState],
      time_dimension: Payments.createdAt,
      granularity: 'day',
      refresh_key: { every: '5 minutes' },
    },
  },
});
```
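Every consumer sends the same query shape to this cube. A minimal Python sketch of the REST payload, following Cube's documented query format of measures, dimensions, and timeDimensions; the deployment URL is hypothetical:

```python
import json

# Hypothetical deployment URL; Cube's REST load endpoint path is /cubejs-api/v1/load
CUBE_URL = "https://cube.example.com/cubejs-api/v1/load"

def build_query() -> dict:
    """Cube REST query for daily GMV and transaction count by GST state."""
    return {
        "measures": ["Payments.gmv", "Payments.txnCount"],
        "dimensions": ["Payments.gstState"],
        "timeDimensions": [{
            "dimension": "Payments.createdAt",
            "granularity": "day",
            "dateRange": ["2025-01-01", "2025-01-31"],
        }],
    }

# POST body; the same JSON can also be URL-encoded into a GET request
payload = json.dumps({"query": build_query()})
```

The same dict, unchanged, works over the GraphQL and Postgres-wire surfaces in their respective syntaxes; the contract is the cube's member names, not the transport.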
The strengths are the inverse of LookML's weakness: every consumer is inside the boundary, and you can swap BI tools without touching the metric. The pre-aggregations feature is also serious — the daily_state rollup above is materialised as a small table inside the warehouse (or in Cube Store, a stripped-down columnar engine Cube ships), and queries that match its shape get answered from it in single-digit milliseconds. Why pre-aggregations work here and not at the BI layer: Cube knows the metric definition AND sees every query, so it can decide which rollups to materialise based on actual access patterns. Looker has the same information about Looker queries, but not about queries from Tableau or apps that bypass it.
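That shape-matching can be sketched as a containment check; this is illustrative logic under assumed field names, not Cube's actual routing code:

```python
GRAIN_ORDER = ["hour", "day", "week", "month"]  # coarser grains to the right

def matches_preagg(query: dict, preagg: dict) -> bool:
    """A query can be served from a rollup when its measures and
    dimensions are subsets of the rollup's, and its requested grain is
    no finer than the rollup's (a daily rollup can answer weekly and
    monthly questions, but not hourly ones)."""
    return (
        set(query["measures"]) <= preagg["measures"]
        and set(query["dimensions"]) <= preagg["dimensions"]
        and GRAIN_ORDER.index(query["granularity"]) >= GRAIN_ORDER.index(preagg["granularity"])
    )

daily_state = {
    "measures": {"gmv", "txnCount"},
    "dimensions": {"gstState"},
    "granularity": "day",
}

# Monthly GMV by state: answerable from the daily rollup
print(matches_preagg(
    {"measures": ["gmv"], "dimensions": ["gstState"], "granularity": "month"},
    daily_state))  # True
# Hourly grain is finer than what the rollup stores: must hit the raw table
print(matches_preagg(
    {"measures": ["gmv"], "dimensions": ["gstState"], "granularity": "hour"},
    daily_state))  # False
```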
The failure mode is operational: Cube is one more service to run. It needs a Redis for the query queue, a Postgres for metadata, horizontal pod scaling for the API tier, monitoring of the orchestrator that refreshes pre-aggregations. Teams without platform headcount end up with stale pre-aggs and a process they don't know how to debug at 2 a.m.
MetricFlow — inside dbt
dbt acquired Transform in early 2023 and turned MetricFlow into the semantic layer that ships with dbt Cloud. The bet is that the metric definition is just another piece of analytics code and belongs in the same repository as the models that produce its inputs. The metric YAML lives next to the SQL model file, gets reviewed in the same PR, gets versioned in git, and gets compiled by dbt parse every time the project compiles.
```yaml
# models/payments/payments.yml
semantic_models:
  - name: payments
    model: ref('fct_payments')
    defaults:
      agg_time_dimension: created_at
    entities:
      - name: payment_id
        type: primary
      - name: merchant
        type: foreign
        expr: merchant_id
    measures:
      - name: amount_inr
        agg: sum
        expr: "CASE WHEN status = 'captured' THEN amount_paise / 100.0 ELSE 0 END"
      - name: txn_count
        agg: count
        expr: payment_id
    dimensions:
      - name: gst_state
        type: categorical
      - name: merchant_tier
        type: categorical
      - name: created_at
        type: time
        type_params:
          time_granularity: day

metrics:
  - name: gmv
    label: "GMV (₹)"
    type: simple
    type_params:
      measure: amount_inr
  - name: gmv_per_txn
    label: "GMV per transaction (₹)"
    type: ratio
    type_params:
      numerator: gmv
      denominator: txn_count
```
A consumer (a BI tool, a notebook, the dbt CLI) calls the Semantic Layer API with {metrics: [gmv], group_by: [gst_state, metric_time__day], where: "{{ Dimension('payments__created_at').last_n_days(7) }}"}, and MetricFlow returns the SQL or the result. The strengths: the metric is in git, code-reviewed, has the same CI pipeline as your dbt models, and reuses the same warehouse compute and connection. The failure mode: you need dbt — specifically dbt Core (open) plus the dbt Semantic Layer (Cloud-only as of 2025) for the wire protocol. Self-hosted MetricFlow is callable as a Python library, but the JDBC server that lets Tableau and Looker query it through the Semantic Layer API is part of the paid product.
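The request in the paragraph above can be assembled as a plain dict; this sketch takes its shape from the text and omits the transport (GraphQL vs JDBC), which varies by consumer:

```python
def semantic_layer_request(metrics, group_by, where=None):
    """Assemble the query a Semantic Layer consumer sends; MetricFlow
    compiles it to SQL like the sample that follows."""
    req = {"metrics": metrics, "group_by": group_by}
    if where:
        req["where"] = where
    return req

req = semantic_layer_request(
    metrics=["gmv"],
    group_by=["gst_state", "metric_time__day"],
    where="{{ Dimension('payments__created_at').last_n_days(7) }}",
)
```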
Sample compiled SQL emitted by MetricFlow for `{gmv by gst_state, last 7 days}`:

```sql
SELECT
  subq_3.gst_state AS gst_state
  , DATE_TRUNC('day', subq_3.created_at) AS metric_time__day
  , SUM(subq_3.amount_inr) AS gmv
FROM (
  SELECT
    gst_state,
    created_at,
    CASE WHEN status = 'captured' THEN amount_paise / 100.0 ELSE 0 END AS amount_inr
  FROM analytics.fct_payments
) subq_3
WHERE subq_3.created_at >= CURRENT_DATE - INTERVAL '7 days'
GROUP BY 1, 2
ORDER BY 1, 2
```
Walk through what is happening:
- The semantic model is the contract, not the metric. The metric is a thin wrapper that picks one measure (`amount_inr`) of the semantic model and gives it a public name (`gmv`). The same semantic model can carry many metrics, all sharing the same dimension allow-list and time grain. Why this split is load-bearing: a new metric (e.g. `captured_txn_count`) is a four-line addition that points at an existing measure. No new SQL, no new join graph, no new dimension declarations.
- Ratio metrics are first-class. `gmv_per_txn` is declared, not computed by the consumer. MetricFlow will emit two CTEs (one for the numerator, one for the denominator) and join them on the requested dimensions, getting the join right even when numerator and denominator have different filters.
- `metric_time__day` is automatic. The consumer asks for `gmv` and gets back a `metric_time__day` column whose grain matches the semantic model's `agg_time_dimension`. Slicing by month or week reuses the same metric; no new definition needed.
- The compiled SQL is generated, not written. The subquery + group-by shape is what every semantic-layer compiler emits because it must work uniformly across `simple`, `ratio`, `cumulative`, and `derived` metric types. It is not the SQL a human would write by hand for this single query, but it is the SQL a compiler must emit to keep all metric shapes consistent.
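The numerator/denominator join described above can be sketched in plain Python; this is illustrative only (MetricFlow emits SQL CTEs, not Python), and the toy rows are invented:

```python
from collections import defaultdict

# Toy fact rows: (gst_state, status, amount_paise)
rows = [
    ("KA", "captured", 50_000),
    ("KA", "captured", 30_000),
    ("KA", "failed",   20_000),
    ("MH", "captured", 10_000),
]

def gmv_by_state(rows):
    """Numerator 'CTE': SUM of captured amounts, in rupees, per state."""
    out = defaultdict(float)
    for state, status, paise in rows:
        if status == "captured":
            out[state] += paise / 100.0
    return dict(out)

def txn_count_by_state(rows):
    """Denominator 'CTE': COUNT of all payments per state (no status filter)."""
    out = defaultdict(int)
    for state, _, _ in rows:
        out[state] += 1
    return dict(out)

def ratio_metric(numerator, denominator):
    """Join the two aggregates on the shared dimension, then divide."""
    return {k: numerator.get(k, 0.0) / denominator[k] for k in denominator}

gmv_per_txn = ratio_metric(gmv_by_state(rows), txn_count_by_state(rows))
# KA: 800.0 GMV over 3 transactions; MH: 100.0 over 1
```

Note the numerator filters to captured payments while the denominator counts all rows; the dimension-keyed join keeps the ratio correct despite the differing filters, which is exactly the case the compiler has to get right.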
How they actually compare on the seven fields
| Field | LookML | Cube | MetricFlow |
|---|---|---|---|
| Lives in | `.lkml` files in the Looker project (git-backed) | `.js` / `.yml` files in the Cube repo (git-backed) | `.yml` files in the dbt project (git-backed) |
| Compiled by | Looker server | Cube service | dbt Cloud Semantic Layer (or `mf` CLI) |
| Consumer API | LookML/HTML inside Looker; SDK + JDBC for embedded analytics | SQL (Postgres wire), REST, GraphQL | JDBC (Arrow Flight SQL), GraphQL, Python |
| Pre-aggregation | `aggregate_table` (warehouse-side) | `pre_aggregations` (Cube Store or warehouse-side) | dbt incremental materializations + Snowflake dynamic tables |
| Versioning | LookML refinements (`+view`) + git tags | Naming convention (`PaymentsV2` cube) | Naming convention (`gmv_v2`) + dbt model versioning |
| Governance | `access_grant`, `access_filter` (row-level) | `query_rewrite` hook, JWT claims for row filtering | dbt `meta:` tags + Snowflake/BigQuery row access policies |
| Lock-in | Strong — leaving Looker means re-authoring | Light — service is OSS, can self-host | Light — semantic YAML is parseable, dbt is OSS |
The three are not strictly substitutes. A team using Looker for analyst BI plus Cube for the customer-facing dashboard is common. So is dbt + MetricFlow + Lightdash (an OSS BI tool that consumes MetricFlow natively). The choice is not "which one wins" but "what is your boundary".
Common confusions
- "LookML is dead because Looker got bought by Google." Looker is alive, embedded in Google's BI offering, and LookML still ships features. What did die is the bet that "the BI tool owns the semantic layer" — Cube and MetricFlow exist precisely because consumers other than Looker need to reach the metric. LookML is the right answer when Looker is your only BI consumer; it is the wrong answer when you have a customer-facing app, a notebook user base, or an LLM agent in the loop.
- "MetricFlow is just a YAML wrapper around dbt models." MetricFlow has its own compiler (`mf`), its own join planner, its own time-spine machinery, and emits SQL that dbt does not author. The metric YAML is parsed by dbt; the compilation logic is MetricFlow's. The reason it lives next to dbt models is workflow alignment, not implementation laziness.
- "Cube is the same as a caching proxy for the warehouse." Cube can cache, but caching is not the contract. The contract is "this is the metric definition; here is a wire protocol; the warehouse identity is hidden". You can swap Snowflake for BigQuery underneath Cube without changing one consumer query, because the contract is at the Cube boundary, not at the warehouse boundary.
- "You can use all three together." You can run Cube on top of dbt models that have MetricFlow definitions, and consume the result through a LookML view that points at Cube's SQL endpoint. People have done this. It is also three semantic layers, three metric registries, three places to keep in sync — exactly the divergence problem the layer was supposed to solve. Pick one as the source of truth; the others should be thin pass-throughs at most.
- "Open-source means free to operate." Cube is OSS; running it in production needs a Redis, a Postgres, an orchestrator pod, and an on-call rotation. MetricFlow is OSS; the wire protocol that lets Tableau query it is Cloud-only. LookML is closed; Looker's per-seat licence is the cost. There is no free lunch — the cost moves between licence, ops, and feature gaps.
Going deeper
The Headless BI moment and why all three pivoted
In 2021 Benn Stancil's "metrics layer" post named the category. By 2022 every BI vendor had a metric definition feature — Looker had had it for a decade, Tableau added one (later abandoned), Mode pivoted to "datasets". The reason the category solidified was not BI; it was the rise of consumers that were not BI tools: Slack bots, embedded analytics inside SaaS products, LLM agents asking metric questions in English, customer-facing dashboards inside fintech apps. None of these wanted a Looker licence. The bet that paid off was "the semantic layer is a stand-alone API"; both Cube and MetricFlow rode that bet. LookML's response — Looker's embedded SDK and the JDBC adapter — is functional but feels retrofitted because it is.
Pre-aggregations: where the cost actually lives
The Meesho seller dashboard hits fct_orders (3 billion rows for 2025). A naive SUM(gmv) per seller per day would scan billions of rows on every dashboard load. All three layers solve this with pre-aggregations, but they do it differently. LookML's aggregate_table is a Looker-managed materialisation in the warehouse; it builds an aggregated table per Explore. Cube's pre_aggregations can live in Cube Store (a stripped-down columnar engine Cube runs alongside the API tier) for sub-100ms responses, or in the warehouse for compatibility. MetricFlow defers to dbt's incremental materializations — the rollup table is just another dbt model, refreshed by dbt run. The MetricFlow approach is the most legible (it is just a dbt model), Cube's is the fastest (Cube Store is in-process), and LookML's is the most invisible (Looker decides when to use the aggregate, the user does not have to). All three are valid; the operational character is different.
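A sketch of what that rollup looks like as a dbt incremental model; the model name, column names, and the 3-day reprocessing window are assumptions:

```sql
-- models/rollups/daily_seller_gmv.sql (hypothetical model)
{{ config(
    materialized='incremental',
    unique_key=['seller_id', 'order_date']
) }}

select
    seller_id,
    date_trunc('day', created_at)                  as order_date,
    sum(case when status = 'captured'
             then amount_paise / 100.0 else 0 end) as gmv,
    count(*)                                       as txn_count
from {{ ref('fct_orders') }}
{% if is_incremental() %}
  -- on scheduled runs, rescan only a trailing window instead of billions of rows
  where created_at >= dateadd('day', -3, current_date)
{% endif %}
group by 1, 2
```

Because the rollup is just a model, it inherits dbt's lineage graph, tests, and scheduling; there is no separate refresh orchestrator to operate, which is the legibility claim above in concrete terms.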
Why "JDBC over Arrow Flight" matters more than it sounds
When the dbt Semantic Layer added an Arrow Flight SQL endpoint in 2024, Tableau, Hex, Mode, and Lightdash could connect to it as if it were a Postgres database, but with typed columns, vectorised result transport, and metric-aware metadata. The reason this matters: BI tools have spent twenty years optimising Postgres-shaped connectors. Reusing that connector path means a Tableau user types gmv into a chart, and the same SQL Looker would have generated lands at the warehouse — except the metric definition is shared with the dbt repo. The wire protocol is the integration story. Cube uses the Postgres wire directly for the same reason. The lesson is industry-wide: a semantic layer succeeds to the extent that it speaks a wire protocol existing tools already trust.
The Indian-stack question: which one in 2026?
A typical Indian D2C / fintech stack in 2026: warehouse on Snowflake or BigQuery, transformations in dbt, BI mostly Looker for internal teams plus a customer-facing dashboard built in React. For this stack, the dominant pattern is dbt + MetricFlow as the source of truth, with Looker consuming through the Semantic Layer API for analyst self-serve and the React dashboard hitting a thin GraphQL layer (sometimes Cube fronting the same warehouse). Pure-Cube shops exist mostly when the customer-facing surface is the primary consumer (Cred's rewards dashboard, Meesho's seller insights). Pure-LookML shops exist mostly when Looker was adopted before dbt and the team has not yet untangled the metric definitions from LookML — Razorpay was in that position in 2023, finished migrating measures into MetricFlow over Q4 2024, and now uses LookML only for layout, not for definitions. The arc across the Indian data ecosystem follows the same pattern as the global one: definitions move into the dbt repo, consumers proliferate, and the BI tool becomes a presentation layer.
Where this leads next
The deep dive on dbt as the substrate is at /wiki/dbt-as-the-de-facto-transformation-layer. The metric-definition mechanics — the seven fields, the compile-from-intent pattern — are at /wiki/metric-definitions-once-queried-many-ways.
After this chapter, Build 13 turns to the consumer side: how headless BI works in practice, what the metric API looks like over HTTP, and the LLM agent that asks "what was GMV in Bengaluru last Tuesday?" and gets the same number every other consumer would. The thread of this build remains: the metric is the contract, the consumer surface is interchangeable — but the contract has to live somewhere, and that "where" is what this chapter named.
References
- LookML reference — Google Cloud — the canonical LookML syntax. Skim views, explores, and access_grants.
- Cube — Data modeling — the cube/measure/dimension contract; pre_aggregations are in the same docs.
- dbt Semantic Layer — overview — MetricFlow's home in the dbt docs; the Arrow Flight SQL endpoint is described here.
- Benn Stancil — The metrics layer — the 2021 essay that named the category and seeded the Cube / MetricFlow round of investment.
- Nick Handel — Why MetricFlow — the MetricFlow founder's argument for putting metrics in the transform layer (pre-acquisition by dbt Labs).
- Lightdash — open-source BI for dbt — useful as a calibration of what a BI tool that consumes MetricFlow natively looks like.
- /wiki/metric-definitions-once-queried-many-ways — the previous chapter; the seven-fields framing this article maps three tools onto.
- /wiki/dbt-as-the-de-facto-transformation-layer — the substrate MetricFlow sits inside.