In short
You finished Build 19 with a working understanding of every vector index a production system might use — flat search, IVF, HNSW, product quantization, DiskANN — and the filter-aware variants that sit on top of them. The natural next question is which off-the-shelf system to actually run. There are four mainstream answers in 2026, and the choice between them is not a quality question (all four are competent) but a deployment question. pgvector is a Postgres extension: install it with CREATE EXTENSION vector; and your existing Postgres becomes a vector store, with full SQL, joins, transactions, and the operational ecosystem you already trust. It tops out around 10M vectors before HNSW build times and recall start to suffer. Pinecone is a fully managed cloud service — no servers, no index tuning, you POST vectors and GET nearest neighbours — at the cost of vendor lock-in and per-million-query pricing that adds up fast at production scale. Weaviate is the feature-rich open-source option: self-hosted or managed, ships built-in embedder modules, hybrid lexical-plus-vector search, multi-tenancy, replication, and a GraphQL API; the trade-off is operational weight (more moving parts, more concepts to learn). Qdrant is the lightweight open-source option: Rust binary, fast, great filter-aware HNSW, REST and gRPC APIs, self-hosted or managed; less feature-rich than Weaviate but easier to run. Beyond these four, Milvus wins at billion-scale on Kubernetes, Vespa is the mature heavyweight Yahoo built for its own search, Elasticsearch has a vector field if you already run ES, MongoDB Atlas has Vector Search if you already run Atlas, and Chroma is the in-process Python option for development and prototypes (not production). The decision tree is mechanical: existing Postgres and small-to-medium scale → pgvector; need fully managed and willing to pay → Pinecone; need self-hosted with rich features (hybrid, multi-tenant, replicated) → Weaviate; need self-hosted, fast, and lightweight → Qdrant. This chapter walks an Indian e-commerce company through three stages — early stage on pgvector, mid stage migrating to Qdrant or Pinecone, late stage on Weaviate or Milvus — and explains why the right answer is rarely the same one twice.
You spent Build 19 learning the algorithms — embeddings, flat search, IVF, HNSW, product quantization, filter-aware indexes. The algorithms are the same regardless of which vendor implements them. Pinecone uses a proprietary index that is structurally an HNSW variant; Qdrant uses HNSW; Weaviate uses HNSW; pgvector ships both IVFFlat and HNSW; Milvus uses HNSW or IVF-PQ depending on configuration. Reading the marketing pages, you would think each system has invented something fundamentally new. They have not. The differentiator is operational, not algorithmic.
So the question of which vector database to choose is not "which has the best ANN algorithm" — they are all within a few percent of each other on standard benchmarks like ANN-Benchmarks, and for any specific workload the difference is dominated by parameter tuning, not vendor choice. The question is where you want the vectors to live, what features beyond ANN you need, and how much operational burden your team can absorb. This chapter walks those three axes and produces a decision tree you can defend.
The thesis: three axes, four answers
Three variables determine the right vector database for any project. Get them wrong and you will spend the next year either rewriting your retrieval layer or fighting an ops fire that should not exist.
Axis 1: where do the vectors live? Three common options. (a) Inside an existing operational database, typically Postgres — the vectors sit next to the user table, the orders table, the products table, and join naturally to them. (b) In a managed cloud service — vectors live in someone else's infrastructure, you talk to it over HTTPS, you do not run anything yourself. (c) On your own infrastructure, self-hosted — you run the vector database in your VPC, on your own VMs or pods, and own the operational burden in exchange for cost control and data sovereignty.
Axis 2: what features beyond ANN do you need? Pure approximate nearest neighbour is the floor; production systems usually need more. Hybrid retrieval (BM25 + vector + filter, chapter 150) is the most common addition. Structured filters — category:electronics AND price < 50000 AND in_stock:true evaluated alongside the ANN query — are next. Multi-tenancy — up to millions of tenants, each with their own vectors, isolated such that one tenant cannot see another's data and one tenant's hot query cannot starve another tenant — is required for B2B SaaS. Replication and high availability matter once vector retrieval is on the critical path of your product. Built-in embedders (the database calls OpenAI or a local model for you, you POST raw text and the database vectorises it) save integration code at the cost of flexibility.
Axis 3: operational capacity. A two-person startup with one backend engineer cannot run a three-node Weaviate cluster with a Kafka schema bus and a separate embeddings service — they will spend more time keeping it alive than building product. A 50-engineer scale-up with a dedicated platform team can absorb almost any operational overhead. Be honest about which you are.
Why this taxonomy matters more than algorithm benchmarks: the highest-recall ANN algorithm on a benchmark dataset is irrelevant if your team cannot operate the database that ships it. A startup that picks Milvus because it scored 2% better than Qdrant on ANN-Benchmarks will spend six months getting its etcd, MinIO, Pulsar, and proxy components stable before the recall difference matters. Conversely, a fintech that picks pgvector because it is "in Postgres" but has 200M vectors will hit the wall on index build time and have to migrate anyway. Match the deployment model to your team and stage first; the algorithm is interchangeable.
pgvector: Postgres becomes a vector database
pgvector is an open-source extension that adds a vector type and ANN indexes (IVFFlat and HNSW) to Postgres. It is the simplest possible answer to the question "where do I put my vectors?" — they go in the same Postgres you already run.
The integration is one SQL statement to install and one column to add:
CREATE EXTENSION vector;
ALTER TABLE products ADD COLUMN embedding vector(1536);
CREATE INDEX ON products USING hnsw (embedding vector_cosine_ops);
A nearest-neighbour query is just ORDER BY embedding <=> $query_vec LIMIT 10 — the <=> operator is cosine distance, <-> is Euclidean, <#> is negative inner product. You can add WHERE category = 'shoes' AND price < 5000 and the planner will combine the ANN index scan with the structured filter naturally. You can join to the orders table. You can wrap it in a transaction. You get backups, replication, point-in-time recovery, and every other operational feature Postgres has accumulated over thirty years — for free.
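As a sketch, the full shape of a filtered nearest-neighbour query (table and column names from the snippet above; $1 is the query vector bound by the application):

-- Cosine nearest neighbours with a structured filter evaluated alongside the scan.
-- Swap <=> for <-> (Euclidean) or <#> (negative inner product) to match your metric.
SELECT id, name
FROM products
WHERE category = 'shoes' AND price < 5000
ORDER BY embedding <=> $1
LIMIT 10;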
The advantages are operational, not algorithmic. You already monitor Postgres. You already back it up. Your application code already has a Postgres connection pool. Your dashboards already chart its query latency. Adding pgvector adds zero new operational surfaces. For a team of one or two backend engineers, this is decisive.
The disadvantages show up at scale. HNSW build time in pgvector grows roughly linearly with vector count and is single-threaded per index in versions before 0.6 (parallel build landed in pgvector 0.6, early 2024). Building an HNSW index over 50M vectors takes hours, sometimes days, and during that time the table is locked against writes (you can use CREATE INDEX CONCURRENTLY to avoid the lock, but build time is still painful). Memory consumption is a second wall — pgvector stores vectors as 4-byte floats and keeps the HNSW graph in shared buffers, so the working set for n vectors of d dimensions is roughly 4 \times d \times n + \text{graph overhead} bytes. For 100M vectors of 1536 dimensions, that is over 600 GB before graph overhead, which forces a beefy machine and tight memory tuning. Recall at extreme scale lags dedicated vector databases that ship more recent algorithms (DiskANN, IVF-PQ).
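The memory arithmetic as a quick sanity check (a sketch that assumes 4-byte float storage and ignores graph overhead entirely):

# Raw vector bytes for 100M x 1536-dim float4 embeddings, before HNSW graph overhead.
dims, count, bytes_per_float = 1536, 100_000_000, 4
print(f"{dims * count * bytes_per_float / 1e9:.0f} GB")  # -> 614 GB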
The rule of thumb: pgvector handles up to roughly 10M vectors comfortably on a single Postgres instance, up to maybe 50M with careful tuning and parallel index builds. Beyond that, the operational simplicity wins are eaten by the index pain, and you should consider migrating.
Best for: any team that is already running Postgres, has fewer than 10M vectors, and values operational simplicity over peak performance. Most early-stage Indian SaaS startups land here by default and stay until product-market fit forces a re-evaluation.
Pinecone: zero-ops, pay per query
Pinecone is a fully managed cloud vector database. There is nothing to install. You sign up, get an API key, and start POSTing vectors. The system auto-scales, auto-shards, handles replication, and exposes one HTTPS endpoint. No nodes to manage, no indexes to tune, no version upgrades to schedule.
from pinecone import Pinecone
pc = Pinecone(api_key="...")
index = pc.Index("products")
index.upsert(vectors=[("p1", embedding, {"category": "shoes", "price": 4999})])
results = index.query(vector=query_embedding, top_k=10, filter={"price": {"$lt": 5000}})
That is the entire integration. You do not know or care what algorithm Pinecone uses internally (it is a proprietary HNSW variant called "graph-based" in their docs, with serverless and pod-based deployment modes). You do not tune efSearch or M. The system has done that for you, and it is consistently in the top quartile of recall-vs-latency on public benchmarks.
The advantages are deployment-model advantages. Pinecone genuinely is zero ops — there is no on-call rotation for "the vector database is down" because if it is down, Pinecone's on-call rotation handles it. Scaling from 100K to 100M vectors is a configuration change, not a project. Latency is consistently low because Pinecone runs on a global edge network and you can pick the region closest to your application.
The disadvantages are deployment-model disadvantages too. Vendor lock-in — Pinecone is closed source, runs only on Pinecone's infrastructure, and exposes a proprietary API. Migrating away is a real project: you have to re-export every vector and rewrite every query. Cost — Pinecone bills per million queries and per GB of vector storage. At low volume it is cheap; at production volume it can quickly cost more per month than running your own Qdrant cluster. A typical mid-stage Indian SaaS doing 50M queries a month over 50M vectors might pay USD 2-5K monthly for Pinecone versus USD 500-1500 to run Qdrant on managed Kubernetes. Data residency — for industries with strict data residency rules (Indian fintech under RBI guidelines, healthcare under DPDP), the data leaves your VPC and lives in Pinecone's, which may be a compliance non-starter.
Best for: production applications that need to ship fast, do not have ops capacity, are not cost-sensitive at the scale they are at, and are not blocked by data residency. Stripe-like consumer-facing products with predictable revenue per query are the typical fit.
Weaviate: open source, feature-rich, self-hosted or managed
Weaviate is the kitchen-sink open-source vector database. Written in Go (not Java — that was a misconception in some early reviews), shipped as a Docker image and Helm chart, with a managed Cloud option (Weaviate Cloud) for teams that want zero ops. It implements HNSW with PQ compression, but the differentiator is everything around the index.
Modular embedders. Weaviate ships built-in modules for OpenAI, Cohere, HuggingFace, Vertex AI, and local models. You POST raw text or an image, Weaviate calls the embedder for you, stores the resulting vector, and indexes it. You never manage embedding model code yourself. This is convenient until you need a model that is not in the module list, at which point you fall back to passing pre-computed vectors yourself.
Hybrid search as a first-class query mode. Weaviate computes BM25 over a built-in text index alongside the vector search and fuses the two with a configurable alpha parameter (and supports RRF too). This means you do not need a separate Elasticsearch deployment for hybrid retrieval — Weaviate covers both halves of the hybrid pipeline in one system.
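A minimal sketch with the v4 Python client, assuming a running Weaviate instance with a Product collection already defined (the collection name and alpha value here are illustrative):

import weaviate

client = weaviate.connect_to_local()  # adjust for Weaviate Cloud
products = client.collections.get("Product")
# alpha=0.0 is pure BM25, alpha=1.0 is pure vector; 0.5 weights the two equally.
response = products.query.hybrid(query="oversized cotton tee", alpha=0.5, limit=10)
for obj in response.objects:
    print(obj.properties)
client.close()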
Multi-tenancy. Weaviate's tenants are isolated index shards within a class — you can have a million tenants in one cluster, each with its own index, each loaded into memory only when queried. For B2B SaaS where each customer has their own vector data, this is the correct primitive (versus pgvector's "one HNSW index per table" model that does not isolate well across tenants).
Replication and sharding. Weaviate replicates data across nodes, with a Raft-based consensus protocol for schema changes (introduced in 2024 with the v1.25 release; data replication is configured per class). Sharding is native — you set the desired shard count in a class's sharding configuration and Weaviate distributes vectors across the shards.
GraphQL API. Queries look like:
{
  Get {
    Product(
      nearVector: { vector: [...] }
      where: { path: ["category"], operator: Equal, valueText: "shoes" }
      limit: 10
    ) {
      name
      price
    }
  }
}
Some teams love this; some find it overhead compared to a flat REST endpoint. Weaviate also exposes REST and gRPC APIs.
Disadvantages. More concepts to learn (classes, modules, tenants, shards, vectorisers, generators) than Qdrant or Pinecone. Higher memory baseline for the same vector count because of the modular runtime. Slightly more complex Helm chart. Slower than Qdrant on raw HNSW benchmarks (5-15% latency disadvantage at the same recall), though usually fast enough.
Best for: teams that want feature-rich self-hosted (hybrid search built-in, multi-tenancy, replication) and have the operational capacity to run a stateful Go service in Kubernetes. B2B SaaS companies serving many tenants are the canonical fit.
Qdrant: fast, lightweight, Rust
Qdrant is the lightweight open-source counterpoint to Weaviate. Written in Rust, shipped as a single binary or Docker image, with a managed Qdrant Cloud option. Implements HNSW with optional product quantization and scalar quantization, with a particularly strong filter-aware HNSW implementation that builds payload indexes alongside the graph and prunes the graph traversal by filter at search time (the algorithm covered in chapter 156).
Speed and footprint. Rust gives Qdrant a real performance advantage — typical benchmarks show 20-40% lower p99 latency than Weaviate or Elasticsearch at the same recall, and substantially lower memory baseline because there is no JVM or Go runtime overhead. A single Qdrant node on a 16-core machine handles tens of millions of vectors with sub-10ms p50 search latency.
API surface. Both REST and gRPC, with a clean schema:
from qdrant_client import QdrantClient
from qdrant_client.models import PointStruct, Filter, FieldCondition, Range

client = QdrantClient(url="http://qdrant:6333")
# Point IDs must be unsigned integers or UUIDs, not arbitrary strings like "p1".
client.upsert(collection_name="products", points=[PointStruct(id=1, vector=embedding, payload={"category": "shoes", "price": 4999})])
hits = client.search(collection_name="products", query_vector=query_embedding, query_filter=Filter(must=[FieldCondition(key="price", range=Range(lt=5000))]), limit=10)
Filter-aware HNSW. This is Qdrant's standout feature — when a filter is moderately selective (cuts the candidate set by 10x to 100x), Qdrant's HNSW respects the filter during graph traversal rather than post-filtering after, giving substantially higher recall on filtered queries than HNSW implementations that filter purely post-hoc.
Disadvantages. Less feature-rich than Weaviate — no built-in embedder modules (you compute embeddings yourself and pass vectors), hybrid search exists but is less polished, the multi-tenancy story is "one collection per tenant" which works but is less elegant than Weaviate's tenant primitive at a million-tenant scale. Newer ecosystem, fewer SaaS integrations, smaller community than pgvector or Pinecone (though growing rapidly).
Best for: teams that want self-hosted, value performance and operational simplicity over feature breadth, and are comfortable shipping pre-computed vectors. The Indian fintech / e-commerce middle-stage default in 2026.
The feature matrix
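Condensing the four sections above into one view:

System   | Deployment               | Hybrid search                | Multi-tenancy           | Built-in embedders | Comfortable scale
pgvector | inside existing Postgres | via tsvector                 | one index per table     | no                 | ~10M vectors
Pinecone | fully managed cloud only | built-in                     | -                       | no                 | 100M+
Weaviate | self-hosted or managed   | first-class (alpha, RRF)     | native tenant primitive | yes                | 100M+
Qdrant   | self-hosted or managed   | native sparse vectors (1.7+) | collection per tenant   | no                 | tens of millions per node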
The decision tree
The decision tree is mechanical. Walk it from the top.
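Q1. Already running Postgres, and will you stay under roughly 10M vectors? YES → pgvector. Stop. NO → Q2.
Q2. Do you want fully managed, can you absorb per-query pricing, and is data residency not a blocker? YES → Pinecone. Stop. NO → Q3.
Q3. Do you need hybrid search, multi-tenancy, and replication built into one self-hosted system? YES → Weaviate. Stop. NO → Q4.
Q4. You need self-hosted, fast, and lightweight → Qdrant.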
Why this ordering: each question is "is the cheapest option still adequate?" Q1 asks if pgvector — the option that adds zero new operational surface — is enough. If your scale and feature needs fit, take it; you save the most operational and cognitive cost. Only when pgvector is not enough do you add a new system. Q2 then asks if you can offload to a vendor (Pinecone) — paying money to remove ops is the next-cheapest move if you can afford it. Only when you cannot afford Pinecone or cannot use it (data residency, lock-in concerns) do you commit to running infrastructure yourself, and at that point Q3 vs Q4 is just "which open-source system fits your feature needs". The tree is ordered by total cost (operational + monetary + cognitive), not by raw performance.
The other names you will see
The four above cover most production decisions. A few others come up in design docs and deserve a sentence each.
Milvus — Kubernetes-native, designed for billion-scale, splits storage and compute, integrates with FAISS as a backend. The right answer when you need to run a vector database at billion-scale on your own infrastructure and have a platform team that operates Kubernetes well. Heavy operational footprint (etcd, MinIO or S3, Pulsar or Kafka, multiple service tiers); not the right answer for a 10-engineer startup.
Vespa — Yahoo's mature search and recommendation engine, open source, supports vectors as one feature among many. The right answer when you need a single system that does ANN, BM25, structured queries, ranking, and ML feature serving in one place at production scale. Steep learning curve; small community outside Yahoo's diaspora.
Elasticsearch / OpenSearch — Elasticsearch has a dense_vector field type with HNSW since version 8.x, and OpenSearch offers the equivalent knn_vector field. The right answer if you are already running Elasticsearch for logs or text search and want to add vectors without introducing a new system. Lower vector recall than dedicated vector databases at the same parameters; the integration is the win.
MongoDB Atlas Vector Search — vectors as a query stage in Atlas. The right answer if you are already on Atlas for document storage and want vectors close to your documents. Closed source, Atlas-only.
Chroma — in-process Python vector store with a small client-server mode. The right answer for development, notebooks, and prototypes. Not the right answer for production at any meaningful scale.
Worked example: an Indian e-commerce company at three stages
The decision is not one-time. The right answer migrates as the company grows. Walking one company through three stages makes this concrete.
Three stages of an Indian D2C e-commerce platform
A direct-to-consumer fashion brand based in Bengaluru is building search and recommendations powered by vector embeddings of product descriptions, images, and user behaviour. Three snapshots, two years apart.
Stage 1 — early stage, 100K products, 5 engineers, pre-Series A. Postgres on AWS RDS holds all the operational data: products, orders, users, reviews. Search is the next feature. The team needs to embed each product description with text-embedding-3-small (1536-dim) and serve nearest-neighbour search for "/products?q=oversized cotton tee".
The decision: pgvector. Walk the tree: Q1 — already running Postgres, 100K vectors (well under 10M)? YES. Stop.
ALTER TABLE products ADD COLUMN embedding vector(1536);
CREATE INDEX ON products USING hnsw (embedding vector_cosine_ops) WITH (m = 16, ef_construction = 64);
A search query joins the vector index with the existing structured filter:
SELECT id, name, price FROM products
WHERE in_stock = true AND price BETWEEN 500 AND 3000
ORDER BY embedding <=> $1 LIMIT 20;
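One tuning caveat belongs next to that query: pgvector's HNSW applies the WHERE clause after the graph scan, so a selective filter can leave too few survivors in the candidate list. Raising the scan width restores recall at some latency cost (the setting below exists in pgvector 0.5+; 100 is an illustrative value, the default is 40):

-- Widen the HNSW candidate list so more rows survive the post-filter.
SET hnsw.ef_search = 100;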
One database, one connection pool, one set of dashboards. The team adds vector search in two days, not two months. Cost: zero incremental — runs on the existing RDS instance.
Stage 2 — growth stage, 5M products + product variants + user-behaviour vectors, 25 engineers, Series B. Catalogue exploded after onboarding 50 brands as a marketplace. User-behaviour vectors (one per active user, recomputed nightly) added another 8M vectors. Total: 13M vectors. Search latency on pgvector p99 has crept above 200ms. HNSW index builds take 4 hours and lock writes during REINDEX. The team now has a small platform group (3 engineers).
The decision: split the workload. Walk the tree again. Q1 — can pgvector still cope? NO: 13M vectors and growing, with p99 above 200ms. Q2 — fully managed? NO: the team has platform capacity now and wants cost predictability, so it prefers self-hosted. Q3 — need hybrid + multi-tenant + replicated? Hybrid yes (BM25 for SKU lookups + vector for semantic), multi-tenant no (single tenant), replicated yes for HA. A mixed signal, not a clear YES. Q4 — fast and lightweight? YES — the team values operational simplicity, and a Rust binary is easy to deploy on their EKS cluster.
The choice: Qdrant for the vector workload, kept alongside Postgres (vectors move out, structured data stays in Postgres). Three Qdrant nodes on EKS with replication factor 2. p99 drops to 25ms, index builds drop to 20 minutes per shard, and the platform team is happy with the operational profile. A managed alternative considered: Pinecone, which would have been faster to ship but priced at roughly USD 4K/month at their query volume versus USD 1.2K/month for the EKS nodes. They picked Qdrant on cost.
Stage 3 — late stage, 100M products + variant SKUs + image vectors + multi-tenancy across the marketplace, 200 engineers, pre-IPO. The marketplace now has 500 seller tenants, each with their own catalogue and isolated search. Each tenant wants to tune their own retrieval (some prefer vector-heavy, some prefer BM25-heavy). Image vectors added a second collection. Hybrid retrieval is required — power users do BM25-style SKU lookups, casual users do semantic queries.
The decision: walk the tree. Q1 — way past pgvector. Q2 — managed not viable at this scale (cost projects to USD 30K/month; data residency required for some seller PII anyway). Q3 — hybrid + multi-tenant + replicated? YES on all three. Stop.
The choice: Weaviate, with multi-tenancy as the killer feature. Each seller is a Weaviate tenant — their vectors are isolated, loaded into memory only when their catalogue is queried, and their hybrid alpha is configurable per tenant. Replication is configured per class for HA. Hybrid search as a first-class query mode means no separate Elasticsearch deployment.
A second consideration: Milvus for the image-vector collection alone, where billion-scale storage on disk via DiskANN matters more than feature richness. The team runs both — Weaviate for product text + multi-tenancy, Milvus for image vectors at billion-scale — accepting the operational overhead because they have the platform capacity to do so.
The lesson: the company used three different vector databases over three years. None of those decisions was wrong at the time it was made. pgvector was the right answer at 100K vectors; it would have been the wrong answer at 100M. Pre-committing to "the production-grade vector database" at stage 1 would have wasted six months of platform engineering before product-market fit. Sticking with pgvector at stage 3 would have crashed the search system every weekend.
The right vector database is the one that matches your stage. Plan for migration; do not pretend you will never need it.
The Indian context
Most Indian SaaS startups land on pgvector first because Postgres is the default operational database and adding the extension is free. The migration path most commonly seen in 2026 is pgvector → Qdrant, taken when vector counts exceed roughly 10M and the team has built up enough platform capacity to run a stateful Rust binary in EKS or GKE. Pinecone shows up in two distinct cohorts: very early-stage teams that want to ship in a week and have not committed to any infrastructure choices, and US-headquartered Indian teams selling to US enterprise customers where the data residency constraint runs the other way. Weaviate shows up in B2B SaaS specifically — Razorpay-adjacent fintech, Freshworks-style customer-data platforms — where multi-tenancy is the primary requirement. Milvus is rare outside large platform teams (Flipkart, Jio, large banks), where the operational cost is amortised across many products.
The cost calculus matters more in India than in the US because dollar-denominated SaaS pricing converts to rupees at a punishing multiplier. A Pinecone bill of USD 4K/month is INR 3.4 lakh — roughly the fully-loaded cost of one mid-level engineer. Self-hosting Qdrant on a single EC2 m6i.4xlarge instance for INR 60-80K/month replaces that bill at the cost of one half-day per week of engineering attention, which is the right trade for most Indian startups at Series A through Series C.
Going deeper
What follows are the questions that come up after the basic choice is made — production tuning, migration paths, and the second-order considerations that decide whether your vector database is a quiet success or a recurring incident.
Cost models compared at concrete numbers
Take a workload of 50M vectors of 1536 dimensions, 100 queries per second average with bursts to 1000 QPS, 20% growth per quarter. Approximate 2026 pricing.
pgvector: one r6i.4xlarge RDS instance at roughly USD 950/month plus storage at USD 200/month for the vector data. Total INR 1 lakh/month. But: this workload is past the pgvector comfort zone, so this is hypothetical.
Pinecone: serverless pricing at this scale runs roughly USD 4 per million read-units, with each query consuming around 5 read-units → about USD 0.02 per 1000 queries. At 100 QPS average (roughly 260M queries a month) that is roughly USD 5K/month for queries plus USD 1K/month for storage. INR 5 lakh/month total.
Qdrant Cloud: a 3-node cluster with 32GB RAM each runs roughly USD 1500/month. Self-hosted on EC2 (3 × r6i.2xlarge): roughly USD 1100/month. INR 1-1.3 lakh/month.
Weaviate Cloud: comparable cluster runs roughly USD 1800/month. Self-hosted: roughly USD 1400/month. INR 1.2-1.5 lakh/month.
The factor of 4-5x cost difference between Pinecone and self-hosted Qdrant or Weaviate is real and dominates almost every other consideration past the initial-shipping stage.
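The arithmetic behind those numbers is worth rerunning with your own workload. A minimal sketch, where every price is the assumption stated above rather than a quote:

# Back-of-envelope monthly cost for the 50M-vector, 100 QPS workload.
qps = 100
queries_per_month = qps * 30 * 24 * 3600                 # ~259M queries

# Pinecone serverless: assumed USD 4 per million read-units, ~5 RU per query.
pinecone = queries_per_month * 5 / 1e6 * 4.0 + 1000      # queries + assumed storage

for name, monthly_usd in [("Qdrant self-hosted", 1100), ("Weaviate self-hosted", 1400)]:
    print(f"Pinecone ~USD {pinecone:,.0f} vs {name} ~USD {monthly_usd:,.0f}: "
          f"{pinecone / monthly_usd:.1f}x")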
Migration paths and dual-writing
When you migrate from pgvector to Qdrant or from anywhere to anywhere else, the standard pattern is dual-write: write to both the old and the new system for a transition period (typically two weeks), have the read path try the new system first and fall back to the old, monitor recall and latency on the new system, then cut over reads, then decommission the old write path. This requires a small embedding pipeline change (usually 50 lines of code) and a feature flag for the read path. Plan for two weeks of dual-cost during the transition.
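A sketch of the pattern; every helper name here is hypothetical glue, not a library API:

# Dual-write: every vector write goes to both the old and the new store.
def upsert_product(product_id: int, vector: list[float], payload: dict) -> None:
    pgvector_upsert(product_id, vector, payload)   # old path (hypothetical helper)
    qdrant_upsert(product_id, vector, payload)     # new path (hypothetical helper)

# Feature-flagged read path: try the new system first, fall back to the old.
def search(query_vector: list[float], limit: int = 10) -> list[dict]:
    if flag_enabled("search_via_qdrant"):          # hypothetical feature flag
        try:
            return qdrant_search(query_vector, limit)
        except Exception:
            log_fallback("qdrant_search_failed")   # watch this rate before cutover
    return pgvector_search(query_vector, limit)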
Hybrid retrieval across systems
If you adopt Qdrant or pgvector, you may still want BM25 for the lexical half of hybrid retrieval. Postgres has tsvector for BM25-style full-text search, and Qdrant integrates with an external Elasticsearch or runs its own native sparse-vector mode (added in 1.7). Weaviate has hybrid built-in. Pinecone has hybrid built-in via its alpha parameter. If hybrid is core to your product, factor it into the decision — Weaviate and Pinecone reduce the integration burden significantly.
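If you do run the two halves in separate systems, the fusion step is small enough to own yourself. Reciprocal rank fusion is the usual glue; a sketch (k=60 is the conventional constant from the RRF literature):

from collections import defaultdict

def rrf_fuse(bm25_ids: list[str], vector_ids: list[str], k: int = 60) -> list[str]:
    # Each document scores the sum of reciprocal ranks across both result lists.
    scores: defaultdict[str, float] = defaultdict(float)
    for ranking in (bm25_ids, vector_ids):
        for rank, doc_id in enumerate(ranking):
            scores[doc_id] += 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)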
Filter-aware index quality
Filter-aware HNSW (the algorithm in chapter 156) is implemented differently across systems. Qdrant has the most thorough implementation — payload indexes integrated with HNSW traversal. Weaviate has good filter-aware HNSW. Pinecone's behaviour is opaque (proprietary). pgvector's filter handling is post-filter for HNSW (which can wreck recall for selective filters); IVFFlat handles filters slightly better but at a recall ceiling. If your queries are heavily filtered, Qdrant's filter-aware HNSW is genuinely better than the alternatives.
When to introduce a re-ranker
None of these systems ships a great cross-encoder re-ranker out of the box (Cohere Rerank, BGE-reranker, and Jina Reranker are usually run separately). The re-ranker sits downstream of whichever vector database you chose and is independent of the choice. Plan to add it when retrieval recall is good but ranking quality is poor — typically after the first version ships and you have query logs to evaluate against.
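A sketch of that downstream step using an open-weight cross-encoder via the sentence-transformers package (the model name is one of the rerankers mentioned above; candidates come from whichever vector database you chose):

from sentence_transformers import CrossEncoder

reranker = CrossEncoder("BAAI/bge-reranker-base")

def rerank(query: str, candidates: list[str], top_k: int = 10) -> list[str]:
    # The cross-encoder scores each (query, candidate) pair jointly, unlike bi-encoders.
    scores = reranker.predict([(query, c) for c in candidates])
    ranked = sorted(zip(scores, candidates), key=lambda pair: pair[0], reverse=True)
    return [c for _, c in ranked[:top_k]]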
References
- pgvector: Open-source vector similarity search for Postgres — the canonical extension, README and indexing notes.
- Pinecone documentation — serverless and pod-based deployment, pricing, hybrid mode reference.
- Weaviate documentation — multi-tenancy, hybrid search, replication, modules.
- Qdrant documentation — filter-aware HNSW, quantization options, REST/gRPC APIs.
- Milvus documentation — Kubernetes-native architecture, billion-scale guidance, FAISS backend.
- ANN-Benchmarks — reproducible recall-vs-latency comparisons across HNSW, IVF, and PQ implementations across systems.