Preamble
Foreign key constraints are an important tool for keeping a database consistent and for documenting the relationships between tables.
What is frequently overlooked is that foreign keys need proper indexing to perform well.
This article explains which indexes a foreign key needs and how to find the ones that are missing.
Index at the target of a foreign key
In the following, I will call the table on which the foreign key constraint is defined “the source table” and the referenced table “the target table”.
The target table’s referenced columns must have a primary key or a unique constraint. In PostgreSQL, such constraints are implemented using unique indexes. As a result, a foreign key’s target side is automatically indexed.
This is required so that a foreign key always references exactly one row. As a side effect, the index can also be used to find the row in the target table that matches a given row in the source table.
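As a quick illustration, here is a minimal sketch (the table names parent and child are made up for this purpose): referencing a column that has no primary key or unique constraint is rejected outright, and adding the primary key creates the unique index that makes the reference possible.

CREATE TABLE parent (p_id integer NOT NULL);
CREATE TABLE child (p_id integer REFERENCES parent (p_id));
-- ERROR:  there is no unique constraint matching given keys for referenced table "parent"

ALTER TABLE parent ADD PRIMARY KEY (p_id);   -- implicitly creates a unique index (named parent_pkey by default)
CREATE TABLE child (p_id integer REFERENCES parent (p_id));   -- now succeeds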
Index at the source of a foreign key
In contrast to the target side, PostgreSQL does not require an index at the source of a foreign key.
However, such an index is very useful for finding all source rows that reference a given target row. You typically need that in two situations:
1. To find source rows that refer to one or more target rows, you perform a join between the two tables. If there is an index on the columns at the source, PostgreSQL can use an efficient nested loop join.
This is well known and pretty obvious.
2. You delete rows from the target table or modify its key columns.
PostgreSQL then has to verify that the foreign key constraint is still satisfied. It does so by searching the source table for rows that the data modification would leave orphaned. Without an index, this requires a sequential scan of the source table; the example below demonstrates both cases.
An example
Let’s create a target and source table:
-- to make the plans look simpler
SET max_parallel_workers_per_gather = 0;
-- to speed up CREATE INDEX
SET maintenance_work_mem = '512MB';

CREATE TABLE target (
   t_id integer NOT NULL,
   t_name text NOT NULL
);

INSERT INTO target (t_id, t_name)
   SELECT i, 'target ' || i
   FROM generate_series(1, 500001) AS i;

ALTER TABLE target ADD PRIMARY KEY (t_id);

CREATE INDEX ON target (t_name);

/* set hint bits and collect statistics */
VACUUM (ANALYZE) target;

CREATE TABLE source (
   s_id integer NOT NULL,
   t_id integer NOT NULL,
   s_name text NOT NULL
);

INSERT INTO source (s_id, t_id, s_name)
   SELECT i, (i - 1) % 500000 + 1, 'source ' || i
   FROM generate_series(1, 1000000) AS i;

ALTER TABLE source ADD PRIMARY KEY (s_id);

ALTER TABLE source ADD FOREIGN KEY (t_id) REFERENCES target;

/* set hint bits and collect statistics */
VACUUM (ANALYZE) source;
Query time without an index
Without an index on source.t_id, both looking up source rows via the join with target and deleting rows from target are unreasonably slow:
EXPLAIN (ANALYZE)
SELECT source.s_name
FROM source
   JOIN target USING (t_id)
WHERE target.t_name = 'target 42';

                          QUERY PLAN
----------------------------------------------------------------
 Hash Join  (cost=8.45..19003.47 rows=2 width=13)
            (actual time=0.150..360.920 rows=2 loops=1)
   Hash Cond: (source.t_id = target.t_id)
   ->  Seq Scan on source  (cost=0.00..16370.00 rows=1000000 width=17)
                           (actual time=0.011..155.964 rows=1000000 loops=1)
   ->  Hash  (cost=8.44..8.44 rows=1 width=4)
             (actual time=0.111..0.111 rows=1 loops=1)
         Buckets: 1024  Batches: 1  Memory Usage: 9kB
         ->  Index Scan using target_t_name_idx on target
                  (cost=0.42..8.44 rows=1 width=4)
                  (actual time=0.105..0.107 rows=1 loops=1)
               Index Cond: (t_name = 'target 42'::text)
 Planning time: 0.701 ms
 Execution time: 360.982 ms
(9 rows)

EXPLAIN (ANALYZE)
DELETE FROM target
WHERE target.t_name = 'target 500001';

                          QUERY PLAN
----------------------------------------------------------------
 Delete on target  (cost=0.42..8.44 rows=1 width=6)
                   (actual time=0.178..0.178 rows=0 loops=1)
   ->  Index Scan using target_t_name_idx on target
            (cost=0.42..8.44 rows=1 width=6)
            (actual time=0.107..0.109 rows=1 loops=1)
         Index Cond: (t_name = 'target 500001'::text)
 Planning time: 0.165 ms
 Trigger for constraint source_t_id_fkey: time=153.804 calls=1
 Execution time: 154.030 ms
(6 rows)
Query time with an index
After creating the appropriate index:
CREATE INDEX source_t_id_idx ON source (t_id);
the queries are as fast as they should be:
EXPLAIN (ANALYZE)
SELECT source.s_name
FROM source
   JOIN target USING (t_id)
WHERE target.t_name = 'target 42';

                          QUERY PLAN
----------------------------------------------------------------
 Nested Loop  (cost=0.85..19.89 rows=2 width=13)
              (actual time=0.068..0.076 rows=2 loops=1)
   ->  Index Scan using target_t_name_idx on target
            (cost=0.42..8.44 rows=1 width=4)
            (actual time=0.048..0.049 rows=1 loops=1)
         Index Cond: (t_name = 'target 42'::text)
   ->  Index Scan using source_t_id_idx on source
            (cost=0.42..11.43 rows=2 width=17)
            (actual time=0.013..0.019 rows=2 loops=1)
         Index Cond: (t_id = target.t_id)
 Planning time: 1.238 ms
 Execution time: 0.147 ms
(7 rows)

EXPLAIN (ANALYZE)
DELETE FROM target
WHERE target.t_name = 'target 500001';

                          QUERY PLAN
----------------------------------------------------------------
 Delete on target  (cost=0.42..8.44 rows=1 width=6)
                   (actual time=0.135..0.135 rows=0 loops=1)
   ->  Index Scan using target_t_name_idx on target
            (cost=0.42..8.44 rows=1 width=6)
            (actual time=0.091..0.094 rows=1 loops=1)
         Index Cond: (t_name = 'target 500001'::text)
 Planning time: 0.151 ms
 Trigger for constraint source_t_id_fkey: time=0.557 calls=1
 Execution time: 0.751 ms
(6 rows)
How to check for missing indexes?
The following query will list all the foreign key constraints in the database that don’t have an index on the source columns:
SELECT c.conrelid::regclass AS "table",
       /* list of key column names in order */
       string_agg(a.attname, ',' ORDER BY x.n) AS columns,
       pg_catalog.pg_size_pretty(
          pg_catalog.pg_relation_size(c.conrelid)
       ) AS size,
       c.conname AS constraint,
       c.confrelid::regclass AS referenced_table
FROM pg_catalog.pg_constraint c
   /* enumerated key column numbers per foreign key */
   CROSS JOIN LATERAL
      unnest(c.conkey) WITH ORDINALITY AS x(attnum, n)
   /* name for each key column */
   JOIN pg_catalog.pg_attribute a
      ON a.attnum = x.attnum
         AND a.attrelid = c.conrelid
WHERE NOT EXISTS
        /* is there a matching index for the constraint? */
        (SELECT 1 FROM pg_catalog.pg_index i
         WHERE i.indrelid = c.conrelid
           /* it must not be a partial index */
           AND i.indpred IS NULL
           /* the first index columns must be the same as the
              key columns, but order doesn't matter */
           AND (i.indkey::smallint[])[0:cardinality(c.conkey)-1]
               OPERATOR(pg_catalog.@>) c.conkey)
  AND c.contype = 'f'
GROUP BY c.conrelid, c.conname, c.confrelid
ORDER BY pg_catalog.pg_relation_size(c.conrelid) DESC;
The result is ordered by table size, so the tables where a missing index hurts most are listed first.
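If you want to go one step further, a variation of that query can generate the corresponding CREATE INDEX statements for you. The following is only a sketch of that idea: it assumes plain column names (wrap attname in quote_ident() if your identifiers need quoting), and you should review the generated statements before running them.

SELECT format(
          'CREATE INDEX ON %s (%s);',
          c.conrelid::regclass,
          string_agg(a.attname, ', ' ORDER BY x.n)
       ) AS create_index_statement
FROM pg_catalog.pg_constraint c
   /* enumerated key column numbers per foreign key */
   CROSS JOIN LATERAL
      unnest(c.conkey) WITH ORDINALITY AS x(attnum, n)
   /* name for each key column */
   JOIN pg_catalog.pg_attribute a
      ON a.attnum = x.attnum
         AND a.attrelid = c.conrelid
WHERE c.contype = 'f'
  AND NOT EXISTS
        /* same check for a matching index as in the query above */
        (SELECT 1 FROM pg_catalog.pg_index i
         WHERE i.indrelid = c.conrelid
           AND i.indpred IS NULL
           AND (i.indkey::smallint[])[0:cardinality(c.conkey)-1]
               OPERATOR(pg_catalog.@>) c.conkey)
GROUP BY c.conrelid, c.conname;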
Should I create indexes for all foreign keys?
You don’t need the index if the source table is small because a sequential scan will likely be less expensive than an index scan in that case.
Additionally, the index is unnecessary if you are certain that you will never use it for a join and that you won’t ever delete a row or change a key column in the target table.
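If in doubt, it is easy to check how big a source table currently is before deciding. Here is a quick sanity check using the source table from the example above (the row count is only the planner's estimate from the last ANALYZE):

SELECT pg_catalog.pg_size_pretty(
          pg_catalog.pg_relation_size('source')
       ) AS table_size,
       reltuples::bigint AS estimated_rows
FROM pg_catalog.pg_class
WHERE oid = 'source'::regclass;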
About Enteros
Enteros offers a patented database performance management SaaS platform. It finds the root causes of complex database scalability and performance problems that affect business across a growing number of cloud, RDBMS, NoSQL, and machine learning database platforms.
The views expressed on this blog are those of the author and do not necessarily reflect the opinions of Enteros Inc. This blog may contain links to the content of third-party sites. By providing such links, Enteros Inc. does not adopt, guarantee, approve, or endorse the information, views, or products available on such sites.