Preamble
I tend to focus on tools that are already available and on newly released versions rather than writing about upcoming PostgreSQL features. However, this feature excited me and will undoubtedly be a huge relief in practical usage, particularly for beginners! After years of wishing for it, one of the features most frequently requested by everyday business users and application developers has finally landed in the repository. I had almost given up hope that we would ever see this day. Bravo to the author and reviewer!
Here is the full Git commit entry:

commit 6c3ffd697e2242f5497ea4b40fffc8f6f922ff60
Author: Stephen Frost <sfrost@snowman.net>
Date:   Mon Apr 5 13:42:52 2021 -0400

    Add the pg_read_all_data and pg_write_all_data roles...

    Reviewed-by: Georgios Kokolatos
    Discussion: https://postgr.es/m/20200828003023.GU29590@tamriel.snowman.net
Why is it a “killer feature”?
If everyone knew more or less exactly how their data model would look, could predict how it would be used and, more importantly, who would use it, they could design the access/privilege system accordingly, and no one would need such convenience functionality. But every decent-sized project, to put it mildly, has a healthy number of question marks hanging over it, because alas, we have to deal with the real world, where we frequently lack the time and resources to come up with perfect solutions.
It’s also true that we have survived without this new feature for a long time. But frequently we observe that database administrators are unable to anticipate and perfectly accommodate all future data access requirements, and they often fix the problem by handing out potentially dangerous superuser access, which comes at a high cost!
If you’re new(ish) to Postgres, you may wonder: how bad can it be? Our data analysts have all been superusers for years and nothing bad has happened to our data. Yes, I understand that most of the time nothing bad occurs… But even if you don’t hand out superuser access too freely, there will always be a “cloud of danger” hanging over your database. Even if your data is well backed up or simply not that important, a malicious superuser can take control of the entire database server by running arbitrary OS commands! And this is a feature, not a bug, my friends. From there, the next move might be breaking into the entire corporate network and stealing all the priceless assets, or locking you out of your own business, to paint an even darker picture.
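To make the risk concrete, here is a minimal sketch of my own (run as a superuser on a disposable test instance, not part of the original demo) showing how a documented feature like COPY ... FROM PROGRAM lets a superuser execute arbitrary shell commands on the database host:

# NB! Works only as superuser (or as a member of pg_execute_server_program)
CREATE TEMP TABLE os_info(line text);

# The command runs with the OS privileges of the server process itself
COPY os_info FROM PROGRAM 'id';

TABLE os_info;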
How does the new feature work?
Well, it couldn’t be much simpler – just hand out the required GRANT (pg_read_all_data or pg_write_all_data) to a trustworthy user of choice and voila!
Note that I’m only demonstrating the “read all” use case here, as this will be the more common one; a short sketch of the “write all” counterpart follows the example below. You still need to be a superuser to hand out the GRANT itself.
## NB! Assuming logged in as superuser on a devel build

# Let’s first create a test table
CREATE TABLE data (data jsonb);

# And a plain mortal login user called “bob”
CREATE USER bob;

# Let’s verify that Bob has no privileges on the “data” table.
# NB! This is a “psql” command, not SQL...
\dp+ data
                            Access privileges
 Schema | Name | Type  | Access privileges | Column privileges | Policies
--------+------+-------+-------------------+-------------------+----------
 public | data | table |                   |                   |
(1 row)

# Let’s add a test row
INSERT INTO data SELECT '{"hello": "world"}';

# And try to look at it as Bob
SET ROLE TO bob; /* acting as Bob from now on … */

SELECT * FROM data;
ERROR:  permission denied for table data

# Time to check out that new functionality…
RESET ROLE; /* back to superuser again */

GRANT pg_read_all_data TO bob;

# Let’s try again as Bob
SET ROLE TO bob;

# Voila...
SELECT * FROM data;
        data
--------------------
 {"hello": "world"}
(1 row)
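As promised, here is a minimal sketch of the “write all” counterpart as well. The user “carol” is a hypothetical one added just for this illustration; note that, per the documentation, pg_write_all_data covers INSERT, UPDATE and DELETE, but does not include SELECT:

# Still as superuser
CREATE USER carol;
GRANT pg_write_all_data TO carol;

SET ROLE TO carol;

# Writing now works...
INSERT INTO data SELECT '{"written": "by carol"}';

# ...but plain reading still does not – pg_write_all_data does not imply SELECT
SELECT * FROM data;
ERROR:  permission denied for table data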
We’re not there yet – workaround ideas
Given that the change has only just been committed and, regrettably, it will be many months before the next major version (v14) is released, how can you fulfill the “read all data” and “write all data” requirements with the tools already at hand? There are a few possibilities:
- Build up a proper role hierarchy and set up appropriate “default privileges” using the relatively unknown Postgres ALTER DEFAULT PRIVILEGES syntax. This is the cleanest and most highly recommended method. Basically, you could have an appdb_reader meta-role that automatically gets SELECT granted on every table that will be created in the future, and then just assign this role to a trustworthy/needed real (login) user. Keep in mind that default privileges apply only to objects created by the role that ran the ALTER DEFAULT PRIVILEGES command, unless a FOR ROLE clause is added. So something along the lines of:

CREATE ROLE appdb_reader;
CREATE ROLE data_analyst_role IN ROLE appdb_reader;
CREATE USER alice;
GRANT data_analyst_role TO alice;

-- NB! You need to do this for all schemas!
ALTER DEFAULT PRIVILEGES IN SCHEMA public
  GRANT SELECT ON TABLES TO appdb_reader;

CREATE TABLE data(id INT);

SET ROLE TO alice;
/* Should work automagically... */
TABLE data;
- If the role hierarchy concept seems too difficult to implement and the schema is static enough, we can also go for a quick fix – noting, however, that newly created tables will still require a separate grant:

GRANT SELECT ON ALL TABLES IN SCHEMA public TO appdb_reader;
- Something for the courageous: superuser access on replicas only! By this I mean that you set up a spare replica machine and block access to the real HA nodes at the pg_hba.conf level. That way, a malicious (or hacked) user won’t pose an OPSEC threat to the primary. Note that for heavy read queries you might also still need to tune some configuration parameters to avoid replication conflicts; a minimal sketch follows this list.
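As an illustration only – a minimal sketch of what that separation might look like. The analysts role, the addresses and the chosen parameter values are purely hypothetical; adjust them to your environment and weigh the trade-offs (hot_standby_feedback, for example, can cause extra bloat on the primary):

# pg_hba.conf on the primary / real HA nodes – reject the analysts role outright
host    all    analysts    0.0.0.0/0      reject

# pg_hba.conf on the dedicated reporting replica – let the analysts in
host    all    analysts    10.0.0.0/24    scram-sha-256

# postgresql.conf on the replica – reduce replication conflicts for long read queries
hot_standby_feedback = on             # primary retains rows still needed by standby queries
max_standby_streaming_delay = 15min   # allow standby queries to delay WAL apply longer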
Summary
There isn’t much more to say about this new feature, given how simple it is to use (assuming you don’t need to worry too much about some secret tables), so I’ll just leave you with one small warning: leaning on this feature too much could lead to an unfavorable database design. So use it with care and stick to the tried-and-true role system for more demanding projects. In an enterprise context, having more granular access rules at your disposal can mean the difference between handing out a simple GRANT and an expensive redesign of the whole database schema!
I hope this feature will result in far fewer unnecessary superuser roles being handed out. All in all, it’s yet another great reason to become an “elephant herder,” if you aren’t already.
About Enteros
Enteros offers a patented database performance management SaaS platform. It finds the root causes of complex database scalability and performance problems that affect business across a growing number of cloud, RDBMS, NoSQL, and machine learning database platforms.
The views expressed on this blog are those of the author and do not necessarily reflect the opinions of Enteros Inc. This blog may contain links to the content of third-party sites. By providing such links, Enteros Inc. does not adopt, guarantee, approve, or endorse the information, views, or products available on such sites.
Are you interested in writing for Enteros’ Blog? Please send us a pitch!