Preamble
I tend to focus on tools already available and newly released versions rather than writing about upcoming PostgreSQL features. However, this feature excited me, and it will undoubtedly be a huge relief in practical usage, particularly for beginners! After years of wishing for it, one of the features most frequently requested by everyday business users and application developers has finally landed in the repository. I had almost given up hope that we would ever see this day. Bravo to the reviewer and author!
Here is the entire Git commit entry:

commit 6c3ffd697e2242f5497ea4b40fffc8f6f922ff60
Author: Stephen Frost <sfrost@snowman.net>
Date:   Mon Apr 5 13:42:52 2021 -0400

    Add the pg_read_all_data and pg_write_all_data roles...

    Reviewed-by: Georgios Kokolatos
    Discussion: https://postgr.es/m/20200828003023.GU29590@tamriel.snowman.net
Why is it a “killer feature”?
If everyone knew more or less exactly how their data model would look, could predict how it would be used and, more importantly, who would use it, and could design the access/privilege system accordingly, then no one would need such convenience functionality. But every decent-sized project, to put it mildly, has a healthy number of question marks hanging over it because, alas, we have to deal with the real world, where we frequently lack the time and resources to come up with perfect solutions.
It’s also true, of course, that we have survived without this new feature for a long time. But frequently we see that database administrators simply cannot anticipate and perfectly accommodate every future data access requirement, and all too often they “fix” the problem by handing out potentially harmful superuser access, which comes at a high cost!
If you’re new(ish) to Postgres, you may wonder: how bad can it be? Our data analysts have all been superusers for years, and has anything bad ever happened to our data? Yes, I understand that most of the time nothing bad occurs… But even if you don’t hand out superuser access too liberally, there will always be a “cloud of danger” hanging over your database. Even if your data is well backed up or simply not that important, a malicious superuser can take control of the entire database server by running arbitrary OS commands! And this is a feature, not a bug, my friends. From there, the next move might be breaking into the entire corporate network and stealing all your priceless assets, or, to paint an even darker picture, locking you out of your own business.
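To make the “arbitrary OS commands” point concrete, here is a minimal sketch of what a superuser can do with nothing but plain SQL, using COPY ... FROM PROGRAM (the os_output table is just an illustrative name):

-- Logged in as a superuser: COPY ... FROM PROGRAM executes an arbitrary
-- shell command on the database host and reads its output
CREATE TEMP TABLE os_output (line text);
COPY os_output FROM PROGRAM 'id';  -- any OS command would work here
TABLE os_output;                   -- shows which OS user Postgres runs as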
How does the new feature work?
Well, it couldn’t be much simpler – just hand out the required GRANT (pg_read_all_data or pg_write_all_data) to a trustworthy user of choice and voila!
Note that I’m mainly demonstrating the “read all” use-case here, as it will be the more common one (a short “write all” sketch follows the example). To hand out the GRANT, you yourself still need to be a superuser.
## NB! Assuming logged in as superuser on a devel build

# Let’s first create a test table
CREATE TABLE data (data jsonb);

# And a plain mortal login user called “bob”
CREATE USER bob;

# Let’s verify that Bob has no privileges to the “data” table.
# NB! This is a “psql” command, not SQL...
\dp+ data
                            Access privileges
 Schema | Name | Type  | Access privileges | Column privileges | Policies
--------+------+-------+-------------------+-------------------+----------
 public | data | table |                   |                   |
(1 row)

# Let’s add a test row
INSERT INTO data SELECT '{"hello": "world"}';

# And try to look at it as Bob
SET ROLE TO bob; /* acting as Bob from now on … */

SELECT * FROM data;
ERROR:  permission denied for table data

# Time to check out that new functionality…
RESET ROLE; /* back to superuser again */

GRANT pg_read_all_data TO bob;

# Let’s try again as Bob
SET ROLE TO bob;

# Voila...
SELECT * FROM data;
        data
--------------------
 {"hello": "world"}
(1 row)
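For completeness, the “write all” side works the same way. Here is a minimal sketch, reusing the bob user and data table from above (keep in mind that pg_write_all_data grants INSERT, UPDATE and DELETE, but not SELECT):

RESET ROLE;  -- back to superuser
GRANT pg_write_all_data TO bob;

SET ROLE TO bob;
-- Bob can now write without any explicit table-level grant...
INSERT INTO data SELECT '{"written": "by bob"}';
-- ...but note that pg_write_all_data alone does not include read access;
-- for that, Bob still needs pg_read_all_data or an explicit GRANT SELECT.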
We’re not there yet – workaround ideas
The change has been committed, but regrettably it will still be many months before the next major version (v14) is released. So how can you fulfill the “read all data” and “write all data” requirements with the tools at hand in the meantime? There are a few possibilities:
- Build up a proper role hierarchy, and set up appropriate “default privileges” using the relatively unknown Postgres ALTER DEFAULT PRIVILEGES syntax. This is the cleanest and most highly recommended method. Basically, you could have an appdb_reader meta-role that gets a SELECT automatically granted for every table that will be created in the future, and then you could just assign this role to a trustworthy real (login) user who needs it. So something along the lines of:

CREATE ROLE appdb_reader;
CREATE ROLE data_analyst_role IN ROLE appdb_reader;
CREATE USER alice;
GRANT data_analyst_role TO alice;

-- NB! You need to do this for all schemas!
ALTER DEFAULT PRIVILEGES IN SCHEMA public
  GRANT SELECT ON TABLES TO appdb_reader;

CREATE TABLE data(id INT);

SET ROLE TO alice;
/* Should work automagically... */
TABLE data;
- If the role hierarchy concept seems too difficult to implement and the schema is static enough, we can also go for a quick fix, noting, however, that new tables will still require a separate grant:
GRANT SELECT ON ALL TABLES IN SCHEMA public TO appdb_reader;
- Something for the courageous: superuser access on replicas only! Here I mean that you really create a spare replica machine and block access to the real HA nodes at the pg_hba.conf level. That way, a malicious (or hacked) user won’t pose an OPSEC threat. Note that for heavy read queries you might still need to tune some configuration parameters to avoid replication conflicts. A rough sketch of this idea follows below.
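To illustrate that last idea, here is a minimal, purely hypothetical sketch; the data_analyst_role name is borrowed from the earlier example, and the exact rules and parameter values will of course depend on your setup:

# pg_hba.conf on the real (primary / HA) nodes: reject the analyst role
# before any broader "allow" rules can match it
host    all    data_analyst_role    all    reject

# postgresql.conf on the dedicated reporting replica: reduce replication
# conflicts caused by long-running read queries (pick what fits your needs)
hot_standby_feedback = on
max_standby_streaming_delay = 15min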
Summary
There isn’t much more to say about this new feature, given how simple it is to use (assuming you don’t need to worry too much about some secret tables), so I’ll just leave you with one small warning: relying on it too heavily could lead to an unfavorable database design. So use it with care, and stick to the tried-and-true role system for more demanding projects. In an enterprise context, having more granular access control at your disposal can mean the difference between handing out a simple GRANT and an expensive redesign of the whole database schema!
I hope this feature will result in far fewer unnecessary superuser roles being handed out. All in all, it’s yet another great reason to become an “elephant herder,” if you aren’t already.
About Enteros
Enteros offers a patented database performance management SaaS platform. It finds the root causes of complex database scalability and performance problems that affect business across a growing number of cloud, RDBMS, NoSQL, and machine learning database platforms.
The views expressed on this blog are those of the author and do not necessarily reflect the opinions of Enteros Inc. This blog may contain links to the content of third-party sites. By providing such links, Enteros Inc. does not adopt, guarantee, approve, or endorse the information, views, or products available on such sites.