postgres=# SELECT
(pg_identify_object(s.classid,s.objid,s.objsubid)).*,
pg_catalog.pg_get_acl(s.classid,s.objid,s.objsubid) AS acl
FROM pg_catalog.pg_shdepend AS s
JOIN pg_catalog.pg_database AS d
ON d.datname = current_database() AND
d.oid = s.dbid
JOIN pg_catalog.pg_authid AS a
ON a.oid = s.refobjid AND
s.refclassid = 'pg_authid'::regclass
WHERE s.deptype = 'a';
-[ RECORD 1 ]-----------------------------------------
type | table
schema | public
name | testtab
identity | public.testtab
acl | {postgres=arwdDxtm/postgres,foo=r/postgres}
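For context, the `foo=r/postgres` entry in the acl above is what a plain GRANT produces: role foo holds SELECT (`r`), granted by postgres. A minimal sketch reproducing it (table and role names taken from the output above):

```sql
-- Reproduce the ACL shown above.
CREATE ROLE foo;
CREATE TABLE public.testtab (id int);
GRANT SELECT ON public.testtab TO foo;

-- pg_get_acl() (new in PostgreSQL 17) returns the aclitem[] for the
-- object identified by (classid, objid, objsubid); objsubid 0 means
-- the table as a whole rather than an individual column.
SELECT pg_catalog.pg_get_acl('pg_class'::regclass,
                             'public.testtab'::regclass, 0);
```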
What I really wanted to add was two new system views, pg_ownerships and pg_privileges [1]. pg_get_acl() was a dependency we needed to get in place first. In the end, I withdrew the patch that tried to add these views. If there is enough interest from users, I might consider picking the task back up and working out the remaining obstacles.

Do people here need pg_ownerships and/or pg_privileges?
[1] https://www.postgresql.org/message-id/flat/bbe7d1cb-0435-4ee...
All of this meant that as databases like Postgres keep adding cool new features, they mostly go unused: an ORM just doesn't let you pierce that layer of abstraction except by dropping to pure SQL, which is typically seen as a code smell and an annoyance to everyone involved.
So on the one hand I love that Postgres is getting amazing new features, not to mention all the cool extensions. On the other hand, I (and I suspect many others) am essentially locked out of them: since most ORMs try to serve multiple databases, they typically only include lowest-common-denominator features. As I get more experienced, I both see why an RDBMS is the right choice most of the time AND see the appeal of an object store instead of a row store.
Why would you ever force your db to multiply a value by 12 into another column, or parse a JSON path, if it's not for filtering?
Move that effort to your clients so you’re not needlessly consuming db resources.
Having computed (stored or virtual) columns would've been awesome.
The use case isn't really "multiply a value by 12", but more like "we have a single boolean is_active column, and want to migrate to a more extensive status model" or "migrate from an is_active column to a (begin, end) timestamp tuple" or so.
With a virtual column, you can present a read-only, compatible column to the legacy application, while the other applications can use the more detailed, new columns, without having to keep the legacy column in sync.
I agree that they're often a symptom of bad schema design or data normalisation though. Sometimes that can't be helped however.
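The migration described above can be sketched with PostgreSQL 18's generated columns; the table and column names here are hypothetical, chosen to match the is_active/status example:

```sql
-- Hypothetical schema: the new applications use the richer status
-- column; the legacy application keeps reading is_active, which is
-- a read-only VIRTUAL generated column computed on read.
CREATE TABLE account (
    id        bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    status    text NOT NULL DEFAULT 'active',
    is_active boolean GENERATED ALWAYS AS (status = 'active') VIRTUAL
);
```

Because the column is generated, the two representations can never drift apart, and the legacy column can be dropped once the old application is retired.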
I imagine the computed column could be indexed or materialized if needed.
The article mentions that "you cannot create indexes on VIRTUAL generated columns".
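When an index is needed, two alternatives remain; a sketch against a hypothetical orders table (the 1.2 multiplier is illustrative):

```sql
CREATE TABLE orders (
    id       bigint PRIMARY KEY,
    subtotal numeric NOT NULL,
    -- Indexable because it is STORED (materialized on write),
    -- unlike a VIRTUAL generated column:
    total numeric GENERATED ALWAYS AS (subtotal * 1.2) STORED
);
CREATE INDEX ON orders (total);

-- Alternatively, an expression index supports queries that repeat
-- the same expression, without materializing a column at all:
CREATE INDEX ON orders ((subtotal * 1.2));
```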
We use them to pull some fields out of JSON blobs, and until PG18 those columns were not available to logical replication consumers.
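A sketch of that pattern, with a hypothetical events table and payload key:

```sql
-- Pull a field out of a JSON blob into a plain column so consumers,
-- including logical replication subscribers, need not parse the blob.
-- STORED is used here; PostgreSQL 18 lets publications include
-- generated columns (e.g. via publish_generated_columns).
CREATE TABLE events (
    id         bigint PRIMARY KEY,
    payload    jsonb NOT NULL,
    event_type text GENERATED ALWAYS AS (payload ->> 'type') STORED
);
```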
PostgreSQL 18 Released https://news.ycombinator.com/item?id=45372283 - 3 days ago, 21 comments