SELECT x ->> 'field1', x ->> 'field2', x ->> 'field3'
FROM table
and x is TOASTed, Postgres will deTOAST x three separate times. This multiplies the amount of data that has to be processed and dramatically slows things down.
My first attempt to fix this was to read the field in one query and use a subselect to pull out the individual fields. That attempt was thwarted by the Postgres optimizer, which inlined the subquery and still deTOASTed the field multiple times.
After a discussion in the Postgres IRC channel, RhodiumToad pointed out that adding OFFSET 0 to the end of the subquery prevents Postgres from inlining it. With that change I saw an order-of-magnitude improvement, since the redundant deTOASTing was eliminated.
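A minimal sketch of the pattern described above (the table and column names are placeholders, not from the original schema):

```sql
-- Naive version: each ->> operator deTOASTs the large jsonb value x again.
SELECT x ->> 'field1', x ->> 'field2', x ->> 'field3'
FROM my_table;

-- A plain subquery doesn't help: the planner inlines it and you're back
-- to the query above. OFFSET 0 acts as an optimization fence, so the
-- subquery survives and x only has to be deTOASTed once per row.
SELECT x ->> 'field1', x ->> 'field2', x ->> 'field3'
FROM (SELECT x FROM my_table OFFSET 0) AS sub;
```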
https://www.postgresql.org/docs/13/sql-altertable.html
This is useful if you have, for example, short character codes coming from some external system. The default is EXTENDED (external, compressed), but if you know the value is short or fixed-length, you can use MAIN (internal, compressed) or PLAIN (internal, uncompressed) to get the smallest possible stored tuple.
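Per the ALTER TABLE docs linked above, the storage strategy is set per column; for example (hypothetical table and column names):

```sql
-- Short external codes: keep them inline in the tuple.
ALTER TABLE orders ALTER COLUMN country_code SET STORAGE MAIN;   -- internal, compressed
ALTER TABLE orders ALTER COLUMN status_code  SET STORAGE PLAIN;  -- internal, uncompressed
```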
PostgreSQL doesn't do this: there is no storage difference between any of the character types. They are all stored the same way, and the type only serves to validate the data. See: https://www.postgresql.org/docs/current/datatype-character.h...
MySQL even has TINYTEXT for when you want to store small strings outside the table (for performance). With TINYTEXT, only a single byte is stored in the row and the rest externally, so I/O is reduced when doing a full table scan if you don't need to read the TINYTEXT column.
One key difference, though, is that SQL Server doesn't compress the off-page parts, as this article states Postgres does. In fact, even if the table is set to use the row or page compression option, off-page data is not compressed.
[1]. https://www.postgresql.eu/events/pgconfeu2019/sessions/sessi...
The latest versions still don't compress off-page data at all (with the exception of 2019+'s support for UTF-8 in place of the fixed two-bytes-per-character string types, which acts as a form of compression), though there are workarounds using triggers, backing tables, and the [DE]COMPRESS functions if you really need LOB compression and don't need features like full-text search.
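A rough sketch of that workaround in T-SQL (table and column names are illustrative), using the built-in COMPRESS/DECOMPRESS functions available since SQL Server 2016, which gzip a value into VARBINARY(MAX):

```sql
-- Store the compressed bytes instead of the raw LOB.
CREATE TABLE dbo.Documents (
    Id       INT IDENTITY PRIMARY KEY,
    BodyGzip VARBINARY(MAX) NULL  -- holds COMPRESS(original text)
);

-- Writing: compress on the way in (via app code or a trigger).
INSERT INTO dbo.Documents (BodyGzip)
VALUES (COMPRESS(N'...large document text...'));

-- Reading: decompress and cast back to the original type.
SELECT CAST(DECOMPRESS(BodyGzip) AS NVARCHAR(MAX)) AS Body
FROM dbo.Documents;
```

Note that storing the column this way trades away full-text search and direct string predicates on the LOB, as the comment above mentions.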
Pretty sure I even have some targets to try this on.