The second time, we had a reporting system that eventually stored billions of rows per day in a Postgres database. Processing times got so bad that we decided to migrate to ClickHouse, which made queries substantially faster. I maintain that we hadn't exhausted all available optimisations for Postgres, but I can't deny that the migration made sense in the long run - OLTP vs OLAP and all that.
(The first time is a funny story that I'm not quite ready to share.)
But seriously though, even if you use Postgres, as a former DBA (DB2 and Oracle) I would have tuned the OLTP database very differently from the OLAP database. And I don't just mean indexes: even during ETL from OLTP to OLAP you might decide to de-normalize columns on the OLAP side simply to speed up queries. OLAP databases are the sort of database you were warned about, where indexes can be 10x the data size.
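To make the de-normalization point concrete, here's a minimal sketch of that ETL step using sqlite3 stand-ins for both sides; the schema and names (users, events, events_flat) are made up for illustration, not taken from the system discussed above.

```python
# Sketch: de-normalizing a dimension onto the fact rows during OLTP -> OLAP ETL.
# sqlite3 stands in for both databases; all table/column names are hypothetical.
import sqlite3

oltp = sqlite3.connect(":memory:")   # normalized OLTP source
olap = sqlite3.connect(":memory:")   # OLAP target

# Normalized OLTP schema: the fact table only holds a foreign key.
oltp.executescript("""
CREATE TABLE users  (id INTEGER PRIMARY KEY, country TEXT, plan TEXT);
CREATE TABLE events (id INTEGER PRIMARY KEY, user_id INTEGER, kind TEXT, ts TEXT);
INSERT INTO users  VALUES (1, 'NZ', 'pro'), (2, 'UK', 'free');
INSERT INTO events VALUES (10, 1, 'login', '2024-01-01'),
                          (11, 2, 'click', '2024-01-01');
""")

# De-normalized OLAP schema: country and plan are copied onto every event row,
# so common reports can group and filter without joining back to users.
olap.execute("""
CREATE TABLE events_flat (
    event_id INTEGER, kind TEXT, ts TEXT, country TEXT, plan TEXT
)""")

# The ETL step does the join once, at load time, instead of on every query.
rows = oltp.execute("""
    SELECT e.id, e.kind, e.ts, u.country, u.plan
    FROM events e JOIN users u ON u.id = e.user_id
""").fetchall()
olap.executemany("INSERT INTO events_flat VALUES (?, ?, ?, ?, ?)", rows)

# Report queries now hit a single wide table.
print(olap.execute(
    "SELECT country, COUNT(*) FROM events_flat GROUP BY country"
).fetchall())
```

You pay for the duplicated columns in storage (and in whatever indexes you put on them), which is part of why OLAP-side indexes can balloon relative to the raw data.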
If you were storing billions of rows per day in MySQL, SQL Server, or Oracle, they still wouldn't be able to handle it, would they?
That said, no regrets about using Postgres there. If we had started with ClickHouse, the project might not have launched as quickly as it did, and that would have caused us more problems.