One hiccup was that when the query cardinality estimator got confused, it would occasionally ignore the partition prefix and fall back to a full scan somewhere, inflating the work done by a factor of roughly 2,000. This caused dramatic slowdowns at random, and the DB engine would often cache the inefficient query plan, so things stayed slow until the server was restarted.
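A minimal sketch of the failure mode, using a hypothetical `Items` table keyed by a `LibraryId` prefix (table and column names are illustrative, not from the original system):

```sql
-- With the partition prefix in the predicate, the optimizer can seek
-- on the clustered index and touch only one library's rows:
SELECT Title
FROM Items
WHERE LibraryId = @lib AND Title = @t;

-- If the cardinality estimate goes wrong, the optimizer may choose a
-- scan of the whole table instead, and the plan cache then reuses that
-- scan plan for every subsequent execution. Two blunt remedies:

DBCC FREEPROCCACHE;          -- evict all cached plans (heavy-handed)

SELECT Title
FROM Items
WHERE LibraryId = @lib AND Title = @t
OPTION (RECOMPILE);          -- or recompile this statement every time
```

`OPTION (RECOMPILE)` trades per-call compile cost for immunity to a stale cached plan, which is often the pragmatic fix when one parameter value poisons the cache for all the others.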
This is a very deep rabbit hole to go down. For example, many large cloud vendors have an Iron Rule that relational databases must never be used, precisely because they worry about this issue occurring at a vastly greater scale.
I could have used actual database partitioning, but I discovered it had undesirable side-effects for some cross-library queries. For typical queries, though, it would have "enforced" the use of the partitioning key, side-stepping the problem the cloud vendors worry about.
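For reference, native partitioning in SQL Server looks roughly like this; the `LibraryId` column and boundary values are hypothetical stand-ins:

```sql
-- Map ranges of LibraryId values to partitions:
CREATE PARTITION FUNCTION pfLibrary (int)
    AS RANGE RIGHT FOR VALUES (100, 200, 300);

CREATE PARTITION SCHEME psLibrary
    AS PARTITION pfLibrary ALL TO ([PRIMARY]);

CREATE TABLE Items (
    LibraryId int NOT NULL,
    ItemId    int NOT NULL,
    Title     nvarchar(200),
    CONSTRAINT PK_Items PRIMARY KEY (LibraryId, ItemId)
) ON psLibrary (LibraryId);
```

Queries that filter on `LibraryId` benefit from partition elimination, but a cross-library query must touch every partition, which is the kind of side-effect mentioned above.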
Modern versions of SQL Server have all sorts of features to correct or avoid inefficient query plans. For example, Query Store can track the "good" and "bad" plans for a query and, after enough samples, start enforcing the good one. That would have been useful back in 2007, but it wasn't available, so I spent about a month doing the same thing by hand.
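A sketch of how that works today with Query Store (SQL Server 2016+); the specific `@query_id`/`@plan_id` values are placeholders you would read from the catalog views first:

```sql
-- Turn on plan tracking for the current database:
ALTER DATABASE CURRENT SET QUERY_STORE = ON;

-- Inspect the plans Query Store has collected for each query:
SELECT q.query_id, p.plan_id, p.is_forced_plan
FROM sys.query_store_query AS q
JOIN sys.query_store_plan  AS p ON p.query_id = q.query_id;

-- Pin a known-good plan so a bad estimate can't displace it:
EXEC sp_query_store_force_plan @query_id = 42, @plan_id = 7;
```

SQL Server 2017 and later can even do this automatically via `ALTER DATABASE ... SET AUTOMATIC_TUNING (FORCE_LAST_GOOD_PLAN = ON)`, which reverts to the last good plan when it detects a regression.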