Timestamp is fundamentally flawed and should only be used in applications without any kind of performance/efficiency concerns, or by people who really need a range of ten thousand years. The problem is that Timestamp should not have used variable-length integers for its fields. The fractional part of a point in time is uniformly distributed, so the nanos field takes either 4 or 5 bytes on the wire, meaning int32 is worse than fixed32 on average. The whole part of an epoch offset in seconds is also pretty large: it takes 5 bytes to represent the present time. Since you also have two field tags, Timestamp requires 11-12 bytes to represent the current time, and it's expensive to decode because it takes the slowest possible path through the varint decoder.
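To make that arithmetic concrete, here's a small Go sketch that computes the varint-encoded size of the two Timestamp fields for the current wall-clock time. It isn't tied to any particular protobuf library; `varintSize` is a hypothetical helper that just mirrors the standard 7-bits-per-byte varint encoding.

```go
package main

import (
	"fmt"
	"time"
)

// varintSize returns how many bytes protobuf's varint encoding uses
// for a nonnegative value: 7 payload bits per byte.
func varintSize(v uint64) int {
	n := 1
	for v >= 0x80 {
		v >>= 7
		n++
	}
	return n
}

func main() {
	now := time.Now()
	seconds := uint64(now.Unix())     // whole part of the epoch offset: 5 bytes today
	nanos := uint64(now.Nanosecond()) // fractional part, 0..999,999,999: 4 or 5 bytes

	// Each field also carries a 1-byte tag (field numbers 1 and 2),
	// so the total comes out to 11 or 12 bytes.
	total := 1 + varintSize(seconds) + 1 + varintSize(nanos)
	fmt.Printf("seconds: %d bytes, nanos: %d bytes, total: %d bytes\n",
		varintSize(seconds), varintSize(nanos), total)
}
```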
Reasonable people can use a fixed64 field representing nanoseconds since the Unix epoch, which is very fast to decode, takes 9 bytes including the field tag, and yields a range of 584 years, which isn't bad at all.
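Here's a sketch of what that looks like on the wire, assuming a hypothetical message with a single field `fixed64 unix_nanos = 1;`. The tag byte and field name are my own illustration, not an existing schema; the point is that the encoded value is always 9 bytes and decoding is a single fixed-width load rather than a varint loop.

```go
package main

import (
	"encoding/binary"
	"fmt"
	"time"
)

// Hypothetical field `fixed64 unix_nanos = 1;`:
// tag = (field number 1 << 3) | wire type 1 (64-bit) = 0x09.
const unixNanosTag = 0x09

// encodeUnixNanos writes the tag plus 8 little-endian bytes: 9 bytes total.
func encodeUnixNanos(t time.Time) []byte {
	buf := make([]byte, 9)
	buf[0] = unixNanosTag
	binary.LittleEndian.PutUint64(buf[1:], uint64(t.UnixNano()))
	return buf
}

// decodeUnixNanos reads the value back with one fixed-width load.
func decodeUnixNanos(buf []byte) time.Time {
	n := binary.LittleEndian.Uint64(buf[1:9])
	return time.Unix(0, int64(n))
}

func main() {
	b := encodeUnixNanos(time.Now())
	fmt.Printf("%d bytes on the wire, round-trips to %v\n", len(b), decodeUnixNanos(b))
}
```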