So I actually store the rows as serialized JSON objects in my Redis cache, and when a write occurs, I update the JSON representation in the cache (if the row isn't cached yet, I read it from the DB and cache it first). After X hours, a daemon goes through all expired keys, deserializes the JSON, and executes a transaction, updating 500 objects at a time. To prevent race conditions, a per-row mutex ensures only one thread can modify the cached copy of that row at a time.
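A minimal sketch of that write-behind flow, using a plain dict with per-row `threading.Lock`s as a stand-in for Redis (the `db_read`/`execute_batch` helpers, the `_dirty` field, and the key naming are all illustrative assumptions, not the actual implementation):

```python
import json
import threading
from collections import defaultdict

# Dict stand-in for the Redis cache, plus one lock per row id.
cache = {}
row_locks = defaultdict(threading.Lock)
BATCH_SIZE = 500

def db_read(row_id):
    # Placeholder for a real SELECT against the DB.
    return {"id": row_id, "name": "old", "_dirty": []}

def write(row_id, field, value):
    """Apply a write to the cached JSON copy of the row (write-behind)."""
    with row_locks[row_id]:                 # only one thread may touch this row
        key = f"row:{row_id}"
        if key not in cache:
            cache[key] = json.dumps(db_read(row_id))   # cache miss: load from DB
        obj = json.loads(cache[key])
        obj[field] = value
        obj.setdefault("_dirty", [])
        if field not in obj["_dirty"]:
            obj["_dirty"].append(field)     # remember which fields changed
        cache[key] = json.dumps(obj)

def flush(execute_batch):
    """Daemon pass: drain cached rows and hand them to the DB in batches."""
    batch = []
    for key in list(cache):
        with row_locks[key.split(":", 1)[1]]:
            batch.append(json.loads(cache.pop(key)))
        if len(batch) == BATCH_SIZE:
            execute_batch(batch)            # one transaction per 500 objects
            batch = []
    if batch:
        execute_batch(batch)
```

In the real setup the daemon would select keys by expiry rather than draining everything, and `execute_batch` would wrap the 500 updates in a single DB transaction.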
Each object has a field that keeps track of which fields were updated, so I can construct an UPDATE query from it.
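Building the statement from that bookkeeping field might look like this (the `_dirty` field name, `%s` placeholder style, and table/column names are assumptions for illustration):

```python
def build_update(table, obj):
    """Construct a parameterized UPDATE touching only the dirty fields."""
    fields = [f for f in obj.get("_dirty", []) if f != "id"]
    if not fields:
        return None, ()                     # nothing changed, skip the row
    assignments = ", ".join(f"{f} = %s" for f in fields)
    sql = f"UPDATE {table} SET {assignments} WHERE id = %s"
    params = tuple(obj[f] for f in fields) + (obj["id"],)
    return sql, params
```

The values go out as bound parameters; only the dirty columns appear in the SET clause, so an untouched column can never clobber a concurrent change made elsewhere.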
When I read a row, I first check the write-behind cache; if the row is there, I serve it from the cache (it may hold writes that haven't been flushed to the DB yet), otherwise I fall back to the DB.
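The read path is then just a cache lookup with a DB fallback. A sketch, again with a dict standing in for Redis and a hypothetical `db_read` callback:

```python
import json

def read(cache, row_id, db_read):
    """Serve the cached JSON copy if present, otherwise hit the DB."""
    cached = cache.get(f"row:{row_id}")
    if cached is not None:
        return json.loads(cached)   # may include not-yet-flushed writes
    return db_read(row_id)          # cold read straight from the DB
```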
Of course, this means that all writes must go through my cache. No other process may update that table without also writing to the cache, or the DB and cache become inconsistent.
This reduced the load on my DB server by more than ten-fold, and similarly cut the indexing time on my Elasticsearch server.