At that point, we were forced by our contracts, data protection laws, and a CEO aware of both, to shut the affected production system down. We stopped all services, configured our hoster's firewalls to accept traffic only from our office, and that was it while we figured out wtf happened. Those measures reduced things back to a known state. If someone in our office is hostile... well, that's another issue.
After a bit of analysis, we figured out which IPs were attacking us and blacklisted them on the firewalls of the other production systems. Eventually it turned out to be a pentest no one had told us about.
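For illustration only (this is a generic sketch, not our actual tooling, and the IPs are documentation addresses from TEST-NET ranges): blacklisting a handful of source IPs typically boils down to generating one drop rule per address. Printed here rather than applied, since applying needs root and our real firewall was managed at the hoster:

```shell
# Hypothetical attacker IPs (RFC 5737 documentation ranges)
for ip in 203.0.113.7 198.51.100.23; do
  # Emit the rule instead of running it -- dry run
  echo "iptables -A INPUT -s $ip -j DROP"
done
```

The same loop with `echo` removed (and root privileges) would apply the rules directly.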
If the attack had moved into those other systems, we'd have had to extend the nuclear solution to them too. At that point, we'd have locked out some 30k+ FTE users. I think we'd have made national news for our customers with that. Except... not good news.
This was made even more ridiculous by said manager backpedaling really, really hard after we contacted the pen-testing company as well as the customer's senior management. However, all attempts at reinstating the system were swiftly blocked by the customer's security policies and security teams. So the system stayed down for a solid amount of time.
Afterwards, the customer insisted on us participating in their security workflows for that system, under their security team's control. And from their company's point of view, this had been an external hostile attack -- since the manager didn't tell anyone.
Following that thought, it is entirely possible the whole point of the hack is to discredit Twitter and the bitcoin bit is just smoke.
It should become very apparent how this was done, given the correct levels of logging. Unless of course Twitter's backend firefighting relies on hasty tooling that writes directly to production tables with no oversight (which also sounds like a possibility)...