Not something I'd want on my personal system, but it's exactly the sort of thing that I think every NOC/Secure environment should have for post-mortem assessments.
Tools of that sort [1][2] are pretty standard in call center environments.
I'm not a fan.
First, the software tends to be incredibly expensive. Second, in my experience, it's primarily used by managers looking for reasons to bludgeon their $30K, entry-level call takers over trivial infractions.
They'd sit employees down and play back fast-forwarded video showing how much time was wasted on Facebook, personal email, shopping, etc. It's horribly invasive, but it meant everyone was too scared to use work computers for personal things.
I don't think they ever looked at the recordings unless they wanted to fire someone without paying unemployment and needed proof that the person wasn't doing their job.
shudder
As for processing, you need to find a new computer if you can notice a screenshot being taken.
The runbooks that NOC teams follow quite often have them connecting to a lot of systems with greatly heightened privileges - it's not unusual for a NOC employee to have expansive sudo privileges on many of the unix hosts they manage. They are also often on privileged VLANs, with direct IP routing to a lot of hosts that normally wouldn't be reachable.
Most of our NOC guys have their own personal laptops, and they can hop onto the (unprivileged) wireless system and do their own thing when they aren't working an incident.
I'd have no problem having my screen captured once a minute when I was working in that type of environment.
Anything with lots of confidential information, or anything financial, and you are going to want to monitor all the people with access constantly. You may not want to snoop real-time, but you are going to want to be able to find and fix breaches after the fact, and do root-cause analysis.
It's not a matter of trust in the IT people, it's a matter of people go crazy sometimes, and people make bad hiring decisions sometimes.
Hacked versions of common utilities are a common payload for rootkits.
You should still run the scans, just be aware of the limitations.
Once you've lost root to a sufficiently competent attacker you can't trust _anything_ on that box any more.
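The scan-with-limitations idea boils down to comparing what's on disk against a known-good baseline captured before compromise and stored off-box. A minimal sketch in Python (the manifest format and `verify` function here are illustrative, not any particular tool's API - and note the caveat above still applies: if the kernel is lying about file contents, even this can be fooled):

```python
import hashlib

def hash_file(path):
    """SHA-256 of a file, read in chunks so large binaries don't blow memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(manifest):
    """manifest: {path: expected_sha256}, captured at install time and
    kept somewhere the box can't write to. Returns the paths whose
    contents changed or that vanished entirely."""
    bad = []
    for path, expected in manifest.items():
        try:
            if hash_file(path) != expected:
                bad.append(path)
        except FileNotFoundError:
            bad.append(path)
    return bad
```

On a real system you'd run the comparison from trusted media (see the next comment) rather than with the box's own Python, but the logic is the same as what tools like AIDE or Tripwire do.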
One thing that'll help against (some) script kiddies with rootkits is to have available statically linked copies of every tool you might want to use to see what's happening on your box - for a long while, every colo-ed box I managed had a cdrom drive with read-only versions of /bin, /sbin, and useful bits of /usr, all with the binaries statically linked. They came in handy a few times (mostly to confirm that "yep, we're screwed. Get this box off the network and powered down immediately, and implement the bring-up-a-new-server-from-scratch plan right now").
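As an aside, you can check whether a binary really is statically linked without trusting on-box tools like `ldd` (which a rootkit could also replace): a dynamically linked ELF carries a PT_INTERP program header naming its runtime interpreter; a static one doesn't. A rough sketch, assuming little-endian ELF and using the header offsets from the ELF spec:

```python
import struct

PT_INTERP = 3  # program header type for the dynamic-linker path

def is_static_elf(data: bytes) -> bool:
    """True if this ELF image has no PT_INTERP segment, i.e. it needs
    no dynamic linker. Assumes little-endian; sketch, not a full parser."""
    if data[:4] != b"\x7fELF":
        raise ValueError("not an ELF file")
    if data[4] == 2:  # 64-bit
        e_phoff, = struct.unpack_from("<Q", data, 32)
        e_phentsize, e_phnum = struct.unpack_from("<HH", data, 54)
    else:             # 32-bit
        e_phoff, = struct.unpack_from("<I", data, 28)
        e_phentsize, e_phnum = struct.unpack_from("<HH", data, 42)
    for i in range(e_phnum):
        p_type, = struct.unpack_from("<I", data, e_phoff + i * e_phentsize)
        if p_type == PT_INTERP:
            return False
    return True
```

`is_static_elf(open("/bin/ls", "rb").read())` will report False on most distros, since stock coreutils are dynamically linked against glibc.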
At some stage though, you can't trust the kernel or the hardware - an attacker who's got into your booted kernel, your BIOS, or your network card firmware, if they're good enough, probably can't be detected by examining anything you could see while logged into the box itself. The only way to identify that level of attack is by monitoring the traffic from the box with some trusted piece of network gear upstream of your rooted server (and against a sufficiently talented attacker, even identifying unexpected outbound traffic might be impossible. If your list of likely attackers includes three letter agencies or nation states, I hope you're getting your security advice from somewhere other than HN comments…)
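The upstream-monitoring idea is basically: export flows from a switch or tap you still trust, then diff the destinations the box actually talks to against the set it's supposed to talk to. A toy sketch of the comparison step (the flow-tuple format is made up for illustration; in practice you'd feed it parsed NetFlow/sFlow records):

```python
def unexpected_flows(flows, expected):
    """flows: iterable of (src_ip, dst_ip, dst_port) tuples observed
    upstream of the suspect host.
    expected: set of (dst_ip, dst_port) pairs the host legitimately uses.
    Returns flows headed anywhere else - candidates for exfil or C2."""
    return [f for f in flows if (f[1], f[2]) not in expected]
```

The hard part isn't this filter, it's maintaining an `expected` set tight enough that the output is reviewable - and, as noted above, a patient attacker can tunnel inside traffic that looks expected.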
Nobody should lose any sleep about BIOS embedding and similar - that level of attacker and sponsor imply a level of threat that no typical organization has a chance against.
In my opinion, after years of pondering dozens of intrusions - many different ways in, and regular failure of all kinds of defenses - there isn't much advice to give aside from "the flaw is in your custom software, stupid."
I have become a really big advocate of configuration management and push-button provisioning: identical replacement hosts that build from scratch and get refreshed regularly, relying on code and configuration managed centrally.
The best way to remove an attacker is a complete rebuild. If you're already using Chef etc., why not just dump and rebuild hosts proactively? Some roles don't lend themselves to this, but I assure you it feels like a massive weight has been lifted.
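One way to make "dump them proactively" concrete is to track when each host was last built from scratch and cycle the stalest ones through a rebuild on a schedule. A hypothetical sketch (the selection logic only - the actual rebuild hook would shell out to your Chef/Terraform/etc. pipeline):

```python
import time

MAX_AGE = 30 * 24 * 3600  # rebuild anything older than 30 days

def hosts_to_rebuild(build_times, now=None, max_age=MAX_AGE):
    """build_times: {hostname: unix timestamp of last from-scratch build}.
    Returns hosts due for a proactive rebuild, stalest first, so any
    long-lived compromise gets evicted within one rotation period."""
    now = time.time() if now is None else now
    stale = [h for h, t in build_times.items() if now - t > max_age]
    return sorted(stale, key=lambda h: build_times[h])
```

Run it from cron, rebuild one or two hosts per day, and no box in the fleet ever gets old enough to accumulate hand-made state - or a persistent attacker.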
So this happened to me once, but it wasn't ps or netstat that had been replaced - it was sshd. We only noticed because the replacement sshd was broken and didn't set up the PATH correctly, so "darcs push" stopped working (it was presumably built for a different linux distro than the one we were running).
I think the article is quick to jump to the conclusion that he was attempting to be malicious with his actions; however, this could be a case of Hanlon's razor.
His actions could easily be attributed to a less-than-aware sysadmin developing his own solution to get around often arduous security restrictions. Stupid, yes. Malicious, no.
Malicious, yes. Stupid, absolutely.
That said, it's still likely that this guy is just a douche with a bad attitude, and deserves everything he has coming. Big difference between this, and "stealing" a bunch of reports that were government funded, and open to any and all users on the school network they were accessed from.
It would be quite a lucrative stance for the employee to sell access to these servers to one or more groups who could potentially make more use of them.
They would be more valuable for bitcoin mining most likely.
Wait, wait, just because the guy's in jail is no reason not to return voicemail and emails!
> Given the rapid discovery, the malware was on Hostgator systems for less than a month.
Then yes, he did. If the malware was on there for more than a few days, I find it extremely unlikely that no data was compromised.