> These legacy systems will in many cases need to be migrated to new versions, substantially modified, or even rebuilt from the ground up, either because they are unsupported and therefore cannot be repurchased or restored, or because they simply will not operate on modern servers or with modern security controls.
> There is a clear lesson in ensuring the attack vector is reduced as much as possible by keeping infrastructure and applications current, with increased levels of lifecycle investment in technology infrastructure and security.
> Our reliance on legacy infrastructure is the primary contributor to the length of time that the Library will require to recover from the attack.
A lot of lines like the following also indicate to me that IT was increasingly involved in fighting fires and maintaining operational systems ("keeping the lights on") rather than deploying new infrastructure and automation, updating software, etc.
> Some of our older applications rely substantially on manual extract (...) which in a modern data management and reporting infrastructure would be encapsulated in secure, automated end-to-end workflows.
Modern business is IT. I know that I am preaching to the choir, but this sounds a lot like their IT was seen as a cost.
While I'm certain they are underfunded and overworked, this sounds like they had an internet accessible terminal server. I'd like to imagine IT screaming this is a bad idea but a suit somewhere saying they needed easy access for partners. I can only imagine how insecure the solution they replaced with this one was.
Some aspects sounded quite interesting, but these weren't places pushing the envelope in any aspect of technology. I'm sure they were running outdated software and configurations on everything, but IT was closing their tickets and meeting their SLAs. And with no disrespect, these people weren't necessarily disruptors looking to shake up and modernize the museums' infrastructure and take it into the future either, they just did their job to the best of their ability and went home at the end of the day.
To generalize, I find that this usually holds true in a lot of non-tech industries, where IT is generally seen as a burdensome cost as opposed to an enabler of the business.
> The Library utilises numerous trusted partners for software development, IT maintenance, and other forms of consultancy
> increasing complexity of managing their access was flagged as a risk.
> first detected unauthorised access to our network was identified at the Terminal Services server. This terminal server had been installed in February 2020 to facilitate efficient access for trusted external partners
Sadly their response seems to be using more cloud infrastructure and outsourcing more.
trusted != trustworthy
The essential lesson - that good IT and security people within your company cost money, and that it is worth paying for vigilance, loyalty and care - has not been heeded.
CYA - it stops being their management's fault if it's outsourced.
What’s unfortunate is that they flagged this vulnerability in 2022 and planned to review it in 2024 ???
Does it usually take this long to identify the impact on users? They mentioned they paid for identity protection for their staff and ex-staff as well.
Credit monitoring is usually offered as standard when a breach occurs, the UK is much less litigation friendly than the US so in the absence of any actual harm, that would discharge most of their obligations to protect you following an incident.
Price of everything and value of nothing. Outsource everything, underfund everything from systems renewal to staff salaries.
Take a look at the US DoD, NASA, etc. They love acronyms and complicated internal organisation structures just as much as the Brits do.
So refreshing.
Ouch.
I see a few comments indicating that connecting Microsoft Terminal Services (oddly, Microsoft isn't mentioned by name anywhere in the report?) to the internet was a wholly bad idea.
Aside: is the report using "Terminal Services" generically, or do they mean the server hasn't been updated since before 2009, when Terminal Services became Remote Desktop Services (RDS)?
Is there something inherently insecure about remote desktops, or is MS software here known to be particularly insecure, or ...? RDP can be enabled on MS Windows installs (I always make sure it's disabled) - is that more of a problem than one might imagine?
Do they say anywhere where the access was from (maybe only GCHQ knows that)? Presumably the firewall would only allow known connections - did they report on analysis of all the remote clients?
Exposing RDP to the Internet directly has been frowned upon because of the attack surface being presented, there's no two-factor "story" out of the box, and you're opened up to brute-force attempts on cruddy user passwords.
Older versions of the Microsoft Remote Desktop Protocol had a much larger attack surface than current versions. The current versions with Network Level Authentication (starting in Windows Vista/Server 2008) present a smaller attack surface. Older versions used "homegrown" Microsoft crypto, whereas current versions use TLS.
Disclosure: I made a FLOSS fail2ban-like tool for RDP many years ago[0]. I had a situation where I was forced to expose RDP to the Internet and I didn't like having it open w/o some protection against brute force attacks. This tool happens to still work in Server 2022 and will slow the velocity of brute force attacks. I still highly recommend not exposing RDP directly to the Internet anyway.
(The ts_block tool is missing some fairly essential functionality that I never got around to implementing. It works fine and is really easy to install but some things are sub-optimal.)
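For illustration, the core of any fail2ban-style approach is a sliding-window counter over failed-login events. This is a minimal Python sketch of that idea only - not the actual ts_block implementation - and the function name, thresholds, and event format here are my own assumptions:

```python
from collections import defaultdict, deque

# Hypothetical thresholds, chosen for illustration - not ts_block's defaults.
WINDOW_SECONDS = 300   # look-back window for counting failures
MAX_FAILURES = 5       # failures allowed inside the window before blocking

def failed_logins_to_block(events, window=WINDOW_SECONDS, limit=MAX_FAILURES):
    """Given an iterable of (timestamp, source_ip) failed-login events
    (timestamps in seconds, in ascending order), return the set of source
    IPs that reached `limit` failures within a `window`-second span."""
    recent = defaultdict(deque)  # ip -> timestamps of its recent failures
    blocked = set()
    for ts, ip in events:
        q = recent[ip]
        q.append(ts)
        # Drop failures that have fallen out of the sliding window.
        while q and ts - q[0] > window:
            q.popleft()
        if len(q) >= limit:
            blocked.add(ip)
    return blocked
```

In a real deployment the event stream would come from the Windows Security event log (failed logon events) and the blocked IPs would be pushed into a firewall rule; ts_block itself handles those integration parts.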
There is a huge difference between a port forward on port 3389, and publishing the gateway behind azure app proxy - the latter supporting mfa, account lockouts, and not actually requiring any open port to the internet. Much of the discussion online treats these as equal.
¯\_(ツ)_/¯
For a report from the British--and a library, no less--the lack of an Oxford comma concerns me.
> In common with other on-premise servers, this terminal server was protected by firewalls and virus software, but access was not subject to Multi-Factor Authentication (MFA).
"Jisc is the UK digital, data and technology agency focused on tertiary education, research and innovation."
State-owned quango asleep at the wheel. Unsurprising.
This used to be what we called JANET. Back in the day this was top banana and prestigious to work for like GCHQ etc.
I expect they've died from a thousand cuts under the Tories. Every university I've been in over the past 10 years has its ICT run by Microsoft, and it's absolute rubbish.
In this case, it looks like Jisc was basically turned into a charity in 2011, so technically they're not even state-owned anymore.
My ISP could do the same thing. How is that being asleep at the wheel?
No root cause. On other forums it is understood they were running a very old and unpatched VMware OS, which is simply embarrassing; everybody within their IT team should be fired immediately for gross negligence.
They can't inform people whose data has been compromised because they refuse to pay the ransom and have no other way to tell what was stolen. Farcical.
Their ability to rebuild in a timely manner was hampered by not having any spare servers, and presumably because all their server hardware was compromised and couldn't be used for restore.
It's bad that they don't know what was taken, but as for paying the ransom, I wouldn't do it either: first, because it's danegeld; second, because you're just exposing yourself to even further risk by accepting files from criminals; third, because as others said, it would be UK tax money.
At least they seem to have a plan moving forward that seems considered, though I think a lot of what they want to do is easier said than done effectively. I wish them the best of luck.
It said that. The terminal server entry point was completely scorched in the attack. Offsite remote logging (e.g. rsyslogd) would have helped.
That may be true, but by that standard about 90% of sysadmins, IT managers and even CISOs would be out of a job next week.
Most companies are just "getting by" and hoping it won't be them next.
We have a multi-national cybersecurity crisis due to decades of kicking the can down the road, excusing poor software engineering to allow unfettered commercial development, and destroying our education and training sectors.
Nobody ever worries about what would happen if all the grossly negligent doctors got fired. "Who will perform those procedures with a total disregard for safety?", said no one ever.
The IT team most likely begged for years for funds to upgrade their infrastructure, but did not receive any of it. Public institutions are already short on money, but education has it even worse.
If anyone is to blame, it is the last British governments, who have focused their attention on Brexit and Rwanda crap instead of providing services for the citizens.
As an organisation forming part of the UK State, they're not allowed to. Rightly, in my opinion.
I don't trust the library to actually assess my impact; companies that get hacked have a track record of dragging their feet on making things up to victims (Equifax).
They were following explicit government guidance, as promulgated by the National Cyber Security Centre (NCSC), which is the civvie offshoot of GCHQ.
> This report is a joke.
> No root cause. On other forums it is understood they were running a very old and unpatched VMware OS, which is simply embarrassing; everybody within their IT team should be fired immediately for gross negligence.
> They can't inform people whose data has been compromised because they refuse to pay the ransom and have no other way to tell what was stolen. Farcical.
> Their ability to rebuild in a timely manner was hampered by not having any spare servers, and presumably because all their server hardware was compromised and couldn't be used for restore.
That doesn't fit their claims on page 7 about reviewing the lost data and contacting affected users.