I'm not saying that they shouldn't add security to their protocol, but I can think of several ways off the top of my head to keep it secure. The application-layer protocol doesn't have to be the one to implement it; network-level encapsulation can help you there.
I'm not sure how old the protocol is, but perhaps it was more important to get it working first, wrap it in a VPN, and then iterate on that design.
It looks as though the researchers saw that as a possibility too:
"It is possible to temporarily mitigate the flaw by implementing the following workaround: Researchers have demonstrated that ITP can be operated over TLS/DTLS, using certificate-based authentication to ensure the security and integrity of the protocol."
I don't really understand why this is only a "temporary mitigation", though, rather than a reasonable long-term solution. Can anyone enlighten me?
Maybe the extra technical complexity of setting up these certificates is deemed too great, and the likelihood of people getting it wrong too high?
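For what it's worth, the workaround the researchers describe (running the protocol over TLS with certificate-based authentication) isn't much code on its own. Here's a rough sketch using Python's stdlib `ssl` module; the function name and the certificate file paths are placeholders for whatever a real deployment would use:

```python
import ssl

def secure_context(cafile=None, certfile=None, keyfile=None):
    """Build a TLS server context that demands certificate authentication
    from the peer (mutual TLS). File paths are deployment-specific."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2   # refuse legacy TLS versions
    ctx.verify_mode = ssl.CERT_REQUIRED            # peer must present a valid cert
    if certfile and keyfile:
        ctx.load_cert_chain(certfile, keyfile)     # this endpoint's own identity
    if cafile:
        ctx.load_verify_locations(cafile=cafile)   # CA that signed the peer's cert
    return ctx

# The plaintext protocol socket would then be wrapped before any
# commands flow, e.g.:
#   tls_sock = secure_context(...).wrap_socket(raw_sock, server_side=True)
```

The hard part, as the comment above suggests, isn't the wrapping; it's provisioning and rotating the certificates on both ends without anyone misconfiguring them.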
Hopefully, we're still testing these surgeries on mice and not men.
It's not like surgery robots move around the network and come online at unexpected locations. Those installations are planned ahead, and the IT considerations are part of that deployment. Also, encryption is important, but so are available bandwidth and link quality: a jittery link could be as dangerous as a compromised one.
So in other words, somebody demonstrated that a preliminary protocol that admitted it didn't have any security was insecure. Woo!
That's a curious statement. How does encrypting video increase its bandwidth requirements?
have I gone back in time to 1995?