"Q. So how did you get yourself started into submersible operations?
A. Well, I'm sure you're familiar with my film Titanic. When I set down the path to make that film, the first thing that I did was arrange to be introduced to the head of the submersible program at the P.P. Shirshov Institute in Moscow, a guy named..."
https://media.defense.gov/2025/Sep/17/2003800984/-1/-1/0/CG-...
https://data.ntsb.gov/Docket/Document/docBLOB?ID=17236880&Fi...
It appears that all the engineers -- the system designer, the materials engineer, and the structural analyst -- thought that OceanGate's CEO was going to kill himself:
> If you ever find <name-of-the-engineer>, he’s not going to have a whole lot of nice to say. He was very frustrated with the company. (...) And I understand why. He thought Stockton was going to kill himself.
And the director himself declined to dive on Titan when asked:

> Now, the question is, why wouldn’t the engineer get inside his own vehicle? It was because of what I felt -- and I have a background in Navy diving in EOD operations. I knew firsthand that the operations group was not the right group for that role, and I told him as much, that I don’t trust operations and who he has there.

An anecdotal personal story, as it aligns with this exact statement; no one got killed in my case, but data breaches certainly occurred.
Many years ago now I was approached to join the board of a financial technology company, and they spared no expense in literally rolling out the red carpet for my arrival. I found it all very laughable; I was solely focused on the business and the technical details and was not being fooled by all the schmoozing. After hearing all the unrealistic business objectives and the promise of having the Philadelphia Flyers involved, I asked to meet the technology team that built the product and to see a demo.

They brought in the one young guy who had built it all -- the executives still present, mind you -- and allowed me to ask any and all questions about a platform that nearly no one in management comprehended. After seeing a demo that involved several blatant security issues, I asked only one more question of the sole developer: "Would you put your own financial information into this system?"
He gave his answer in front of the company's executive board, and I can still see their reactions to this very day. I then stood up, thanked everyone for the opportunity, and left.
> A certain agitator, for privacy's sake, let's call her Lisa S. No, that's too obvious, let's say L. Simpson.
Lisa the Vegetarian
He didn't mince his words either; he was extremely critical of the whole thing before and after the disaster.
They found a couple of images, but:

> No data with a timestamp after May 16th was found on the camera, so it is likely that none of the data recorded on the SD Card were of the accident voyage or dive.
After all that work... If you're interested in data recovery, you will enjoy reading this report: about 10 pages, clearly written. On the technical side, they mention they didn't see a LUKS header on the card, so they figured it was a custom dm-crypt setup.
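For the curious, the distinction the report draws is easy to reproduce: a LUKS volume announces itself with magic bytes at offset 0, while plain dm-crypt has no header at all and is indistinguishable from random data. A minimal sketch (the helper names and stand-in files are mine, not from the report):

```python
import os
import tempfile

LUKS_MAGIC = b"LUKS\xba\xbe"  # magic bytes at offset 0 of a LUKS header

def looks_like_luks(path: str) -> bool:
    """The check the report implies: is the LUKS on-disk magic present?"""
    with open(path, "rb") as f:
        return f.read(6) == LUKS_MAGIC

def write_temp(data: bytes) -> str:
    """Create a small stand-in 'card image' file and return its path."""
    fd, path = tempfile.mkstemp()
    with os.fdopen(fd, "wb") as f:
        f.write(data)
    return path

luks_like = write_temp(LUKS_MAGIC + b"\x00" * 500)  # header present
dm_crypt_like = write_temp(bytes(range(256)) * 2)   # no header: plain dm-crypt looks like this

has_header = looks_like_luks(luks_like)      # True
no_header = looks_like_luks(dm_crypt_like)   # False -- hence the "custom dm-crypt" inference
for p in (luks_like, dm_crypt_like):
    os.unlink(p)
```

Absence of the magic doesn't prove dm-crypt, of course; it just rules out a standard LUKS container.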
Evidently the camera data was being recorded to an external SSD in the mission computer when the accident occurred.
The investigation team actually managed to salvage the PC as well:
https://data.ntsb.gov/Docket/Document/docBLOB?ID=19169363&Fi...
Sadly it turned into a compressed ball of metal...
> To conduct the CT scans, the large mass was evaluated by a third-party laboratory under NTSB supervision. This facility had a range of scanners with different power and energy levels and could scan large masses using a rotating table, avoiding the need to rotate the mass itself. Ultimately, the third-party laboratory attempted to image the large mass at a power as high as 320 kilovolts (kV). The scans conducted at 320 kV were not powerful enough to penetrate the object, and as a result, no internal structures or voids were visible, and no memory devices could be identified. The NTSB evaluated using another laboratory with higher power and energy CT scan devices, however, there was concern that increased CT scan energy could damage data stored on any surviving NVM chips. Consequently, higher-energy scans were not pursued.
I'm no expert, but I remember reading about neutron imaging [1]. I'm curious whether that was deemed unfeasible, too expensive, or unlikely to succeed. From Wikipedia:
> X-rays are attenuated based on a material's density. Denser materials will stop more X-rays. With neutrons, a material's likelihood of attenuation of neutrons is not related to its density. Some light materials such as boron will absorb neutrons while hydrogen will generally scatter neutrons, and many commonly used metals allow most neutrons to pass through them.
[1] https://en.wikipedia.org/wiki/Neutron_imaging#Neutron_radiog...
That truly is one of those “let God sort them out” situations.
I guess they decided it wasn't worth pursuing.
Unfortunately this situation is likely to get more common in the future as the "security" crowd keeps pushing for encryption-by-default with no regard for whether the user wants it or is even aware of it.
Encryption is always a tradeoff: it trades the possibility of unauthorised access against the possibility of even the owner losing access permanently. IMHO this tradeoff needs careful consideration, not blind application.
Sure, sure buddy, I'll encrypt all of my PII data so nobody can access it... including the web application server.
Okay, fine, I'll decrypt it on the fly with a key in some API server... now the web server has unencrypted access to it, which sounds bad, but that's literally the only way it can process and serve the data to users in a meaningful way! And if someone hacks the web app server -- the common scenario -- then the attacker has unencrypted access!
I can encrypt the database, but at what layer? Storage? Cloud storage is already encrypted! Backups? Yeah, sure, but then what happens in a disaster? Who's got the keys? Are they contactable at 3am?
Etc, etc...
It's not only not as simple as ticking an "encrypted: yes" checkbox, it's maximally difficult, with a very direct tradeoff between accessibility and protection. The sole purpose of encrypting data is to prevent access!
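To make that concrete, here's a toy sketch of field-level encryption. The SHA-256 counter-mode keystream stands in for a real AEAD cipher like AES-GCM and must not be used as actual crypto; the point is that whichever process holds `key` -- database layer, API server, or web app -- necessarily sees plaintext:

```python
import hashlib

def keystream_xor(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """Toy counter-mode keystream built from SHA-256; NOT real crypto."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(d ^ s for d, s in zip(data, stream))

key = b"k" * 32                             # whoever holds this sees plaintext
nonce = b"n" * 12
pii = b"ssn=123-45-6789"                    # made-up example record
stored = keystream_xor(key, nonce, pii)     # what lands in the database
served = keystream_xor(key, nonce, stored)  # what the serving process must do anyway
```

Encrypting at rest protects against a stolen disk or backup, but it only relocates the question to key custody: the process that serves the data has to decrypt it.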
You can have the key saved in your Microsoft account.
Willing to bet plenty of HN readers are unaware of the encryption going on at lower layers of the tech stack.
For example, most hard drives encrypt all data, even when not commanded to, as a form of 'data whitening' (i.e. ensuring a roughly even mix of 0s and 1s in the data stream rather than some pattern that might throw off tracking).
The encryption key is simply stored elsewhere: on the drive, in NVRAM, or in the firmware.
But it means if you extract the physical magnetic surface and read it with the right microscope, you might well find the data encrypted with no available key.
Ethernet is a good example. It has the same problem, where long strings of 0s or 1s can cause clock recovery problems. The solution, as clock rates have increased, is to just run all the data through a scrambler driven by a simple linear-feedback shift register (LFSR).
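A minimal sketch of such an additive scrambler, using a 7-bit LFSR with the x^7 + x^6 + 1 polynomial (real Ethernet PHYs use longer, self-synchronizing variants, but the idea is the same):

```python
def lfsr_scramble(data: bytes, state: int = 0x7F) -> bytes:
    """Additive scrambler: XOR data with a keystream from a 7-bit LFSR
    (polynomial x^7 + x^6 + 1). Running it twice with the same seed descrambles."""
    out = bytearray()
    for byte in data:
        ks = 0
        for i in range(8):
            bit = ((state >> 6) ^ (state >> 5)) & 1  # taps at x^7 and x^6
            state = ((state << 1) | bit) & 0x7F      # shift, keep 7 bits
            ks |= bit << i
        out.append(byte ^ ks)
    return bytes(out)

payload = bytes(64)                     # worst case: a long run of all-zero bytes
on_the_wire = lfsr_scramble(payload)    # no long constant runs any more
recovered = lfsr_scramble(on_the_wire)  # symmetric: the same call descrambles
```

Note there's no secret here: the polynomial and seed are fixed by the standard, so this is whitening, not security, which is exactly the distinction being made about drive-level "encryption".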
Also a good video from Scott Manley: https://youtu.be/qMUjCZ7MMWQ
And so it is, but anyone who has ever seen a SanDisk SD card knows what they're looking at. I can even tell it's not the fastest V90 speed class.
The things companies try ineffectually to keep out of public view are weird.
Full docket:
https://data.ntsb.gov/Docket/Document/docBLOB?ID=19169363&Fi...
But honestly, from the rest of the story it sounds like the camera manufacturer was selling their pressure housing more so than the off-the-shelf camera hardware inside, and was not particularly concerned with whether or how the storage was encrypted.
For a machine that needs to boot unattended, what would you do with disk decryption keys?
Jumping off of daemonologist's point, the most convincing model I can think of is one where you are regularly removing the SD card, likely because you are swapping multiple cards back and forth. But this still seems bad: all your cards then share the same decryption key (or, less egregiously, at least share the physical vulnerability of that one device's nonvolatile memory).
> For a machine that needs to boot unattended, what would you do with disk decryption keys?
This depends entirely on what you mean by unattended. Here's some candidates:
1. The machine has a network connection and you would like to boot it over the network. In that case you have a small unencrypted boot stage that handles the boot sequence, then pass the decryption key over the wire, store it in volatile memory, and wipe it on shutdown or logout.
2. The machine is regularly serviced/booted by an in-person operator. You give the decryption key or passphrase to the operator to decrypt and boot the machine.
3. The machine is meant to autonomously reboot itself due to e.g. error conditions. In extreme cases when the machine detects that there is a large probability of physical compromise, it shuts down. In this case for the former I would imagine some power supply, e.g. a backup battery, that keeps the volatile memory up when rebooting and in case of large probability of physical compromise there is a forced shutdown and/or wipe of the volatile memory.
4. An operator needs to periodically boot up the machine, but the operator should not be able to access any previous data the machine has recorded (data from the moment operator arrives forward is assumed leaked to the operator because e.g. the operator can just position their own sensor in front of the machine and collect data). In that case asymmetric encryption should be used and the encryption keys should be stored on the machine, but the decryption keys should not, effectively turning the machine into a write-only device from the operator's perspective.
The only case I could imagine where you want the machine to boot unattended and you'd keep the decryption keys in nonvolatile memory on the device is if you want an operator to be able to boot the device without knowledge of the decryption key and also to be able to view historical data previously recorded by the device. But in that case there is no functional difference between that and a device that is unencrypted! In both cases you boot up the device without a password and voila all the data is in front of you ready to be read! And indeed that's basically what happened here (modulo the difficulties associated with the device being damaged). And in that case encryption seems like pure overhead for no real security benefit.
The "carrier" that everything rides on within the housing is clearly FDM printed as well. I assume these cameras (rated to 6,000 meters) are rather low volume products.
They probably should still know what it's doing though...
The black cable goes dangerously close to the pushbutton of death. ;-)
Yep. I designed boards for cameras like this (and the vehicles they are mounted on) for 20 years. When you're only going to sell ~30 a year, and it's going into a $7k enclosure, the extra $7 for the dev board you used during prototyping isn't even a consideration. Go ahead and design around the breadboard, at this low volume it's WAY cheaper than the time to re-design the support circuitry from scratch and it gives you time to start working on the NEXT project that has already been sold to customers with a delivery date quickly looming.
Many times I have heard the tech stack for the subsea industry called "Shop & Glue."
Common misconception. A handful of capacitors, SPI NOR flash, an inductor, and a crystal is way easier to place and route than a restrictive module that completely disables your ability to use SWD/JTAG on an otherwise excellent MCU.
https://www.pjrc.com/store/teensy32.html
As to what it's doing in there, I have no idea.
edit: probably this one? It was posted on the Teensy forum about a month ago.
https://forum.pjrc.com/index.php?threads/the-deepest-teensy....