"Here's your hardware wallet, initialize it with the seed written on this piece of paper, it's the only one that's going to work for this hardware wallet. Do not lose this seed or you'll lose access to your funds!".
Several unsuspecting users, not aware that a random seed is supposed to be generated by the hardware wallet itself (or by throwing dice, or whatever), have been pwned this way.
Most wallets let you provide your own seed words, which users can derive themselves using diceware. But DSA (and its elliptic-curve variants) needs a secure random input for every signature, and I'm not sure whether all wallets use a deterministic construction (i.e. one provably free of covert channels, like RFC 6979) for that.
https://en.m.wikinews.org/wiki/Predictable_random_number_gen...
As the article shows, this can go unnoticed for years; the attacker can simply wait for maximum distribution and then mount a coordinated attack.
It would be trivial for any iOS-based software wallet to compromise your seed before your private key is even created. You don't even need fancy spyware that calls home. If the seed is generated by a method that isn't actually random, you'd never know. It will appear random to you, but the author of the software could simply increment a known value and be able to recreate every private key ever created with that app. No one would ever know. The attacker could sit silent for years or even decades, and if they DID drain a wallet there would be no way to prove it and no one would believe the victim. It would just be a case of, "Well, you must have leaked your seed, it's your fault."
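To make this concrete, here's a minimal stdlib sketch of what such a backdoor could look like. The vendor secret and counter scheme are hypothetical, purely to show that output which looks perfectly random to the user can be fully regenerable by the author:

```python
import hashlib
import hmac

# Hypothetical malicious wallet: seed entropy looks random to the user but
# is derived from a vendor secret plus a per-install counter, so whoever
# holds the secret can regenerate every seed the app ever produced.
VENDOR_SECRET = b"known-only-to-the-wallet-author"

def backdoored_entropy(install_counter: int) -> bytes:
    # HMAC output is indistinguishable from random to anyone without the key.
    msg = install_counter.to_bytes(8, "big")
    return hmac.new(VENDOR_SECRET, msg, hashlib.sha256).digest()[:16]

# The attacker just enumerates counters to recover every seed handed out:
recreated = [backdoored_entropy(i).hex() for i in range(3)]
```

Each 16-byte output would map to a perfectly valid-looking 12-word mnemonic, and no amount of staring at the words reveals the pattern.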
I can even see something like Coinbase Wallet being 100% compromised. The apology post is probably already written in a draft somewhere.
https://community.trustwallet.com/t/wasm-vulnerability-incid...
How many people can or will verify the key is truly one-of-a-kind?
I don't see how this is helpful advice.
The whole point of the article was how the look and feel of a legitimate hardware wallet was cloned.
Under these circumstances there is no way to tell what is in the device (clear housing, perhaps?). All it has to do is act like the real device. It doesn't matter how good your security chip actually is if all I have to do is copy the correct interface.
Unrelated: the use of that particular version number is a strangely shoddy mistake. It should have been very easy to use a version string that actually exists, in which case that version would never have stood out as skipped. Perhaps at one point it was a real version and Trezor pulled it due to its use in a batch of clone units.
Perhaps the attackers wanted to discourage users from trying to upgrade the firmware/bootloader before first use by using a version one step higher than the officially released one. If they had used an older version number, a savvy user might try to flash the newest firmware and discover something isn't quite right. A nonexistent but plausible-looking version number can also be used to explain minor discrepancies in behavior between the fake and the original unit, if some are introduced by mistake.
A security chip actually deserving the name (i.e. a tamper-proof one) can protect a private key even against physical attacks, with the corresponding public key marked as authentic by the manufacturer.
If the interface contains a challenge-response interaction with that private key (and ideally ties that to any further communication with the trusted applications on it), you can't copy/emulate that.
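A toy sketch of that challenge-response round trip, stdlib only. Real secure elements use an asymmetric device key whose public half is certified by the manufacturer, so the verifier never holds the secret; HMAC with a shared key is used here purely to show the shape of the protocol and why a fresh nonce defeats replay:

```python
import hashlib
import hmac
import secrets

# Provisioned at the factory; in a real secure element this never leaves
# the chip, and verification uses a manufacturer-certified public key.
DEVICE_KEY = secrets.token_bytes(32)

def device_respond(challenge: bytes) -> bytes:
    # Runs inside the tamper-proof chip.
    return hmac.new(DEVICE_KEY, challenge, hashlib.sha256).digest()

def verifier_check(challenge: bytes, response: bytes) -> bool:
    # Recompute the expected response; with asymmetric keys this would
    # instead be a signature verification against the certified pubkey.
    expected = hmac.new(DEVICE_KEY, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = secrets.token_bytes(32)  # fresh nonce, so replaying old answers fails
ok = verifier_check(challenge, device_respond(challenge))
```

A clone without the key can copy the housing, the screen, and the USB protocol, but it cannot answer a fresh challenge correctly.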
It's an HSM which you can flash yourself. Unfortunately, it never generated much interest and so I had to fold up the tent. But maybe it was just ahead of its time.
Unfortunately I was aiming to use it to generate TOTP codes (and replace my authenticator app), but IIRC it needed a RT clock and thus a battery, which was not part of the design.
Great project though.
Anyway, I suspect the problem is the nature of crypto. For this to actually take off, you would have needed to hand a bag of money to Jake Paul or John McAfee or a BitBoy, and I highly suspect a really good product has a hard time competing against those that do.
This seems like a horrendous design, like a safe that burns the money inside if you try to tamper with it. Sure, it might protect a malicious thief from absconding with the funds, but it is also an attack vector for any bad actor that simply wishes to cause you harm.
If the firmware had been tampered with, there is no safe way to extract the key. Better that the user uses the recovery seed on a fresh device.
Which means the weakest link of your fancy hardware wallet is how well you hide that bit of paper with your seed phrase.
Edit: Looks like I was beaten to this down thread.
Other than having x-ray vision, one easy (but by no means perfect) verification to thwart these types of attacks is to weigh your devices.
Manufacturing should be consistent enough that resealing a device like this would add some grams that shouldn't be there. And unlike something like a Cisco router, there's nothing to cut out to make up for the added weight.
Best part is they pay for the certifications!
Then there are friends that ahem buy/sell materials in gram quantities. A counted handful of newish coins is a reasonable way of verifying accuracy in those cases. Be sure to weigh different quantities lest the absolute and relative errors cancel out.
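The multi-quantity check is easy to formalize. A quick sketch, using the US nickel's specified mass of 5.000 g (the tolerance and example readings are made up for illustration):

```python
# Scale sanity check using coins of known mass. Weighing several different
# counts guards against an offset error and a gain error cancelling out at
# one particular load.
NICKEL_G = 5.000  # specified mass of a US nickel

def check_scale(readings_by_count: dict, tol_g: float = 0.1) -> bool:
    # readings_by_count maps number of nickels -> grams the scale displayed.
    return all(abs(measured - n * NICKEL_G) <= tol_g
               for n, measured in readings_by_count.items())

# Hypothetical readings for 1, 5 and 10 nickels:
good = check_scale({1: 5.02, 5: 24.98, 10: 50.05})
bad = check_scale({1: 5.02, 5: 26.10, 10: 50.05})  # 5-coin reading is off
```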
Yet another hilarious example of where the solution to security in an allegedly trustless system designed to subvert authority comes down to ... trust and authority.
If you don't do anything, that includes the OEM, their supply chain, your delivery courier, an evil maid etc.
If you have the choice of reducing that list to only the OEM, isn't that a win? That's what attestation does.
To my knowledge, current Trezor devices are unfortunately not (sufficiently) key extraction proof, though; in that scenario, attackers might be able to extract the private attestation key of a legitimate device and then go on to impersonate it in their own version.
This again could be mitigated by e.g. making the attestation key device-unique and offering an online validation service (which could keep track of unusual verification patterns and alert users), but it's not an easy problem to solve.
Still, physically threatening/kidnapping somebody is an entirely different threat model, although it's very common in the Bitcoin world: https://github.com/jlopp/physical-bitcoin-attacks
That's an awful idea. If you're the type of person to worry about being supply-chain-attacked, then targeted supply-chain attacks are far more likely to happen to you than untargeted ones are. Specifically, you are more likely to be supply-chain attacked by an entity who has the power to either compel or blackmail the OEM into giving you a first-party-adulterated device (think: Huawei network switches), than by an entity who's supply-chain-attacking random strangers. This doesn't just include governments, mind you, but also any sufficiently-wide-reaching criminal gang.
Showing up in person to the factory — or to a retail store — means the intelligence operative planted there can recognize you, and give you the "special" device prepared just for you; or the employee can be compelled by certain training (required to be allowed to sell such devices in certain countries) to follow the special instructions that come up when they swipe your credit card.
So what to do? Don't show up in person. Send a one-time proxy buyer to show up in person. And have the proxy buyer pay in cash, or using their own card.
Think what an American diplomat stationed in China would do if they absolutely needed to get e.g. a new smartphone right away. Normally they'd just wait for something like that to be sent over from America via diplomatic courier, specifically to avoid this problem. But if they couldn't — then proxy-buying at retail is the next-best solution.
(Funny enough, this is also the same thing that computer-hardware reviewers have to do to avoid getting a "reviewer special" binning of the hardware. Counterintelligence is oddly generalizable!)
The trusted codebase and set of OEMs seems an order of magnitude larger, and I'm not sure whether the lower likelihood of being specifically targeted as e.g. a crypto user by a supplier can make up for that.
Not commenting on GP's point but... No, you don't.
You can prepare your transaction on an online machine, without signing it. With full access to the blockchain, the balance of every address, the "counter" needed so that you tx is legal (in Ethereum's case), which address you want to spend from etc.
Then you transfer that transaction, without using the Internet, to the offline computer and sign it there and transfer the transaction back to the online computer to broadcast it.
The computer preparing the transaction, the one signing the transaction and the one broadcasting it can be three different computers.
You can even do that with a hardware wallet: the hardware wallet does not need to be plugged into a computer that is online. It can be plugged into a computer that is offline.
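The three-machine split above can be sketched in a few lines. The "signature" here is a toy HMAC stand-in for a real ECDSA signature, and the field names are illustrative, not any real transaction format; the point is which step needs the chain state and which step needs the key:

```python
import hashlib
import hmac
import json

def prepare_tx(sender: str, recipient: str, amount: int, nonce: int) -> bytes:
    # Online machine: needs chain state (balances, the account nonce), no key.
    return json.dumps({"from": sender, "to": recipient,
                       "amount": amount, "nonce": nonce},
                      sort_keys=True).encode()

def sign_tx(unsigned: bytes, private_key: bytes) -> dict:
    # Offline machine (or hardware wallet): the only place the key exists.
    sig = hmac.new(private_key, unsigned, hashlib.sha256).hexdigest()
    return {"tx": unsigned.decode(), "sig": sig}

def broadcast(signed: dict) -> str:
    # Online machine again: just relays bytes, learns nothing secret.
    return f"broadcasting tx with sig {signed['sig'][:8]}..."

unsigned = prepare_tx("0xabc", "0xdef", 10, nonce=7)  # machine 1 (online)
signed = sign_tx(unsigned, b"cold-storage-key")       # machine 2 (offline)
status = broadcast(signed)                            # machine 3 (online)
```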
There are still many issues, even when using airgapped computers. For example, it's possible that a hardware wallet vendor is using the non-determinism in the "random" nonces chosen when signing transactions to exfiltrate the seed, hidden among the signed transactions. So even an offline/airgapped computer with a hardware wallet hooked to it wouldn't help.
The safest play here for an average user is to just not buy your hardware wallets off eBay, as seems to have been the case in the OP!
When you send funds to the wallet, you don't need to send them to the address that the wallet presents, you can send them to the address you calculated during offline key generation. As long as you use the Trezor derivation path on your offline machine, it's predictable what the first address will be.
Granted that would be really inconvenient to do if one used the wallet on a daily basis. In my case I use it rarely enough (large transactions only) that it doesn't bother me that much.
Use a little bit of python (there are libraries for this or you can do it yourself) to make sure that the addresses generated in the HW wallet by the 12 word mnemonic are indeed the correct addresses. For example the first segwit address using your private key and the derivation path 49h/0h/0h/0/0 should be deterministic. This way you know your 12 words are the ones used and the wallet is using known standards and not some homebrew crypto.
In fact you should always do that anyway in case the HW stops working and/or the company goes under. This way you can be sure that you can recreate your private keys from your mnemonic and access your funds no matter what.
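The mnemonic-to-seed step is doable in pure stdlib Python, so you can at least verify that much without trusting a wallet library: BIP39 is just PBKDF2-HMAC-SHA512 over the NFKD-normalized mnemonic, salt `"mnemonic" + passphrase`, 2048 iterations. (Walking a BIP49 derivation path down to addresses additionally needs BIP32/secp256k1 math, which libraries like bip-utils handle.) A sketch, checked against the first official BIP39 English test vector:

```python
import hashlib
import unicodedata

def bip39_seed(mnemonic: str, passphrase: str = "") -> bytes:
    # BIP39: 64-byte seed via PBKDF2-HMAC-SHA512, 2048 iterations,
    # salt = "mnemonic" + passphrase, both strings NFKD-normalized.
    norm = lambda s: unicodedata.normalize("NFKD", s)
    return hashlib.pbkdf2_hmac("sha512",
                               norm(mnemonic).encode(),
                               ("mnemonic" + norm(passphrase)).encode(),
                               2048)

# First BIP39 English test vector (passphrase "TREZOR"):
m = "abandon " * 11 + "about"
seed = bip39_seed(m, "TREZOR")
```

If your own words fed through this don't reproduce what your recovery tooling expects, something nonstandard is going on.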
The only thing that would have stopped this attack would be to generate your seed off the device. And then, since the device is counterfeit and there may very well be a way to exfiltrate seeds from it, to use a genuine device, but that's a different attack.
What you're saying, though, is a good idea, provided the device you're running this on (and therefore entering your seed into) is secure. Since this cannot really be guaranteed, it is often advised never to enter your seed into a computer, for good reason.
HW wallets support entering any mnemonic; you don't have to generate it on the device itself. So if you create a mnemonic yourself, check what addresses it should generate with a third-party tool, and then enter it into the wallet and see different addresses, something is fishy.
https://www.youtube.com/watch?v=dT9y-KQbqi4&pp=ygULdHJlem9yI...
> I was contacted to hack a Trezor One hardware wallet and recover $2 million worth of cryptocurrency (in the form of THETA).
Some people may forgo ease and aim for self-custody because they value decentralization over convenience. Others will choose a middle ground where they and a centralized entity both hold part of the key, ensuring that the 3rd party can't move their funds without permission.
I wonder if Trezor team communicated that in some maybe different way than that line in the CHANGELOG. Not blaming them of course, just wondering.
From their forum earlier this year: https://forum.trezor.io/t/protect-from-getting-a-fake-trezor...
Validate the holograms: Most users aren't forensic experts and don't have an authentic physical sample to compare their evaluation target to, only photos of one.
Only buy from authorized resellers such as the official Amazon shop: Fake products have been introduced into Amazon's supply chain before [1].
The bootloader validates the firmware and displays a warning otherwise: Sure, but so does the fraudsters' bootloader.
[1] https://www.redpoints.com/blog/amazon-commingled-inventory-m...
* Offer rewards to anyone able to send me the fake devices or clues who is making them.
* Tell my clients to upgrade the firmware on devices before use. Make sure every new firmware is distinctive in some way - for example the boot screen, and tell the users to check for that to ensure they are actually running the firmware they thought they just flashed.
Has the author tried cashing out crypto? KYC, anyone? It's harder than ever to cash out, especially large sums. So many restrictions due to fraud.
Hardware wallets are never safe. The only safe way is to generate your own entropy and do your own key derivation. Why would you ever trust a 3rd party to generate your keys?
You have fraud team and IT security team on your staff, right?
If the wallet uses deterministic ECDSA, or the algorithm used is deterministic by definition (such as EdDSA), this can be detected, but doing so requires validating some generated signatures on a second, trusted device.
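A necessary (not sufficient) condition is easy to test on a trusted machine: a deterministic scheme must return byte-identical signatures for identical inputs. A generic sketch, with toy stand-in "signers" since a real check would call the wallet's actual signing interface and then also verify the nonce derivation per RFC 6979:

```python
import hashlib
import hmac
import secrets

def is_deterministic(sign, message: bytes, trials: int = 5) -> bool:
    # Repeatedly sign the same message; any variation means a random
    # (and potentially seed-exfiltrating) nonce is in play.
    first = sign(message)
    return all(sign(message) == first for _ in range(trials))

# Toy stand-ins: a deterministic signer vs. a nonce-randomized one.
det_sign = lambda msg: hmac.new(b"device key", msg, hashlib.sha256).digest()
rand_sign = lambda msg: secrets.token_bytes(32)
```

Passing this check doesn't prove the nonces are honest, but failing it on an algorithm that is supposed to be deterministic (EdDSA, RFC 6979 ECDSA) is a red flag.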
uh oh! does this imply something is up that the trezor developers know of?
An attacker can just implement whatever "install firmware version xyz" command by returning "ok, did it!" and remembering that version number if it ever needs to be displayed.
A more complex attacker could emulate the entire firmware in more powerful hardware of the same physical profile and selectively intercept any input and output.
> All Trezor devices are distributed without firmware installed - you will need to install it during setup. This setup process will check if firmware is already installed on the device. If firmware is detected then the device should not be used.
> The bootloader verifies the firmware signature each time you connect your Trezor to a computer. Trezor Suite will only accept the device if the installed firmware is correctly signed by SatoshiLabs. If unofficial firmware has been installed, your device will flash a warning sign on its screen upon being connected to a computer.
https://trezor.io/learn/a/authenticate-model-one
There seems to be an element of user carelessness and naivety here. Anyone who follows Trezor's hardware verification checks surely needn't worry about these attacks.
This is an absurd security model. Where's the root of trust here? How do I know I am initially talking to an authentic "blank" device, and not a malicious one pretending to be one?
> If unofficial firmware has been installed, your device will flash a warning sign on its screen upon being connected to a computer.
Hopefully, malicious firmware won't meddle with this feature in any way...
The vendor here is either completely clueless, or is trying to paint a better picture for prospective customers despite knowing better.
...?
Although I'll concede that I'm now wondering what's preventing compromised hardware from faking this part too. A complex malware could even receive firmware updates, dump them in an unused partition, and report to the connected host that it promises that it's definitely running that firmware, right? Hmmm.