In short, only you can access your wallet. When you subscribe to Ledger Recover, a pre-BIP39 version of your private key is encrypted, duplicated, and divided into three fragments, with each fragment secured by a separate company: Coincover, Ledger, and an independent backup service provider. Each of these encrypted fragments is useless on its own. When you want to regain access to your wallet, two of the three parties send their fragments back to your Ledger device, which reassembles them to rebuild your private key.
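The "any two of three fragments reconstruct the key" property described above is characteristic of threshold secret sharing. As a rough illustration only (Ledger has not published its exact scheme; the field prime and function names below are my own, and real seeds would be split byte-wise with authenticated encryption on top), here is a minimal 2-of-3 Shamir secret sharing sketch:

```python
# Minimal 2-of-3 Shamir secret sharing sketch.
# Illustrative only -- NOT Ledger's actual implementation.
import random

P = 2**127 - 1  # prime modulus for the finite field

def split(secret, n=3, k=2):
    """Split `secret` into n shares; any k of them reconstruct it."""
    # Random degree-(k-1) polynomial with the secret as constant term.
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    # Each share is a point (x, f(x)) on the polynomial.
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

secret = 0xC0FFEE
shares = split(secret)
assert reconstruct(shares[:2]) == secret            # shares 1 and 2 suffice
assert reconstruct([shares[0], shares[2]]) == secret  # so do shares 1 and 3
```

The key property: any single share is statistically independent of the secret, so one compromised custodian learns nothing, while any two custodians together can recover it.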
The concept is nice and well-intended to improve the recovery experience for less experienced users, and people will probably use it. But for hardened crypto veterans, it comes wrapped in KYC requirements (we give this info out way too much already), it will likely expand the overall attack surface of being a Ledger user, and it may make users a bigger target for theft. So I think caution is warranted.
Ledger is looking for a different revenue stream; device sales are definitely down in the bear market. The shocking revelation is that Ledger's security/trust assumptions are not at the level that most folks (including techies) assume.
I prefer Trezor, which is (mostly) open source. But heck, Trezor is pro-surveillance.
I guess I'll stick with my old laptop to create wallets.
Well, I don’t find it surprising that a firmware upgrade can extract the keys. In fact, it shouldn’t surprise anyone who has worked with embedded devices.
How would the OS derive the secret keys?
It’s a programmable device that supports multiple coins. It’s clear to people familiar with the technology that a firmware update could extract keys, so I’m not really worried about it. But the fact that they previously claimed it was impossible is a bad look.
Claiming it was impossible could have been a marketing issue. It is “impossible” to get the data out from the secure element (SE), but the SE runs the firmware/OS.
In any case, I think it was a REALLY BAD IDEA™ as it increases the attack surface. How is the seed export triggered? Where does the sharding happen? How is it implemented? Can an app exploit this feature without permission from the user?
So far, my impression is that the design of the OS and hardware is great, but clearly we didn’t need yet another reason to have to trust a company.
PS: To the people who ask for open source, SE code cannot be open-sourced AFAIK.
Yeah I was speaking to someone last night who said this. It’s a tradeoff.
Following up as well from the comms perspective: it seems this situation was overblown and was simply a comms mishandling. I recommend everyone who is worried dive a layer deeper.