Android wallets can potentially offer higher security and privacy compared to hardware wallets.
While this is not yet the case for most wallets, some are approaching this goal. This thread aims to discuss strategies for creating and maintaining a wallet that maximizes security and privacy.
It’s important to remember the limitations of relying solely on hardware wallets:
Privacy Concerns: Purchasing hardware wallets could place you on certain tracking lists, inadvertently signaling cryptocurrency ownership. Ledger Live transactions are visible to Ledger Inc, and while Trezor may offer more privacy, the extent is uncertain.
Security Risks: The lack of privacy can make you a target for physical theft, such as the $5 wrench attack. Additionally, airport security systems can detect these devices, potentially drawing unwanted attention.
Closed Ecosystems: These wallets limit user control over app availability and can revoke access, potentially making funds inaccessible. For example, Ledger Inc’s decision to reject an individual developer’s code has prevented some users from accessing their ZEC holdings.
Android wallets, however, can provide a high level of security and privacy. Solutions like GrapheneOS combined with Ywallet are making strides in this area.
To further enhance Android wallet security, we need:
More support for Android Cold Storage: While the phone should be regularly updated for security, the wallet app itself should not have network permissions. Ywallet’s QR code system is a good start, but a standardized communication method for multiple wallets is necessary.
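To illustrate the kind of standardized channel meant here, below is a rough Python sketch of shuttling an unsigned transaction across a QR air gap. The frame format and chunk size are assumptions for illustration, not Ywallet's actual protocol:

```python
import base64
import json

# Assumed chunk size: QR codes comfortably hold roughly 1-2 KB of
# alphanumeric data; the real limit depends on QR version and error
# correction level.
CHUNK_SIZE = 1000

def to_qr_frames(payload: bytes) -> list[str]:
    """Split an unsigned transaction into self-describing QR frames."""
    b64 = base64.b64encode(payload).decode()
    parts = [b64[i:i + CHUNK_SIZE] for i in range(0, len(b64), CHUNK_SIZE)]
    # Each frame carries its index and the total, so frames can be
    # scanned in any order and the receiver knows when it is done.
    return [json.dumps({"i": i, "n": len(parts), "d": p})
            for i, p in enumerate(parts)]

def from_qr_frames(frames: list[str]) -> bytes:
    """Reassemble frames scanned in any order back into the payload."""
    decoded = sorted((json.loads(f) for f in frames), key=lambda x: x["i"])
    assert len(decoded) == decoded[0]["n"], "missing frames"
    return base64.b64decode("".join(p["d"] for p in decoded))
```

The point of a shared format like this is that any online "watch" wallet and any offline "signing" wallet could interoperate, instead of each app inventing its own frames.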
Interesting, thanks for clarifying that point @hanh.
So it would work for Ethereum (with wallets such as getclave.io), but not on Zcash, correct?
Either way, it would still be possible to have the passphrase displayed during the initialization phase of the app, but never after, correct? In that case, while not as strong as a secure element, it would still provide a good amount of security. It would be similar to the way Trezor operates, which, with the optional addition of a strong password on top, can be very secure. Bitwarden works this way on Android.
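For what it's worth, the "shown once, never after" idea can be sketched like this. It is a toy in Python with a made-up wordlist, not how any real wallet is implemented (a real wallet would keep an encrypted copy for signing, guarded by the OS keystore, and use the 2048-word BIP-39 list):

```python
import hashlib
import secrets

# Hypothetical mini wordlist, for illustration only.
WORDS = ["apple", "brave", "cloud", "delta", "ember", "frost",
         "globe", "honey", "ivory", "jolly", "karma", "lemon"]

class OneTimeRevealWallet:
    """Shows the mnemonic exactly once, at initialization.

    Afterwards only a verification hash is kept, so the app simply has
    no code path that could display the phrase again.
    """
    def __init__(self):
        self._check = None

    def initialize(self) -> str:
        phrase = " ".join(secrets.choice(WORDS) for _ in range(12))
        self._check = hashlib.sha256(phrase.encode()).hexdigest()
        return phrase  # displayed once to the user, then dropped

    def verify_backup(self, phrase: str) -> bool:
        """Lets the user prove they wrote the phrase down correctly."""
        return hashlib.sha256(phrase.encode()).hexdigest() == self._check
```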
The OS encrypts the disk, and apps are signed and run with isolated storage. The secret keys are stored on disk; the app can read and write them, and normally no other app can access that storage. But if the OS has a flaw, or if the app has a bug, the keys can be exposed.
The “never after” part is the risk. The app can be buggy and leak that info somehow.
Trezor has no secure element. It’s less safe than people think. Android has StrongBox and the TEE, but they are of little use if signing cannot be done inside them.
That connects perfectly with my point. The problem is perception. Too much credibility is given to hardware wallets, and not enough to Android devices configured for security.
On the side of Ledger, while they do fully leverage a secure element, they have other critical issues such as those I have mentioned above. Another issue is the “Ledger Recover” planted right in the firmware. How are we supposed to trust this? Additionally, any necessary upgrade of the firmware has to go through Ledger Live, which is proprietary and communicates your device SN (and likely more details) back to Ledger Inc. Unacceptable.
Trezor is much less safe than a wallet running on a GrapheneOS device. Why? Because a powered-off GrapheneOS device is extremely hard to attack: disk decryption relies on the secure element, which throttles unlock attempts and therefore cannot be brute-forced. The OS integrity can be checked at any time with the Auditor app.
Which only leaves us with the buggy app issue. Indeed, that is where we should have much more focus. We can and should stand on things that are known to be reliable, so we can focus on what only we can do better: the Zcash apps.
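To make the throttling argument concrete, here is a back-of-the-envelope calculation in Python. The schedule is assumed for illustration; it is not GrapheneOS's or the Titan chip's actual policy:

```python
def total_bruteforce_seconds(pin_space: int,
                             base_delay: float = 30.0,
                             per_tier: int = 5,
                             cap: float = 86_400.0) -> float:
    """Time to sweep `pin_space` guesses under an assumed throttling
    schedule: the enforced delay doubles after every `per_tier` failed
    attempts, capped at `cap` seconds (one day here)."""
    total = 0.0
    delay = base_delay
    for attempt in range(pin_space):
        total += delay
        if (attempt + 1) % per_tier == 0:
            delay = min(delay * 2, cap)
    return total
```

Even with this mild assumed schedule, the delay reaches the one-day cap within a few dozen attempts, so sweeping a 6-digit PIN space (1,000,000 guesses) takes on the order of centuries. That is why hardware-enforced throttling makes even a short PIN viable.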
The Secure Element in the TS3 protects your PIN (without learning it), which releases a secret (stored on the Secure Element), which in turn protects your recovery seed (stored only on the Trezor Safe 3 general purpose chip, encrypted by both the device PIN and the secret stored on the Secure Element).
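That chain (PIN → secret on the Secure Element → encrypted seed on the general-purpose chip) can be sketched as a toy in Python. Everything here is illustrative: the KDF parameters, the XOR "cipher", and the SE model are stand-ins, not Trezor's actual design:

```python
import hashlib
import hmac
import secrets

def kdf(pin: str, salt: bytes) -> bytes:
    # Memory-hard KDF stands in for the device's PIN stretching.
    return hashlib.scrypt(pin.encode(), salt=salt, n=2**14, r=8, p=1,
                          dklen=32)

class ToySecureElement:
    """Releases its stored secret only after a correct PIN proof.

    It stores a *derived* proof, not the raw PIN, and a real SE would
    also throttle or wipe after repeated failures.
    """
    def __init__(self, pin_proof: bytes):
        self._pin_proof = pin_proof
        self._secret = secrets.token_bytes(32)

    def release_secret(self, pin_proof: bytes) -> bytes:
        if not hmac.compare_digest(pin_proof, self._pin_proof):
            raise PermissionError("wrong PIN")
        return self._secret

def wrap_seed(seed: bytes, pin: str, salt: bytes,
              se: ToySecureElement) -> bytes:
    """Encrypt (or, run again, decrypt) the seed under BOTH factors.

    XOR with a hash-derived keystream is illustration only (seed must
    be <= 32 bytes here); real devices use an AEAD cipher.
    """
    pin_key = kdf(pin, salt)
    se_secret = se.release_secret(pin_key)
    keystream = hashlib.sha256(pin_key + se_secret).digest()
    return bytes(a ^ b for a, b in zip(seed, keystream))
```

The property to notice: the encrypted seed on the general-purpose chip is useless without the SE's secret, and the SE's secret is useless without the PIN.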
I’m no cryptography expert so I turned to ChatGPT to expand on why this could be an issue. Some of these points make sense to me. Curious about your thoughts here. It seems likely there might be trade-offs either way you slice it.
This is from three years ago but it is still relevant. In this post it is argued that, indeed, way too much trust is put in hardware wallets, and that even a desktop can do better given appropriate usage and configuration. Building on this argument, it becomes easier to see why a wallet on a GrapheneOS device is more secure still.
I’m nobody so don’t take my word for it. Greg Maxwell (controversial as he is) and Vitalik however, are often worth paying attention to.
Relevant protection against such attacks on GrapheneOS / Pixel 8 devices:
Vitalik is definitely pointing to Android, at least to some extent, with the links shared above. As for Greg Maxwell, he points to a regular computer, which is far from being as safe as a GrapheneOS device (which is to be expected from a three-year-old post).
So, your concern in this case is that after the seed phrase is decrypted, but before the user is done with their Trezor session, an attacker operating on the connected machine could exploit the device, causing the seed phrase to become compromised. Is this correct?
What about in the case where the user also uses an additional password + pin? No difference?
And comparing this to how Ledger works… since the seed never leaves the Secure Element, your seed is less likely to be compromised via this route. An attacker would have to forge a transaction that would need to be signed by the secure element on the device which would require you to acknowledge a prompt. Do I understand this right?
But if we’re already talking about device exploitation from the connected machine, could a secure element be compromised this way too causing seed compromise?
You can see the secure element as a cold wallet inside your hardware wallet. Done properly, the secrets never leave and all communication is done through hashes & signatures.
This lowers the exposed surface. Now, the risk is that the code running in the SE is itself faulty. To mitigate that, only firmware signed by the vendor is allowed. Ledger blew it when they said that they couldn’t extract the seed phrase. That is false: they can make a bogus firmware, sign it, and if the user installs it, the seed phrase can go out. That’s Ledger Recover in a nutshell. You have to trust that the firmware does not do that today.
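The "cold wallet inside your hardware wallet" interface described above boils down to something like this toy Python sketch. HMAC stands in for a real signature scheme such as ECDSA, and the confirmation flag stands in for a physical button press:

```python
import hashlib
import hmac
import secrets

class IsolatedSigner:
    """Toy secure element: the key lives only inside this object and
    no method ever returns it. The host gets signatures, never secrets."""
    def __init__(self):
        self.__key = secrets.token_bytes(32)  # never leaves the object

    def sign(self, tx_digest: bytes, user_confirmed: bool) -> bytes:
        # A real device displays the transaction on its own screen and
        # waits for a physical button press; vendor-signed firmware
        # enforces this gate.
        if not user_confirmed:
            raise PermissionError("no physical confirmation")
        # HMAC stands in for an ECDSA signature (illustration only).
        return hmac.new(self.__key, tx_digest, hashlib.sha256).digest()

def host_sign(signer: IsolatedSigner, raw_tx: bytes) -> bytes:
    """Host side: only the transaction hash crosses the boundary."""
    return signer.sign(hashlib.sha256(raw_tx).digest(),
                       user_confirmed=True)
```

Notice that even a fully compromised host can only ask for signatures on transactions the user confirms; it has no call that returns the key.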
Trezor does not have a SE for that purpose. Signing is done on the regular microcontroller. Now, if the OS is faulty or if the app has a backdoor, it could expose your seed phrase. They mitigate that risk by authoring all the apps. Third-party devs cannot create apps. It is a different form of trust.
Attacks that were able to extract the seed phrase relied on some side channel vulnerability that exposed the keys without using the app itself. AFAIR, the first one was simply that the memory was not cleared on warm reboot.
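That warm-reboot bug is a good example of why wallets should zeroize secrets as soon as they are done with them. A minimal sketch of the pattern, in Python for illustration only (CPython gives no guarantee against hidden copies; production wallets do this zeroization in C or Rust):

```python
def with_seed(load_seed, use):
    """Run `use` with the seed in a mutable buffer, then overwrite it,
    so the secret does not linger in memory after the session ends --
    the class of flaw where memory survives a warm reboot."""
    seed = bytearray(load_seed())   # mutable, so it can be wiped
    try:
        return use(seed)
    finally:
        for i in range(len(seed)):  # overwrite the buffer in place
            seed[i] = 0
```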