"Zbay" threat model, request for feedback!

Hi everyone,

As part of Zbay’s Zcash Foundation grant from last year, I researched the communication needs of journalists and sources, to see if a messaging app built using Zcash encrypted memos could meet their needs better than, say, SecureDrop.

We’ve created a threat model based on that work, and I’m looking for feedback on it. If anyone with experience in security would like to offer feedback, that would be amazing. @earthrise, @dconnolly, @chelseakomlo, @zebambam, and @mistfpga—I’ve learned a lot from your posts and publications, and this might be relevant to things you’re thinking about, so if you’d like to take a look I’d be super grateful!

Thanks again to @sarahjamielewis for the extremely helpful early guidance. I’ve never done any academic anthropology work (just informal user research) but thanks to Sarah’s guidance the methods and quantity of data seem at least loosely on par with the academic work I’ve seen on this subject. I think we succeeded in getting an accurate snapshot of people’s needs right now. (I certainly hope so, since a ton of product decisions will follow from this!) Due to funding and time constraints, and the complexity of privacy and safety concerns for interviewees, many of whom are leaders in their fields and do very sensitive work, we won’t be publishing anything beyond this overview. But I’d be happy to speak privately about the conclusions in more depth if anyone would like to do that.

Also thanks to @antonie and @acityinohio at Zcash Foundation for funding this research, and for funding the basic work needed to act on it, like the work of moving to a light wallet stack and using Tor for off-chain messaging.


Threat model research results

To develop a threat model for our decentralized messaging app “Zbay” (a placeholder name that will change soon), we conducted ~18 user interviews with journalists, sources (specifically: activists or policy experts who communicate frequently with journalists), and security experts whose work includes advising and protecting journalists. We also reviewed the security properties of existing encrypted messaging apps, including centralized (e.g. Signal, WhatsApp), federated (Matrix), and decentralized (Ricochet, Cwtch, Session) approaches.

What journalists and sources said

Our hypothesis going into the study was that journalists needed a cheaper tool for anonymous tips than SecureDrop, but we found a greater need for secure team chat.

Many journalists did affirm that the cost of running SecureDrop was prohibitive. However, in our interviews, both journalists and sources were far more concerned about the security of their internal communications (i.e. conversations within their own organization or with close collaborators at other organizations) than about journalist/source communication.

While many use Signal and/or voice communication for sensitive conversations, most still use insecure channels like Slack and email, and they worried about exposure to a large-scale breach or a targeted attack. Users cited missing features, such as themed channels, as reasons why Signal was not an option for team chat, and expressed general wariness about whether other Slack alternatives were sufficiently usable and reliable.

What security experts who advise journalists said

Experts named account compromise via phishing or guessing of insecure passwords as the most common threat, followed by the risk of devices being compromised physically, e.g. by being lost or seized. Malware attacks were mentioned, but noted by multiple experts as being much less common.

Usability, features, high availability, and “software that just works” were experts’ top recommendation criteria, above any specific security features. Several experts said something to the effect of, “if software isn’t easy to use or interferes with work, users will misuse it or avoid it entirely.”

The next highest recommendation criterion was the trustworthiness of the team behind a given piece of software, followed by whether the software was open source. Timed deletion of messages came next, alongside the project’s stability, funding, longevity, and capacity to respond to vulnerabilities.

Key takeaways

  1. Among the journalists and sources we interviewed, the primary and most motivating concern about communications security was the breach of written internal communications by an adversary or in a public leak accessible to adversaries.

  2. The core need was not just encrypted messaging, since most interviewees use Signal, but rather a suitable encrypted replacement for Slack or Discord, which Signal is not because it lacks necessary team chat features such as themed channels.

  3. Usability, end-to-end encryption, timed deletion, and resistance to account credential phishing were the most important security requirements, according to the experts we interviewed, and this was consistent with responses from users (sources and journalists). Neither users nor security experts mentioned specific properties of end-to-end encryption like forward or backward secrecy as requirements, and often recommended tools that did not have properties like forward secrecy.

Threat model

Given the above conclusions about the threat models and needs of the users we hope to serve, our goal is to achieve the following set of security invariants in the usage scenario described below.

(We follow the “invariant-centric threat modeling” approach outlined here: GitHub - defuse/ictm: A user-first approach to threat modeling.)

Usage scenario:

A team uses Zbay as a Slack replacement for team chat, and all team members use full-disk encryption with user-controlled keys and a strong password.

Definitions:

DELETED means any data the app has declared “deleted,” to any user, and that users have not archived using other means, for example by screenshotting chats, by inadvertently backing up app data with cloud backup tools, or by tampering with the app to block deletion.

REMOVED means any team member the app has declared “removed,” to any user.

Adversaries:

MEMBER is a user who has been invited to a group, with no other capabilities.

NON-MEMBER is a user who has never been invited to a group, or a user who was REMOVED, with no other capabilities.

HACKER can access keys or messages on the device of a member VICTIM, but has no other capabilities (such as recovering deleted data from a device).

HACKER/ARCHIVER can intercept a team member’s network traffic, archive it for later decryption, and access keys on the device of a member VICTIM, but has no other capabilities.
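
One way to read these adversary classes is as capability sets that each invariant below is checked against. A minimal TypeScript sketch of that reading (all names are hypothetical, not Zbay code):

```typescript
// Hypothetical capability model for the adversary classes above.
// Illustrative only; names and types are not part of Zbay.

type Capability =
  | "invited-to-group"           // has (or had) a group invitation
  | "read-victim-device"         // can read keys/messages on VICTIM's device
  | "intercept-archive-traffic"; // can capture and store VICTIM's network traffic

interface Adversary {
  name: string;
  capabilities: Capability[];
}

const MEMBER: Adversary = { name: "MEMBER", capabilities: ["invited-to-group"] };
const NON_MEMBER: Adversary = { name: "NON-MEMBER", capabilities: [] };
const HACKER: Adversary = { name: "HACKER", capabilities: ["read-victim-device"] };
const HACKER_ARCHIVER: Adversary = {
  name: "HACKER/ARCHIVER",
  capabilities: ["read-victim-device", "intercept-archive-traffic"],
};

// Each security invariant below can then be phrased as a predicate over an
// Adversary (e.g. "cannot read DELETED messages") and exercised in tests.
```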

Security invariants:

A NON-MEMBER cannot:

  • Read team messages.
  • Send messages as any MEMBER.
  • Degrade app functionality for any MEMBER, including by sending unwanted messages to any MEMBER that has disabled direct messages from NON-MEMBERS.

A MEMBER cannot:

  • Read messages from private chats or DMs that did not include them.
  • Read DELETED messages.
  • Send messages as any other MEMBER.
  • Add or remove MEMBERS unless authorized to do so.

A HACKER cannot:

  • Send messages as any member except VICTIM.
  • Read DELETED messages.
  • Read messages from private chats or DMs that did not include VICTIM.

A HACKER/ARCHIVER cannot:

  • Send messages as any member except VICTIM.
  • Access any private chats or DMs that did not include VICTIM.
  • Access any DELETED messages from before they began intercepting and archiving messages.

Known weaknesses:

A NON-MEMBER can:

  • Send unwanted messages to a MEMBER who has not disabled messages from NON-MEMBERS.

A MEMBER can:

  • Degrade app functionality for any MEMBER, e.g. by spamming.
  • Prevent any message (or all messages) from being DELETED without the knowledge of other users, e.g. by screenshotting it, or by archiving app data.

A HACKER can:

  • Send messages as VICTIM.
  • Read all non-DELETED messages readable by VICTIM, including all future messages until VICTIM is REMOVED.

A HACKER/ARCHIVER can:

  • Do anything a HACKER can do.
  • Read any DELETED messages once readable by VICTIM, provided the HACKER/ARCHIVER intercepted and archived them.

Note re: deletion:

Because messages are potentially stored by every member, it won’t be possible to delete messages on-demand (e.g. when users click a delete button) for members who are offline—because there is no way to communicate to these users that messages should be deleted. This means there will inevitably be UX challenges in ensuring deletion matches user expectations.

However, we can strictly adhere to the threat model above by making each client automatically report successful deletion to all members, and by telling a user that a message has been deleted only once all other member clients claim to have deleted that message. (Once all clients report deletion we can rely on the explicit assumption, in our Usage Scenario, that deletion did in fact occur.)
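
As a rough illustration of that reporting scheme, here is a TypeScript sketch (types and names are hypothetical, not the actual implementation) of a client-side tracker that only declares a message DELETED once every current member’s client has acknowledged deleting it:

```typescript
// Sketch of deletion-acknowledgement tracking. All names are hypothetical;
// this only illustrates the "report deletion, then confirm" idea above.

type MemberId = string;
type MessageId = string;

interface DeletionState {
  requestedAt: number;            // when deletion was requested
  acknowledgedBy: Set<MemberId>;  // members whose clients report local deletion
}

class DeletionTracker {
  private pending = new Map<MessageId, DeletionState>();

  constructor(private currentMembers: Set<MemberId>) {}

  // Called when a member (or a retention policy) requests deletion.
  requestDeletion(messageId: MessageId): void {
    this.pending.set(messageId, {
      requestedAt: Date.now(),
      acknowledgedBy: new Set(),
    });
  }

  // Called when a member's client reports that it deleted the message locally.
  recordAck(messageId: MessageId, from: MemberId): void {
    this.pending.get(messageId)?.acknowledgedBy.add(from);
  }

  // The UI shows "deleted everywhere" only once all current members have
  // acknowledged; offline members leave the message in a pending state.
  isDeletedEverywhere(messageId: MessageId): boolean {
    const state = this.pending.get(messageId);
    if (!state) return false;
    return [...this.currentMembers].every((m) => state.acknowledgedBy.has(m));
  }
}
```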

Milestones

To address the need for a secure, usable team chat space, while meeting the security invariants above, we have identified the following milestones:

  1. Low-latency, theoretically-deletable public groups
  • Online users can send and receive messages with low latency
  • Users can sync recent messages when they come online
  • It is technically possible to delete message data, i.e. by all participants deleting all app data from their devices.
  2. Low-latency private groups
  • A user can create a new team chat space
  • That user, i.e. the owner, can securely invite members
  • Messages in that space are end-to-end encrypted
  • The owner can remove members
  • Non-members do not know the Tor addresses of members, and so have no way to interfere with team conversations, e.g. by spamming or DoS’ing members.
  3. Private channels and direct messages
  • Members can send and receive private direct messages, off-chain.
  • Members can create private channels, and invite other members to join them
  4. Mobile support
  • Users can access teams on Android and iOS
  • Users can receive notifications of new messages.
  • Note: some sacrifices in decentralization and metadata privacy may be required to build a working product, especially on iOS.
  5. Deletion and “disappearing” messages
  • Owners can set a global message deletion policy (e.g. messages deleted in 1 month)
  • Channels can have stricter settings (e.g. messages deleted daily)
  • Users can delete individual messages
  • It is clear to all users when messages selected for deletion have actually been deleted from all user devices.
  6. Low-latency, off-chain account registration
  • Trust team owners to register accounts and distribute key/name bindings to all users (see the sketch after this list).
  • Mitigate potential harm if the owner’s device is compromised
  • Update security invariants to reflect remaining weaknesses
  • Research distributed identity approaches like CONIKS, ETHIKS, Bitforest, and the existing startup ecosystem for solutions that can address remaining weaknesses.
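
For the account registration milestone, one possible shape of an owner-issued key/name binding is sketched below, using Ed25519 signatures via tweetnacl as an assumed primitive. The names and format are hypothetical, and the real design may differ (e.g. if a CONIKS-style approach is adopted):

```typescript
import nacl from "tweetnacl";

// Hypothetical owner-signed binding of a username to a member's public key.
// Illustrative only; a real design would need canonical serialization,
// revocation, and a plan for compromised owner devices.

interface KeyNameBinding {
  username: string;
  memberSigningKey: string; // base64-encoded member public key
  issuedAt: number;
}

const encode = (b: KeyNameBinding): Uint8Array =>
  new TextEncoder().encode(JSON.stringify(b));

// Owner signs the binding and distributes { binding, signature } to all members.
function issueBinding(binding: KeyNameBinding, ownerSecretKey: Uint8Array): Uint8Array {
  return nacl.sign.detached(encode(binding), ownerSecretKey);
}

// Any member verifies a received binding against the owner's known public key.
function verifyBinding(
  binding: KeyNameBinding,
  signature: Uint8Array,
  ownerPublicKey: Uint8Array
): boolean {
  return nacl.sign.detached.verify(encode(binding), signature, ownerPublicKey);
}
```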

Security properties left for future work

Many generally-desirable security properties did not surface as critical requirements in our interviews with journalists, sources, and security experts who work to protect journalists, so we leave achieving them for future work—even though we are already using tools that provide some of these properties:

  1. Security requirements for Zcash payments (we seek to follow the wallet app threat model but haven’t had outside review.)
  2. Metadata protection and anonymity
  3. Forward and backward secrecy
  4. Message ordering integrity
  5. Preventing accidental archiving of messages (e.g. through misconfigured cloud backup)
  6. Managing keys across multiple devices
  7. Tools for managing unwanted messages without disabling all messages from outside teams.

I’m very used to STRIDE, so I haven’t seen this style of threat model before. I will need to read up on it before I can give more constructive feedback.

I know this isn’t what you asked me for, but I need to know how this bit works.

Am I correct in these assumptions?

  • All messages are stored on the main chain
  • I have my private key and can always generate a new viewing key to retrieve the messages off the blockchain if need be.

Also tangential, but kinda relevant.

That’s one thing Snowden and I agree on: OTR all the way.

This is pretty easy for anyone with a modicum of tech savvy to set up. I’m surprised more people don’t use it or haven’t heard of it.

First, thanks for any attention at all you can give this! It means a lot!

It’s probably a good thing to write something up using this methodology too. I’d be happy to start something if it’s helpful. There are still some design decisions we have to make, though, about how private groups work and how everything works on iOS.

Also, the question I’m most looking to answer is “are the above requirements expressed coherently and are there any requirements we should probably include for this use case that we haven’t included?” since we still have to build the thing, be audited, fix things, and so on.

Right now, most messages are stored on the main chain, and we use viewing keys for group chats.

However, two requirements are pushing us to do messaging off chain: 1) it must be possible to delete messages, and 2) latency must be low, since users need an end-to-end encrypted replacement for Slack. So we’re building our own off-chain messaging solution on Tor and IPFS (in a closed network belonging only to members), and soon no messages will be on chain.

Each team (what Slack would call a “workspace” and Discord would call a “server”) will be a group of users connected via Tor v3 onion addresses that they only use for that team.

Every message they send will be shared with all users in their team, using a gossip network.

For DMs or private channels, messages are still sent to everyone in the team but the message is encrypted to the recipients.
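
As a rough sketch of that pattern, the example below encrypts a private message to its recipients with libsodium sealed boxes and hands the ciphertexts to a placeholder gossip function. `gossipToAllPeers` and the envelope format are assumptions for illustration, not existing Zbay code:

```typescript
import sodium from "libsodium-wrappers";

// Illustrative sketch only: encrypt a DM/private-channel message to its
// recipients, then hand the ciphertexts to the team-wide gossip layer.
// `gossipToAllPeers` is a placeholder for whatever transport ends up being used.
declare function gossipToAllPeers(envelope: unknown): void;

async function sendPrivateMessage(
  plaintext: string,
  recipientPublicKeys: Uint8Array[] // curve25519 public keys of the recipients
): Promise<void> {
  await sodium.ready;
  // One sealed box per recipient; every peer stores and forwards the envelope,
  // but only the recipients can decrypt their copy.
  const ciphertexts = recipientPublicKeys.map((pk) =>
    sodium.crypto_box_seal(sodium.from_string(plaintext), pk)
  );
  gossipToAllPeers({ type: "private-message", ciphertexts });
}

async function tryDecrypt(
  ciphertext: Uint8Array,
  myPublicKey: Uint8Array,
  mySecretKey: Uint8Array
): Promise<string | null> {
  await sodium.ready;
  try {
    const opened = sodium.crypto_box_seal_open(ciphertext, myPublicKey, mySecretKey);
    return sodium.to_string(opened);
  } catch {
    return null; // not addressed to us
  }
}
```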

Once teams grow big enough that everyone receiving every message becomes a problem, there are straightforward ways to make message routing more efficient, but we want to leverage every peer’s ability to store and forward messages as much as we can.

We’re okay with the tradeoff that members can theoretically learn who is talking to whom in a team, in exchange for higher message availability.

We haven’t figured out how inviting and removing members will work yet, but it might be as simple as “send them the list of members and their onion addresses” and “start a new group with everyone but the removed member.”
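
If it does end up that simple, the two operations might look roughly like the TypeScript sketch below (types are hypothetical, and a real version would need authenticated delivery of the roster and key/address rotation after removal):

```typescript
// Hypothetical sketch of "send them the member list" invites and
// "start a new group without the removed member" removal.

interface Member {
  username: string;
  onionAddress: string; // Tor v3 address used only for this team
  publicKey: string;
}

interface Team {
  name: string;
  members: Member[];
}

interface Invite {
  teamName: string;
  roster: Member[]; // usernames, onion addresses, and keys of current members
}

// Invite: the new member receives the current roster so they can connect to
// every peer. (A real design must authenticate and encrypt this message.)
function buildInvite(team: Team): Invite {
  return { teamName: team.name, roster: [...team.members] };
}

// Remove: form a "new" team containing everyone except the removed member.
// Remaining members would rotate onion addresses/keys so the removed member
// can no longer reach or read the group.
function removeMember(team: Team, removedUsername: string): Team {
  return {
    ...team,
    members: team.members.filter((m) => m.username !== removedUsername),
  };
}
```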

Hey, I haven’t had a chance to dig in and read this properly, but my hot take is that you’ve engaged with the right people and gotten to a useful working set of invariants. Invariant-centric is a great approach. I’ve used STRIDE before as well, and also asset/scenario-based modeling.

One tiny suggestion to make it easier to scan: you’ve got fine attacker definitions in terms of their capabilities, but you might want to call them NETWORK OBSERVER and KEY THIEF to be more descriptive. Up to you, just a suggestion.

A couple of scenarios stood out: I think you probably don’t want MEMBERs to be able to degrade app security or attack a user’s keys. In this model a passive network observer doesn’t have the ability to actively attack a user, but there are tonnes of attacks that have historically resulted in key theft through oracles at the network layer. To fix that, you might want to add active network attacks to the existing HACKER/ARCHIVER, make a new category, or state that requirement as a property of the MEMBERs.

Hopefully that was helpful, but you’ve got my Signal if not.