"Zbay" threat model, request for feedback!

Hi everyone,

As part of Zbay’s Zcash Foundation grant from last year, I researched the communication needs of journalists and sources, to see if a messaging app built using Zcash encrypted memos could meet their needs better than, say, SecureDrop.

We’ve created a threat model based on that work, and I’m looking for feedback on it. If anyone with experience in security would like to offer feedback, that would be amazing. @earthrise, @dconnolly, @chelseakomlo, @zebambam, and @mistfpga—I’ve learned a lot from your posts and publications, and this might be relevant to things you’re thinking about, so if you’d like to take a look I’d be super grateful!

Thanks again to @sarahjamielewis for the extremely helpful early guidance. I’ve never done any academic anthropology work (just informal user research) but thanks to Sarah’s guidance the methods and quantity of data seem at least loosely on par with the academic work I’ve seen on this subject. I think we succeeded at getting an accurate snapshot of people’s needs right now. (I certainly hope so, since a ton of product decisions will follow from this!) Due to funding and time constraints, and the complexity of privacy and safety concerns for interviewees, many of whom are leaders in their fields and do very sensitive work, we won’t be publishing anything beyond this overview. But I’d be happy to speak privately about the conclusions in more depth if anyone would like to do that.

Also thanks to @antonie and @acityinohio at Zcash Foundation for funding this research, and funding the basic work needed to act on it, like the work moving to a light wallet stack and using Tor for off-chain messaging.


Threat model research results

To develop a threat model for our decentralized messaging app “Zbay” (a placeholder name that will change soon), we conducted ~18 user interviews with journalists, sources (specifically: activists or policy experts who communicate frequently with journalists), and security experts whose work includes advising and protecting journalists. We also reviewed the security properties of existing encrypted messaging apps, including centralized (e.g. Signal, WhatsApp), federated (Matrix), and decentralized (Ricochet, Cwtch, Session) approaches.

What journalists and sources said

Our hypothesis going into the study was that journalists needed a cheaper tool for anonymous tips than SecureDrop, but we found a greater need for secure team chat.

Many journalists affirmed that the cost of running SecureDrop was indeed prohibitive. However, in our interviews, both journalists and sources were far more concerned about the security of their internal communications (i.e. conversations within their own organization or with close collaborators at other organizations) than with journalist/source communication.

While many use Signal and/or voice communication for sensitive conversations, most still use insecure channels like Slack and email, and they worried about exposure to a large-scale breach or a targeted attack. Users cited missing features like themed channels as reasons why Signal was not an option for team chat, and expressed general wariness about whether other Slack alternatives were sufficiently usable and reliable.

What security experts who advise journalists said

Experts named account compromise via phishing or guessing of insecure passwords as the most common threat, followed by the risk of devices being compromised physically, e.g. by being lost or seized. Malware attacks were mentioned, but noted by multiple experts as being much less common.

Usability, features, high availability, and “software that just works” were experts’ top recommendation criteria, above any specific security features. Several experts said something to the effect of, “if software isn’t easy to use or interferes with work, users will misuse it or avoid it entirely.”

The next highest recommendation criterion was the trustworthiness of the team behind a given piece of software, followed by whether the software was open source. Timed deletion of messages came next, alongside the project’s stability, funding, longevity, and capacity to respond to vulnerabilities.

Key takeaways

  1. Among the journalists and sources we interviewed, the primary and most motivating concern about communications security was the breach of written internal communications by an adversary or in a public leak accessible to adversaries.

  2. The core need was not just encrypted messaging, since most interviewees use Signal, but rather a suitable encrypted replacement for Slack or Discord, which Signal is not because it lacks necessary team chat features such as themed channels.

  3. Usability, end-to-end encryption, timed deletion, and resistance to account credential phishing were the most important security requirements, according to the experts we interviewed, and this was consistent with responses from users (sources and journalists). Neither users nor security experts mentioned specific properties of end-to-end encryption like forward or backward secrecy as requirements, and often recommended tools that did not have properties like forward secrecy.

Threat model

Given the above conclusions about the threat models and needs of the users we hope to serve, our goal is to achieve the following set of security invariants in the usage scenario described below.

(We follow the “invariant-centric threat modeling” approach outlined here: GitHub - defuse/ictm: A user-first approach to threat modeling.)

Usage scenario:

A team uses Zbay as a Slack replacement for team chat, and all team members use full-disk encryption with user-controlled keys and a strong password.

Definitions:

DELETED means any data the app has declared “deleted,” to any user, and that users have not archived using other means, for example by screenshotting chats, by inadvertently backing up app data with cloud backup tools, or by tampering with the app to block deletion.

REMOVED means any team member the app has declared “removed,” to any user.

Adversaries:

MEMBER is a user who has been invited to a group, with no other capabilities.

NON-MEMBER is a user who has never been invited to a group, or a user who was REMOVED, with no other capabilities.

HACKER can access keys or messages on the device of a member VICTIM, but has no other capabilities (such as recovering deleted data from a device).

HACKER/ARCHIVER can intercept a team member’s network traffic, archive it for later decryption, and access keys on the device of a member VICTIM, but has no other capabilities.

Security invariants:

A NON-MEMBER cannot:

  • Read team messages.
  • Send messages as any MEMBER.
  • Degrade app functionality for any MEMBER, including by sending unwanted messages to any MEMBER that has disabled direct messages from NON-MEMBERS.

A MEMBER cannot:

  • Read messages from private chats or DMs that did not include them.
  • Read DELETED messages.
  • Send messages as any other MEMBER.
  • Add or remove MEMBERS unless authorized to do so.

A HACKER cannot:

  • Send messages as any member except VICTIM.
  • Read DELETED messages.
  • Read messages from private chats or DMs that did not include VICTIM.

A HACKER/ARCHIVER cannot:

  • Send messages as any member except VICTIM.
  • Access any private chats or DMs that did not include VICTIM.
  • Access any DELETED messages from before they began intercepting and archiving messages.

Known weaknesses:

A NON-MEMBER can:

  • Send unwanted messages to a MEMBER who has not disabled messages from NON-MEMBERS.

A MEMBER can:

  • Degrade app functionality for any MEMBER, e.g. by spamming.
  • Prevent any message (or all messages) from being DELETED without the knowledge of other users, e.g. by screenshotting it, or by archiving app data.

A HACKER can:

  • Send messages as VICTIM.
  • Read all non-DELETED messages readable by VICTIM, including all future messages until VICTIM is REMOVED.

A HACKER/ARCHIVER can:

  • Do anything a HACKER can do.
  • Read any DELETED messages once readable by VICTIM, provided they were intercepted and archived by HACKER.

Note re: deletion:

Because messages are potentially stored by every member, it won’t be possible to delete messages on-demand (e.g. when users click a delete button) for members who are offline—because there is no way to communicate to these users that messages should be deleted. This means there will inevitably be UX challenges in ensuring deletion matches user expectations.

However, we can strictly adhere to the threat model above by making each client automatically report successful deletion to all members, and by telling a user that a message has been deleted only once all other member clients claim to have deleted that message. (Once all clients report deletion we can rely on the explicit assumption, in our Usage Scenario, that deletion did in fact occur.)
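The acknowledgment scheme above can be sketched as follows. This is a minimal illustration with hypothetical names, not the actual Zbay implementation: a message is only declared DELETED once every member’s client has reported deleting its local copy.

```python
# Sketch: track per-message deletion acknowledgments; a message counts as
# DELETED only when every current member's client has reported deletion.

class DeletionTracker:
    def __init__(self, members):
        self.members = set(members)
        self.acks = {}  # message_id -> set of members who reported deletion

    def report_deleted(self, message_id, member):
        """Record that `member`'s client claims to have deleted the message."""
        self.acks.setdefault(message_id, set()).add(member)

    def is_deleted(self, message_id):
        """True only once all members have acknowledged deletion."""
        return self.acks.get(message_id, set()) >= self.members

tracker = DeletionTracker(["alice", "bob", "carol"])
tracker.report_deleted("msg-1", "alice")
tracker.report_deleted("msg-1", "bob")
assert not tracker.is_deleted("msg-1")   # carol hasn't reported yet
tracker.report_deleted("msg-1", "carol")
assert tracker.is_deleted("msg-1")
```

Until `is_deleted` returns true, the UI would show the message as “pending deletion,” matching the note above that users should only be told a message is deleted once all clients claim to have deleted it.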

Milestones

To address the need for a secure, usable team chat space, while meeting the security invariants above, we have identified the following milestones:

  1. Low-latency, theoretically-deletable public groups
  • Online users can send and receive messages with low latency
  • Users can sync recent messages when they come online
  • It is technically possible to delete message data, i.e. by all participants deleting all app data from their devices.
  2. Low-latency private groups
  • A user can create a new team chat space
  • That user, i.e. the owner, can securely invite members
  • Messages in that space are end-to-end encrypted
  • The owner can remove members
  • Non-members do not know the Tor addresses of members, and so have no way to interfere with team conversations, e.g. by spamming or DoS’ing members.
  3. Private channels and direct messages
  • Members can send and receive private direct messages, off-chain.
  • Members can create private channels, and invite other members to join them
  4. Mobile support
  • Users can access teams on Android and iOS
  • Users can receive notifications of new messages.
  • Note: some sacrifices in decentralization and metadata privacy may be required to build a working product, especially on iOS.
  5. Deletion and “disappearing” messages
  • Owners can set a global message deletion policy (e.g. messages deleted in 1 month)
  • Channels can have stricter settings (e.g. messages deleted daily)
  • Users can delete individual messages
  • It is clear to all users when messages selected for deletion have actually been deleted from all user devices.
  6. Low-latency, off-chain account registration
  • Trust team owners to register accounts and distribute key/name bindings to all users.
  • Mitigate potential harm if the owner’s device is compromised
  • Update security invariants to reflect remaining weaknesses
  • Research distributed identity approaches like CONIKS, ETHIKS, Bitforest, and the existing startup ecosystem for solutions that can address remaining weaknesses.

Security properties left for future work

Many generally-desirable security properties did not surface as critical requirements in our interviews with journalists, sources, and security experts who work to protect journalists, so we leave achieving them for future work—even though we are already using tools that provide some of these properties:

  1. Security requirements for Zcash payments (we seek to follow the wallet app threat model but haven’t had outside review.)
  2. Metadata protection and anonymity
  3. Forward and backward secrecy
  4. Message ordering integrity
  5. Preventing accidental archiving of messages (e.g. through misconfigured cloud backup)
  6. Managing keys across multiple devices
  7. Tools for managing unwanted messages without disabling all messages from outside teams.

I’m very used to STRIDE, so I haven’t seen this style of threat model before. I will need to read up on it before I can give more constructive feedback.

I know this isn’t what you asked me for, but I need to know how this bit works.

Am I correct in these assumptions?

  • All messages are stored on the main chain
  • I have my private key and can always generate a new viewing key to retrieve the messages off the blockchain if need be.

Also tangential but kinda relevant:

That’s one thing Snowden and I agree on. OTR all the way.

This is pretty easy for anyone with a modicum of tech savvy to set up. I’m surprised more people don’t use it or haven’t heard of it.

First, thanks for any attention at all you can give this! It means a lot!

It’s probably a good thing to write something up using this methodology too. I’d be happy to start something if it’s helpful. There are still some design decisions we have to make though about how private groups work, and how everything works on iOS.

Also, the question I’m most looking to answer is “are the above requirements expressed coherently and are there any requirements we should probably include for this use case that we haven’t included?” since we still have to build the thing, be audited, fix things, and so on.

Right now, most messages are stored on the main chain, and we use viewing keys for group chats.

However, the requirements that 1) it must be possible to delete messages and that 2) since users need an end-to-end encrypted replacement for Slack latency must be low, are pushing us to do messaging off chain. So we’re building our own off-chain messaging solution built on Tor and IPFS (in a closed network belonging only to members) and soon no messages will be on chain.

Each team (what Slack would call a “workspace” and Discord would call a “server”) will be a group of users connected via Tor v3 onion addresses that they only use for that team.

Every message they send will be shared with all users in their team, using a gossip network.

For DMs or private channels, messages are still sent to everyone in the team but the message is encrypted to the recipients.

Once we have teams big enough that everyone receiving every message would be a problem, there are easy ways to make message routing more efficient, but we want to leverage every peer’s ability to store and forward messages as much as we can.
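The store-and-forward gossip idea described above can be illustrated with a toy sketch (all names hypothetical; the real plan uses Tor and IPFS rather than direct method calls): each peer keeps every message it has seen, deduplicates by hash, and relays new messages to its neighbors, so any online peer can backfill the others.

```python
# Toy gossip sketch: peers store every message and flood new ones to
# neighbors; a seen-hash check stops the flood from looping forever.

import hashlib
import json

class Peer:
    def __init__(self, name):
        self.name = name
        self.store = {}      # message hash -> message
        self.neighbors = []  # other Peer objects in the team

    def receive(self, message):
        h = hashlib.sha256(
            json.dumps(message, sort_keys=True).encode()
        ).hexdigest()
        if h in self.store:
            return  # already seen: stop relaying here
        self.store[h] = message
        for peer in self.neighbors:
            peer.receive(message)  # relay to every neighbor

alice, bob, carol = Peer("alice"), Peer("bob"), Peer("carol")
alice.neighbors = [bob]
bob.neighbors = [alice, carol]
carol.neighbors = [bob]

alice.receive({"from": "alice", "text": "hello team"})
assert len(bob.store) == 1 and len(carol.store) == 1
```

In this sketch every message reaches every peer, matching the “everyone receives everything” tradeoff above; routing efficiency would come from replacing the flood with something more selective.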

We’re okay with the tradeoff where members theoretically can learn who is talking to who in a team, in exchange for higher message availability.

We haven’t figured out how inviting and removing members will work yet, but it might be as simple as “send them the list of members and their onion addresses” and “start a new group with everyone but the removed member.”

Hey, I haven’t had a chance to dig in and read this properly, but hot take it looks like you’ve engaged with the right people and gotten to a useful working set of invariants. Invariant-centric is a great approach. I’ve used STRIDE before as well and also asset/scenario-based modeling.

One tiny suggestion to make it easier to scan - you’ve got fine attacker definitions in terms of their capabilities, but you might want to call them NETWORK OBSERVER and KEY THIEF to be more descriptive. Up to you, just a suggestion.

Couple of scenarios that stood out: I think you probably don’t want MEMBERs to be able to degrade app security or attack a user’s keys. In this model a passive network observer doesn’t have the ability to actively attack a user, but there are tonnes of attacks that have historically resulted in key theft through oracles at the network layer. So to fix that you might want to either add active network attacks to the existing HACKER/ARCHIVER, make a new category, or state that requirement as a property of the MEMBERs.

Hopefully that was helpful but you’ve got my signal if not.


Thanks! I’ll make those changes re: more descriptive adversary names.

Do you think “NETWORK OBSERVER + KEY THIEF” is too awkwardly long?

Since we aren’t focused on metadata protection at this stage, and since everything will be end-to-end encrypted, it seemed to make sense to only list the adversary who can both observe and steal keys, since that’s the adversary that has a meaningful set of capabilities.

Got it. Does this edit work? (Edits in bold.)

Also, just thinking about this quickly, this does open up new worlds of things to think about in terms of message layer security.

Our current plan is to encrypt everything to the recipient’s key as if it were a Zcash memo, and to sign everything using a standard library (or using Zcash keys once that becomes possible). Any first-pass thoughts on whether this is appropriate and on what weaknesses this might leave?


Categories just come down to what you find useful for reasoning about your security. There are also sometimes attacks that can happen to network traffic (malleability attacks) that allow you to change the contents of the plaintext (by modifying the ciphertext) without knowing the key. So the ability to modify network traffic and the theft of keys are at least partially disjoint, hence having separate categories for:

  • Network observer
  • Network active attacker
  • Key thief

Since they’re different abilities, although of course we don’t want to provide a method of escalating from either network observer or active attacker to becoming a key thief.


This is really great! I like how the threat model is super clean and simple and captures the invariants in many fewer words than I could have.

Here’s a brainstorm of threat-model-related things that come to mind.

Administrator-users

If there are administrator-users with special privileges like adding/removing users, consider them as a class of adversary. Are they allowed to read other users’ DMs, for example (this might be a desirable feature if an organization requires complete backups)? Can they destroy data by deleting channels/users? A lot of the invariants that apply to MEMBER (but probably not all of them) should probably also apply to the administrator-users as well.

On the same note, whichever special administrator-type privileges there are, the threat model should state that NON-MEMBERS and MEMBERS cannot perform those actions when they shouldn’t be allowed to.

Denial of Service and malicious data destruction

You might also want to consider malicious destruction of data for other adversaries as well, e.g. NON-MEMBERs cannot cause messages to be deleted, MEMBERs cannot delete messages that they aren’t supposed to be able to, etc.

Also consider denial of service attacks more generally. An active network attacker can obviously drop packets to DoS the system, there’s no way around that, but a NON-MEMBER should not be able to prevent MEMBERs from communicating (e.g. even if spamming is acceptable, it shouldn’t be possible to crash/brick another MEMBER’s client by sending them a malicious message).

Attackers who have compromised infrastructure / component systems

Anywhere that there is special infrastructure the system depends on, for example a centralized server that’s responsible for distributing public keys, add an adversary to document what shouldn’t happen when that infrastructure gets compromised.

That leads me to a question: how are keys exchanged in the system? Is it assumed that’s happening securely out of band?

Metadata leakage

The metadata privacy properties will get really hairy because there’s going to be so many of them and so many adversary perspectives to consider that it will get hard to compress, but here’s a start at least:

  • * cannot determine that a private channel exists.
    • For example, if a software company uses new private channels to discuss security vulnerabilities, knowing that a new channel was created leaks the fact that there’s a security vulnerability.
  • * cannot determine whether two users are communicating.
    • e.g. an evil boss should not be able to tell their employees are talking with HR, etc.
  • NON-MEMBERS should not be able to determine when a MEMBER is online/active.
    • e.g. finding out when someone’s sitting at their computer by watching their online status
  • If a user is a member of two different groups, maybe MEMBERs shouldn’t be able to tell they’re the same user? (…and so, so many more invariants that I’m not thinking of!)

Phishing

Phishing-type impersonation: not necessarily sending messages as another user, but with a close-enough name and avatar that the recipient doesn’t notice it’s an attack.

Message integrity

Bambam covered this pretty well already, there are some more invariants I can think of:

  • * cannot make it look like Bob sent a message that Alice actually sent. I think “Send messages as any member except…” already captures this, but it’s a subtle distinction: changing who sent a message without needing to know that message’s content.
  • * cannot make it look like no message was sent when one actually was.
  • * cannot make it look like a message was sent twice when it was only sent once (replay attacks).
  • …and probably more.

Rather than define all of these invariants, it might be simpler to talk about “transcript consistency” and “transcript integrity”:

  1. Transcript consistency – all users see the same transcript for all groups/channels/DM rooms that they are in. If there’s ever a way to cause one user’s view of the transcript to be different than another user’s (e.g. messages are missing, in a different order, duplicated, wrong timestamp, etc.) then that’s treated as a security bug.
  2. Transcript integrity – for all users in a group/channel, the messages in the transcript that appear to have come from them at a certain point all actually came from them at that point.

#1 makes sure everyone sees the same thing, #2 makes sure what they’re seeing is actually correct.

(Note that it’s apparently technically challenging to implement transcript consistency, for reasons I’m not familiar with, since Signal doesn’t do it, but you can still aim for it in the threat model even if the protocol isn’t something you can formally prove meets a precise definition of consistency.)
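One concrete, checkable piece of transcript consistency is a hash chain, where each message commits to everything before it, so two clients can compare a single head hash to confirm they hold the same ordered transcript. A minimal sketch (hypothetical, not a full protocol):

```python
# Sketch: fold an ordered transcript into one head hash. Any difference
# in contents, sender, or order produces a different head, so clients can
# detect divergence by exchanging just the head.

import hashlib

def chain(transcript):
    """transcript: ordered list of (sender, text) messages."""
    head = b"genesis"
    for sender, text in transcript:
        head = hashlib.sha256(head + sender.encode() + text.encode()).digest()
    return head.hex()

a = [("alice", "hi"), ("bob", "hello")]
b = [("bob", "hello"), ("alice", "hi")]  # same messages, different order
assert chain(a) == chain(list(a))
assert chain(a) != chain(b)  # reordering is detectable
```

A real design would also need to handle concurrent sends and offline peers, which is where the difficulty mentioned above comes in.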


Is it typical in this model for the app creator (us) to be an adversary class? Or somebody who can subvert our release and update process?

I had this in an earlier version and then left it out to focus attention on the more informative threats, since users who use us likely trust us, and understand that downloading and running code involves trusting the app developer.

Also, we talked about this via voice but just to respond here so there’s a record of it: I think we should defer metadata protection to some future version of this threat model, because it isn’t core to what users were saying they needed, and because the tools they use now offer so little metadata protection.

For this one, we disable unicode usernames, require all usernames to be lowercase alphanumeric, and would be open to other best practices. I’ll add this to the threat model so that it’s explicit.
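The username policy mentioned here could look something like the following minimal sketch (assumed policy: lowercase ASCII letters and digits only, which also rules out homoglyph lookalikes):

```python
# Sketch: reject any username that isn't lowercase ASCII alphanumeric,
# so visually confusable names (uppercase, Unicode homoglyphs, spaces)
# can't impersonate existing users.

import re

USERNAME_RE = re.compile(r"^[a-z0-9]+$")

def is_valid_username(name):
    return bool(USERNAME_RE.fullmatch(name))

assert is_valid_username("alice42")
assert not is_valid_username("Alice")    # uppercase rejected
assert not is_valid_username("аlice")    # Cyrillic 'а' homoglyph rejected
assert not is_valid_username("al ice")   # whitespace rejected
```

This addresses lookalike names but not near-miss names like “alice1” vs. “alicel”, which would need additional UI measures.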

Are these two categories all-encompassing for active network attacks of the kind zebambam mentions? Or are there some others that we should enumerate here?

Also, thanks so much again for all this feedback!!

Here’s a new draft of the threat model!

Thank you for your feedback, @mistfpga, @zebambam, and @earthrise. I think I integrated all of it. Adding some points in a separate note.


Threat model

Given the above conclusions about the threat models and needs of the users we hope to serve, our goal is to achieve the following set of security invariants in the usage scenario described below.

(We follow the “invariant-centric threat modeling” approach outlined here: GitHub - defuse/ictm: A user-first approach to threat modeling.)

Usage scenario:

A team uses Zbay as a Slack replacement for team chat. The team has an existing secure communications channel for sending and receiving initial invitations (e.g. a Signal group).

Every team member has an authentic, non-malicious version of the Zbay app, and all team members use full-disk encryption with user-controlled keys and a strong password.

Definitions:

DELETED means any data the app has declared “deleted,” to any user, and that users have not archived using other means, for example by screenshotting chats, by inadvertently backing up app data with cloud backup tools, or by tampering with the app to block deletion. [3]

REMOVED means any team member the app has declared “removed,” to any user.

Adversaries:

OWNER is the creator of a group. A group cannot have multiple OWNERS.

MEMBER is a user who has been invited to a group by a non-malicious OWNER, with no other capabilities.

NON-MEMBER is a user who has never been invited to a group, or a user who was REMOVED by an OWNER, with no other capabilities.

DRAGNET can intercept a team’s network traffic, and archive it for later decryption.

MALWARE can access keys or messages on the device of a member VICTIM, but has no other capabilities (such as recovering deleted data from a device). This could be a malware attacker, or an attacker with physical access to the device and tools to “unlock” it.

MALWARE + DRAGNET can do everything MALWARE and DRAGNET can do, but has no other capabilities.

NETWORK ACTIVE ATTACKER can both monitor and actively attack the network (for example by blocking access to the network entirely for everyone or certain users, blocking specific pieces of data from reaching their destination, or altering data in transit) but has no other capabilities. This includes attackers who can successfully degrade or disable the Tor network.

ZECWALLET SERVER is a party with control of our Zcash lightwalletd server. This includes the Zecwallet team (https://zecwallet.co/), their hosting provider Amazon, other infrastructure providers, and any adversary that can compromise any one of these.

Security invariants:

OWNER cannot:

  • Read messages from private chats or DMs that did not include them, or cause these messages to be DELETED.
  • Read DELETED messages.
  • Send messages that appear to be from any other MEMBER, or cause the sender of any message to appear incorrectly in any way.
  • Steal Zcash funds from other MEMBERS.
  • Learn the IP address of other MEMBERS.
  • Learn which MEMBERS are communicating to each other, and when, in private chats and DMs that do not include them.
  • Cause the contents of messages sent by other MEMBERS to appear incorrectly in any way.
  • Cause any message to appear as if it was sent twice when it was only sent once.
  • Crash the app or device of MEMBERS.
  • Learn the private keys of any MEMBER.
  • Learn if a MEMBER in one group is also a MEMBER of another group.

MEMBER cannot:

  • Do anything OWNER cannot do.
  • Add or remove MEMBERS, or make anyone OWNER.

MALWARE cannot:

  • Do anything VICTIM cannot do to others or themselves. (VICTIM can be either MEMBER or OWNER.)

MALWARE + DRAGNET cannot:

  • Access any private chats or DMs that did not include VICTIM.
  • Access any DELETED messages from before they began intercepting and archiving messages.
  • Send messages that appear to be from any MEMBER except VICTIM, or cause the sender of any message to appear incorrectly in any other way.
  • Cause the contents of messages sent by other MEMBERS to appear incorrectly in any way.
  • Cause any message to appear as if it was sent twice when it was only sent once.
  • Crash the app or device of other MEMBERS.
  • Learn the private keys of any other MEMBER.

NETWORK ACTIVE ATTACKER and ZECWALLET SERVER cannot:

  • Read any group messages.
  • Send messages that appear to be from any MEMBER.
  • Send messages to any MEMBER.
  • Steal Zcash funds from MEMBERS.
  • Learn the usernames of MEMBERS.
  • Crash the app or device of MEMBERS.
  • Learn the private keys of any MEMBER.
  • Alter the contents, sender, or timestamp of any message a MEMBER sees, in any way, including by causing any message to appear as if it was sent twice when it was only sent once.

DRAGNET cannot:

  • Do anything NETWORK ACTIVE ATTACKER cannot do.
  • Degrade app functionality for any MEMBER.

NON-MEMBER cannot:

  • Do anything DRAGNET cannot do.
  • Do anything OWNER cannot do.
  • Determine when a MEMBER is online/active.

Known weaknesses:

MEMBER can:

  • Degrade app functionality for any MEMBER, e.g. by spamming, or failing to relay messages to or from a MEMBER.
  • Prevent any message (or all messages) from being DELETED without the knowledge of other users, e.g. by screenshotting it, or by archiving app data.
  • Provide an inaccurate record of their own messages to other MEMBERS, for example by altering message contents or timestamps. [2]

OWNER can:

  • Do anything a MEMBER can do.
  • Add and remove MEMBERS.

DRAGNET can:

  • Learn who is using the app.
  • Learn the IP address of any user.
  • Learn which groups any MEMBER belongs to.
  • Learn which MEMBERS are communicating to each other, and when.
  • Intercept and archive messages for later decryption by MALWARE (see MALWARE + DRAGNET.)
  • Learn details, such as sender or recipient, of any user’s Zcash transactions.[1]

MALWARE can:

  • Do anything a MEMBER can do, as VICTIM.
  • Do anything OWNER can do, if VICTIM is OWNER.
  • Send messages as VICTIM.
  • Read all non-DELETED messages readable by VICTIM, including all future messages until VICTIM is REMOVED.
  • Steal Zcash funds from VICTIM and learn details of past and future Zcash transactions.[1]
  • Learn the IP address of VICTIM.
  • Alter VICTIM’s view of all conversations in any way.

MALWARE + DRAGNET can:

  • Do anything MALWARE or DRAGNET can do.
  • Read any DELETED messages once readable by VICTIM, provided they were intercepted and archived by MALWARE.

NETWORK ACTIVE ATTACKER can:

  • Do anything DRAGNET can do.
  • Degrade app functionality for any user.
  • Interfere with Zcash transactions, within certain limits.[1]

ZECWALLET SERVER can:

  • Interfere with Zcash transactions, within certain limits.[1]
  • Learn the IP address of all app users.
  • Learn the IP address that sent or received a given Zcash transaction.
  • Learn which users are sending Zcash to each other, for all users who use Zecwallet infrastructure—including all app users as well as Zecwallet users.[1]

Notes:

  1. For more information on the security properties of Zcash transactions, see Wallet App Threat Model — Zcash Documentation 5.2.0 documentation. Please note, however, that its list of weaknesses may not be all-inclusive, since the implementation we use (GitHub - adityapk00/zecwallet-light-cli: Zecwallet Lightclient Library and CLI interface) differs in some ways from the one this document describes.

  2. We will treat any failure of transcript integrity or consistency as a security vulnerability, but because it’s unclear whether we can make strict claims that are clear and meaningful to users, we leave these claims out of this threat model. The library we’re using for storing and syncing messages attempts to achieve transcript consistency and integrity, and you can learn more about its security properties here: GitHub - orbitdb/orbit-db: Peer-to-Peer Databases for the Decentralized Web

  3. Because messages are potentially stored by every member, it won’t be possible to delete messages on-demand (e.g. when users click a delete button) for members who are offline—because there is no way to communicate to these users that messages should be deleted. This means there will inevitably be UX challenges in ensuring deletion matches user expectations. However, we can strictly adhere to the threat model above by making each client automatically report successful deletion to all members, and by telling a user that a message has been deleted only once all other member clients claim to have deleted that message. (Once all clients report deletion we can rely on the explicit assumption, in our Usage Scenario, that deletion did in fact occur.)


@zebambam I ended up using the term “MALWARE” instead of “KEY THIEF”, since it encompasses the ability to steal keys, access stored messages, and generally mess with the user’s own experience of the app. It seems like in practice these capabilities come as a package (when someone successfully attacks you with malware or unlocks your device), so it seemed like a helpful simplification for the user reading this.

I added a NETWORK ACTIVE ATTACKER adversary, and I added adversaries for all the central services, which will end up just being the lightwalletd we use (ZECWALLET SERVER.)

@earthrise — It seemed like transcript integrity and consistency didn’t fit very well within the invariant-centric approach, since for now at least these are only achievable by degrees. There are specific things we can write as invariants, but I was struggling to write them in ways that would be meaningful to users. For example, we can say things like “in cases where messages prove that they were sent after other messages, this proof is valid,” or “another member cannot retroactively insert messages before messages you have already seen,” but I think it’s too hard for users to unpack what this means in practice. Any thoughts on this?

We’re assuming secure distribution of the owner’s key out-of-band, assuming an invitation from an honest owner, using the key material the owner provides at the moment a user is invited as the source of truth, and not accepting any updated key information for any user.
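The trust-on-invite rule described here, where key material provided at invite time is the source of truth and later “updates” are rejected, could be sketched like this (hypothetical names; not the actual implementation):

```python
# Sketch: pin each member's public key at invite time and refuse any
# later attempt to replace it, so a compromised owner can't swap keys
# for existing members after the fact.

class KeyDirectory:
    def __init__(self, invite_bundle):
        # invite_bundle: {username: public_key}, delivered out of band
        # over the team's existing secure channel at invite time.
        self.keys = dict(invite_bundle)

    def key_for(self, username):
        return self.keys[username]

    def add(self, username, public_key):
        if username in self.keys:
            raise ValueError("refusing to replace a pinned key")
        self.keys[username] = public_key

d = KeyDirectory({"alice": "pk_alice"})
d.add("bob", "pk_bob")
try:
    d.add("alice", "pk_evil")  # key substitution attempt
    assert False, "should have been rejected"
except ValueError:
    pass
assert d.key_for("alice") == "pk_alice"
```

The open problem noted below, how offline members safely learn a newly joined member’s key, sits outside this sketch: pinning only protects keys you already hold.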

There’s an issue similar to transcript consistency with how users who are offline when a new user joins get the key information for that user in a way that isn’t vulnerable to a malicious owner.

So we may have to add more known weaknesses against an owner adversary.

Our current plan for this is here: Private communities like Slack or Discord (discuss more and break out tickets) · Issue #721 · TryQuiet/zbay · GitHub