Ziggurat 3.0

Hey everybody, Mark here from Equilibrium. We’re happy to announce that we have submitted a proposal for Ziggurat 3.0, our most ambitious effort yet.

In previous grants we conducted single-node network testing, which expanded into network-wide analysis. In this grant we will extend our crawler to gather more metrics and deeper insight into the state of the p2p network as a whole, even going as far as testing a theoretical “suggest best peers” method of strengthening the network. From there, we will expand the GUI to offer visualizations of network topology and performance.

Finally, all of the work from this and the past two grants will culminate in a “Red team” event on testnet, where we will investigate all of our hypothetical attack vectors and see how they affect the network as a whole.

As always, we are honored and humbled by the opportunity. See the proposal and more here.

8 Likes

Hi @aphelionz very nice. Ziggurat is an example, to me, of a well-funded Zcash grant. The team has consistently updated documentation and pushed commits (I’ve followed silently until now) in the background with no real coverage from Zcash entities.

Good luck.

Heads up: I think your proposal link needs the id so it links directly to the post URL, instead of the grid/overview page.

3 Likes

Thanks a lot @pkr! I’ll fix the gallery link.

2 Likes

Hello @aphelionz, thank you for submitting your grant proposal! We will review it in the upcoming weeks and reach out if we have any questions.

In the meantime, if you have any questions for us, you can post them to this thread or DM us at @ZcashGrants.

Thank you.

1 Like

Our pleasure, @aiyadt. Thanks for the opportunity.

Looks great! Thank you so much for all the help you’ve given the Zebra team so far.

I would be interested in some more details for this part of the grant:

Using the crawler to provide nodes with lists of peers that would be most beneficial to the structure/goals of the network

I can imagine this might lead to some security issues, such as:

  1. A centralised crawler instance going down or being subverted, and providing peers that serve an attack chain or deny service
  2. The shared algorithm of all decentralised crawlers being gamed to provide peers that serve an attack chain or deny service
  3. The crawlers all providing the same peers, leading to those peers being overloaded

These issues already exist for the algorithms used by:

  • the ECC DNS seeder implementation (2/4 mainnet instances)
  • the ZF DNS seeder implementation (2/4 mainnet instances)
  • the zcashd peer crawler (most mainnet nodes)
  • zebra-network (a few mainnet nodes)

So it would be great to chat about what our peer and network goals are, and how we get a diverse range of implementations.

2 Likes

You’re very welcome, and thank you for everything on your end as well :slight_smile:

We discussed this briefly and the key is to keep things unpredictable enough so that there isn’t a viable attack vector. Some initial possibilities are:

  1. Introduce randomness or only react in the case of undesirable situations (the aforementioned islands or heavy node centrality)
  2. Use a “pac man ghost” strategy where we have multiple crawlers, each with their own task

Overall, yes, let’s talk about what the specific goals are and how we might tailor this work towards that. That will be the beacon that we follow. We can have a call or we can discuss here, whichever suits you.

I guess to start - is there a specific type of topology that you’re aiming for?

1 Like

Not particularly - there are a few different topologies that could harm particular network participants. But it’s hard to say anything specific without knowing what the current state of the network is, and how it changes over time.

So first I would ask:

  • What is the current network topology?
  • How stable is it?
  • Is the current topology acceptable, or do we need to make changes?
  • If we need to make changes, should we change nodes, DNS seeders, or something else?

1 Like

Makes total sense!

Would it make sense, then, to consider the peerlist suggestion milestone in the grant as one of many potential remedies, based on the results of the more detailed (and passive) crawler analysis?

@teor friendly ping on the above question ^

Sure!

I had assumed that most grants were flexible enough to change if the earlier milestones revealed new information.

Sure, we’re flexible on our end as well. Thanks again!

Hi @aphelionz and Zcash community! I’m on the DevSecOps team at ECC and I’ve compiled a few comments from our reviews/discussions.

The Ziggurat 3.0 proposal looks to be a comprehensive security solution.

Ziggurat/Equilibrium has been a trusted partner since before my time, and the team trusts them and is comfortable with their work. They provide expertise in cryptography and economics, along with the technical and blockchain expertise expected of a web3 security offering.

The project goes a step further with network analysis at the P2P layer instead of relying solely on RPC: anyone can fuzz an API endpoint, but Ziggurat/Equilibrium has actual blockchain expertise. This is also helpful for gathering detailed network metrics.

Focus on the network layer seems appropriate given there are no smart contracts or user-uploadable code as in, e.g., Ethereum or Solana.

Network topography metrics could facilitate increased awareness of any intended or unintended centralization.

Proposed Solution, bullet point 5 (“Using the crawler to provide nodes with lists of peers that would be most beneficial to the structure/goals of the network”) is potentially effective at identifying and removing malicious nodes.

Proposed redteaming exercise could confirm these mitigations.

Currently the majority of the work exists in two GitHub repositories:

  1. https://github.com/runziggurat/zcash-gui

The second link is broken.

A privacy concern:

“Anonymized” topography data such as connection speeds, cloud status, and other stats that tools like nmap might reveal.

This mainly concerns me from the standpoint of undermining Zcash privacy features. As long as the data is not gathered or exposed in a way that makes deanonymization possible, it should be ok.

Historical metrics in the GUI would indeed be useful, especially for on-call responders.

As would the Intelligent Peer Sharing Option, as long as it properly mitigates centrality.

Given their past experience with Zcash, overall blockchain/crypto body of work, and liaison with the developers, I believe we can trust their red teaming exercise to be appropriately thorough and tailored to our project. If enough testnet nodes can be coordinated, it could be quite a valuable simulation.

The unintended consequences listed are valid concerns, and in line with any other security offering. While the concern about weaponization of scanners is valid, as with many other open-source security tools, the benefits of leaving the code open source likely outweigh the downsides.

The “risks and mitigations” reflect the difficulty of the project. In summary, this is a HUGE undertaking which, if done correctly, would significantly strengthen the security posture of the Zcash network.

8 Likes

Thanks for the feedback @bbeale. I’m looking into making the runziggurat/zcash-gui repo public now, but if you want to just see the current GUI, you can see it here: Ziggurat Explorer

2 Likes

@aphelionz & Zcash Community, I am happy to announce that the @ZcashGrants Committee has unanimously voted to approve the Ziggurat 3.0 grant.

5 Likes

Great news! We’re so honored to be able to continue the work. Thank you all.

5 Likes

Hello! We’re pleased to report that our first milestone is finally complete. It’s a hefty one, containing the deliverables for:

  1. A survey and analysis of network topology and topography that led to a P2P Network Visualizer
  2. A “back tested” Intelligent Peer Sharing (IPS) mechanism
  3. A brand-new tool in the Ziggurat toolbox called crunchy, which performs the heavy-lifting data analysis on the crawler output
  4. A healthy set of network crawler updates required by the above

Here is a flowchart that shows the full network-wide section of the “Ziggurat Stack”:

Starting in the upper left with the Zcash Network:

  1. The crawler scans the network and generates a sample.json file.
  2. A separate process (now called crunchy) performs detailed data analysis on the crawler output and produces its own state.json.
  3. Both IPS and p2p-viz consume state.json:
    • IPS produces peer.json, which could be used by the crawler to “suggest” suitable peers to other nodes
    • p2p-viz displays visualizations of the data via a WebGL/WebGPU-powered browser application
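The dataflow above can be sketched as three stages passing JSON files between them. This is a hypothetical Python stand-in for illustration only: the real crawler, crunchy, and IPS are separate programs, and the function bodies and field names here are assumptions, not their actual formats (only the file names sample.json, state.json, and peer.json come from the post).

```python
import json

# Hypothetical stand-ins for the three stages; field names are assumed.
def crawl() -> dict:
    # Stage 1: the crawler scans the network and emits sample.json.
    # 203.0.113.x are reserved documentation addresses, not real nodes.
    return {"nodes": [{"addr": "203.0.113.1:8233"}, {"addr": "203.0.113.2:8233"}]}

def crunch(sample: dict) -> dict:
    # Stage 2: crunchy analyzes the raw sample and emits state.json.
    return {"node_count": len(sample["nodes"]), "nodes": sample["nodes"]}

def ips(state: dict) -> dict:
    # Stage 3: IPS ranks peers from state.json and emits peer.json.
    return {"suggested_peers": [n["addr"] for n in state["nodes"]]}

# Wire the stages together through the JSON files named in the flowchart.
with open("sample.json", "w") as f:
    json.dump(crawl(), f)
with open("state.json", "w") as f:
    json.dump(crunch(json.load(open("sample.json"))), f)
with open("peer.json", "w") as f:
    json.dump(ips(json.load(open("state.json"))), f)

print(json.load(open("peer.json"))["suggested_peers"])
# → ['203.0.113.1:8233', '203.0.113.2:8233']
```

The point of the sketch is the file hand-off: each stage only reads the previous stage's JSON, so any of them (e.g. p2p-viz alongside IPS) can consume state.json independently.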

Deliverable Breakdown

What follows are details about the tangible deliverables, and links to them when appropriate.

Crawler output and Visualizer

You can see the staging demo of the visualizer here: http://35.210.188.30:3000. You will need to load the HTTP version of the page and then click Geolocation → Load Default State.

If you don’t want to do that for any reason, you can see the p2p-viz screenshots below, copied from the README.

Understanding these screenshots is a little easier if you understand the following networking concepts:

  • degree – a count of how many direct, “one hop” connections a node has to other nodes in the network.
  • betweenness – broadly, this tells us how often a node lies on a path between other network nodes. It is computed by identifying all the shortest paths and then counting how many times each node falls on one.
  • closeness – this measure calculates the shortest paths between all nodes, then assigns each node a score based on its sum of shortest paths. This is less relevant here, as neither the density nor the sparseness of a network is intrinsically bad. IPS tries to keep its own centrality high and connect to peers with high closeness (if the MCDA weights allow it).

In general, a lower delta between the min and max of these metrics is ideal. These, as well as other metrics and factors, are described further in the IPS documentation.
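To make the definitions concrete, here is a small self-contained Python sketch computing all three metrics on a toy four-node path graph. This illustrates the graph-theory concepts only, not Ziggurat's actual analysis code; the simple betweenness count is exact here only because shortest paths in this toy graph are unique.

```python
from collections import deque

# Toy undirected path graph: A - B - C - D (shortest paths are unique).
edges = [("A", "B"), ("B", "C"), ("C", "D")]
adj = {}
for u, v in edges:
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)

def bfs_distances(src):
    # Hop distances from src to every reachable node.
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    return dist

def shortest_path(s, t):
    # Reconstruct the (unique, in this toy graph) shortest path via BFS parents.
    parent = {s: None}
    q = deque([s])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in parent:
                parent[w] = u
                q.append(w)
    path = [t]
    while parent[path[-1]] is not None:
        path.append(parent[path[-1]])
    return path[::-1]

# degree: number of direct one-hop connections.
degree = {v: len(adj[v]) for v in adj}

# closeness: (n - 1) divided by the sum of shortest-path distances.
n = len(adj)
closeness = {v: (n - 1) / sum(bfs_distances(v).values()) for v in adj}

# betweenness: how often a node sits strictly inside a shortest path.
nodes = sorted(adj)
betweenness = {v: 0 for v in adj}
for i, s in enumerate(nodes):
    for t in nodes[i + 1:]:
        for v in shortest_path(s, t)[1:-1]:
            betweenness[v] += 1

print(degree)       # {'A': 1, 'B': 2, 'C': 2, 'D': 1}
print(betweenness)  # B and C each sit inside two shortest paths
print(closeness["A"])  # 0.5
```

On real graphs with multiple equally short paths, betweenness needs fractional counting (e.g. Brandes' algorithm); the brute force above is just for intuition.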

Here, a node in Barcelona is selected. Node metadata is displayed in the upper right corner. The keyboard command overlay and fps display are both active.

A node in Dublin is selected, and the display of connections is activated.

A histogram is displayed for the betweenness centrality.

We group several nodes at the same geolocation into a supernode, which is displayed as a magenta cube. The subnodes which comprise a supernode may be viewed by clicking on a selected supernode.

This is a rendering using a force-directed layout library in 3D. The node coloring is based on degree centrality, i.e. the number of connections.

Intelligent Peer Sharing (IPS)

IPS is fully implemented, but it has not been run on mainnet and thus remains mostly theoretical. However, when we run it on the state.json files that we have, we observe:

  • There are no islands
  • Attacking (removing) some % of the top nodes cannot fragment the network
  • During network optimization we’ve achieved an increasing minimal degree (that’s good)
  • A decreasing maximal degree, as well as a lower delta (that’s also good)
  • A decreased max betweenness, but not an increased minimum one (still good, but not as good)
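The "attacking the top nodes" check can be illustrated with a short Python sketch: remove the k best-connected nodes from a toy graph and count how many connected components remain. This is a hypothetical illustration of the idea, not the actual IPS back-testing code.

```python
from collections import deque

# Toy undirected graph; edges: 0-1, 0-2, 0-3, 1-2, 2-3, 3-4.
adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1, 3}, 3: {0, 2, 4}, 4: {3}}

def components(adj):
    # Count connected components via BFS.
    seen, comps = set(), 0
    for s in adj:
        if s in seen:
            continue
        comps += 1
        seen.add(s)
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    q.append(w)
    return comps

def remove_top_degree(adj, k):
    # Simulate an attack: delete the k highest-degree nodes.
    top = set(sorted(adj, key=lambda v: len(adj[v]), reverse=True)[:k])
    keep = set(adj) - top
    return {v: adj[v] & keep for v in keep}

print(components(adj))                        # 1: fully connected
print(components(remove_top_degree(adj, 1)))  # 1: survives losing one hub
print(components(remove_top_degree(adj, 2)))  # 2: fragments
```

In this toy graph, removing the two best-connected nodes does fragment it; the IPS observation above is that the optimized topology resists exactly this kind of attack.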

The crawler can serve the peerlist as a response to a getaddr request. However, this is not released yet, and this is the PR to be taken into consideration.

Other Documentation

  • The test directory of crunchy is useful for seeing how it works on a technical level
  • Crawler results are located in the results directory
  • We have a machine-readable version of this graph output from the crawler; you can download a gzipped version of it here
  • Different graph measures are available in the IPS report
  • Information about island formation is available in the IPS report
  • A peer proposition for each node is available as crunchy (IPS) output
  • Peer statistics are also available in the IPS report
  • A snapshot of the data from its latest crawl: zcash/2023-03-09.json.gz at main · runziggurat/zcash · GitHub
  • CI/CD details can also be found in the .github folders of the above linked repos
  • Contributor-level documentation focused on development efficiency is found in the quick guide
  • User documentation (install / getting started) can be found in the README files of most of the repos at Ziggurat · GitHub

10 Likes

Hello again. We’re pleased to report that our second milestone (CI/CD updates) for Ziggurat 3.0 is complete.

In this post we go over the milestone deliverables in detail, and also report on other Zcash-related work we completed that was not directly part of the milestone.

Milestone 2 Deliverables

The original grant milestone reads:

Currently, two GitHub Action workflows that run every 24 hours: one to run the Ziggurat test suite for zcashd and one for Zebra. Additionally, the network crawler runs at the same cadence.
Each of these workflows outputs timestamped json or jsonl files to the results folder, which are then ingested to the UI via the GitHub API.
We would like to continue the “nightly” testing but also add special cases whenever new zcashd and Zebra releases are tested. The intent is to be able to apply special testing and GUI display for the releases.

As we started on this work, we quickly realized the requirements for this milestone were slightly misguided and incomplete: we already test every version by way of our nightly runs, and we track version numbers in our UI. Thus, the actual work took a slightly different, but ultimately more meaningful, direction.

So here, we present a full overhaul of our CI/CD system.

Google Cloud Storage for workflow results

Previously, we had stored our results in GitHub as JSON files. When that became unwieldy, we gzipped them. However, our data processing is now becoming much more sophisticated and complex (particularly for Zcash), so we finally moved to a cloud-based solution.

Right now, access to these data sets is gated via a Google Cloud service account, but we plan on making them publicly accessible again soon.

We also cleared our Git history to make clone and pull times much shorter.

The Ziggurat Core Workflows

These workflows are shared across all network implementations. They include a build step, data processing, a diff with previous results, and standard static analysis.

Node Building

This core workflow provides a dead-simple baseline for the Ziggurat implementations: check out the code, build it, and upload it as an artifact. It allows command-line argument passthrough as a workflow parameter.

This is then extended in the Zcash-specific Ziggurat repo here.

Data Processing

This workflow takes output from the Ziggurat crawler and readies it for crunchy-based analysis. Currently it only uploads the files to GCS, but this step can be extended with any other pre-processing we might need.

The workflow is parameterized with name, extension, and repository so that it can be similarly extended in the implementation repos, as Zcash does.

Diff With Previous

This seemingly simple workflow ends up being extremely useful for us: it diffs the previous test results against the current ones using a clever bash function in the workflow.

Like the “Node Building” step above, this is extended in the Zcash-specific Ziggurat repo.
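As a rough illustration of what such a diff produces, here is a minimal Python sketch that compares two crawler result snapshots and reports which nodes appeared or disappeared. The actual workflow uses a bash function; the field names and structure here are assumptions for illustration.

```python
import json

def diff_results(previous: dict, current: dict) -> dict:
    # Compare node address sets between two result snapshots.
    prev = {n["addr"] for n in previous["nodes"]}
    curr = {n["addr"] for n in current["nodes"]}
    return {
        "added": sorted(curr - prev),
        "removed": sorted(prev - curr),
        "unchanged": len(prev & curr),
    }

# 203.0.113.x are reserved documentation addresses, not real nodes.
previous = {"nodes": [{"addr": "203.0.113.1:8233"}, {"addr": "203.0.113.2:8233"}]}
current  = {"nodes": [{"addr": "203.0.113.2:8233"}, {"addr": "203.0.113.3:8233"}]}

print(json.dumps(diff_results(previous, current)))
# {"added": ["203.0.113.3:8233"], "removed": ["203.0.113.1:8233"], "unchanged": 1}
```

A diff like this makes nightly churn visible at a glance, which is the value the workflow provides regardless of whether it is implemented in bash or elsewhere.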

Static Analysis

This step performs the typical linting required as a first quality gate for the Ziggurat code. We are standardized on Rust across all of our repos, so this is easily modularized into a baseline workflow.

Note: Nix-based Development Environments Introduced

For the static analysis steps, we are experimenting with using Nix-based development environments. This allows for consistent environments with all the necessary dependencies to be pre-shipped for testing, as well as cached to further speed up testing runs.

Other Work

In addition to the milestone requirements, and beyond the usual internal housekeeping and bug fixes, we have also completed the following work:

Crunchy updates to filter out non-Zcash nodes.

It came to our attention during a recent Arborist call that many of the nodes in the Zcash network were only part of the p2p network, and not part of the consensus network for the ZEC cryptocurrency. We have since filtered them out.

P2P-Viz updates

The above filtering was also built into the p2p-viz GUI. We also added some new filtering options. You can see it all in action here: http://34.117.79.143/

DNS seeders support

Our crawler can now gather initial peers from DNS seeders. This simplifies the CI/CD workflow, as the crawler takes its peer set directly from those seeders instead of spawning a local node just to obtain the first peers. Moreover, the crawler can use multiple DNS seeders, checking their availability and logging any that appear to be down; downed seeders are reported as error entries in the crawler output log.

Ziggurat has a domain!

For those of you who clicked the “UI” link in the previous section, you might have noticed that the Ziggurat GUI has some shiny new lettering: https://app.runziggurat.com. Website coming soon!

3 Likes

This is neat!

3 Likes

I may ask for some help from you with building this! I don’t know yet; I’m gonna follow these workflows and see if I can get one of them to work hehe