Hello again. We’re pleased to report that our second milestone (CI/CD updates) for Ziggurat 3.0 is complete.
In this post we go over the milestone deliverables in detail and report on other ZCash-related work we completed outside the milestone’s scope.
Milestone 2 Deliverables
The original grant milestone reads:
Currently, two GitHub Actions workflows run every 24 hours: one to run the Ziggurat test suite for zcashd and one for Zebra. Additionally, the network crawler runs at the same cadence.
Each of these workflows outputs timestamped JSON or JSONL files to the results folder, which are then ingested into the UI via the GitHub API.
We would like to continue the “nightly” testing but also add special cases whenever new zcashd and Zebra releases are tested. The intent is to be able to apply special testing and GUI display for the releases.
As we started on this work, we quickly realized the requirements for this milestone were slightly misguided and overall incomplete: we already test every version by way of our nightly runs, and we track version numbers in our UI. Thus, the actual work took a slightly different, but ultimately more meaningful, direction.
So here, we present a full overhaul of our CI/CD system.
Google Cloud Storage for workflow results
Previously, we stored our results in GitHub as JSON files. When that became unwieldy, we gzipped them. However, our data processing is now becoming much more sophisticated (particularly for ZCash), so we finally moved to a cloud-based solution.
Right now, access to these data sets is gated via a Google Cloud service account, but we plan on making them publicly accessible again soon.
We also cleared our Git history to make clone and pull times much shorter.
The Ziggurat Core Workflows
These workflows are shared across all network implementations. They include a build step, data processing, a diff with the previous run, and standard static analysis.
Node Building
This core workflow provides a dead-simple baseline for the Ziggurat implementations: check out the code, build it, and upload it as an artifact. It also allows command-line argument passthrough as a workflow parameter.
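To illustrate, here’s a minimal sketch of what such a reusable build workflow can look like. The input name, artifact name, and action versions are illustrative assumptions, not the exact contents of the real workflow:

```yaml
# Sketch of a reusable "build the node" workflow. Input and artifact
# names here are illustrative, not the actual ones used in the repo.
name: build-node

on:
  workflow_call:
    inputs:
      build-args:
        description: "Command line arguments passed through to the build"
        required: false
        type: string
        default: ""

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      # Check out the code...
      - uses: actions/checkout@v3
      # ...build it...
      - name: Build
        run: cargo build --release ${{ inputs.build-args }}
      # ...and upload the result as an artifact.
      - name: Upload artifact
        uses: actions/upload-artifact@v3
        with:
          name: node-binary
          path: target/release/
```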
This is then extended in the ZCash-specific Ziggurat repo here.
Data Processing
This workflow takes output from the Ziggurat crawler and readies it for crunchy-based analysis. Currently it only uploads the files to GCS, but this step can be extended with any other pre-processing we may need.
The workflow is parameterized with name, extension, and repository so that it can be similarly extended in the implementation repos, as ZCash does.
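As a hedged sketch, a parameterized pre-processing workflow along these lines might look as follows. The input names follow the description above, while the bucket name, secret name, and action versions are assumptions:

```yaml
# Sketch of the parameterized pre-processing workflow. The bucket and
# secret names are hypothetical placeholders.
name: process-results

on:
  workflow_call:
    inputs:
      name:
        type: string
        required: true
      extension:
        type: string
        required: true
      repository:
        type: string
        required: true
    secrets:
      GCP_CREDENTIALS:
        required: true

jobs:
  upload:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
        with:
          repository: ${{ inputs.repository }}
      # Authenticate against Google Cloud with a service account key.
      - uses: google-github-actions/auth@v1
        with:
          credentials_json: ${{ secrets.GCP_CREDENTIALS }}
      # Upload the crawler output to the results bucket.
      - uses: google-github-actions/upload-cloud-storage@v1
        with:
          path: results/${{ inputs.name }}.${{ inputs.extension }}
          destination: ziggurat-results-bucket
```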
Diff With Previous
This seemingly simple workflow ends up being extremely useful for us: it diffs the previous test results against the current ones using a clever bash function in the workflow.
Like the “Node Building” step above, this is extended in the ZCash-specific Ziggurat repo.
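As a rough sketch of the core idea (the actual bash function in the workflow differs, and the results directory and file extension are assumptions), a diff step could look like this:

```yaml
jobs:
  diff:
    runs-on: ubuntu-latest
    steps:
      - name: Diff current results against the previous run
        run: |
          # Illustrative stand-in for the actual bash function:
          # diff the two most recent timestamped result files.
          diff_with_previous() {
            local current previous
            current=$(ls -1 results/*.jsonl | sort | tail -n 1)
            previous=$(ls -1 results/*.jsonl | sort | tail -n 2 | head -n 1)
            diff "$previous" "$current" || true
          }
          diff_with_previous
```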
Static Analysis
This step performs the typical linting required as a first quality gate for the Ziggurat code. We have standardized on Rust across all of our repos, so this is easily modularized into a baseline workflow.
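A baseline lint gate for a Rust repo typically boils down to something like the following sketch, assuming the standard cargo toolchain; the real workflow may pin toolchain versions or add further checks:

```yaml
jobs:
  static-analysis:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      # Fail the job if the code is not rustfmt-formatted.
      - name: Check formatting
        run: cargo fmt --all -- --check
      # Treat all clippy warnings as errors.
      - name: Lint
        run: cargo clippy --all-targets -- -D warnings
```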
Note: Nix-based Development Environments Introduced
For the static analysis steps, we are experimenting with Nix-based development environments. These give us consistent environments with all the necessary dependencies pre-installed for testing, and they can be cached to further speed up test runs.
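As an illustrative sketch only (assuming a flake.nix in the repo and a hypothetical Cachix cache for caching), the lint job above could run inside the Nix dev shell like so:

```yaml
jobs:
  static-analysis:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      # Install Nix with flakes enabled.
      - uses: cachix/install-nix-action@v20
        with:
          extra_nix_config: |
            experimental-features = nix-command flakes
      # Hypothetical binary cache to speed up repeated runs.
      - uses: cachix/cachix-action@v12
        with:
          name: ziggurat-cache
      # Run the lints inside the dev shell defined by flake.nix.
      - name: Lint
        run: nix develop --command cargo clippy --all-targets -- -D warnings
```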
In addition to the milestone requirements, and beyond any required internal housekeeping and bug fixes, we also completed the following work:
Crunchy updates to filter out non-ZCash nodes
It came to our attention during a recent Arborist call that many of the nodes in the ZCash network were only part of the p2p network, and not part of the consensus network for the ZEC cryptocurrency. We have since filtered them out.
This filtering was also built into the p2p-viz GUI, along with some new filtering options. You can see it all in action here: http://220.127.116.11/
DNS seeder support
Our crawler can now gather initial peers from DNS seeders. This simplifies the CI/CD workflow, as the crawler takes the peer set directly from those seeders instead of spawning a local node just to obtain the first peers. Moreover, the crawler can use multiple DNS seeders, checking their availability and logging any that appear to be down. Downed seeders are reported as error entries in the crawler’s output log.
Ziggurat has a domain!
For those of you who clicked the “UI” link in the previous section, you might have noticed that the Ziggurat GUI has some shiny new lettering: https://app.runziggurat.com. Website coming soon!