Zebra 1.0.0 Stable Release

To clarify, I wasn’t contesting using checkpoints, I was contesting trusting the data within checkpoints to be accurate.

Zebra requires a checkpoint shortly after Canopy, because it lets us skip implementing some legacy consensus rules.

So Zebra is not currently usable to verify the historical blockchain, solely the current blockchain as defined by social consensus, trusting old proofs to have been valid at the time because of their inclusion in the social consensus. That’s… unfortunate, yet I understand why. Is implementing this verification planned?

Using checkpoints for older blocks is more secure than full verification,

While I agree checkpoints increase security, I do not believe trusting old blocks to have been correct is more secure than independently verifying them, especially when one of the main points of an independent implementation is to detect implementation faults. By not performing an independent implementation of the legacy blockchain, it’s trusting the traditional Zcash software to have been correct, and trusting the social consensus of the current blockchain to only have included technically valid blocks. That last part is something I’m explicitly unappreciative of.

Is there a specific reason you want to turn off checkpoints? Or is it just to compare performance using the same operations?

I asked here as I thought it’d be optimal to understand performance. Now, as a person and not as a potential integrator, I’m shifting to discussing it for the purposes of actually independently verifying the Zcash protocol. While social consensus does have Zebra following the correct protocol, without verifying old blocks, Zebra is unable to verify social consensus was properly formed in the past, leaving room for uncaught inconsistency between the technical consensus and social consensus, which, at worst, would demonstrate a prior undiscovered protocol break (not that I believe that likely).

Re: block intake, not to dismiss that as a solid optimization (full support for it), my intention was to figure out memory usage during real-time operations (where blocks arrive once every few minutes). I believe your logs are still from during the sync process? As I’m unsure how else it would have had 3 new blocks within that time frame. Thank you for them though. A few hundred ms of CPU load is manageable.

And thank you for the rough memory information. Now I just have to find the numbers for Zcashd :sweat_smile:

That’s a legitimate concern - we want to generate checkpoints securely.

Currently we generate checkpoints using blocks fully verified by Zebra. We have also generated them from zcashd in the past. (We use a Rust tool that makes RPC calls.)

We’re open to making changes to this process to make it more secure.

You can disable consensus.checkpoint_sync to do full verification from just after Canopy (the end of the ZIP-212 grace period) onwards:

[consensus]
checkpoint_sync = false

https://doc.zebra.zfnd.org/zebra_consensus/struct.Config.html#structfield.checkpoint_sync

We don’t have any plans to implement full verification before Canopy, but we can take this feedback on board. It would also be possible to verify most of the rules before Canopy, and just skip the ones we haven’t implemented yet. But either way we would need some changes to our verifiers.

That’s a good point, there is a small risk that technically invalid blocks have been mined.

We discovered multiple consensus rule inconsistencies during Zebra development. You can see a partial list here (we didn’t tag all of them):
https://github.com/ZcashFoundation/zebra/issues?q=label%3AS-needs-spec-update

In most cases the specification was changed to match the chain. As far as I remember, they were all technicalities around how specific edge cases were handled. There was nothing that impacted any money.

In practice, both Zebra and zcashd make it difficult to do rollbacks past the last 100 blocks, and even harder to roll back past the last network upgrade. So that has influenced our development priorities.
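To illustrate that constraint, a reorg-depth guard might look like the sketch below. This is a hypothetical illustration, not Zebra's actual code; the names and the exact limit semantics are assumptions based on the ~100-block finality horizon described above.

```rust
/// Maximum depth of a chain reorganization the node will accept
/// (assumed value, mirroring the ~100-block horizon mentioned above).
const MAX_REORG_DEPTH: u32 = 100;

/// Returns true if rolling back from `tip_height` to `fork_height`
/// is shallow enough to be accepted. Hypothetical guard logic.
fn rollback_allowed(tip_height: u32, fork_height: u32) -> bool {
    fork_height <= tip_height && tip_height - fork_height <= MAX_REORG_DEPTH
}

fn main() {
    // A 50-block rollback is fine; a 150-block rollback is rejected.
    assert!(rollback_allowed(2_000_000, 1_999_950));
    assert!(!rollback_allowed(2_000_000, 1_999_850));
    println!("reorg guard ok");
}
```

Rolling back past a network upgrade would additionally require reverting upgrade-specific state, which is why it is even harder in practice.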

Zebra had definitely finished its initial sync; we have logs that show sync progress every minute.

It’s possible to download 3 new blocks during a chain fork. Zebra verifies all recent forks in parallel. So if miners are operating on multiple chains, it will download the whole other chain as soon as it finds it.

1 Like

This isn’t true for the in-protocol checkpoints as you can’t verify Canopy, per my understanding.

Thank you for clarifying checkpoint_sync=false as an assumevalid=false equivalent. I guess my requested benchmark would shift to a post-Canopy assumevalid=false, checkpoint_sync=false comparison.

As for the list of inconsistencies, fascinating to have :smiley: I assumed there’d be quite a few, both due to Bitcoin’s archaic legacy and the lack of a prior independent implementation like Zebra. Being able to view a catalogue of some of them is great.

I did assume the spec would be adjusted to meet the social consensus and would not stand against such a decision. It’s solely a desire for the technical spec to match whatever our modern social consensus requires.

Thank you for clarifying your node had a chain fork occur, hence the burst. If you do find a log for a single block, it’d be appreciated, yet the on-fork timing is distinctly appreciated as well.

2 Likes

Almost all our previous checkpoints were generated from a fully verifying zcashd instance. (Sorry I wasn’t clear about this!)

We generated checkpoints using zcashd up until at least January 2023 and zebra 1.0.0-rc.4. These checkpoints include the Canopy and NU5 activation checkpoints, and a significant number of additional checkpoints.

Then we implemented checkpoints from Zebra, and did cross-checks with zcashd:

If any older checkpoints changed in a checkpoint PR, the author or reviewer would notice, and open a bug ticket. And anyone is welcome to verify our checkpoints using zcashd:
https://github.com/ZcashFoundation/zebra/blob/main/zebra-utils/README.md#zebra-checkpoints
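As an illustration of what such a cross-check could look like, here is a small sketch that parses and compares checkpoint lists. The `height hash` line format and the validation rules here are assumptions for illustration; this is not the actual zebra-checkpoints tool.

```rust
/// Parse one `height hash` checkpoint line (assumed format) into its parts.
/// Returns None if the line is malformed.
fn parse_checkpoint(line: &str) -> Option<(u32, String)> {
    let mut parts = line.split_whitespace();
    let height: u32 = parts.next()?.parse().ok()?;
    let hash = parts.next()?;
    // Block hashes are 32 bytes, hex-encoded (64 characters).
    if hash.len() == 64 && hash.chars().all(|c| c.is_ascii_hexdigit()) {
        Some((height, hash.to_string()))
    } else {
        None
    }
}

/// Cross-check two independently generated checkpoint lists line by line,
/// e.g. one produced from Zebra and one from zcashd.
fn lists_match(a: &str, b: &str) -> bool {
    a.lines().map(str::trim).eq(b.lines().map(str::trim))
}

fn main() {
    let hash = "0".repeat(64); // placeholder hash, not a real block hash
    let line = format!("1046400 {hash}");
    assert_eq!(parse_checkpoint(&line), Some((1_046_400, hash)));
    assert_eq!(parse_checkpoint("not a checkpoint"), None);
    assert!(lists_match("1 aa\n2 bb", "1 aa\n2 bb"));
    println!("checkpoint lists consistent");
}
```

If a regenerated list diverged from the committed one at any height, a diff like this would surface it immediately, which is the property the review process above relies on.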

Our integration tests provide live checks that our verification matches zcashd miners, and we regularly run a full mainnet sync to verify our checkpoints and full verification against the social consensus chain.

3 Likes

Join us Thursday 6/29 @ 21:00 UTC!

The ZF engineering team will be discussing & showcasing Zebra

:zebra: Launch Meeting - Zoom :zebra:

4 Likes

NCC Group have published the audit report!

/cc @kayabaNerve

8 Likes

Is the ECC going to begin developing on Zebra as soon as it reaches feature parity?

1 Like

I’m no expert in node stuff, but I was wondering if it would be possible to make the node download most of the chain in one go, maybe 90-95% of the full blockchain data, and then start syncing once most of it is downloaded.

Because right now it seems it downloads bits of it and then syncs at the same time. Is this correct?

Or is that not possible, or would it have no speed benefit?

Syncing is the act of downloading blocks and verifying the chain, which has to be done in real time so you know whether or not a block and its associated chain are correct. You could download a trusted copy of the chain and basically do that if you knew somebody with a copy. Otherwise it’s the equivalent of (re)indexing in zcashd, and it only needs doing once correctly, without any rescans, since there’s no internal wallet.

2 Likes

This is a neat idea. I suspect it wouldn’t make much difference, because Zebra kind of already does this indirectly: it downloads blocks in parallel as previous blocks are validated. But the overall result would depend on whether it’s faster to download or to validate a block. I think validating takes longer, so it’s very likely that validation won’t need to wait for a block to be downloaded (which is what would improve if blocks were pre-downloaded).

4 Likes

Zebra 1.0.1 seems to use much less (50%) CPU and a bit less (20%) RAM so far. Also sync time seems 2x faster for now, but I can’t say if that’s because of the update or not. Anyway, great job devs.

7 Likes

Thanks for reporting this bug! It turns out we had an async task using 100% CPU most of the time, which caused all sorts of problems.

That’s fixed in Zebra v1.0.1, please upgrade:

You can see me demoing the progress bars in last week’s recording of the Zebra showcase. We’re still waiting for our progress bar library to release fixes to a panic bug, so they’re not quite ready for use yet.

6 Likes

@arya2

4 Likes