Sapling advocacy infographic opportunity

With tonight’s Sapling activation, and growing wallet support in coming months, there’s going to be strong continuing discussion & media coverage of the Sapling performance improvements.

Many have used, and are likely to keep using, this graphic from the Zcash ‘Cultivating Sapling’ blog post:

Unfortunately, these radial gauges are weak at communicating the magnitude of the changes, and the usability implications. For example, there’s no proportional contrast between the large and small RAM consumption - just pegged needles pointing roughly 180° apart. The ~90° difference between the duration needles is similarly arbitrary, an artifact of the chosen ranges. And the dominating red-yellow-green colored regions imply some ‘acceptability’ judgement, but one based on unclear criteria and unrelated to the space taken up.

This same information, in another graphic presentation, could be more impactful, and such a graphic would likely get wide re-use among those reporting/blogging/tweeting about Sapling improvements.

I have some strong ideas for this alternative view, but not quite the graphical skills & time to get it right. So I wanted to share ideas here for those who might be able to take it up.

In particular:

  • Durations are well-indicated linearly, such as by horizontal bars of contrasting proportional lengths. (Time is often naturally understood graphically as a left-to-right ‘timeline’.)

  • Resource utilization, as with RAM, is well-indicated as contrasting, inscribed areas. (This matches the physicality of memory mediums, in chips or on storage-media surfaces, and the need for implementations to ‘fit inside’ a certain amount of ‘memory space’.)

But even further:

  • What is an acceptable transaction delay for payment applications? Well, the chip-card standard now rolled out at retail terminals in the US is called “EMV”. UL did a real-world study of chip-card transaction times, and found an average duration of 11 seconds.

  • What’s the most relevant RAM capacity for mass usage? Even low-end smartphones have 512MiB.

Adding these two outside benchmark values – perhaps as lower-contrast reference indicators – hammers home the messages: Sapling speed-to-transaction-visible-on-network is competitive with legacy centralized systems, and Sapling resource requirements fit within mass-market phones.

As a really-simple ascii-fication of this:

'Sprout' (1st-gen October 2016) vs 'Sapling' (2nd-gen: October 2018)

Transaction ('Proof') Creation Time

 Zcash Sapling: ******* 7 seconds
EMV chip-cards: *********** 11 seconds
  Zcash Sprout: ************************************* 37 seconds

Device Memory Required

#40 MiB###                     |                           |
|                              |                           |
|                              |                           |
x                              x                           x
|                       Phones |                           |
|                       512MiB |                           |
|--------x---------x---------x-+                           |
|                                                          |
x                                                          x
|                                                          |
|                                                          |
|                                                          |
|                                                          |
x                                                          x
|                                                          |
|                                                          |
|                                                          |
|                                                          |
x                                                          x
|                                                          |
|                                                          |
|                                                 Sprout   |
|                                                 3000MiB  |
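The geometry behind a cleaner version of the sketch above can be generated mechanically: time becomes a horizontal bar whose length is proportional to seconds, and memory becomes a square whose *area* (so side ∝ √MiB) is proportional to the footprint. A minimal Python sketch, using the example figures from this thread (the Sapling/Sprout numbers are placeholders pending official benchmarks):

```python
# Rough helper for proportional infographic geometry.
# Times -> horizontal bars (length proportional to seconds);
# memory -> squares (side proportional to sqrt of MiB, so that
# AREA, not width, carries the magnitude).
import math

times = {"Zcash Sapling": 7, "EMV chip-cards": 11, "Zcash Sprout": 37}
memory_mib = {"Sapling": 40, "Phones": 512, "Sprout": 3000}

def time_bars(times, width=40):
    """One '*' bar per entry, scaled so the longest fits in `width` chars."""
    longest = max(times.values())
    lines = []
    for name, secs in sorted(times.items(), key=lambda kv: kv[1]):
        bar = "*" * max(1, round(secs / longest * width))
        lines.append(f"{name:>15}: {bar} {secs} seconds")
    return "\n".join(lines)

def square_sides(memory_mib, max_side=60):
    """Square side lengths whose AREAS are proportional to the MiB values."""
    biggest = max(memory_mib.values())
    return {name: round(math.sqrt(mib / biggest) * max_side, 1)
            for name, mib in memory_mib.items()}

print(time_bars(times))
print(square_sides(memory_mib))
```

Note the square-root scaling: if the Sprout box is drawn 60 units on a side, the 512 MiB phone box comes out around 25 units and the Sapling box around 7, so the areas (not the widths) carry the 75:1 ratio.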

Anyone up for making this as a pretty PNG that’s easy to embed elsewhere? (For example, a 1024x512 image that will be displayed unclipped in tweets?)

  • Gordon

I’ve NEVER seen a real-world Sprout txn be generated that fast; it’s normally been MINUTES. But Sapling on the same machine has done it in ~2 seconds on testnet, and even an aarch64 ARM chip that took over 4 minutes for a Sprout transaction only took 7 seconds for Sapling.


Good point - more recent & precise numbers might be even better.

From the exact same Sapling-enabled software, it should now be possible to get more precise, same-software/same-machine timings for both Z-Sprout and Z-Sapling sends.

And maybe memory use as well? I recall elapsed proving time being in either the logs or the z_getoperationstatus output, but I don’t know how a normal zcashd user might log the RAM used.


total transaction generation time is in both, but no, RAM usage isn’t in either


Nice idea re: the infographic.

I tried this re: comparing memory usage in the real world, and it’s not easy. I’ll repeat what Zooko told me in the community chat when I asked how to accomplish this:

Suggest either (a) give up in despair because it is impossible to measure memory usage on modern OSes, (b) use a heap-allocation counter like massif, which at least doesn’t respond to things you don’t care about, or (c) turn off swap, turn off most of your RAM, turn off memory overcommit, and test whether the program completes or fails. At least that measures what you actually care about.
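For anyone attempting option (c) on Linux, a sketch of the knobs involved (commands assume root; the `systemd-run` memory cap is just one of several ways to restrict available RAM, and the 600M cap and datadir path are hypothetical):

```shell
# Option (c): make memory failure observable rather than measurable.
swapoff -a                            # no swap to hide heap overflow
sysctl vm.overcommit_memory=2         # strict accounting: allocations fail fast
sysctl vm.overcommit_ratio=100

# Cap the process's RAM instead of physically removing it, then see
# whether proving completes under the cap (600M here is a placeholder).
systemd-run --scope -p MemoryMax=600M ./src/zcashd -datadir=/tmp/bench
```

Binary-searching the cap until the prover fails gives a rough "fits in X" figure, which is what the infographic actually needs.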

After using atop on Linux, I also used Valgrind with the massif option (and massif-visualizer to interpret the output). Given the caveat above, a really rough comparison is 1.4GB before (down to 1.1GB for Sprout on the new proving system once Sapling activates) and circa 30MB for Sapling, but there will be official benchmarks shortly. Those 3GB numbers pre-date the low-memory proving implementations.

Proof times vary widely by hardware, as noted above. On a relatively high-end machine, a Sprout transaction currently takes less than 30 seconds (down to around 17 seconds for a Sprout tx post-Sapling), and on another machine a current Sprout tx takes about 70 seconds (down to 34 post-Sapling).

The ballpark for a single-output Sapling transaction seems to be around a couple of seconds on a decent machine. A t->z transaction is also crazy fast, since it needs only output proofs (say 0.2-0.5 seconds) - this is really important when considering exchange payouts etc.


Just for info, I ended up going for option (a) above and gave up, but I went back and found this work in progress, which was a direct comparison of Sprout/Sapling on the same machine. This is an older machine (circa 6 years old), hence the slower times.

Edit: it’s not clear on this slide, but the Sprout result above was obtained on v1.1.0 on testnet, so this is the old Sprout proving system, not the new Sprout proving system on Sapling :confused:


Win7 Service Pack 1, AMD A6-3420M APU 1.5GHz, 4GB RAM
T->Zc 120 secs at peak 1.3 GB
The blocks in the graph are 5 secs x 0.5 GB
The blue

T->Zs 3 secs and mem usage on that scale doesn’t show anything but a straight line


Sends that don’t consume any shielded value strike me as unrepresentative, at least compared to the “ideal world” where all transactions are fully shielded. There, every transaction consumes 1 to N prior shielded outputs and results in 1-2 new shielded outputs (most commonly 2).

So, the benchmark that may be of most interest:

  • decide on some base amount x of ZEC, and start with at least 8x of t-ZEC value. Send x ZEC to a zc address 8 times. (Would a single z_sendmany with the same zc address repeated 8 times actually create 8 shielded outputs, or would it take 8 transactions?)
  • measure a send of x-ε from that zc address to another zc address (for a 1-input:2-outputs txn)
  • measure a send of 3x-ε (for a 3:2 txn)
  • measure a send of 5x-ε (for a 5:2 txn)

(Where ε is equal to or slightly more than the transaction fee, I’d expect that any sensible output-selection policy would necessarily choose the desired number of inputs for each case in the scenario above.)

By measuring the 1:2, 3:2, and 5:2 cases, we might get a rough sense of time/space as a function of necessary inputs.
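To make the scenario concrete, a small sketch of the amounts involved (the base amount x and the fee here are hypothetical placeholder values, not recommended figures):

```python
# Sketch of the benchmark scenario's amounts: pick a base note size x and a
# fee, then choose send amounts that force exactly 1, 3, or 5 shielded inputs.
FEE = 0.0001   # placeholder; substitute the actual default transaction fee
X = 0.1        # placeholder base note size; fund the z-address with 8 notes of X

def send_amount(inputs_needed, x=X, fee=FEE):
    """Amount that makes any sensible selection policy use exactly
    `inputs_needed` notes of size x: inputs_needed * x, net of the fee."""
    return inputs_needed * x - fee

for n in (1, 3, 5):
    print(f"{n}:2 txn -> send {send_amount(n):.4f} ZEC")
```

Each send spends n notes fully and returns the change (minus fee) as the second output, giving the desired n:2 shape.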

It seems the massif approach should give an accurate indicator of heap usage, if the possibility of other concurrent memory consumption can be ruled out - for example by benchmarking while disconnected from the p2p network (via a -connect= launch). If valgrind+massif significantly slows execution, the time and space measures would best be done separately, but each starting from an identical chainstate.

(Is there any chance the stack memory usage of proving is significant enough to deserve benchmarking?)
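A sketch of that massif measurement (the datadir path and loopback peer address are illustrative placeholders):

```shell
# Heap-profile zcashd with massif, isolated from the p2p network so that
# peer traffic doesn't pollute the measurement.
valgrind --tool=massif --massif-out-file=massif.out \
    ./src/zcashd -connect=127.0.0.1 -datadir=/tmp/bench

# Afterwards, summarize the heap profile (peak snapshot is near the top):
ms_print massif.out | head -n 30

# If stack usage turns out to matter, massif can include it too:
valgrind --tool=massif --stacks=yes --massif-out-file=massif-stacks.out \
    ./src/zcashd -connect=127.0.0.1 -datadir=/tmp/bench
```

Running the timing benchmarks in a separate, un-instrumented session from the same chainstate avoids massif’s slowdown contaminating the duration numbers.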



Watching this topic; I would love to see a side-by-side comparison/benchmark. I can work on a graphic once it’s settled.

Looks promising.


Looks like Zooko has some nice infographics:


I’ve had no time this week.
The speed improvement is quite remarkable.
A one-minute transaction is 100% faster than a two-minute transaction,
200% - 30 sec
400% - 15 sec
800% - 7.5 sec
1600% - 3.75 sec
A slower machine will probably do about 7 sec Zs->Zs.
A fast machine is probably like 3 sec, so >1600% increase, but it would also do a Sprout much faster too, so…



That may not be accurate; what it comes down to is the number of Sapling transactions you can do in the time it takes to do one Sprout transaction, say 2 minutes.
At 6 seconds per Sapling Zs transaction, that would be a 20-to-1 ratio, i.e. 2000%.

So anywhere between 800% and 2000% speed increase.

The Zcash Co benchmark (according to their infographic) is 37 sec and 2.3 sec, which is roughly a 16-to-1 ratio, 1600%.
The time reduction would be 93.75%.
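The ratio arithmetic above, spelled out: a speed-up *factor* of k is strictly a (k-1)x100% speed increase (so 16x is a 1500% increase, though the thread's k x 100% convention gives 1600%), and the time reduction is a separate figure again:

```python
# Convert a pair of proof times into the figures people quote:
# speed-up factor, percentage speed increase, and percentage time reduction.
def compare(old_secs, new_secs):
    factor = old_secs / new_secs
    return {
        "factor": round(factor, 1),                          # e.g. 16.1x
        "speed_increase_pct": round((factor - 1) * 100),     # (k-1) * 100
        "time_reduction_pct": round((1 - new_secs / old_secs) * 100, 2),
    }

# Zcash Co's infographic figures: 37 s (Sprout) vs 2.3 s (Sapling)
print(compare(37, 2.3))
```

With the raw 37 s vs 2.3 s numbers, the factor is 16.1x and the time reduction 93.78%; the 93.75% figure in the thread follows from rounding the ratio to exactly 16:1 first.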

This is a serious question: can you name anything in the history of technology (or anything else, maybe!) that has increased speed by that much in one upgrade?
(I can’t!)

When we upgraded from the bow and arrow (recurve, 225 fps) to the musket (1450 fps), it was at best 700%.


I just remembered this tool released by Netflix for visualization of CPU/RAM performance of applications, called FlameScope. Perhaps it could be used for benchmarking?


I’ve been testing; the computer’s relative state can affect speeds greatly. On my older device a Zs->Zs can vary from 15 to 25 secs with no discernible difference in states.
I’ve got another device with more memory and will try that and compare.

Oh, I had no idea about Netflix’s FlameScope tool; this was an informative read.


Welcome to the forum!

Thank you so much, Sonya :slight_smile:


Yes! I agree with you.