Announcing Halo 2

First, you said (emphasis mine):

It’s easy to ensure compatibility of the TGPPL with any desired license.

Second, the whole derivative work is (by definition) subject to OTHER_LICENSE, no matter what that license says about the work it covers. If OTHER_LICENSE then says “have fun”, and the TGPPL exempted that from its own constraints, then you can have fun with the entire derivative work, which defeats the whole point of the TGPPL.

I don’t know who you talked to, but I think you must have misinterpreted. I’m completely confident that scaling using Halo 2 will work. Indeed, any of several possible designs will work. Based on recent benchmarks, I’m also very confident that the verification performance of Halo 2 will be good enough for the first stage, in which recursive validation is not used.

Communicating detailed designs is a lot of work. I don’t think it’s at all realistic to expect this to be done at the same time as making an initial announcement that we’re planning to use Halo 2. And with all due respect, there is an inconsistency between wanting everything to be planned out in advance of an announcement, and wanting to know everything we’re working on incrementally. There have already been public discussions of some of what we’re working on, including my talks at Zcon1 and Amsterdam, and a comprehensive paper about Halo with full proofs. I also just opened a bunch of tickets about Sapling-on-Halo-2 on GitHub. Frankly, we are more open about protocol design than practically any other project, and the constant negativity from some quarters is really getting on my wick.

I will stake my reputation on the fact that this multi-stage approach to scaling and changing the proof system is sound engineering and risk management. Have some patience, please.

11 Likes

This is conflating derivative works with combined works. A work that uses the zcash/halo2 implementation as a library would be a combined work. In any case, we’ve already said that we’ll make whatever licensing changes are needed to ensure compatibility.

2 Likes

@daira, I have great respect for your reputation, as well as @ebfull’s.

I also have great appreciation for how difficult it is to design cryptographic protocols (especially those involving complex ZK statements) and distributed systems (especially those involving complex consensus rules and highly-incentivized malicious actors), and doubly so for systems that have both. Getting these right stretches the capabilities of even the best experts, especially if those experts have many other responsibilities.

And to be clear, what’s at stake is more than reputation. Already in the near term, what’s at stake is the opportunity cost. If you’re wrong and this won’t work, or will take much longer to work, then we will have wasted years of development in which Zcash could have benefited from your expertise to improve transaction speed, or to add functionality such as User-Defined Assets, better wallet protocols, improved network privacy, or numerous other things to which you could have put your mind.

So it saddens me to see you relying on staked reputation and secret intuitions about (a) what’s possible and (b) what’s desirable, when you could rely on an enthusiastic community and its resident experts.

Constructively, may I suggest opening a dedicated forum post that gathers the links you mentioned, and contextualizes them? That could be a great start for the discussion we need.

2 Likes

@daira I liked your scaling talk in Amsterdam. In fact, thinking about it was exactly how I started down the path of trying to figure out what our scaling bottlenecks actually are.

The issue isn’t communicating detailed designs, it’s working out even the outline of a design. We don’t yet have answers to basic questions like:
1) Are we bandwidth bound, compute bound, or memory bound?
2) Are we limited by block verification (effectively verifying batches of transactions) or by the gossip protocol (effectively the cost of verifying individual transactions)?

We also don’t have estimates for alternatives. For example, if we are limited by transaction verification, what about just speeding up Sapling verification in blocks? Because if we do that and then hit some other limit that isn’t block verification, recursive block verification won’t help us scale.

We simply don’t know the outline of the problem rigorously. All we have are intuitions, and the experts disagree on those intuitions. We should have numbers to back this up first.

Note, your sharding proposal can potentially address either concern. But Halo possibly cannot: it makes individual tx verification 10x slower at best and proofs 10x larger. Maybe you can apply batching/recursion/aggregation to this, maybe you cannot, or maybe it’s crazy complicated and you end up with DAG consensus. It depends on where the scaling bottleneck actually is. Halo could plausibly make scaling way worse. And note, I’m not staking my reputation on that: I’m stating that, in my expert opinion, it’s a possibility and we don’t have the data to discount it.

2 Likes

That is precisely why we’re taking this approach of not trying to design and deploy everything for the same upgrade.

On the subject of opportunity cost for other features, the per-circuit setup requirement of Groth16 is a major obstacle to updating the circuit in practice, for any features that require that (including User-Defined Assets). Yes we could potentially use a proving system with trusted but universal setup like PLONK, but in my opinion it’s a sound engineering decision to use Halo 2 instead (which reuses many ideas from PLONK). It wouldn’t be feasible to do recursive validation later with PLONK alone, although that option was considered.

You’ve now introduced the assumption (restriction?) that the way of combining would be as libraries. But even this wouldn’t achieve the barrier you assume, because even linking libraries makes the application a derivative work of the library. Or at least, that’s what the FSF and many others believe (there’s some controversy). Indeed, it’s the essential difference between the GPL (which does aim to cross the linking boundary) and the LGPL (which doesn’t).

In any case, we’ve already said that we’ll make whatever licensing changes are needed to ensure compatibility.

OK, I’m listening.

I don’t think anyone is calling for that. People are calling for a technically substantial discussion of the prospective design and implementation roadmap. The talks you gave on scalability were a great start, but then we had complete silence for a while, and in particular we never learned how Halo will actually be used, especially in the interim stages, in the node network, etc. (Maybe there’s discussion in your new GitHub issues? I haven’t seen them yet.)

Yes we could potentially use a proving system with trusted but universal setup like PLONK, but in my opinion it’s a sound engineering decision to use Halo 2 instead (which reuses many ideas from PLONK). It wouldn’t be feasible to do recursive validation later with PLONK alone, although that option was considered.

OK, I’m intrigued. Can you elaborate on this engineering decision? For example:

  • How does prover and verifier performance compare between Halo 2 and PLONK (when you don’t use Halo’s recursion amortization, and with the same level of circuit optimization)?
  • How does ECC development time (and thus opportunity cost) compare?
  • If you used PLONK today to get universal setup, but then moved to Halo (much) later for scalability, how much work would be thrown away? In particular, since Halo 2 uses a similar arithmetization to PLONK, wouldn’t most of the circuit design, implementation, and optimization carry over?

3 Likes

Linking libraries does not make the application a derivative work; it makes it a combined work. This is consistent with what the FSF says. (Most of what they say is specifically about their own licenses, but they say nothing that contradicts this, as far as I’m aware.)

I can’t speak for ECC on what license changes will be made, obviously.

1 Like

Both, actually. “Combined work” is something defined in the FSF licenses and, as you say, it explicitly covers app+library linking.

Derivative work is defined by law and case law, and the question of whether linking a library creates a derivative-work taint is notoriously tricky.

As for the FSF’s position, I’m pretty sure I saw them (or at least Richard Stallman) take an explicit position. Unfortunately I can’t find a clear-cut reference at the moment, but I did find two related FAQ entries from the FSF, which are instructive:

First, they have this FAQ entry on linking:

You have a GPL’ed program that I’d like to link with my code to build a proprietary program. Does the fact that I link with your program mean I have to GPL my program?
Yes.

But actually they can say this without relying on the derivative work notion. They can (and do) simply say that you’re not allowed to use the library at all unless you agree to the GPL’s terms, which also cover apps that use the library. This isn’t a coincidence. The GPL is designed to be maximally robust to jurisdictional whims, and thus minimizes reliance on fuzzy things like derivative work tests.

But then, there’s the FAQ entry on subclassing:

In an object-oriented language such as Java, if I use a class that is GPLed without modifying, and subclass it, in what way does the GPL affect the larger program?
Subclassing is creating a derivative work. Therefore, the terms of the GPL affect the whole program where you create a subclass of a GPLed class.

So if the library happened to be written in an OOP style where using it requires subclassing, then the derivative-work taint does carry over, according to the FSF. (edit: I removed a stronger statement here, which I need to rethink.)

So we see that things are murky (or worse) even in the simple case where there is a clear separation using libraries. If the app developer (heavens forbid) just copies and changes the code, then things can become entangled as derivative works even more easily.

2 Likes

A good reason why Zcash needs to protect its code/technology. Does leaving the code completely open lead to innovation or more scams? Can anyone name me a derivative project that is worth more than the parent? All Zcash forks are almost completely dead. How much money was lost in these failed ventures?

4 Likes

In our experience, Plonk+Halo proof times can be quite a bit slower than Plonk+KZG, on the order of ~2x depending on the circuit model. Both involve an MSM for each prover polynomial, but Plonk+Halo also needs to compute G' \leftarrow G'_\mathrm{lo} u^{-1} + G'_\mathrm{hi} u in each round (as with any Bulletproof-like scheme). It’s a smaller number of curve multiplications, but it lacks the MSM structure, and it involves variable points after the first round (depending on the approach), so it requires slower multiplication algorithms.
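
For readers who haven’t seen the inner product argument before, here is a minimal Rust sketch of that per-round generator folding (illustrative only, using a placeholder GroupPoint trait rather than the actual zcash/halo2 API). The point is that each round produces many independent two-term combinations over bases that change every round, so the work doesn’t collapse into one large fixed-base MSM the way the commitment computations in a Plonk+KZG prover do:

```rust
/// Placeholder group interface for the sketch; any prime-order curve point type
/// with these operations would do (this is NOT a real crate API).
pub trait GroupPoint: Copy {
    type Scalar: Copy;
    fn mul(&self, by: Self::Scalar) -> Self;
    fn add(&self, other: &Self) -> Self;
}

/// One folding round of a Bulletproofs/Halo-style inner product argument:
/// G'_i = G_lo_i * u^{-1} + G_hi_i * u for each i.
/// Each output is an independent two-term combination, and the bases change
/// every round, which is why this step can't be batched into a single large
/// multiscalar multiplication.
pub fn fold_generators<P: GroupPoint>(g: &[P], u: P::Scalar, u_inv: P::Scalar) -> Vec<P> {
    assert!(g.len() % 2 == 0, "generator vector length must be even");
    let (g_lo, g_hi) = g.split_at(g.len() / 2);
    g_lo.iter()
        .zip(g_hi.iter())
        .map(|(lo, hi)| lo.mul(u_inv).add(&hi.mul(u)))
        .collect()
}
```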

Maybe the ECC will come up with a clever way to mitigate this cost though. (And in any case I think Halo is a very promising path to scalability, so an increase in prover costs could be justified.)

9 Likes

Halo is great…and tasty!

3 Likes

After the « what are you listening to? » thread, a « what are you drinking? » :slight_smile:

I have a question about Halo; if @ebfull or anyone else who knows can answer, I’d appreciate it :slight_smile:

2 Likes

hi @john-light

I am not sure how much Sean uses the forums. @str4d and @daira are probably better to tag. :slight_smile:

2 Likes

Any updates on when the preliminary performance numbers based on different circuit sizes will be finished? @ebfull

2 Likes

I guess we are not getting any updates this year. I hope there aren’t any problems.

The ECC Q4 livestream mentioned a few things about Halo 2 and the plan for its inclusion in Zcash next year. I don’t think they mentioned any hard numbers, but they seem confident about it.

Nice work guys! @ebfull @daira @str4d

Hopefully everything works out and you can share more details about it.

8 Likes

Yeah, sorry for not following up on this thread. In our Q3 livestream we announced our tentative benchmarks, and they’re very competitive with the existing proofs we use in Sapling: 30 ms single-threaded verification (with parallelism, batching, accumulation, and aggregation available to speed it up), faster proving times than Sapling, and not substantially larger transaction sizes.

Performance will not be a barrier to deploying Halo in Zcash at all. Concerns to the contrary from earlier in the thread were, as I mentioned, very premature.

11 Likes