About the endomorphism-based optimization in Halo

Halo has an endomorphism-based optimization that nearly halves the number of constraints in a scalar multiplication [r]G for a challenge r, and it is used to aggregate a batch of polynomial commitments into one, as C = \sum_{i=1}^{m} [r^i] C_i. But to aggregate the corresponding openings, instead of computing v = \sum_{i=1}^{m} r^i v_i followed by a single scalar multiplication V = [v]G, with the endomorphism we have to do m scalar multiplications V_i = [v_i]G and then V = \sum_{i=1}^{m} [r^i] V_i, and hence the total number of scalar multiplications doubles.
In this case, how does the endomorphism help reduce the number of constraints? I hope someone can clarify this for me, thanks!
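To make the two aggregation strategies concrete, here is a toy sketch where the group is modeled as the additive group of integers mod n (so "[a]G" is just a*G mod n); the modulus, generator, and values are illustrative:

```python
# Toy additive-group model of the two aggregation strategies.
# All numbers are illustrative; "[a]P" is modeled as (a * P) % n.
n = 1009   # stand-in for the group order
G = 3      # stand-in for the commitment base point
r = 5      # stand-in for the verifier challenge
vs = [7, 11, 13]  # claimed opening values v_1..v_m

# Strategy 1: combine the openings in the field first, then do a
# single arbitrary-scalar multiplication V = [v]G.
v = sum(pow(r, i, n) * vi for i, vi in enumerate(vs, start=1)) % n
V1 = (v * G) % n

# Strategy 2: scalar-multiply each opening, V_i = [v_i]G, then
# combine the points as V = sum_i [r^i] V_i -- m scalar
# multiplications plus the r^i combination.
V2 = sum(pow(r, i, n) * ((vi * G) % n) for i, vi in enumerate(vs, start=1)) % n

assert V1 == V2  # both strategies yield the same point
```

Both strategies agree by linearity; the question is purely about how many scalar multiplications each one costs in-circuit.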

When you need to compute a scalar multiplication where the scalar is some function of the challenges, as you indicate, the “endoscaling” algorithm is no longer helpful in the circuit; you need to compute the “effective” scalar that each challenge maps to in the scalar field, which you can do by following a version of the same algorithm where, instead of adding or doubling group elements, you perform the corresponding operations on scalars. Then you can compute the full scalar and do an arbitrary-scalar multiplication of the group element.
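As a loose illustration (not the halo2 implementation), here is a toy sketch of the point-side and scalar-side versions of an endoscaling-style recurrence, modeling the group as integers mod n and the curve endomorphism as multiplication by a cube root of unity:

```python
# Toy model: the "group" is the additive group of integers mod n, and
# the endomorphism phi corresponds to multiplication by lam, a cube
# root of unity mod n (lam^2 + lam + 1 = 0). The exact recurrence and
# all names are illustrative, not halo2's actual gadget.
n = 1009  # toy group order, chosen with n = 1 (mod 3)
lam = next(x for x in range(2, n) if (x * x + x + 1) % n == 0)

def endoscale_point(bits, P):
    # Point-side endoscaling: consume the challenge two bits at a time;
    # one bit picks P vs. phi(P), the other picks the sign, then
    # double-and-add (the "point" operations are arithmetic mod n here).
    acc = (2 * P) % n
    for i in range(0, len(bits), 2):
        s = P if bits[i] == 0 else (lam * P) % n
        s = s if bits[i + 1] == 0 else (-s) % n
        acc = (2 * acc + s) % n
    return acc

def endoscale_scalar(bits):
    # Scalar-side version: the same recurrence with 1 in place of P and
    # lam in place of phi(P); the result is the "effective" scalar.
    acc = 2
    for i in range(0, len(bits), 2):
        s = 1 if bits[i] == 0 else lam
        s = s if bits[i + 1] == 0 else -s
        acc = (2 * acc + s) % n
    return acc
```

Because every step of the point-side recurrence is linear in P, applying the scalar-side result to P reproduces the endoscaled point, i.e. endoscale_point(bits, P) equals [endoscale_scalar(bits)]P; that is why the circuit can work with the effective scalar and a single arbitrary-scalar multiplication.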

It turns out that in Halo-like protocols, very rarely do you need to do the arbitrary-scalar multiplication. It only happens in, I think, two or three circumstances in the inner product argument. And you can defer computing these particular scalars to outside the circuit or, in 2-cycle recursion, to the other field where they are more efficient to evaluate. You can launder the result of the computation through public inputs.
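The deferral pattern might be sketched like this (a hypothetical shape, with illustrative names and a stand-in for the actual scalar derivation): the effective scalar is computed natively, outside the circuit or on the other curve of the 2-cycle, and the circuit just reads it from public inputs and performs the one arbitrary-scalar multiplication:

```python
# Hypothetical sketch of deferring a scalar computation and "laundering"
# the result through public inputs. The group is modeled additively
# mod n; native_effective_scalar is a stand-in for the real derivation.
n = 1009
G = 3

def native_effective_scalar(challenges):
    # Out-of-circuit (or other-field) computation of the deferred
    # scalar from the verifier's challenges; here just a toy product.
    acc = 1
    for c in challenges:
        acc = (acc * c) % n
    return acc

def circuit(public_inputs):
    # In-circuit: read the laundered scalar from public inputs and do a
    # single arbitrary-scalar multiplication; the circuit never re-runs
    # the scalar derivation itself.
    r_eff = public_inputs["effective_scalar"]
    return (r_eff * G) % n

challenges = [5, 8, 2]
public_inputs = {"effective_scalar": native_effective_scalar(challenges)}
V = circuit(public_inputs)
```

The point of the pattern is only that the expensive field arithmetic happens where it is native, and the circuit sees just the result.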


I think I get what you’re saying. Thank you very much for the reply!