Batching Transactions - Fork and Merge?

Batching Transactions Using Fork And Merge

So I imagine one aspect of scalability that will need to be addressed, with the help of recursive proofs, is to allow parties to batch transactions.

There are probably a few ways we could represent batched transactions, but to me it feels like a “fork and merge” chain, where Zcash allows forks to merge back into the Zcash chain as long as the fork doesn’t break a subset of the consensus rules. This also allows for some other interesting scenarios. Things like larger block size, shorter block time, lower difficulty, chain centralisation, etc. are all things we might allow in the fork. Is this possible and/or desired?

I’ll describe in a little more detail below and hopefully convey why I find it interesting…

Forking the Zcash chain

Provide zebrad (or zcashd) APIs for forking the chain. The user could then change a few common features, for example (a rough sketch of such a config follows the list):

  • Block time
  • Difficulty
  • Private/centralised chain
  • Fees/commission
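
Very roughly, I imagine the fork configuration could look something like the sketch below. To be clear, none of these fields or APIs exist in zebrad today; the names and numbers are made up just to illustrate the knobs listed above.

```rust
// Hypothetical fork configuration for a zebrad-style node. None of these
// fields or APIs exist in zebrad today; they only illustrate the knobs above.
use std::time::Duration;

#[derive(Debug, Clone)]
struct ForkConfig {
    /// Target spacing between blocks on the fork (the main chain targets 75s).
    block_time: Duration,
    /// Fixed difficulty, or None to inherit the main chain's adjustment rules.
    difficulty_override: Option<u32>,
    /// If true, only the listed operators may produce blocks (private/centralised fork).
    private: bool,
    authorised_block_producers: Vec<String>,
    /// Fee/commission charged by the fork operator, in zatoshis per transaction.
    operator_fee_zat: u64,
}

fn main() {
    // Example: a fast, centralised fork run by a single organisation.
    let fork = ForkConfig {
        block_time: Duration::from_secs(1),
        difficulty_override: Some(1),
        private: true,
        authorised_block_producers: vec!["org-node-1".to_string()],
        operator_fee_zat: 100,
    };
    println!("{fork:?}");
}
```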

Merging the fork

As long as the fork doesn’t break a subset of the consensus rules it could be merged back. There might be a couple of different options we want to allow, like soft and hard merging. Soft merging could ignore failed transactions (e.g. double spends) and hard merging only merges if all transactions are valid.
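
A minimal sketch of the soft vs hard merge distinction, assuming a made-up Tx type and a stand-in for the double-spend check (nothing here reflects actual zcashd/zebrad types):

```rust
// Sketch of soft vs hard merging. `Tx` and the double-spend flag are stand-ins,
// not real zcashd/zebrad types.

#[derive(Debug, Clone, Copy, PartialEq)]
enum MergeMode {
    /// Drop transactions that no longer validate on the main chain
    /// (e.g. double spends) and merge the rest.
    Soft,
    /// Refuse to merge unless every fork transaction is still valid.
    Hard,
}

#[derive(Debug, Clone)]
struct Tx {
    id: u64,
    spends_already_spent_note: bool, // stand-in for a real nullifier check
}

/// Returns the transactions that would be merged, or None if a hard merge fails.
fn merge(fork_txs: &[Tx], mode: MergeMode) -> Option<Vec<Tx>> {
    let (valid, invalid): (Vec<Tx>, Vec<Tx>) = fork_txs
        .iter()
        .cloned()
        .partition(|tx| !tx.spends_already_spent_note);

    match mode {
        MergeMode::Soft => Some(valid),
        MergeMode::Hard if invalid.is_empty() => Some(valid),
        MergeMode::Hard => None,
    }
}

fn main() {
    let txs = vec![
        Tx { id: 1, spends_already_spent_note: false },
        Tx { id: 2, spends_already_spent_note: true },
    ];
    println!("soft merge: {:?}", merge(&txs, MergeMode::Soft));
    println!("hard merge: {:?}", merge(&txs, MergeMode::Hard));
}
```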

There are some other features we might choose to add to make this more viable, like asset locking on the Zcash chain, but that’s a larger discussion.

Fork vs Traditional Finance

There are some really interesting similarities between this mechanism and some things in traditional finance.

Direct Debit

A direct debit is just permission for a third party to execute a transaction in the future. A user could execute a transaction on the fork, which the third party could then execute/merge with the main Zcash chain at a later date. In both instances the transaction can fail if the available funds are too low.

Insurance/options

A claim chain could be provided that is only valid when some conditions are met. An example might be a fork that only merges if a transaction did/didn’t occur, or if funds at a specific address go lower than a certain threshold.

Crowdfunding

Many transactions grouped together that can only be executed if the funding goal is reached.

Futures

Transactions that can only be executed/merged in the future.
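
All of the finance examples above really boil down to “only allow the merge once some condition holds”. Here’s a hypothetical sketch of what such merge conditions could look like, with made-up names and values (futures/direct debit map to a height condition, crowdfunding to a funding goal, insurance to a balance threshold):

```rust
// All of the examples above boil down to "only allow the merge once some
// condition holds". These conditions and numbers are made up for illustration.

enum MergeCondition {
    /// Futures / direct debit: merge only at or after this main-chain height.
    AfterHeight(u32),
    /// Crowdfunding: merge only if the total pledged meets the goal.
    FundingGoalReached { pledged_zat: u64, goal_zat: u64 },
    /// Insurance/options: merge only if a watched balance drops below a threshold.
    BalanceBelow { watched_balance_zat: u64, threshold_zat: u64 },
}

fn condition_met(cond: &MergeCondition, current_height: u32) -> bool {
    match cond {
        MergeCondition::AfterHeight(h) => current_height >= *h,
        MergeCondition::FundingGoalReached { pledged_zat, goal_zat } => pledged_zat >= goal_zat,
        MergeCondition::BalanceBelow { watched_balance_zat, threshold_zat } => {
            watched_balance_zat < threshold_zat
        }
    }
}

fn main() {
    let conditions = [
        MergeCondition::AfterHeight(2_500_000),
        MergeCondition::FundingGoalReached { pledged_zat: 90_000, goal_zat: 100_000 },
        MergeCondition::BalanceBelow { watched_balance_zat: 5_000, threshold_zat: 10_000 },
    ];
    for cond in &conditions {
        println!("mergeable at height 1_800_000: {}", condition_met(cond, 1_800_000));
    }
}
```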

Fork vs Smart Contract

Forks also have some similarities to some smart contract applications.

Atomic Transactions

Complex multi-user atomic transactions can be created.

Yield

Lots of small yield transactions are made but the merge (and fees) only occur once.

Receiver Paid Fees

The fork is merged by the receiver of the funds, effectively making the merge paid for by the receiver.

Rollups

This could be an automatic feature of the merge process.

L2 Chains

L2 Chains are just L1 forks that regularly merge back.

User Programmed Contracts

Forks could be decentralized with many nodes and include a new subset of consensus rules to manage code execution.

Superset of Consensus Rules

The fork will need to enforce the Zcash rules to allow merging back into the Zcash chain, but may add new consensus rules.
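
One way to picture this “superset” idea is as composition: the fork runs the unmodified base checks first and can only add checks on top, never remove them. A hypothetical sketch with stand-in rule types (not real zebrad code):

```rust
// One way to picture the "superset": the fork runs the unmodified base checks
// first and can only add checks on top. All types here are stand-ins.

struct Tx {
    transparent_in_zat: u64,
    transparent_out_zat: u64,
    touches_locked_asset: bool,
}

trait ConsensusRules {
    fn validate(&self, tx: &Tx) -> Result<(), String>;
}

/// Stand-in for the unmodified Zcash transaction checks.
struct ZcashRules;

impl ConsensusRules for ZcashRules {
    fn validate(&self, tx: &Tx) -> Result<(), String> {
        if tx.transparent_out_zat > tx.transparent_in_zat {
            return Err("outputs exceed inputs".into());
        }
        Ok(())
    }
}

/// A fork-only rule layered on top of the base rules.
struct NoLockedAssets;

impl ConsensusRules for NoLockedAssets {
    fn validate(&self, tx: &Tx) -> Result<(), String> {
        if tx.touches_locked_asset {
            return Err("asset is conditionally locked on this fork".into());
        }
        Ok(())
    }
}

/// The fork's rules: base rules first, then the fork-specific addition.
struct ForkRules<Base: ConsensusRules, Extra: ConsensusRules> {
    base: Base,
    extra: Extra,
}

impl<B: ConsensusRules, E: ConsensusRules> ConsensusRules for ForkRules<B, E> {
    fn validate(&self, tx: &Tx) -> Result<(), String> {
        self.base.validate(tx)?; // the Zcash rules are never relaxed...
        self.extra.validate(tx) // ...only added to
    }
}

fn main() {
    let rules = ForkRules { base: ZcashRules, extra: NoLockedAssets };
    let tx = Tx { transparent_in_zat: 1_000, transparent_out_zat: 900, touches_locked_asset: false };
    println!("{:?}", rules.validate(&tx));
}
```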

Conditional Asset Locking

Lock assets and prevent transfers unless a condition is true.

Conditional Transfer

Prevent a transfer from occurring unless a set of conditions are true.

Delayed Transfer

Delay a transfer. Optionally this could trigger the contract to run next block to test the conditions for the delay again.

Fees/Commission

Fee structure could be changed and routed to different addresses. Part of these funds could fund the merge back to the Zcash chain.
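
A tiny sketch of that kind of fee split, with made-up numbers and a placeholder address, where part of every fee is reserved to pay for the merge transaction:

```rust
// Made-up fee policy where part of every fee is reserved to pay for the merge
// transaction back to the main chain. Numbers and the address are placeholders.

struct FeePolicy {
    fee_per_tx_zat: u64,
    /// Portion (percent) of each fee reserved to fund the merge transaction.
    merge_reserve_percent: u64,
    operator_address: &'static str,
}

impl FeePolicy {
    /// Returns (operator cut, merge fund contribution) for one transaction.
    fn split(&self) -> (u64, u64) {
        let reserve = self.fee_per_tx_zat * self.merge_reserve_percent / 100;
        (self.fee_per_tx_zat - reserve, reserve)
    }
}

fn main() {
    let policy = FeePolicy {
        fee_per_tx_zat: 1_000,
        merge_reserve_percent: 40,
        operator_address: "t1ExampleOperatorAddressOnly",
    };
    let (operator_cut, merge_fund) = policy.split();
    println!(
        "{} zat to {}, {} zat set aside for the merge",
        operator_cut, policy.operator_address, merge_fund
    );
}
```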

Sharding :thinking:?

Fluid Main Chain

Maybe the main Zcash chain could be fluid. There could be some kind of voting system to decide on the main chain.

No Main Chain

Maybe there is no main chain :hushed:. Almost feels like a dynamic version of sharding :thinking:. Organisations spinning up their own “shards”/“forks” to meet the needs of their users :exploding_head:.

In a future with recursive proofs, do we still need a main chain? If the validity of proofs is the same at every level of recursion, everything just becomes layer 0.

In this kind of future, I wonder how would we account for all asset types issued on Zcash?

Say, company G forks Zcash and issues Fire tokens for use inside their cloud platform. Meanwhile, company A also forks Zcash and issues Water tokens for their web service. How would we settle all of that when their communities want to join back with Zcash?

In a traditional view of sharding someone would have to “transfer” the tokens from shard A to shard G. I think the main difference in what I’ve described is that we now have 2 options:

  • merge fork A with fork G
  • merge fork A and fork G with main chain

While I find the fork scenario really interesting, we’ve also introduced some complexity to the system compared to traditional shards. But I think that’s just a feature of debt. Debt is complex and can’t always be resolved. But debt is a very important (maybe the most important) part of our financial systems. Forking is just a representation of debt.

In the main chain example:

|------------| ---> |-----------|
| Zcash main |      | Fork A    |
|------------|      |-----------| --->
                                      Transfer 1 ZEC (debt)
                                  <---
               <--- Merge (reconcile/settle debt)

I guess it all depends on the requirements of the fork. For example (a small config sketch follows the list):

  • A shopping platform might only need to reconcile their chain once a day before they start preparing shipments.
  • An exchange might have a block time of 1 second and reconcile every main chain block.
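
A small sketch of those two reconciliation policies, using made-up field names (the 75-second figure is just the main chain’s target block spacing):

```rust
// Made-up reconciliation settings for the two examples above. The 75-second
// figure is just the main chain's target block spacing.
use std::time::Duration;

struct ReconcilePolicy {
    fork_block_time: Duration,
    /// How often the fork merges (reconciles) back to the main chain.
    merge_interval: Duration,
}

fn main() {
    // Shopping platform: settle once a day before preparing shipments.
    let shop = ReconcilePolicy {
        fork_block_time: Duration::from_secs(60),
        merge_interval: Duration::from_secs(24 * 60 * 60),
    };
    // Exchange: 1-second blocks, settle every main-chain block.
    let exchange = ReconcilePolicy {
        fork_block_time: Duration::from_secs(1),
        merge_interval: Duration::from_secs(75),
    };
    for (name, policy) in [("shop", &shop), ("exchange", &exchange)] {
        println!(
            "{name}: {:?} fork blocks, merge every {:?}",
            policy.fork_block_time, policy.merge_interval
        );
    }
}
```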

To a user this is probably a better overall view of the world. A wallet could view a combination of all forks, display a true view of the user’s current financial position, and warn the user before they make a transaction that they are currently unable to settle.
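
A rough sketch of that wallet view, with hypothetical types, summing balances across the main chain and all known forks and flagging the part that isn’t yet settled:

```rust
// Hypothetical wallet view that sums a user's position across the main chain
// and all known forks, flagging funds that aren't yet settled on the main chain.

use std::collections::HashMap;

struct ChainView {
    balance_zat: u64,
    /// Funds that exist on this fork but haven't been merged/settled yet.
    unsettled_zat: u64,
}

fn total_position(chains: &HashMap<String, ChainView>) -> (u64, u64) {
    chains.values().fold((0, 0), |(balance, unsettled), chain| {
        (balance + chain.balance_zat, unsettled + chain.unsettled_zat)
    })
}

fn main() {
    let mut chains = HashMap::new();
    chains.insert("zcash-main".to_string(), ChainView { balance_zat: 50_000, unsettled_zat: 0 });
    chains.insert("fork-a".to_string(), ChainView { balance_zat: 20_000, unsettled_zat: 20_000 });

    let (balance, unsettled) = total_position(&chains);
    println!("total position: {balance} zat ({unsettled} zat not yet settled)");
    if unsettled > 0 {
        println!("warning: part of this position can't currently be settled on the main chain");
    }
}
```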

Compare this to how debt in ETH works. In ETH the user has to make some educated assumptions about the contract they are participating in and the kind of debt it represents. The user’s understanding of that debt may not be correct.

In the most extreme case we can see this play out when contracts get hacked. Almost certainly the user’s understanding of that debt and the actual debt they were entering didn’t align.

Also, a fork shifts the onus of risk onto the receiver of the debt, not the user of the debt. When a smart contract gets hacked it’s the user’s collateral/funds that are lost. If a forked chain shifts too far from reality the risk is incurred by the issuer of that debt.

Collateral

The collateral problem can be solved by allowing users to lock assets on the main chain and only allowing them to be transferred on a specified (and registered) fork. This would ensure a double spend between forks can’t occur.
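
A sketch of what that lock registry could look like, with made-up types: each locked note/UTXO is bound to exactly one registered fork, so spending it on any other fork is rejected:

```rust
// Made-up lock registry: collateral locked on the main chain is bound to one
// registered fork, so it can't also be spent on another fork.

use std::collections::HashMap;

struct Lock {
    amount_zat: u64,
    /// The single fork this collateral may move on.
    registered_fork: String,
}

struct MainChainLocks {
    locks: HashMap<u64, Lock>, // keyed by a note/UTXO identifier
}

impl MainChainLocks {
    /// A fork may only spend collateral that is registered to it.
    fn may_spend_on_fork(&self, note_id: u64, fork: &str) -> bool {
        self.locks
            .get(&note_id)
            .map_or(false, |lock| lock.registered_fork == fork)
    }
}

fn main() {
    let mut locks = HashMap::new();
    locks.insert(7, Lock { amount_zat: 10_000, registered_fork: "fork-a".to_string() });
    let main_chain = MainChainLocks { locks };

    println!("{} zat locked to fork-a", main_chain.locks[&7].amount_zat);
    assert!(main_chain.may_spend_on_fork(7, "fork-a"));
    assert!(!main_chain.may_spend_on_fork(7, "fork-g")); // cross-fork double spend blocked
}
```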

Forks (vertical scaling) + Sharding (horizontal scaling)

Forks are a kind of shard but with the transaction history. Even once shards have been introduced to Zcash we still have to solve the vertical scaling problem. An organisation may perform better when only processing transactions within their organisation. The organisation would then do a batched merge of these transactions in the future.

POC Project?

I wonder if @ZcashGrants is in the business of RFPing a fork POC?

I imagine the first steps are to (a rough sketch of the node states follows the list):

  1. Add a Zebrad “fork” feature that allows the node to freeze any block additions
  2. Cache any transactions locally
  3. Fix all existing APIs to include the cached transactions in their calculations
  4. Add a Zebrad “merge” feature that unfreezes the node and transmits the transactions
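
To make the steps concrete, here’s a very rough sketch of the node states involved. None of these APIs exist in zebrad; the names are invented purely for illustration:

```rust
// Very rough sketch of the POC steps. None of these APIs exist in zebrad;
// every name is invented purely for illustration.

#[derive(Debug, PartialEq)]
enum NodeState {
    FollowingMainChain,
    Forked,
}

#[derive(Debug)]
struct Tx {
    id: u64,
}

struct PocNode {
    state: NodeState,
    cached_txs: Vec<Tx>, // step 2: transactions accepted while forked
}

impl PocNode {
    /// Step 1: freeze block additions from the main chain.
    fn fork(&mut self) {
        self.state = NodeState::Forked;
    }

    /// Steps 2-3: accept a transaction locally so existing APIs can include it.
    fn submit(&mut self, tx: Tx) {
        assert_eq!(self.state, NodeState::Forked);
        self.cached_txs.push(tx);
    }

    /// Step 4: unfreeze and hand back the cached transactions for broadcast.
    fn merge(&mut self) -> Vec<Tx> {
        self.state = NodeState::FollowingMainChain;
        std::mem::take(&mut self.cached_txs)
    }
}

fn main() {
    let mut node = PocNode { state: NodeState::FollowingMainChain, cached_txs: vec![] };
    node.fork();
    node.submit(Tx { id: 1 });
    node.submit(Tx { id: 2 });
    let to_broadcast = node.merge();
    println!("merging {} cached transactions back: {:?}", to_broadcast.len(), to_broadcast);
}
```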

I just found a closely related discussion on GitHub regarding “shielded transaction aggregates” Shielded transaction aggregates · Issue #4946 · zcash/zcash · GitHub.

The significance of recursive proofs here is that they potentially allow aggregates to be created in a distributed way and then combined.

This “fork and merge” functionality could be an API used to access that transaction aggregation functionality, where the merge is simply an aggregate transaction.

This could all work at an extraordinary scale since there’s no reason that a fork can’t fork a fork. And no reason a merge can’t merge a merge.
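
In other words forks form a tree, and merging is just folding children back into their parent, level by level. A hypothetical sketch (the tree and transaction types are made up):

```rust
// Forks form a tree, and a merge is just folding children back into their
// parent, level by level. The structure and transaction IDs are made up.

struct Fork {
    name: String,
    txs: Vec<u64>,       // stand-in for (aggregated) transactions
    children: Vec<Fork>, // forks of this fork
}

/// Recursively merge all descendant forks into this one, producing the
/// combined batch that would become a single aggregate transaction upstream.
fn merge_recursively(fork: Fork) -> Vec<u64> {
    println!("merging {}", fork.name);
    let mut batch = fork.txs;
    for child in fork.children {
        batch.extend(merge_recursively(child)); // a merge of merges
    }
    batch
}

fn main() {
    let tree = Fork {
        name: "fork-a".to_string(),
        txs: vec![1, 2],
        children: vec![
            Fork { name: "fork-a-1".to_string(), txs: vec![3], children: vec![] },
            Fork { name: "fork-a-2".to_string(), txs: vec![4, 5], children: vec![] },
        ],
    };
    println!("aggregate for the main chain: {:?}", merge_recursively(tree));
}
```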

Maybe we don’t gain anything over some simpler/more obvious APIs that @daira and others already intuitively know. Either way, if the fork and merge idea is still sound, I can’t imagine any of this should stop a non-scaled POC testing the idea…

@zooko, @ZcashGrants I found the last question really interesting: a 1906-style earthquake hits and we lose power and connectivity. I don’t know how to solve the power problem, but being able to still transact on a local (forked) node and merge those transactions when connectivity is restored would at least mitigate some of the problems. As long as an accessible node existed before the connection was lost, there’s no reason why the admin couldn’t tell the node to fork itself off the main chain with the expectation of merging back when connectivity is restored.

While there are the obvious cyber and physical issues that prevent countries from staying connected during times of attack/war, there are also normal day-to-day issues around infrastructure.

Take yesterday for example. Tasmania, a state of Australia with over 500,000 people, lost internet (https://www.abc.net.au/news/2022-03-02/tasmania-reliance-on-undersea-cables-fails-again/100873280). People rarely carry cash anymore. Basically all shops had to close.

@zooko Has ECC had any discussions with Filecoin about their possible new hierarchical consensus implementation? In the case of Zcash I imagine the spec could be greatly simplified. We could even go so far as forcing subnets to only interact with their own ZSA if we wanted to reduce consensus complexity. But the high-level description roughly matches what I’ve been describing. I imagine that with the switch to PoS we could use a finality gadget on top of a simplified hierarchical consensus.

Hierarchical Consensus Links:

Great question! I don’t know if we have. This sounds like the kind of thing that would be great to discuss at our community Arborist call! Why don’t you join the next one?
