
Implement workaround for sharding #437

Open
wwared opened this issue Mar 13, 2025 · 0 comments

wwared commented Mar 13, 2025

As of #413, we're using an up-to-date version of SP1 directly as our STARK prover. However, because of changes to the SP1 lookup arguments in v4, the way cross-shard lookups are performed has changed: there is now an explicit distinction between local and global lookups. The lookups we currently use are local only, which breaks our existing MemoSet implementation whenever we generate a proof with more than one shard. (Related: #259.)
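To see why local-only lookups break multi-shard proofs, here is a toy multiset model of a lookup argument; it is purely illustrative Rust, not SP1's or our actual implementation. Each shard tracks its own provide/require balance, so a query provided in one shard but required in another leaves both shards unbalanced even though they cancel globally:

```rust
use std::collections::HashMap;

/// Toy record of the lookups made within a single shard.
#[derive(Default)]
struct ShardLookups {
    /// query -> (times provided) - (times required), counted within this shard only
    balance: HashMap<u64, i64>,
}

impl ShardLookups {
    fn provide(&mut self, query: u64) {
        *self.balance.entry(query).or_default() += 1;
    }
    fn require(&mut self, query: u64) {
        *self.balance.entry(query).or_default() -= 1;
    }
    /// A local-only lookup argument accepts a shard iff it balances by itself.
    fn is_locally_balanced(&self) -> bool {
        self.balance.values().all(|&count| count == 0)
    }
}

fn main() {
    let (mut shard0, mut shard1) = (ShardLookups::default(), ShardLookups::default());
    shard0.provide(42); // memoized result produced in shard 0...
    shard1.require(42); // ...but consumed in shard 1
    // Each shard fails the local check, even though globally the lookups cancel.
    assert!(!shard0.is_locally_balanced());
    assert!(!shard1.is_locally_balanced());
}
```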

The workaround that would need to be implemented is the following (a rough sketch follows the list):

  • When creating the chip vector, add N copies of each chip that can be sharded, and limit the number of queries each copy accepts with a shard_size parameter
    • This means the maximum number of shards is fixed when the chip vector is created, which in principle bounds the size of the computation
  • Ideally this sharding happens transparently, and only when generating a sharded proof
    • We don't actually want duplicate Lair chips, to minimize changes to the execution code
  • The proof now always has a single shard as far as the SP1 prover backend is concerned, and our lookups still work
    • This is a temporary solution
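
A minimal sketch of that chip-vector construction, using hypothetical names (`LairChip`, `build_chip_vector`, `max_shards`, `shard_size`) rather than our actual API:

```rust
#[derive(Clone)]
struct LairChip {
    name: String,
    shardable: bool,
    /// Hypothetical knob: maximum number of queries this copy will accept.
    max_queries: Option<usize>,
}

/// Duplicate every shardable chip `max_shards` times, capping each copy at
/// `shard_size` queries, so the proof looks single-shard to the SP1 backend
/// and the shard count is fixed when the chip vector is built.
fn build_chip_vector(chips: &[LairChip], max_shards: usize, shard_size: usize) -> Vec<LairChip> {
    let mut out = Vec::new();
    for chip in chips {
        if chip.shardable {
            for i in 0..max_shards {
                let mut copy = chip.clone();
                copy.name = format!("{}_{i}", chip.name);
                copy.max_queries = Some(shard_size);
                out.push(copy);
            }
        } else {
            out.push(chip.clone());
        }
    }
    out
}
```

A dispatch layer inside the QueryRecord would then route the i-th chunk of shard_size queries to the i-th chip copy, which is the extra indirection mentioned below; note that fixing max_shards up front also fixes the verification key.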

I've run some experiments locally to confirm that this workaround can work (i.e. the proof generates and verifies successfully). However, adding or removing chips changes the verification key, the change requires an extra layer of indirection inside the QueryRecord, and this hacky workaround isn't feasible long term. So instead of implementing the full workaround, I've chosen to explicitly panic when generating a proof with more than one shard (sketched below) and point to this issue, since the proper solution will likely look very different from this workaround when we actually implement it.
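For reference, the guard amounts to something like the following; the names (`Shard`, `prove_shards`) are illustrative, not the actual types in the codebase:

```rust
/// Illustrative placeholder for whatever the prover splits the record into.
struct Shard;

fn prove_shards(shards: &[Shard]) {
    // Multi-shard proofs are unsupported until sharding is properly redesigned.
    assert!(
        shards.len() <= 1,
        "proofs with more than one shard are not supported yet, see issue #437"
    );
    // ... single-shard proving continues here ...
}
```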

If we change our minds and decide we do need the workaround after all, we can implement it and close this issue. The proper, long-term solution to efficient proof sharding will likely require a custom lookup argument tailored specifically for MemoSet.

wwared added a commit that referenced this issue Mar 13, 2025
After SP1 v4, the global lookup argument for cross-shard chip
communication was drastically redesigned, which means our current
implementation is temporarily broken. See #437. The proper solution will
require a proper design for sharded Lurk computation.

This commit also removes all of the now-unnecessary dummy chips, since
only single-shard computations can be proven right now.
github-merge-queue bot pushed a commit that referenced this issue Mar 18, 2025
* chore: Replace Sphinx with SP1

* chore: Minor cleanup and fix benches

Removes the PrimeField32 bound where unnecessary

* fix: Use Arcs and remove some expensive clones

This mainly impacts long computations with a lot of sharding (see
lair_shard_test), as well as large matrices and large Lair programs

* chore: Minor cleanup

* feat: Update to SP1 v4+, temporarily disable sharding

After SP1 v4, the global lookup argument for cross-shard chip
communication was drastically redesigned, which means our current
implementation is temporarily broken. See #437. The proper solution will
require a proper design for sharded Lurk computation.

This commit also removes all of the now-unnecessary dummy chips, since
only single-shard computations can be proven right now.

* chore: Address clippy

* chore: Temporarily disable `bench-regression` CI

* chore: Remove debug cargo patches

---------

Co-authored-by: wwared <[email protected]>