As of #413, we're using an up-to-date version of SP1 directly as our STARK prover. However, because of changes to the SP1 lookup arguments in v4, the way cross-shard lookups are performed has changed, and there is now an explicit distinction between local and global lookups. The lookups we currently use are only local, which breaks our existing MemoSet implementation whenever we generate a proof with more than one shard. (Related to this issue is #259.)
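For intuition, the v4 distinction can be thought of as a scope tag attached to every lookup. The following is a minimal sketch with illustrative names, not SP1's actual types:

```rust
/// Illustrative sketch of the local/global lookup distinction introduced
/// in SP1 v4. These names are hypothetical, not SP1's actual API.
enum LookupScope {
    /// Matched entirely within a single shard's trace. The lookups our
    /// MemoSet implementation currently emits behave like this.
    Local,
    /// Accumulated across shards by a separate global argument, which is
    /// what cross-shard communication now requires.
    Global,
}

/// A lookup, tagged with the scope in which its matching
/// counterpart is allowed to live.
struct Lookup {
    values: Vec<u64>,
    scope: LookupScope,
}
```

With only `Local` lookups, a lookup recorded in one shard can never be matched against its counterpart in another shard, which is exactly why multi-shard proofs currently break.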
The workaround that would need to be implemented is:

- When creating the chip vector, add `N` copies of each chip that can be sharded, and limit the number of queries to each copy with a `shard_size` parameter (see the sketch after this list)
- This means the maximum number of shards is fixed when the chip vector is created; in principle, this bounds the size of the computation
- Ideally, this sharding happens transparently and only when generating a sharded proof
- This is because we don't want actual duplicate Lair chips, in order to minimize changes to the execution code
- The proof then always has a single shard as far as the SP1 prover backend is concerned, and our lookups still work
- This is a temporary solution
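A minimal sketch of what that chip-vector construction could look like, assuming a hypothetical `Chip` stand-in and `build_chip_vector` helper (none of these names exist in the codebase):

```rust
/// Minimal stand-in for a Lair chip; fields are hypothetical and only
/// illustrate the shape of the workaround.
#[derive(Clone)]
struct Chip {
    name: String,
    shardable: bool,
    /// Maximum number of queries this copy will accept (None = unbounded).
    query_limit: Option<usize>,
    /// Which logical shard this copy serves, for sharded chips.
    shard_index: Option<usize>,
}

/// Duplicate every shardable chip `max_shards` times and cap each copy at
/// `shard_size` queries, so the SP1 backend only ever sees a single shard.
fn build_chip_vector(base: &[Chip], max_shards: usize, shard_size: usize) -> Vec<Chip> {
    let mut chips = Vec::with_capacity(base.len() * max_shards);
    for chip in base {
        if chip.shardable {
            // One bounded copy per potential shard; this fixes the maximum
            // shard count at construction time.
            for i in 0..max_shards {
                let mut copy = chip.clone();
                copy.query_limit = Some(shard_size);
                copy.shard_index = Some(i);
                chips.push(copy);
            }
        } else {
            chips.push(chip.clone());
        }
    }
    chips
}
```

The trade-off is visible here: `max_shards` is baked in at construction time, and every shardable chip contributes `max_shards` entries to the vector whether or not they end up being used.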
I've run some experiments locally to confirm that this workaround could work (i.e. the proof generates and verifies successfully). However, adding or removing chips changes the verification key, the change requires adding a layer of indirection inside the `QueryRecord`, and this hacky workaround isn't actually feasible long term. So instead of implementing the full workaround, I've chosen to explicitly panic when generating a proof with more than one shard and point to this issue, since the proper solution will likely look very different from this workaround when we actually implement it.
If we change our minds and find out we do need the workaround after all, we can close this issue. The proper, long-term solution to efficient proof sharding will likely require a custom lookup argument tailored specifically for MemoSet.
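Concretely, the interim behavior amounts to a guard along these lines (a sketch with hypothetical names; the real check would live wherever shards are produced):

```rust
/// Sketch of the interim guard: instead of shipping the workaround,
/// refuse multi-shard proofs outright and point at this issue.
/// `assert_single_shard` and its caller are hypothetical.
fn assert_single_shard(num_shards: usize) {
    assert!(
        num_shards <= 1,
        "proofs with more than one shard are temporarily unsupported \
         after the SP1 v4 upgrade; see issue #437"
    );
}
```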
* chore: Replace Sphinx with SP1
* chore: Minor cleanup and fix benches
Removes the PrimeField32 bound where unnecessary
* fix: Use Arcs and remove some expensive clones
This impacts mainly long computations with a lot of sharding (see
lair_shard_test) and large matrices, or large lair programs
* chore: Minor cleanup
* feat: Update to SP1 v4+, temporarily disable sharding
After SP1 v4, the global lookup argument for cross-shard chip
communication was drastically redesigned, which means our current
implementation is temporarily broken. See #437. The proper solution will
require a proper design for sharded Lurk computation.
This commit also removes all of the now-unnecessary dummy chips, since
only single-shard computations can be proven right now.
* chore: Address clippy
* chore: Temporarily disable `bench-regression` CI
* chore: Remove debug cargo patches
---------
Co-authored-by: wwared <[email protected]>