noir: Memory overflow while compiling BLS Signature Verification

Aim

Here is my BLS signature verification implementation in Noir: https://github.com/onurinanc/noir-bls-signature. When I tried to run it, the CLI crashed. Two other people have tried to run it and hit the same issue.

Here is the corresponding issue filed against the BLS signature verification compilation: https://github.com/onurinanc/noir-bls-signature/issues/2

In summary,

@stefan-nikolov96 reports:

"I am unable to run test_verify_bls_signature() with over 256GiB in RAM and just as much in swap memory. I get a memory overflow and then the program recieves a SIGKILL signal from the OS.

I tried running nargo compile in both debug and release mode, with the 0.9.0 and 0.15.0 compiler versions.

GDB shows that nargo overflows in the compiler frontend. On a machine with 64GiB of RAM, it uses about 50GiB during inlining, and the program fails during mem2reg: https://github.com/noir-lang/noir/blob/master/compiler/noirc_evaluator/src/ssa/opt/mem2reg.rs"

Expected Behavior

The compiler shouldn’t crash.

Bug

Memory overflows and the program then receives a SIGKILL signal from the OS.

To Reproduce

  1. Run the verify_bls_signature function.

To investigate what is happening under the hood, you can check out the test_proving_time branch of the repository.

If you comment out line 326 inside pairing.nr, nargo prove works correctly.

If that line is left uncommented, it does not work, due to the memory issues described above.

You can reproduce the issue quickly using this repo: https://github.com/onurinanc/test-bls-proving-time
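For reference, a minimal reproduction sequence might look like the following (a sketch assuming nargo 0.17.0 installed as a binary, per the version below, and that nargo prove is the failing invocation described above):

    git clone https://github.com/onurinanc/test-bls-proving-time
    cd test-bls-proving-time
    # Expected to be OOM-killed (SIGKILL) rather than finish.
    nargo prove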

Installation Method

Binary

Nargo Version

nargo 0.17.0 (git version hash: 86704bad3af19dd03634cbec0d697ff8159ed683, is dirty: false)

Additional Context

No response

Would you like to submit a PR for this Issue?

No

Support Needs

No response

About this issue

  • State: open
  • Created 8 months ago
  • Comments: 16 (9 by maintainers)

Most upvoted comments

@stefan-nikolov96 Hi, apologies for the lack of updates. I’ve started working on this issue today (thanks @wwared for the analysis). I don’t have a fix yet, but I have some initial observations from running the mem2reg pass on the test-bls-proving-time repository:

  • On latest master (on my machine with 32GiB of RAM) we are able to process 20k blocks before getting OOM-killed.
  • Switching to a shared im::HashMap, we’re able to process 45k blocks, but still get OOM-killed.
  • Switching to im::OrdMap bumps us up to 70k blocks before again getting killed.

So switching to shared map implementations improves things but does not completely fix the problem. I’m looking into pruning old entries in the blocks list now.

Edit: for reference, the target program has 865k reachable blocks in total.
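To make the memory behavior concrete, here is a minimal, hypothetical Rust sketch of the shape of the problem; it is not the actual Noir mem2reg code, and the BlockState type, field names, and per-block update pattern are illustrative assumptions. A mem2reg-style pass typically tracks, for each block, a map from memory address to the value it is known to hold; if each block clones its predecessor’s map, memory grows with blocks × tracked addresses, while a persistent map such as im::OrdMap shares structure between clones:

    // A hypothetical sketch, not the actual Noir pass.
    // Cargo.toml: im = "15"
    use im::OrdMap;

    type Address = u32;
    type ValueId = u32;

    // Per-block state in a mem2reg-style pass: the value each memory
    // address is known to hold at the end of the block. (Simplified;
    // a real pass tracks more than this.)
    #[derive(Clone, Default)]
    struct BlockState {
        known_values: OrdMap<Address, ValueId>,
    }

    fn main() {
        // Simulate a long chain of blocks, each of which inherits its
        // predecessor's state and then updates one address.
        let mut states: Vec<BlockState> = Vec::with_capacity(100_000);
        let mut current = BlockState::default();
        for block in 0..100_000u32 {
            // With std::collections::HashMap this clone copies the whole
            // map, so total memory is O(blocks * tracked addresses).
            // With im::OrdMap the clone is O(1) and the insert copies
            // only a path of tree nodes, so most structure is shared.
            let mut next = current.clone();
            next.known_values.insert(block % 64, block);
            states.push(next.clone());
            current = next;
        }
        println!("tracked state for {} blocks", states.len());
    }

This matches the direction of the numbers above: structural sharing stretches how far the pass gets (20k → 45k → 70k blocks) but does not change the fact that every block’s state stays live, which is why pruning old entries from the blocks list is the natural next step.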