solidity-coverage: Unresponsive with 100% CPU

I was trying out solidity-coverage today but ran into issues. I have ~17 contracts, and after “Instrumenting…” the process pegs one CPU core at 100% and never seems to finish (I waited a bit over an hour).

How can I go about debugging this? I’ve tried the usual Node DEBUG environment variable with no useful output:

DEBUG="*" ./node_modules/.bin/solidity-coverage

Here’s my .solcover.js file. I have also tried it without specifying port, testCommand, and norpc (also, ganache does not appear to receive any transactions during this process).

module.exports = {
    port: 8545,
    testCommand: './node_modules/.bin/truffle test --network dev',
    norpc: true,
    copyNodeModules: true,
    skipFiles: ['Test.sol']
};

The process cannot be terminated with a keyboard interrupt (Ctrl+C) either. I’m running this on an Arch Linux system, if that helps. Let me know if I can provide any more information.

About this issue

  • State: closed
  • Created 6 years ago
  • Comments: 34 (19 by maintainers)

Most upvoted comments

Hi @vs77bb, the project type is Traditional, so it’s one worker at a time and I cannot participate; if there is a way that I don’t currently know about, please let me know 😄. Also, I wanted to clarify that I already had the diagnosis from a project I’m working on that uses the Oraclize service with solidity-coverage, and I wasn’t targeting the bounty in the first place (it came out as a bonus). I also want to apologize to @mttmartin for the hijack.

The exclusion issue arises because the line https://github.com/sc-forks/solidity-coverage/blob/a40ba7b1b4c4fe03352eb4899aa27931a6734f6b/lib/app.js#L390 in postProcessPure does not properly check for excluded files; #259 addresses this by adding an exclusion file/folder check. As for the bug itself, the process “hangs” (in fact it is still working) because the unbracketed-singleton-statement preprocessor at https://github.com/sc-forks/solidity-coverage/blob/a40ba7b1b4c4fe03352eb4899aa27931a6734f6b/lib/preprocessor.js#L40 is too slow when parsing large contracts. The main issue is the loop at https://github.com/sc-forks/solidity-coverage/blob/a40ba7b1b4c4fe03352eb4899aa27931a6734f6b/lib/preprocessor.js#L45, which re-parses the entire contract on every pass and therefore takes a very long time. This claim can be verified by applying the following patch:

--- lib/preprocessor.js (revision a40ba7b1b4c4fe03352eb4899aa27931a6734f6b)
+++ lib/preprocessor.js (date 1531305439000)
@@ -42,7 +42,10 @@
 
   while (keepRunning) {
     try {
+      console.time('parse');
       const ast = SolidityParser.parse(contract);
+      console.log('length: ' + contract.length);
+      console.timeEnd('parse');
       keepRunning = false;
       SolExplore.traverse(ast, {
         enter(node, parent) { // eslint-disable-line no-loop-func
@@ -84,6 +87,7 @@
         },
       });
     } catch (err) {
+      console.log(err);
       contract = err;
       keepRunning = false;
     }

and getting this output:

length: 46508
parse: 41473.762ms
length: 46500
parse: 41236.352ms
length: 46502
parse: 43498.465ms
length: 46504
parse: 41831.958ms
length: 46506
parse: 40028.604ms

and roughly 40 × 40 / 60 ≈ 26.7 minutes later (about 40 more passes at ~40 s each):

length: 46544
parse: 38760.110ms
length: 46540
parse: 41115.355ms
length: 46542
parse: 38606.990ms
length: 46544
parse: 36695.497ms
length: 46546
parse: 47062.380ms

This indicates that every iteration takes approximately 40 seconds (on my machine); multiply that by the number of required passes and the result is extremely time-consuming.
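To make the cost pattern concrete, here is a minimal, self-contained sketch of the loop shape described above (this is not the real preprocessor code; fakeParse, bracketFirst, and the pass count of 40 are made-up stand-ins). The point is that every single statement rewrite forces a full re-parse of the contract, so total time ≈ passes × full-parse time:

// Stand-in for SolidityParser.parse: cost scales with the source length.
function fakeParse(source) {
  let n = 0;
  for (let i = 0; i < source.length * 10000; i += 1) n += i;
  return n;
}

// Stand-in rewrite: bracket the first "unbracketed" marker, or return null when done.
function bracketFirst(source) {
  const i = source.indexOf('#');
  return i === -1 ? null : `${source.slice(0, i)}{}${source.slice(i + 1)}`;
}

let contract = '#'.repeat(40); // pretend 40 statements still need brackets
let parses = 0;
let keepRunning = true;
while (keepRunning) {
  fakeParse(contract);         // full re-parse on every pass
  parses += 1;
  const rewritten = bracketFirst(contract);
  if (rewritten === null) keepRunning = false;
  else contract = rewritten;
}
console.log(`${parses} full parses to bracket 40 statements`);
// At ~40 s per real parse of a ~46 KB contract: 40 × 40 s ≈ 26.7 minutes.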

These conditions are triggered when using the usingOraclize.sol file (SHA256: 79283d8e5f4f30fe33c6b0ab2296531bdd159f19438ec18fe9ebae2a5bc7edae), which is picked up for processing at: https://github.com/sc-forks/solidity-coverage/blob/a40ba7b1b4c4fe03352eb4899aa27931a6734f6b/lib/app.js#L393
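Conceptually, the exclusion check that #259 adds is along these lines (a hypothetical sketch, not the actual PR code; the function and variable names are made up): don’t post-process a file whose path matches a skipFiles entry.

const path = require('path');

// Sketch of the guard: treat an entry as a file path, a folder prefix,
// or a bare file name, and skip post-processing when it matches.
function isExcluded(filePath, skipFiles) {
  return skipFiles.some(entry =>
    filePath === entry ||
    filePath.startsWith(entry + path.sep) ||
    path.basename(filePath) === entry
  );
}

// e.g. isExcluded('contracts/usingOraclize.sol', ['usingOraclize.sol']) === true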

Finally addressed this a bit. @h3ph4est7s’s patch fix is available as an option starting with 0.5.10:

deepSkip: true

If you’re skipping files/folders and are experiencing instrumentation hangs, this works. If you’re using mocks that need to be post-processed but are skipping them because you don’t want them in the report (Zeppelin is an example of this), this will cause problems.
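For reference, a minimal .solcover.js enabling it might look like this (a sketch; it just adds the flag to a config like the one from the original report):

module.exports = {
    port: 8545,
    norpc: true,
    skipFiles: ['Test.sol'],
    deepSkip: true // 0.5.10+: skipped files are not post-processed, avoiding the hang
};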

@h3ph4est7s just paid out! Great work here 🙂

Ah ok interesting! I’m satisfied with this solution - great work @h3ph4est7s!

@vs77bb - @h3ph4est7s has successfully completed the work for this bounty. Thanks again to both of you for helping here.

@cgewecke to run Oraclize tests effectively the bridge must be used, but so far I haven’t noticed any bad interaction with solidity-coverage. The Oraclize system consists of three parts: the client code (usingOraclize), the resolver, and the actual contract. The latter two are deployed by the bridge and are not under any coverage because they are outside the scope. Also, PR https://github.com/sc-forks/solidity-coverage/pull/259 successfully corrects the issue if usingOraclize is excluded, but the performance impact of the parser is still there. In other words, we only care about verifying the extending contract. About the patch: I tried to tweak the parser’s behaviour by traversing the whole AST without re-parsing (which is much faster), keeping additional offsets for every block-marker addition. For the visibility specifier I used empty spaces to keep the alignment, but I have some misaligned brackets at a nesting point. My new approach will be to realign the AST offsets per affected part, i.e. change the start and end values according to how many bytes were added before them on every addition. @mttmartin thank you for your cooperative spirit and kind words, I really appreciate it.
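Roughly, the offset realignment I have in mind looks like this (a simplified, hypothetical sketch, not the actual patch; the node shape with numeric start/end fields is an assumption):

// After inserting `insertedLength` bytes at position `insertPos` in the source,
// shift the start/end offsets of every AST node at or after that point, so the
// existing AST can keep being used without a full re-parse.
function shiftOffsets(node, insertPos, insertedLength) {
  if (node == null || typeof node !== 'object') return;
  if (typeof node.start === 'number' && node.start >= insertPos) {
    node.start += insertedLength;
  }
  if (typeof node.end === 'number' && node.end >= insertPos) {
    node.end += insertedLength;
  }
  for (const key of Object.keys(node)) {
    const child = node[key];
    if (Array.isArray(child)) child.forEach(c => shiftOffsets(c, insertPos, insertedLength));
    else if (child && typeof child === 'object') shiftOffsets(child, insertPos, insertedLength);
  }
}

// e.g. after inserting '{' (1 byte) at position p: shiftOffsets(ast, p, 1)
// instead of re-parsing the whole contract.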

Hi @h3ph4est7s would you mind claiming this on Gitcoin here? @mttmartin has kindly agreed to give up his ‘Start Work’ here as he hadn’t gotten going quite yet. We can add you from the wait list once you start work.

Hi @cgewecke, yes, we are different people.

Ok excellent @h3ph4est7s, thanks so much. That definitely satisfies the diagnosis part of this bounty in my view. The PR looks reasonable as well, although it would be nice if there were a less awful algorithm in the pre-processor (I wrote it).

Also @h3ph4est7s to be clear - you and @mttmartin are different people?