go: runtime/pprof: TestMutexProfile failures

#!watchflakes
post <- pkg == "runtime/pprof" && test == "TestMutexProfile" && `profile samples total .*, want .*`

Issue created automatically to collect these failures.

Example (log):

--- FAIL: TestMutexProfile (0.23s)
    --- FAIL: TestMutexProfile/proto (0.00s)
        pprof_test.go:1283: parsed proto: PeriodType: contentions count
            Period: 1
            Time: 2023-08-16 21:15:15.609376 -0700 PDT
            Samples:
            contentions/count delay/nanoseconds
                      1 20537311749: 1 2 
                     99   47560262: 1 3 
            Locations
                 1: 0xd611c48 M=1 sync.(*Mutex).Unlock /tmp/buildlet/go/src/sync/mutex.go:223 s=212
                 2: 0xd796ef8 M=1 runtime/pprof.blockMutexN.func1 /tmp/buildlet/go/src/runtime/pprof/pprof_test.go:1135 s=1132
                 3: 0xd796dd7 M=1 runtime/pprof.blockMutexN.func2 /tmp/buildlet/go/src/runtime/pprof/pprof_test.go:1146 s=1143
            Mappings
            1: 0xd549000/0xd815000/0x0 /private/tmp/buildlet/tmp/go-build3396719100/b110/pprof.test  [FN]
            2: 0x149fe000/0x14a6a000/0xa8000 /usr/lib/dyld  
            3: 0x7ffffff58000/0x7ffffff59000/0x0   
        pprof_test.go:1312: profile samples total 20.584872011s, want 10s


About this issue

  • State: open
  • Created 10 months ago
  • Comments: 38 (14 by maintainers)

Most upvoted comments

Here’s an idea: only check the lower bound, not the upper bound. That’s really all the new behavior is about anyway. OS scheduling can always interfere and make the timing go sky-high. It’s much harder to make the time come in under the bound, though there is still an opportunity because the sleep happens before all the other goroutines have properly blocked.

OK, I have an idea. I will send a patch. At the very least it should downgrade this issue from Soon, even if it doesn’t fully resolve the flakes.