github-action-benchmark: Benchmark job never finishes

Hello,

We are currently having an issue with v1.19.2 of this action: it hangs and never finishes (even after hours). We tried enabling debug logs, but they show nothing useful.

PR showing the issue: https://github.com/gofiber/fiber/pull/2818
Job (debug enabled): https://github.com/gofiber/fiber/actions/runs/7695977872/job/21065465963?pr=2818

CI File:

on:
  push:
    branches:
      - master
      - main
    paths:
      - "**"
      - "!docs/**"
      - "!**.md"
  pull_request:
    paths:
      - "**"
      - "!docs/**"
      - "!**.md"

name: Benchmark
jobs:
  Compare:
    runs-on: ubuntu-latest
    steps:
      - name: Fetch Repository
        uses: actions/checkout@v4

      - name: Install Go
        uses: actions/setup-go@v5
        with:
          go-version: "1.20.x"

      - name: Run Benchmark
        run: set -o pipefail; go test ./... -benchmem -run=^$ -bench . | tee output.txt

      - name: Get Previous Benchmark Results
        uses: actions/cache@v4
        with:
          path: ./cache
          key: ${{ runner.os }}-benchmark

      - name: Save Benchmark Results
        uses: benchmark-action/github-action-benchmark@v1.19.2
        with:
          tool: "go"
          output-file-path: output.txt
          github-token: ${{ secrets.BENCHMARK_TOKEN }}
          benchmark-data-dir-path: "benchmarks"
          fail-on-alert: true
          comment-on-alert: ${{ github.event_name == 'push' || github.event_name == 'workflow_dispatch' }}
          auto-push: false
          save-data-file: ${{ github.event_name == 'push' || github.event_name == 'workflow_dispatch' }}
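
For context, the output.txt that the action parses with tool: "go" is plain go test -bench output. A representative line (hypothetical benchmark name and numbers) looks like:

BenchmarkCtxSend-8   3000000   400 ns/op   0 B/op   0 allocs/op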

About this issue

  • State: closed
  • Created 5 months ago
  • Reactions: 1
  • Comments: 18 (9 by maintainers)

Most upvoted comments

@gaby I’m glad! I’ll wait for @ningziwen to have a look at the PR. I’ve added some backward compatibility, because I noticed that before v1.18.0, when a Go benchmark reported multiple metrics, a weird unit was extracted: e.g. "ns/op 0 B/op 0 allocs/op" was treated as the unit instead of just "ns/op".

So in your case the resulting benchmarks will be:

  • <YourRegularBenchName> - this will be the backward-compatible metric, with the unit being slightly wrong (as it used to be)
  • <YourRegularBenchName> - ns/op
  • <YourRegularBenchName> - B/op
  • <YourRegularBenchName> - allocs/op

I’ll probably add an option like goBackwardCompatibleMetrics that defaults to true and deprecate it in a future version. I’ve created an issue to track that: #226
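
To make the difference concrete, here is a minimal Go sketch (not the action’s actual parser) of how a single go test -bench line fans out into per-metric entries under the post-#177 behavior; the benchmark name and numbers are made up:

package main

import (
    "fmt"
    "strconv"
    "strings"
)

// parseBenchLine splits one `go test -bench` output line into
// (name, value, unit) triples, one per metric. Before v1.18.0,
// everything after the first value ("ns/op 0 B/op 0 allocs/op")
// was kept as a single unit string instead.
func parseBenchLine(line string) {
    fields := strings.Fields(line)
    name := fields[0] // e.g. "BenchmarkCtxSend-8" (hypothetical)
    // fields[1] is the iteration count; value/unit pairs follow.
    for i := 2; i+1 < len(fields); i += 2 {
        value, err := strconv.ParseFloat(fields[i], 64)
        if err != nil {
            continue
        }
        fmt.Printf("%s - %s: %g %s\n", name, fields[i+1], value, fields[i+1])
    }
}

func main() {
    parseBenchLine("BenchmarkCtxSend-8   3000000   400 ns/op   0 B/op   0 allocs/op")
}

With the old behavior, the same line would produce a single entry whose unit was the whole tail "ns/op 0 B/op 0 allocs/op".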

Released in v1.19.3.

@gaby please confirm whether that works for you.

Hey @gaby, I believe I have a fix ready! Would you mind having a look at PR #225?

One thing to note: with this upgrade you will get many more benchmark results, because since #177 we extract all the additional metrics from the benchmark output. In your case it would be (see the example after this list):

  • <YourRegularBenchName>
  • <YourRegularBenchName> - B/op
  • <YourRegularBenchName> - allocs/op
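
So, for a hypothetical output.txt line such as

BenchmarkCtxSend-8   3000000   400 ns/op   0 B/op   0 allocs/op

you would get three tracked series: BenchmarkCtxSend-8 with 400 ns/op, BenchmarkCtxSend-8 - B/op with 0, and BenchmarkCtxSend-8 - allocs/op with 0.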

@gaby I think so. Can I use the output of your benchmarks as a test case? I’ll trim it down once I find what’s causing the issue.

@ktrz Is that related to PR #177?

Kinda makes sense; our benchmark suite is quite extensive, so it could have exposed some edge-case bug.