runner: github.workspace and runner.workspace are incorrect inside container jobs

Describe the bug

github.workspace and runner.workspace don’t point to valid container paths when executing inside a container job.

The values are also inconsistent with the values of the env variables GITHUB_WORKSPACE and RUNNER_WORKSPACE (both contain a valid path).

To Reproduce

Steps to reproduce the behavior:

  1. Create a workflow with the following jobs:
jobs:
  runner:
    runs-on: ubuntu-latest

    steps:
      - name: dump
        run: |
          echo 'github.workspace === ${{ github.workspace }}'
          echo "GITHUB_WORKSPACE === $GITHUB_WORKSPACE"
          echo 'runner.workspace === ${{ runner.workspace }}'
          echo "RUNNER_WORKSPACE === $RUNNER_WORKSPACE"
  container:    
    runs-on: ubuntu-latest
    container:
      image: node:14.16
      
    steps:
      - name: dump
        run: |
          echo 'github.workspace === ${{ github.workspace }}'
          echo "GITHUB_WORKSPACE === $GITHUB_WORKSPACE"
          echo 'runner.workspace === ${{ runner.workspace }}'
          echo "RUNNER_WORKSPACE === $RUNNER_WORKSPACE"
  2. Check the value in the container job for ${{ github.workspace }} and $GITHUB_WORKSPACE, which are not the same
  3. Check the value in the container job for ${{ runner.workspace }} and $RUNNER_WORKSPACE, which are not the same

Expected behavior

On the container job, github.workspace and runner.workspace should point to a path under the directory /__w.
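For illustration, with the reproduction workflow above the expressions would then presumably match the environment variables:

github.workspace === /__w/testContainers/testContainers
runner.workspace === /__w/testContainers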

Runner Version and Platform

Version of your runner? 2.295.0

OS of the machine running the runner? Ubuntu 20.04.4 (ubuntu-latest)

What’s not working?

The values for github.workspace and runner.workspace are incorrect in the container job (and inconsistent with the respective env variables).

Job Log Output

If applicable, include the relevant part of the job / step log output here. All sensitive information should already be masked out, but please double-check before pasting here.

Output of container dump step in the container job

github.workspace === /home/runner/work/testContainers/testContainers
GITHUB_WORKSPACE === /__w/testContainers/testContainers
runner.workspace === /home/runner/work/testContainers
RUNNER_WORKSPACE === /__w/testContainers

Runner and Worker’s Diagnostic Logs

If applicable, add relevant diagnostic log information. Logs are located in the runner’s _diag folder. The runner logs are prefixed with Runner_ and the worker logs are prefixed with Worker_. Each job run correlates to a worker log. All sensitive information should already be masked out, but please double-check before pasting here.

About this issue

  • Original URL
  • State: open
  • Created 2 years ago
  • Reactions: 51
  • Comments: 18 (2 by maintainers)

Most upvoted comments

Hi @tspascoal,

Thanks for reporting this issue. We’ve investigated this issue before and are working on resolving it 👍

2022-08-22 Update: Correctly translating github.workspace causes some regressions, considering introducing github.host-workspace
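Purely as a hypothetical illustration of what such a context could look like (github.host-workspace is only something the maintainers said they are considering; neither its name nor its semantics are confirmed):

      - name: dump both paths
        run: |
          # today: resolves to the host path even inside the container job
          echo 'github.workspace      === ${{ github.workspace }}'
          # hypothetical: would keep exposing the host path if github.workspace were translated
          echo 'github.host-workspace === ${{ github.host-workspace }}'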

One thing to note is that ${{ runner.workspace }} inside the run part of a step yields different results than ${{ runner.workspace }} inside the working-directory of a step.

In my case, working-directory will spit out /__w/REPOSITORY/, while run will spit out /home/ubuntu/actions-runner/_work/REPOSITORY/.
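A minimal sketch of that comparison (the job name and image below are illustrative, not the commenter’s actual setup):

  compare:
    runs-on: self-hosted
    container:
      image: node:14.16
    steps:
      # Expression expanded inside run: reportedly resolves to the host path
      - name: workspace via run
        run: echo 'runner.workspace === ${{ runner.workspace }}'

      # Same expression used as working-directory: reportedly resolves to the container path
      - name: workspace via working-directory
        working-directory: ${{ runner.workspace }}
        run: pwd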

Hello @fhammerl,

Did you make any progress on this issue?

We’ve started experiencing the same issues since we moved to running jobs inside containers, and the most interesting part is that we used the ${{ github.workspace }} variable in all cases. Our pipeline looked like this:

  build:
    name: Build, Test & Publish
    runs-on: self-hosted
    container:
      image: internalregistry.io/internal-image:3
    steps:
      - uses: actions/checkout@v3
          
      - name: Build, Test and analyze
        shell: bash
        run: |
          dotnet test 

      - name: Publish
        shell: bash
        run: |
          for f in $(find src -name '*.csproj'); do
            d=$(dirname  $f)
            outputFolder=${{ github.workspace }}/${{ env.ARTIFACT }}/$d
            ( cd "$d" && dotnet publish --no-self-contained -c $BUILD_CONFIGURATION -o $outputFolder )
          done

      - name: Publish artifacts
        uses: actions/upload-artifact@v3.1.1
        with:
          name: ${{ env.ARTIFACT }}-v${{ github.run_number }}
          retention-days: 1
          path: ${{ github.workspace }}/${{ env.ARTIFACT }}/src
          if-no-files-found: error

And $outputFolder looked like this: /__w/<project_name>/<project_name>/<env.ARTIFACT>/src/SomeProject. But in the Publish artifacts step we were still getting the error:

Error: No files were found with the provided path: /__w/<project_name>/<project_name>/<env.ARTIFACT>/src. No artifacts will be uploaded.

Then we changed the workflow to the following to overcome the problem, and it started working:

  build:
    name: Build, Test & Publish
    runs-on: self-hosted
    container:
      image: internalregistry.io/internal-image:3
    steps:
      - uses: actions/checkout@v3
          
      - name: Build, Test and analyze
        shell: bash
        run: |
          dotnet test 

      - name: Publish
        shell: bash
        run: |
          echo "GITHUB_WORKSPACE=$GITHUB_WORKSPACE" >> $GITHUB_ENV
          for f in $(find src -name '*.csproj'); do
            d=$(dirname  $f)
            outputFolder=${{ env.GITHUB_WORKSPACE }}/${{ env.ARTIFACT }}/$d
            ( cd "$d" && dotnet publish --no-self-contained -c $BUILD_CONFIGURATION -o $outputFolder )
          done

      - name: Publish artifacts
        uses: actions/upload-artifact@v3.1.1
        with:
          name: ${{ env.ARTIFACT }}-v${{ github.run_number }}
          retention-days: 1
          path: ${{ env.GITHUB_WORKSPACE }}/${{ env.ARTIFACT }}/src
          if-no-files-found: error

Now the paths in both steps look the same: /__w/<project_name>/<project_name>/<env.ARTIFACT>/src. So in short, we saved $GITHUB_WORKSPACE to an environment variable and started using it instead of ${{ github.workspace }}. Maybe we don’t even need to save it and can just use it out of the box; we didn’t try this.
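A short sketch of why the re-export helps, assuming my reading of the mechanics is right (env.ARTIFACT is the same variable as in the workflow above): inside run the shell expands $GITHUB_WORKSPACE itself, so it already gets the container path, but with: inputs are not passed through a shell, so they need an expression such as ${{ env.GITHUB_WORKSPACE }}, which only exists after the variable has been written to $GITHUB_ENV:

      - name: Publish
        shell: bash
        run: |
          # Shell expansion: $GITHUB_WORKSPACE already points at the container path (/__w/...)
          echo "publishing into $GITHUB_WORKSPACE/${{ env.ARTIFACT }}"
          # Re-export it so later `with:` inputs can reach it through the env context
          echo "GITHUB_WORKSPACE=$GITHUB_WORKSPACE" >> $GITHUB_ENV

      - name: Publish artifacts
        uses: actions/upload-artifact@v3.1.1
        with:
          # Not shell-expanded, so an expression is required here
          path: ${{ env.GITHUB_WORKSPACE }}/${{ env.ARTIFACT }}/src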

I started using artifacts. In the first job (Job 1), push the file to an artifact:

  # Archive zip
  - name: Archive artifact
    uses: actions/upload-artifact@v3
    with:
      name: files-${{ github.run_id }}.tar.gz
      retention-days: 1
      if-no-files-found: error
      path: |
        ${{ github.workspace }}/files-${{ github.run_id }}.tar.gz

In the second job, download the file in order to use it:

  # Fetch Dump from artifact storage
  - uses: actions/download-artifact@v2
    with:
      name: files-${{ github.run_id }}.tar.gz

After all other tasks, delete the artifact; we don’t want to keep it outside of the job run:

  # Delete artifact
  - uses: geekyeggo/delete-artifact@v2
    with:
      name: |
          files-${{ github.run_id }}.tar.gz

The infra people at our organisation wanted the workers to be ephemeral and not to link in storage. The “workaround” with the artifacts actually works quite well.

What I do differently is that I have the Helm charts in a separate repo and I pull them in:

      - name: Checkout helm repo
        uses: actions/checkout@v3
        with:
          repository: woutersf/drupal-helm-charts-k8s
          path: upstream_helm
          token: ${{ secrets.GIT_CHECKOUT_TOKEN }}
          ref: '${{ inputs.aks_branch }}'

and then in a later step:

      # Validate helm stuff to not leave a broken state.
      - name: DRY RUN
        run: |
          cd helm
          helm upgrade --dry-run --timeout 15m --install -n ${{ inputs.app_key }}-${{ inputs.environment }} -f values-${{ inputs.environment }}.yaml --set deploy.runid=${{ github.run_id }} --set image.tag=${{ inputs.version }}  --set image.version=${{ inputs.version }}  ${{ inputs.app_key }} .

This takes the YAML files in the CI/CD and tries to apply them to Kubernetes. The files are available across the multiple steps.

If you really want the files on the pods, that’s what kubectl cp is for, but that should not be needed to apply Helm charts.

    steps:
      - name: Checkout
        uses: actions/checkout@v3

To move files in and out of a pod I use the following:

      # COPY FILE TO WORKSPACE
      - name: COPY FILE FROM POD TO WORKSPACE
        shell: bash 
        run: |
         kubectl cp -n ${{ inputs.app_key }}-${{ inputs.environment }} ${{ inputs.app_key }}-${{ inputs.environment }}/$POD_NAME:/tmp/${{ inputs.app_key }}-${{ inputs.environment }}-${{ github.run_id }}.sql.gz ${{ github.workspace }}/${{ inputs.app_key }}-${{ inputs.environment }}-${{ github.run_id }}.sql.gz

I would suspect that first a checkout step fetches the Helm charts you need, and then in a subsequent step your example would suffice.
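A rough sketch of that suggestion, reusing the checkout and helm pieces from the comments above (the repository, token, and inputs.* names are the same illustrative ones as earlier, not a verified pipeline):

      - name: Checkout helm repo
        uses: actions/checkout@v3
        with:
          repository: woutersf/drupal-helm-charts-k8s
          path: upstream_helm
          token: ${{ secrets.GIT_CHECKOUT_TOKEN }}

      # The charts now sit in the workspace, so a later step can apply them
      - name: DRY RUN
        run: |
          cd upstream_helm
          helm upgrade --dry-run --install -n ${{ inputs.app_key }}-${{ inputs.environment }} -f values-${{ inputs.environment }}.yaml ${{ inputs.app_key }} .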