azure-pipelines-tasks: Azure File Copy build task not available on hosted Ubuntu build agent

Environment

  • Server - Azure Pipelines. Account name: chaosbyte. Team project name: D Test. Build definition name: D Test-CI. Build number: 9

  • Agent - Hosted. Queue name: not sure; it is a hosted Ubuntu 16.04 build agent

Issue Description

I have a project that pulls from an external Git repository and builds with gcc through a simple bash script - no problem. However, when I attempt to use the Azure File Copy task to copy the single output file to an Azure Blob Storage container, I am told that this task won’t run on Linux. Are there any plans to make Azure File Copy work on the hosted Ubuntu 16.04 build agent, or do you have any other suggestions for achieving the same result with existing tasks that will run on this hosted build agent?

Error logs

2018-11-27T18:03:41.7033273Z ##[section]Starting: AzureBlob File Copy
2018-11-27T18:03:41.7038323Z ==============================================================================
2018-11-27T18:03:41.7038468Z Task         : Azure File Copy
2018-11-27T18:03:41.7038567Z Description  : Copy files to Azure blob or VM(s)
2018-11-27T18:03:41.7038647Z Version      : 2.0.15
2018-11-27T18:03:41.7038738Z Author       : Microsoft Corporation
2018-11-27T18:03:41.7038823Z Help         : [More Information](https://aka.ms/azurefilecopyreadme)
2018-11-27T18:03:41.7038939Z ==============================================================================
2018-11-27T18:03:41.7244544Z ##[error]The current operating system is not capable of running this task. That typically means the task was written for Windows only. For example, written for Windows Desktop PowerShell.
2018-11-27T18:03:41.7258675Z ##[section]Finishing: AzureBlob File Copy

About this issue

  • Original URL
  • State: open
  • Created 6 years ago
  • Reactions: 56
  • Comments: 57 (6 by maintainers)

Most upvoted comments

I wanted to share my own solution to this, which works with vmImage: ubuntu-latest:

  - task: AzureCLI@1
    displayName: Az File Copy to Storage
    inputs:
      azureSubscription: $(azureSubscription)
      scriptLocation: inlineScript
      inlineScript: |
        # $web is the storage account's static-website container; the
        # backslash stops bash from expanding it as a shell variable.
        az storage blob upload-batch \
          --destination \$web \
          --account-name "$(storageAccountName)" \
          --source "$(Agent.BuildDirectory)/$(outputDir)"

The AzureFileCopy task and the az cli workaround are not at feature parity with each other. I wonder how many teams, like mine just did, spent time adding the task to the pipeline, watching it fail, and then searching the internet for a couple of hours. That is how I found this issue. The docs on this are also weak.

+1, it would be great to see an update on this

Is there any progress on this? It is very strange that tasks like JavaToolInstaller can copy an installer file from Azure Storage, but we can’t use AzureFileCopy to copy files to Azure Storage.

This is a terrible experience for your customers. Why is something as basic as copying files not supported on Linux agents the same way it is supported on Windows?

umm, I have been hoping to see some kind of update on this for over a year. What gives, guys, can we get some support on this? azcopy v10 has been GA on Linux for a while now, so why couldn’t this be integrated?

Trying to migrate our pipelines to a self-hosted Linux agent, and discovered that it’s failing because AzureFileCopy is not supported on Linux. It would be great if this could be made to work on Linux!

Not to +1 and “me too” but if the guidance from MS is to use azcopy, then they should support it in their products. What happened to MS ❤️ Linux? :trollface:

+1 (i can already see the headlines in the Washington Post in the year 2038 - “Microsoft has finalized file copy for Linux pipelines - it is now in the testing stage and soon ready for the RC stage”) 😃

Another bump for an update on this.

Any progress? This should be a thing for linux too 😃

This also left me very confused. This obviously should work on linux hosts.

Bouncing an idea here: we are evaluating whether we can expose Azure storage as a file drive. When that is available, you could copy the files over using SSH or Remote PS.
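
For anyone who wants to approximate that today with Azure Files over SMB, a minimal bash sketch - this assumes a hypothetical account mystorageacct and share myshare, that the account key is in a secret pipeline variable, and that outbound port 445 is open, which it often is not on hosted agents:

# Assumptions: storage account "mystorageacct" with an Azure Files share
# "myshare"; the account key is in the secret pipeline variable $(storageKey).
sudo apt-get install -y cifs-utils
sudo mkdir -p /mnt/myshare
sudo mount -t cifs //mystorageacct.file.core.windows.net/myshare /mnt/myshare \
  -o vers=3.0,username=mystorageacct,password="$(storageKey)",serverino
cp ./output/* /mnt/myshare/
sudo umount /mnt/myshare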

@benne - Making Azure File Copy xPlat is in our backlog. Adding @RoopeshNair. Until then, using the Azure CLI is the right way to solve this.

+1, please allow to run on linux

If we use the az cli to achieve the same functionality, we do not get the same flexibility as the Azure File Copy task in terms of getting the storage URI and SAS token as outputs. We need to make additional CLI calls to achieve the same result. So it would be great if the file copy task worked xPlat.
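
For what it’s worth, those additional calls can be scripted once and surfaced as pipeline variables with logging commands. A rough sketch, not at feature parity with the task, assuming pipeline variables azureSubscription, resourceGroup, storageAccount, and container are defined:

- task: AzureCLI@2
  displayName: Upload and export container URI + SAS
  inputs:
    azureSubscription: $(azureSubscription)
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: |
      # Fetch the account key, upload, then mimic the Azure File Copy
      # task's outputs by hand.
      key=$(az storage account keys list \
        --resource-group "$(resourceGroup)" \
        --account-name "$(storageAccount)" --query "[0].value" -o tsv)
      az storage blob upload-batch \
        --destination "$(container)" \
        --account-name "$(storageAccount)" \
        --account-key "$key" \
        --source "$(Build.ArtifactStagingDirectory)"
      expiry=$(date -u -d '+1 day' '+%Y-%m-%dT%H:%MZ')
      sas=$(az storage container generate-sas \
        --name "$(container)" \
        --account-name "$(storageAccount)" \
        --account-key "$key" \
        --permissions rl --expiry "$expiry" -o tsv)
      # Rough equivalents of outputStorageUri / outputStorageContainerSasToken:
      echo "##vso[task.setvariable variable=storageUri]https://$(storageAccount).blob.core.windows.net/$(container)"
      echo "##vso[task.setvariable variable=storageSasToken]?$sas"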

We still want this feature 😉

Thanks for sharing your workarounds! Yet, I am still waiting for the AzureFileCopy task on Linux agents because I can’t always remember this issue until I see another failed pipeline. 😃

The critical thing here is that Azure File Copy will return a SAS token, allowing you to then access these files from an ARM deployment.
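
For context, that flow looks roughly like the sketch below (Windows agents only, since the task itself is Windows-only; the variable names, container name, and template path are placeholders):

- task: AzureFileCopy@2
  inputs:
    SourcePath: $(Build.ArtifactStagingDirectory)/templates
    azureSubscription: $(azureSubscription)
    Destination: AzureBlob
    storage: $(storageAccount)
    ContainerName: templates
    # These inputs name the pipeline variables the task will populate:
    outputStorageUri: templatesUri
    outputStorageContainerSasToken: templatesSas

# The ARM deployment can then pull linked templates from the container;
# the exact URI concatenation may need adjusting for your layout.
- task: AzureResourceGroupDeployment@2
  inputs:
    azureSubscription: $(azureSubscription)
    resourceGroupName: $(resourceGroup)
    location: $(location)
    templateLocation: 'URL of the file'
    csmFileLink: $(templatesUri)/azuredeploy.json$(templatesSas)
    overrideParameters: -_artifactsLocation $(templatesUri) -_artifactsLocationSasToken "$(templatesSas)"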

is there an update on this?

@benne I will leave the roadmap for that task up to the team that owns it. The az cli task should work with the same service principal you would have set up for the Azure File Copy task.

@chrisrpatterson az cli might be my best option for now, using a non-interactive service principal. I am gonna experiment a bit with this.

Still interested in whether this is going to make it onto the roadmap at some point. It would be nice to use the built-in build task for future projects 😃

Since this is still a live issue with recent comments, and since I had the same problem and eventually landed here, I would like to note that:

  1. @kolosovpetro’s solution worked great,
  2. but only after I changed the network access rules with az storage account update --name $ACCOUNT --resource-group $RESOURCE_GROUP --default-action Allow;

otherwise the error you get just tells you that you have an authentication problem. This might save you a lot of hours thinking about your life choices while you try to resolve this.
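
If leaving the account open is not acceptable, the rule can be flipped just for the duration of the upload. A minimal bash sketch, where $ACCOUNT and $RESOURCE_GROUP are as above and $CONTAINER is a placeholder:

# Temporarily open the account, upload, then lock it back down.
az storage account update --name "$ACCOUNT" --resource-group "$RESOURCE_GROUP" --default-action Allow
sleep 30  # network-rule changes can take a little while to propagate
az storage blob upload-batch --destination "$CONTAINER" --account-name "$ACCOUNT" --source ./output
az storage account update --name "$ACCOUNT" --resource-group "$RESOURCE_GROUP" --default-action Deny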

The following seems to work to “copy” text files (or shell scripts, or …) to a VM from a pipeline, using a pipeline variable:

Disclaimer: I must admit I scared myself by doing this. Far from clean, but at least it seems to work.

jobs:
- job: TextCopyToVM

  steps:

  - bash: |
      # Read the file's contents (not its path) into a variable.
      fileContent="$(< path/to/some/text/file/from/git/repo.txt)"
      #
      # Replace \n with %0D%0A, so a multiline pipeline variable is possible
      #
      escapedFile=$(echo "$fileContent" | sed ':a;N;$!ba;s/\n/%0D%0A/g')
      echo "##vso[task.setvariable variable=localFileEscaped]$escapedFile"

  - task: AzureCLI@2
    displayName: 'Echo var into file'
    inputs:
      azureSubscription: '<Azure Subscription Name>'
      scriptType: 'bash'
      scriptLocation: 'inlineScript'
      inlineScript: |
        echo "localFileEscaped: $(localFileEscaped)"
        # $(localFileEscaped) is expanded by the pipeline before the script
        # runs, so the single quotes below do not block the substitution.
        COPY_FILE="$(az vm run-command invoke -g <RESOURCE GROUP> -n <VM NAME> --command-id RunShellScript --scripts 'echo "$(localFileEscaped)" > /some/path/to/file/on/vm.txt')"

AzCopy may be used directly from AzureCLI@2 as follows:

- task: AzureCLI@2
  displayName: 'AzCopy via Azure CLI'
  inputs:
    azureSubscription: 'AzCopyARM'
    scriptType: 'pscore'
    scriptLocation: 'inlineScript'
    inlineScript: |
      # Build a short-lived SAS from the account key, then hand it to azcopy.
      $Date = (Get-Date).AddDays(1).ToString('yyyy-MM-dd')
      $key = $( az storage account keys list --resource-group $(rgName) --account-name $(storageAccount) --query "[0].value" -o tsv )
      $sas = $( az storage container generate-sas --name $(container) --expiry $Date --permissions "racwdli" --account-name $(storageAccount) --account-key "$key" -o tsv )
      # Use the PowerShell variable $sas here, not a pipeline macro $(SAS).
      azcopy copy "./seed_images/*" "https://$(storageAccount).blob.core.windows.net/$(container)/from_az_cli?$sas" --recursive=true


Four and a half years (including a whole pandemic) and this still isn’t working? Is it still on the backlog, @kmkumaran?

@jamiehaywood thank you so much! Does anyone know how I can narrow down copying to a repo’s subfolder? I’m trying this now with wild guesses and it’s either erroring out or copying all the unnecessary files, which ends up taking over 60 minutes and erroring out anyway.
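
Not a definitive answer, but two things that should narrow the copy to a subfolder - either point upload-batch’s --source at the subfolder, or use azcopy’s --include-path filter. A sketch with placeholder names (mystorageacct, mycontainer, repo-subfolder, $SAS):

# Option 1: point upload-batch's --source at the subfolder only.
az storage blob upload-batch \
  --destination mycontainer \
  --account-name mystorageacct \
  --source "$(Build.SourcesDirectory)/repo-subfolder"

# Option 2: with azcopy, copy from the repo root but filter by relative path.
azcopy copy "$(Build.SourcesDirectory)" \
  "https://mystorageacct.blob.core.windows.net/mycontainer?$SAS" \
  --include-path "repo-subfolder" --recursive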