azure-cli: az storage blob {upload|copy blob} doesn't support SAS URLs
Describe the bug
My entire complaint about managed disks and storage accounts seems to already be solved by Azure, but it isn't documented. Further, after some time, I still can't piece together whether it's actually possible with the CLI.
```bash
az disk create ... --for-upload true ...
```
returns a SAS URL that looks like this:
https://md-impexp-t4nfwj1qcc3j.blob.core.windows.net/pk5qjbl3lnhn/abcd?sv=2017-04-17&sr=b&si=1d2b08af-d097-4d3f-b468-a7d4c91c7d04&sig=%2FjClxzQFf3O1ulyHn3xR1287XPdweHLnJKdYfj3ErZI%3D
Does the CLI support using that SAS URL directly with `az storage blob upload` or `az storage blob copy start`? If not, I don't really see how this scenario is complete under the CLI.
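For context, here is a minimal sketch of the flow that produces such a SAS. Resource names and sizes are illustrative, and note that it is `az disk grant-access` (not `az disk create` itself) that returns the writable md-impexp-* URL:

```bash
# Illustrative names/sizes. Create an empty managed disk in the upload
# state; --upload-size-bytes must match the VHD size (including footer).
az disk create \
  --resource-group myGroup \
  --name myDisk \
  --for-upload true \
  --upload-size-bytes 10737418752

# Request a writable SAS on the disk; this is what returns the
# md-impexp-* blob URL shown above.
az disk grant-access \
  --resource-group myGroup \
  --name myDisk \
  --access-level Write \
  --duration-in-seconds 3600

# After uploading, revoke the SAS so the disk leaves the upload state
# (the revoke-access step discussed later in the thread).
az disk revoke-access --resource-group myGroup --name myDisk
```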
About this issue
- State: closed
- Created 5 years ago
- Comments: 48 (34 by maintainers)
@colemickens The direct upload to managed disk feature is currently in the beta phase. We haven't even announced it publicly. That is why you are experiencing rough edges in the customer experience. Thanks for trying out the feature by looking at the REST API. We appreciate it!
Now, let's talk about the problem:
`az storage blob upload` uses a storage API that first creates a blob on the target and then copies the data from source to target. We don't allow clients to create a blob in md-impexp-* storage accounts, as these are special accounts used only for exporting/importing managed disks. Clients can only call Put Page on the blobs pre-created by the system. That is why you are getting the error message.
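To make the distinction concrete, here is a hedged REST-level sketch (the SAS URL and page file are placeholders): creating a blob is rejected on these accounts, but Put Page against the pre-created blob is permitted.

```bash
# $SAS_URL is the writable SAS returned by `az disk grant-access`.
# Put Blob (i.e. creating/replacing the blob) is what `az storage blob
# upload` attempts first, and it is blocked on md-impexp-* accounts.

# Put Page on the blob the service pre-created is allowed. This writes
# one 512-byte-aligned page; real tooling loops over the whole VHD.
curl -X PUT "${SAS_URL}&comp=page" \
  -H "x-ms-version: 2017-04-17" \
  -H "x-ms-page-write: update" \
  -H "Range: bytes=0-511" \
  --data-binary @first-page.bin
```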
We are planning to fix this problem soon by using the Put Page API; @Juliehzl already mentioned it.
In the meantime, we recommend the command below, which uses azcopy under the hood. The latest version of azcopy is already aware of managed disks in the upload state. It might work for NixOS.
```bash
az storage copy -s https://yourstorageaccountname.blob.core.windows.net/test/d.vhd -d managedDiskSASUri
```
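If you would rather invoke azcopy directly, the equivalent call is roughly the following sketch (the source URL and destination SAS are placeholders; `--blob-type PageBlob` forces the page-blob semantics a VHD needs):

```bash
# Placeholders for the source VHD and the managed-disk write SAS.
azcopy copy \
  "https://yourstorageaccountname.blob.core.windows.net/test/d.vhd" \
  "https://md-impexp-xxxx.blob.core.windows.net/xxxx/abcd?<sas>" \
  --blob-type PageBlob
```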
As an update from me, and in case it's helpful to someone else: `azcopy` can't handle page blobs piped from standard in, so now I use `blobxfer`. I can now store images compressed with `zstd`, and then I can `zstdcat` into `blobxfer` to upload. `blobxfer` properly handles the input from stdin and skips the empty sections of the extracted image, so I get the benefits of having a pre-sized image without having to store it decompressed anywhere.

(@ramankumarlive also, to circle back, I think you were right. I added the `revoke-access` call to my script and it seems to work reliably.)

@ramankumarlive @zezha-msft @Juliehzl: Thank you all for the help. I have successfully uploaded an image and replicated it to many regions using a SIG. I will be finishing this up and pushing it into NixOS's nixpkgs in the coming week or so, where it could serve as a full e2e example of custom images without managing storage accounts in Azure. I really appreciate the direct, quick help here; thanks so much.
(Thanks for the guidance, but the SAS token was only valid for an hour (or at least it was supposed to be, based on the arguments I passed to `grant-access`), so it has long since expired. Anyway, it was just 50 GB of zeros anyhow 😄. I do take care to wait for them to expire, or to scrub the SAS/access tokens generally, when pasting here and elsewhere.)