nodejs-storage: getSignedURL() in Google Cloud Function produces link that works for several days, then returns "SignatureDoesNotMatch" error

Environment details

  • OS: Heroku dyno (Ubuntu 16.04)
  • Node.js version: 8.6
  • npm version: 6.0.0
  • @google-cloud/storage version: 1.6.0

Steps to reproduce

Repo of relevant code: https://github.com/colinjstief/getSignedUrl-example

Much of this code is copied/pasted straight from the official example (link here). The code in my repo runs as a Google Cloud Function and produces a link that works for a few days (maybe 7?), and then seems to expire with a “SignatureDoesNotMatch” 403 response. The full message reads:

“The request signature we calculated does not match the signature you provided. Check your Google secret key and signing method.”

URL seems valid, including this query parameter: ‘…&Expires=16730323200&Signature=…’

Both the initial request and the later request (after the apparent expiration) are performed the same way: either through an axios.get() or simply by pasting the URL into the browser bar. Both work at first, then both fail later.

Same issue seems to be reported here GoogleCloudPlatform/google-cloud-node#1976, here googleapis/nodejs-storage#144, and here https://github.com/firebase/functions-samples/issues/360

About this issue

  • Original URL
  • State: closed
  • Created 6 years ago
  • Reactions: 12
  • Comments: 88 (17 by maintainers)

Most upvoted comments

@frankyn @stephenplusplus This whole thing is messy because what Firebase devs want in these cases are permanent web URLs to supply to clients for downloading from GCS. Signed URLs are a very powerful tool, but what I think would be beneficial is to expose the simpler “firebase flavour” permanent web URLs (based on metadata tokens) in the Firebase Admin SDK. Because the storage SDK for Firebase Admin is currently just the GCS SDK, the only option is to use signed URLs, which, as we’ve found out, are sensitive to auth accounts [and have been problematic in the past with auth changes in GCP architecture] and aren’t really permanent by design (yes, you can set them to the year 2500, but that’s exactly my point). As Firebase uses its built-in auth mechanism and storage rules for security and privacy on the client side, providing the firebase download URLs on the server side would be really helpful in achieving what we really want. Hope you’ll consider it.

After some googling, I found that files without read restrictions can be downloaded with the URL pattern:

https://firebasestorage.googleapis.com/v0/b/<bucketName>/o/<urlEncodedFilePath>?alt=media

Are there any known limitations? I just want to use this for publicly-accessible images*, and I plan to use it with manual versioning (no overwriting or reusing the same name).

*publicly-accessible images such as product image on e-commerce website, seller’s display picture, etc.
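For reference, the URL pattern quoted above can be sketched as a small helper. This is only an illustration of the pattern as described; the bucket and file names are made up:

```javascript
// Build the unauthenticated Firebase-style download URL for a public object.
// Sketch only: the host and path shape follow the pattern quoted above.
function firebaseDownloadUrl(bucketName, filePath) {
  // The full object path must be URL-encoded, including the "/" separators.
  const encodedPath = encodeURIComponent(filePath);
  return `https://firebasestorage.googleapis.com/v0/b/${bucketName}/o/${encodedPath}?alt=media`;
}

console.log(firebaseDownloadUrl('my-app.appspot.com', 'products/main.jpg'));
// → https://firebasestorage.googleapis.com/v0/b/my-app.appspot.com/o/products%2Fmain.jpg?alt=media
```

Note that this form only works for objects whose read access is not restricted, as stated above.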

Hi @frankyn,

Firebase SDK provides an API to create permanent download URLs for objects on GCS. This is done using the following API (example in Swift):

func downloadURL(completion: @escaping (URL?, Error?) -> Void)

The URL it returns looks like this:

https://firebasestorage.googleapis.com/v0/b/[bucket_name]/o/[file_name]?alt=media&token=[some_uuid_token]

Behind the scenes Firebase creates a token for each file on storage and saves it to the file’s metadata (when browsing GCS using console you can easily view the token). I [safely] assume that the firebase service (at https://firebasestorage.googleapis.com) compares the token supplied by the client to the file’s metadata and accepts/declines downloads requests. [PS: you can revoke a token if required to prevent further access to clients]
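The description above can be sketched as a helper that turns an object's metadata (as returned by file.getMetadata() in @google-cloud/storage) into a download URL. The custom-metadata key name used here is an assumption based on what the console shows, and it is an implementation detail that may change:

```javascript
// Sketch only: assumes the download token lives under the custom-metadata key
// "firebaseStorageDownloadTokens" (an implementation detail that may change).
function downloadUrlFromMetadata(bucketName, filePath, metadata) {
  // Firebase may store several comma-separated tokens; any one of them works.
  const token = metadata.metadata.firebaseStorageDownloadTokens.split(',')[0];
  const encodedPath = encodeURIComponent(filePath);
  return `https://firebasestorage.googleapis.com/v0/b/${bucketName}` +
    `/o/${encodedPath}?alt=media&token=${token}`;
}

// Usage (not run here; needs credentials and a real bucket):
//   const [meta] = await bucket.file(path).getMetadata();
//   const url = downloadUrlFromMetadata(bucket.name, path, meta);
```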

One major issue is that Firebase does not offer this, or any of its proprietary storage APIs, through the Node.js Firebase Admin SDK (see here) that can be used server side (GAE, GCE) or in Cloud Functions. The only option for developers on the server/function side is to use the GCS Node.js SDK, which only provides the signed URL method - which isn’t really for true permanent URLs (but is great for temporary use cases). Of course GCS is completely oblivious to what Firebase is doing with the tokens and such, but the Firebase Admin SDK should provide the Firebase custom storage APIs, including the downloadURL() method, as it does on clients. It is not uncommon for devs/applications to need to create download URLs to supply to clients after server processing, and it’s really a shame Firebase-based apps need to beat around the bush by using the signed URL methods on servers/functions. Hope this is clear. Shai

@shaibt I think I’m the right person to blame 😉

I currently own IAM in GCF, (including service identity), and I built Cloud Storage for Firebase.

On URLs, Firebase, and GCS

Your interpretation of our “unguessable public URL schema” in https://github.com/googleapis/nodejs-storage/issues/244#issuecomment-404816235 is correct. My word of caution is that the implementation details are exactly that: they may change at any point. We may stop storing them in metadata, or may swap them for long lived signed URLs instead of UUIDs.

Cloud Storage for Firebase was originally designed to be truly “serverless” GCS, and the API surface was optimized for Firebase Realtime Database customers (who had asked for blob storage for years), hence why the naming is different. Unlike Cloud Firestore, which was “built from the ground up” over the course of years, Cloud Storage for Firebase was built in six months. We simply couldn’t build this functionality into GCS in that timeline. The obvious cost is sometimes painful differences like this.

I have always viewed the differences between Cloud Storage for Firebase and GCS to be a bug and my long term goal has been convergence. This is why the Firebase Admin SDKs are aligned with GCS (they’re designed to live next to other GCP client libraries, and should be interoperable: e.g. you should be able to take a GCS object reference and drop it into the Cloud Vision API), rather than the Firebase API conventions. At some point in the future, I anticipate we’ll rename the Firebase SDKs to use the bucket() and object() terminology to keep moving in this direction (though we recognize that renames for the sake of renames aren’t always in the developer’s best interest).

Signed URLs

Firebase picked the format above because:

  • we wanted infinite duration URLs, primarily for web apps (so they could cache these URLs in the database and not have to perform two requests to get an object)
    • we needed the URLs to be revokable because they were infinite duration
  • our system is stateless: we couldn’t use a database (besides GCS metadata) [lots of reasons that I’m happy to explain but this post is already long]

GCS has other constraints in mind (it’s a far more generic service), notably compatibility with AWS S3, as well as other tools at its disposal, such as access to service accounts for signing/validating and lots of state.

Currently, GCS documents v2 signed URLs. These URLs theoretically offer “infinite” expiration in that you can set the expiration to be 2099 or 2500 or whenever you think your app will no longer need the data 😉

GCS has recently started supporting v4 URLs as well. For example, gsutil generates v4 URLs by default. These URLs have a max expiry of 7 days.

AWS has regions that only support v4 URLs, but I haven’t seen an explicit v2 deprecation, so I’m not sure what the future holds in this space for either AWS or GCP.

I think the general guidance from most of the object storage teams on all Cloud providers is, “signed URLs are great for short term uploads and downloads but you should use a CDN or other media specific serving solution for long term content.”

Key rotation

A few words on key rotation in Functions: this is the desired behavior and unlikely to change. Fully managed, rotating keys are far better than “download a private key and manage storage and rotation yourself.” It is unlikely that we’ll surface per-function (or even per-project) rotation configurability, though it’s something we’ve considered.

That said, I recognize that the current answer is the non-ideal case mentioned above. Here’s an alternative that I think might make this easier.

Signing URLs without downloading keys

@colinjstief @frankyn instead of downloading the service account key, you could grant the GCF default service account the ability to sign blobs with other service accounts (node client) and use that API. Here’s my interpretation (currently untested 😅):

const stringToSign = HTTP_Verb + "\n" +
               Content_MD5 + "\n" +
               Content_Type + "\n" +
               Expiration + "\n" +
               Canonicalized_Extension_Headers +
               Canonicalized_Resource;

const params = {
  name: "projects/-/serviceAccounts/your-long-lived-account@your-project-id.iam.gserviceaccount.com",
  resource: stringToSign
};

iam.signBlobRequest(params).then((signBlobResponse) => {
  const signature = signBlobResponse.signature;
  //... do things with the signature
});

Note that the GCF service account doesn’t have the signBlob permission by default (it’s considered dangerous to give by default), so you’ll have to add it manually:

# Grant the GCF Runtime Service Account the ability to sign blobs and JWTs
gcloud projects add-iam-policy-binding [PROJECT_ID] \
  --member="serviceAccount:[PROJECT_ID]@appspot.gserviceaccount.com" \
  --role="roles/iam.serviceAccountTokenCreator"

The best practice here is to only add the signBlob permission (not to also add signJWT), so you should create a custom role for your Runtime Service Account.

There are some other issues (e.g. one service account is able to impersonate another), but it eliminates the threat of leaking a private key by accidentally pushing code to a public repo.
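To make the final step concrete: once signBlob returns a base64 signature over the v2 string-to-sign, the signed URL is assembled from three query parameters, matching the URLs quoted elsewhere in this thread. This is a sketch under the assumption that the signature has already been computed; all names are illustrative:

```javascript
// Assemble a v2 signed URL from its parts. Sketch only: assumes
// `signatureBase64` came back from a signBlob call like the one above.
function buildV2SignedUrl(bucketName, objectPath, accessId, expiresEpochSeconds, signatureBase64) {
  const query = [
    `GoogleAccessId=${encodeURIComponent(accessId)}`,
    `Expires=${expiresEpochSeconds}`,
    `Signature=${encodeURIComponent(signatureBase64)}`,
  ].join('&');
  return `https://storage.googleapis.com/${bucketName}/${objectPath}?${query}`;
}
```

The Expires value here is the same epoch-seconds timestamp that goes into the string-to-sign, which is why tampering with it invalidates the signature.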


This turned into a much longer post than I originally intended, and it may be revised in the future. Hopefully it gives some insight into the product design process and the constraints, as well as how we’re working to provide developers the most secure way to build apps that meet their requirements.

Let me know if you have additional questions, I’ll do my best to answer them!

Hello, I have the exact same problem (signed URLs expire after some time, even though ‘expires’ property is set to a date far in the future). This is the relevant code (initialization is slightly different than the example @colinjstief provided):

admin.initializeApp(functions.config().firebase);

exports.onFileUpload = functions.storage.object().onFinalize(data => {
    const bucket = data.bucket;
    const filePath = data.name;
    const destBucket = admin.storage().bucket(bucket);
    const file = destBucket.file(filePath);

    return file.getSignedUrl({
        action: 'read',
        expires: '03-09-2491'
    })
    .then(url => {
        return url[0];
    })
    .then(
        // save url to db and do other stuff ...
    )
});

@google-cloud/storage version is the same: 1.6.0

These are two generated URLs for the exact same file (just re-uploaded and overwritten couple days later). Currently both links work (even though the signatures are different), but the first one will probably stop working a week from now approximately, the second one after two weeks. That estimation is based on previous observations.

https://storage.googleapis.com/vennfx-web-ng.appspot.com/1524742000212_ForexAdvantages.jpg?GoogleAccessId=vennfx-web-ng%40appspot.gserviceaccount.com&Expires=16447017600&Signature=FDo8YPo0TkkNqhe60Igdf5YNSY%2FDDtijImLE3Ah0W2AP8JOxhABBFQAnUCFyvsvupZ4hOJhKb48QZzvKakVplTLAolZswrrmNaWe9OOSH9vnDm5cvkd0J7QCOd%2F7Bo7XrQmCtKrdDQdtc9tnVp6Hp5Bs0vQgmOucfaMD0UU0utn4uy0m5KRzsZiagHPzzBqkx9jiF%2BwdqQBAFttZEV4dQUQNB6g4OnuuwN7%2FALutR%2FZSzMlPmftMKxccntCpu2FO0goeXOMfTFyn6SaFRokVrssddzzcSbHcH%2B%2BvA74yWzWeiFWcy3k0kvmn4YPotWlDpr1SJ3PC8J4qYI05MMVsCg%3D%3D

https://storage.googleapis.com/vennfx-web-ng.appspot.com/1524742000212_ForexAdvantages.jpg?GoogleAccessId=vennfx-web-ng%40appspot.gserviceaccount.com&Expires=16447017600&Signature=t2OiQH4ulovSDNBy88EccKuczuHf3pcO4wvDnYmYIux0tnJoP92w4wKYCL6RTuLf8B93bwSZMgERVg5edCUns8kdDt7Ira1OjhjDWw%2BiBlyzdUXCibbeMiv8V44BBLQH5q%2Br32DA7rgX%2BcSWrolgl03pKAZaJoYfSy95TmhyT8eQAYvk8LZRlLwYRt8Q247eYGcWGlLX9gSpgSzrhLvDOQZTkqcqKuOmpvLdSoU73O3OuJf7y2txSOsLU7d%2FG7O%2B7u4eQxg2XZdJbUlJcZRNCZ8AxWKGkDzEZMnAYcaGZ1%2F%2BYlVXF6q60WjYEFB7UAbsi0hEKgl2V2aWyY3ji6pTIw%3D%3D

@mcdonamp - thanks for the detailed explanation. I understand the motivations and decisions behind both the GCS/Firebase convergence and the key rotation, and they all make sense.

I’ll just add that I feel there’s a Firebase-based app use case that isn’t being served properly - when you want to create http urls to provide to web/mobile clients for long lived content that isn’t suitable for a full-blown CDN.

Example: photo sharing app that allows users to share a photo to a specific list of other users/contacts based on certain app rules, or a rules-based file/image distribution list (usually based on data in Realtime DB or Firestore). You want to create this URL once in the lifetime of the image (+ have the ability to revoke).

  • The signedUrl GCS method isn’t suitable because the app can’t rely on the generated url to actually live to its expiration date (this exact issue thread - pending results from @colinjstief).
  • The “Firebase style” metadata-based solution: which is perfect for the use-case but unavailable on the Firebase admin SDK.

I say this is a problem because I’ve witnessed devs trying to work around it in various ways, depending on their specifics and the level of security/privacy they require:

  1. I’ve seen cases where apps add a metadata key for every user uid that can access an image and then define storage rules such as: allow read: if resource.metadata[request.auth.uid] == request.auth.uid;
  2. I’ve seen cases where apps replicate the “firebase download token” mechanism completely by adding their own app token to objects and building an image severing end-point (in GCF/GAE) to validate the token provided by the client (in request uri or query string) before serving the file.
  3. I’ve seen cases where apps build an image serving end-point that checks requester access auth in the Realtime DB before serving the image.
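As an illustration of workaround 1 above, the custom-metadata patch could be built like this; the write itself (file.setMetadata in @google-cloud/storage) needs credentials and a real bucket, so it is shown only as a comment, and the uids and paths are made up:

```javascript
// Build a custom-metadata patch granting each uid read access, to pair with a
// storage rule like:
//   allow read: if resource.metadata[request.auth.uid] == request.auth.uid;
function accessMetadataPatch(uids) {
  const metadata = {};
  for (const uid of uids) {
    metadata[uid] = uid; // the rule checks that key and value both equal the uid
  }
  return { metadata };
}

// Not run here (needs credentials):
//   await bucket.file('photos/pic.jpg').setMetadata(accessMetadataPatch(['uidA', 'uidB']));
```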

My point is that creating infinite-duration URLs on the server side is not uncommon in Firebase apps, and I think there’s an opportunity for Firebase to provide one clear and simple solution (like you do in most other parts of your service and SDK).

@shaibt @colinjstief Hey guys, since I was not really satisfied with the outcome of this, I kind of looked for a “clean” way to handle this (from my point of view), and here is what I came up with:

I am using the firebase CLI to switch between my environments with the firebase use command. I then use firebase functions:config:set sa.key='<YOUR-SERVICE-ACCOUNT.JSON-DATA>'

Variables set this way are only relative to the current used environment, so it’s good to switch between different configurations easily. This way, I also do not have any files stored anywhere on my disk, or on my version control, no need to upload a file to cloud function either.

I then initialize my firebase admin SDK this way:

import * as functions from 'firebase-functions';
import * as admin from 'firebase-admin';

const serviceAccount = functions.config().sa.key;

admin.initializeApp({
  credential: admin.credential.cert(serviceAccount),
  databaseURL: JSON.parse(process.env.FIREBASE_CONFIG).databaseURL, 
  storageBucket: JSON.parse(process.env.FIREBASE_CONFIG).storageBucket, 
});

// And then later in my code when I need it...
const storage = admin.storage();

And it works like a charm! 😃

This is the cleanest way I found to deal with this environment / private key storage safety concern.

Hi all,

Apologies for the delay: I created a new service account and then generated a signed URL for a GET request. After deleting the service account key from the Google Cloud Console, I received the same error message you all have been seeing:

<Error>
  <Code>SignatureDoesNotMatch</Code>
  <Message>
    The request signature we calculated does not match the signature you provided. Check your Google 
    secret key and signing method.
  </Message>
  <StringToSign>
    GET


    1531165081
    /coderfrank/51f7251dc8561126be000027_736.jpg
  </StringToSign>
</Error>

I’ll follow-up internally with the Cloud Functions team. My current guess is that the service account key used to provision your Cloud Function environment is deleted after a certain amount of time (5-7 days). That key is used to generate the signed URL, and when the key is deleted the URL is rendered useless (SignatureDoesNotMatch).

I appreciate everyone’s patience on this issue.

After 4 weeks, my signed URLs are still live. Having verified that providing an explicit service account solves the problem, I’m going to close out this issue.

Have the exact same problem. In a Firebase function, using the GCS Node.js API v1.7.0, I upload an image to GCS and immediately ask for a signed URL set to expire far in the future (~7 years):

let gcsBucket = gcs.bucket(bucketname);
let options = {
  destination: gcsFilename,
  metadata: {
    metadata: metadata,
    contentType: 'image/jpeg'
  }
};
gcsBucket.upload(mainImageLocalUrl, options).then((data) => {
  let file = data[0];
  let config = {
    action: 'read',
    expires: '01-01-2025'
  };
  return file.getSignedUrl(config);
});

Everything is working fine (the URL is accessible by clients) but at some point (after a few days, ~10 days) it “expires” and returns:

<Error>
  <Code>SignatureDoesNotMatch</Code>
  <Message>
    The request signature we calculated does not match the signature you provided. Check your Google
    secret key and signing method.
  </Message>
  <StringToSign>
    GET


    1735689600
    /XXXXXXXXX.appspot.com/some_directory%2FFAD0E0EA-61AE-4689-AB22-189E02FD2710.jpg
  </StringToSign>
</Error>

URLs created more recently are available and working as expected. Random examples (as of 8am GMT on July 8th):

  • URLs created on June 22nd are returning the above error
  • URLs created on June 29th are still working and allowing access to files

Any update here? I’ve had the same problem for more than a month now, and this is the third time there have been no photos displayed in my app after 10-12 days. Every time it happens I try to regenerate the signed URLs, but that takes a while for 50,000 images 😕 Is there a fix to make the signed URL expiration longer than 2 weeks?

Please let me know if I missed answering your question.

I’m asking around to find out if there’s someone working on a better solution for long-term service accounts in production. I suspect, based on the original example, that this is a known issue when you want signed URLs to last longer than a week, and that’s why the example explicitly sets a user-managed service account.

@colinjstief IIUC, the difference I see is that your sample uses a different way to provide credentials than the original sample does. I’m not sure if this is a Google OAuth2 client ID JSON file or a Google service account. The original example, by comparison, expects a user-managed service account.

@mparpaillon, given that Cloud Functions has a temporary service account, if your URLs stop functioning as mentioned above, then you’ll need to generate a user-managed service account and use it to initialize the client library. You’ll then have control of how long the public/private keys exist for.

@shaibt could you link me to more information on “firebase flavour” permanent web URLs? Regarding the second part of your feedback: you’d like Firebase to manage these download URLs with Storage Rules instead of using GCS signed URLs. Is this what you’re asking for?

I really appreciate everyone’s patience; I’m trying to get better answers for y’all.

@schmidt-sebastian In order to avoid the collateral damage discussed in the later parts of this thread, documentation improvements could be made. For example, guidance to avoid the serviceAccountId / IAM method of initializing firebase apps in Functions if you are using Firebase Storage. It is not at all clear that this issue exists, and the cost of discovering late that your links are all invalidated before their expiryDate can be high. The likelihood of discovering this late is also high: if you use short expiration dates (1 day) and you never run a test at exactly the unknown window of that 1 day inside the window when server keys are re-rolled, then you’ll never hit this. Totally possible to get to production and then have customer-facing red-face issues.

We’ve just run into the same issue. We stopped providing an explicit service account file and used the behind-the-scenes service account authentication that was recently introduced. Starting from a couple of days ago, none of our images can be viewed in the browser anymore due to the “SignatureDoesNotMatch” error. I am just glad I found this thread that pointed me in the right direction. Please, dear GCS-team, mention the drawbacks of the auto-rotation in the documentation and/or find a workaround for already deployed content. Re-generating download links every other week cannot be the solution. For the time being, we’ll also switch back to using an explicit service account file.

Part follow-up: GCF team pointed me to documentation located at Docs

GCP-managed keys. These keys are used by Cloud Platform services such as App Engine and Compute Engine. These keys cannot be downloaded. Google will keep the keys and automatically rotate them on an approximately weekly basis.

So the Google Managed key is rotated on a weekly basis and makes sense given the behavior everyone has encountered. Looking into a best practice next, for now as @shaibt mentioned, a user managed service account would be a workaround for this behavior.

I’m disappointed that we’re being bitten by this issue near the end of 2020, more than two years after it was reported (and we’re using v4 urls that expire after only one day: we were bitten by this inside of a day.)

That this issue still exists makes it feel like the different Google teams are not working well with each other. I understand the reasons given by @asciimike and that’s understandable for a small period of time, but not a year, and certainly not as many years as it has been. Assuming that Google wants projects to use Firebase in production, then this issue needs to be solved. Why is it closed?

Suggestion in the meantime: Firebase documentation should be amended to avoid the serviceAccountId / IAM method of authentication for Firebase Functions, as this issue is going to continue to cost many people hours and hours of their time, and serious embarrassment / public loss of confidence.

Hey @apanagar

getSignedUrl({ action: "read", expire: "03-09-2200" })

maybe because it’s { action: "read", expireS: "03-09-2200" }

Ok, but that’s a workaround, right? This is an actual bug and it’s getting fixed, right? I have a thousand signed URLs that don’t work anymore. Can I expect a fix in the coming days, or should I already regenerate them and use the serviceAccountKey file? Or did I miss something? Thanks

This is a closed issue, so probably best to open a new issue and refer back to this one.

@frankyn here is what I ended up doing:

const functions = require("firebase-functions");
const gcs = require("@google-cloud/storage")({
  keyFilename: "firebase-sdk-production.json"
});
const admin = require("firebase-admin");
admin.initializeApp();

exports.processUploads = functions.storage.object().onFinalize((object, context) => {

  // ...other things...

  const bucketName = object.bucket;
  const bucket = gcs.bucket(bucketName);
  const fullPhoto = bucket.file(pathToPhoto); // Name defined elsewhere

  const CONFIG = {
      action: "read",
      expires: "03-01-2500",
  };
  fullPhoto.getSignedUrl(CONFIG)

  // ...other things...
});

So yes, I uploaded a service account file with the function.

Thanks @shaibt! IIUC this issue may be better for given the context. I’d like to follow-up there to get someone from Firebase in the nodejs context to review this question.

@colinjstief, at the moment this is the “practice” to move forward with. The on-going work I thought may be related to this issue is not focused on expiration of the private keys in production environments.

Please close after you’ve verified signedURLs still function properly after a week. I’ll follow-up with an update when I get more information, but for now uploading a service account with your GCF is the only way to have a long-term signedURL.

re: @BorisBeast, at the moment I’m only guessing until I get a better understanding of the Cloud Functions environment. When I do, I’ll have an answer to your question.

re: @shaibt, If my guess is correct, then potentially yes including the service account in your deployed Cloud Function could be a workaround, but I would not recommend it as a best practice yet.

> @schmidt-sebastian In order to avoid the collateral damage discussed in the later parts of this thread, documentation improvements could be made. For example, guidance to avoid the serviceAccountId / IAM method of initializing firebase apps in Functions if you are using Firebase Storage. It is not at all clear that this issue exists, and the cost of discovering late that your links are all invalidated before their expiryDate can be high. The likelihood of discovering this late is also high: if you use short expiration dates (1 day) and you never run a test at exactly the unknown window of that 1 day inside the window when server keys are re-rolled, then you’ll never hit this. Totally possible to get to production and then have customer-facing red-face issues.

We’re squarely in this position now, and finding out about this honestly sort of angers me. We’ve been chasing our tails on this forever assuming our users were crazy. Never did we realize that ya know, our links given an expiry time still expire.

I’m having the same issue: my Firebase Storage signed download URLs expire after about three days. I see that this discussion is closed, but I don’t see any answer marked as solving the problem.

I have the content_type set to audio/mp3. I’ve read this entire thread, but I don’t understand whether there’s a solution to this problem. Here’s my code, which is mostly copied from the documentation:

const http = require('http'); // oedPromise below uses http.get
const {Storage} = require('@google-cloud/storage');
const storage = new Storage();
const bucket = storage.bucket('my-app.appspot.com');
var file = bucket.file('Audio/' + longLanguage + '/' + pronunciation + '/' + wordFileType);

const config = {
    action: 'read',
    expires: '03-17-2025',
    content_type: 'audio/mp3'
  };

  function oedPromise() {
    return new Promise(function(resolve, reject) {
      http.get(oedAudioURL, function(response) {
        response.pipe(file.createWriteStream(options))
        .on('error', function(error) {
          console.error(error);
          reject(error);
        })
        .on('finish', function() {
          file.getSignedUrl(config, function(err, url) {
            if (err) {
              console.error(err);
              return;
            } else {
              resolve(url)
            }
          });
        });
      });
    });
  }

Hi @colinjstief,

By any chance have you found an elegant solution for managing the service account credentials JSON file across different environments (dev, pre-prod, prod, etc.)? I hate that I have to upload the credential file itself with my functions (which is unavoidable for solving this issue), but I also hate having to manage which file to upload to match the environment being deployed.

@colinjstief In practical terms, what I meant is to replace (in your example code) explicit auth of GCS with explicit auth of firebase-admin:

const functions = require('firebase-functions');
const admin = require('firebase-admin');
const serviceAccount = require('./serviceAccountKey.json');

const firebaseAdmin = admin.initializeApp({
  credential: admin.credential.cert(serviceAccount),
  databaseURL: 'https://XXXX.firebaseio.com', // for realtime database if you need it
  storageBucket: 'XXXX.appspot.com' // default firebase storage bucket
});

const database = firebaseAdmin.firestore();
const gcs = firebaseAdmin.storage();

Hi all, I’m having the exact same problem here, but I don’t really understand the workaround @shaibt and @frankyn mentioned. Could you give some more information on it? How do I implement it, or where can I find it?

And the main question: will those images work again?

Hi @frankyn,

So would a possible workaround (for now) be to use an explicit service account in cloud functions (using an account key JSON file) instead of the default GCP authentication for the GCS SDK and the cloud function running it?

@frankyn, I’m sorry to bother you, but I don’t understand one thing. Is a new service account key generated each time I call getSignedURL()? If not, why is the signature different every time, but the links still work?

@frankyn Both of the URLs created for this thread are now broken and throwing the same error.

Away from computer right now, will post this evening.