amplify-cli: Cannot perform more than one GSI creation or deletion in a single update; ConnectionStack failed to update

**Which Category is your question related to?** GraphQL transformer

**What AWS Services are you utilizing?** Amplify

**Provide additional details e.g. code snippets** I changed my annotated schema.graphql:

type Item @model {
   price: Int
   status: String
   measurement: String
+  inventory: Inventory @connection(name: "InventoryItems")
 }

 type Inventory @model {
   id: ID!
   name: String!
-  items: [Item] @connection
+  items: [Item] @connection(name: "InventoryItems")
   users: [InventoryMembership] @connection(name: "InventoryMembership_Inventory")
 }

which causes change to compiled schema.graphql:

type Item {
   price: Int
   status: String
   measurement: String
+  inventory: Inventory
 }

input CreateItemInput {
   price: Int
   status: String
   measurement: String
-  inventoryItemsId: ID
+  itemInventoryId: ID
 }

input UpdateItemInput {
   price: Int
   status: String
   measurement: String
-  inventoryItemsId: ID
+  itemInventoryId: ID
 }

amplify push results in the error below. See full log.

Cannot perform more than one GSI creation or deletion in a single update

From https://github.com/aws-amplify/amplify-cli/issues/82#issuecomment-434016373, I should be fine, since my change only involves one connection (Item-Inventory). Why does this problem occur, and how do I resolve it? @mikeparisstuff Thank you!

About this issue

  • Original URL
  • State: closed
  • Created 5 years ago
  • Reactions: 12
  • Comments: 43 (12 by maintainers)

Most upvoted comments

why is this marked as a feature-request and not a bug?

From our perspective, this is indeed a critical bug. It makes working with more than one person on a real-world project that uses @connection and CI/CD impossible, rendering the whole framework potentially unfit for production use. A shame; we really like the general approach.

I can't wait that long 😦 So I simply ran amplify api remove && amplify push to delete the whole API and its data, and restarted from scratch.

This is not ideal, but since I am still in dev it is fine for now. Looking forward to a mature solution.

In agreement with @mhrisse , after having developed our application around Amplify, we are terrified to launch our production environment because the framework is proving to be terribly unreliable, hard to change, underdocumented, and there is a push to add new functionality instead of fixing critical bugs in existing / core functionality. It feels like a beta product that is being presented as production-ready.

Amplify abstracts away a lot of complexity. However, Amplify does not prevent users from making mistakes with respect to the underlying AWS resources (e.g. attempting to add another GSI to a DynamoDB table). So, people who chose Amplify as a way to avoid working with the underlying AWS resources end up avoiding them until something breaks, at which point they need a great understanding of the very resources they have been avoiding. Things break frequently. It ends up being the dangerous appearance of abstraction, with users still needing a fairly intimate understanding of the underlying resources being used.

If there is a workaround for this issue (and the amplify framework doesn’t prevent the user from making this mistake), it should be provided in actual documentation instead of in the issues section of Github, IMO.

Same issue; using the amplify api remove && amplify push workaround. Can we get some eyes on this bug?

@mikeparisstuff I am facing the same problem as described here. Would it be possible to explain in more detail how to "manually remove the GSI that you no longer need"?

My issue is: if I understand correctly that I can only make one @key or @connection change per update, that's fine. But then I push my feature into a production branch for deployment, my Amplify CI tries to apply more than one change at once, and it gets this error.

I think this is unworkable in its present state!

I’m still seeing the error message. Reason: Cannot perform more than one GSI creation or deletion in a single update Is this fixed?

why is this marked as a feature-request and not a bug? I'm having the same issue: I wanted to add a sortField to my connection, and after a few changes the push would always fail. I had to do the same as @YikSanChan and remove the API and push it again, losing all my data.

I would vote for this as a critical bug, considering the following use-case I am currently experiencing:

Making the following schema additions:

type Organization {
  subscriptions: [Subscription] @connection(name: "OrganizationSubscriptions")
  ...
}

type Subscription {
  organization: Organization @connection(name: "OrganizationSubscriptions")
  ...
}

If I try and add 1 connection at a time, it fails to compile the schema (missing the associated named-connection on the other object).

If I try and add them both at the same time, I get the Resource is not in the state stackUpdateComplete

I am totally blocked by this bug, with my only other option being deleting and recreating my API.


Update 5-6-2019

After many trial-and-error amplify pushes, the following order of operations was successful.

Push #1

type Organization {
  subscriptions: [Subscription] @connection
  ...
}

type Subscription {
  organization: Organization
  ...
}

Push #2

type Organization {
  subscriptions: [Subscription] @connection(name: "OrganizationSubscriptions")
  ...
}

type Subscription {
  organization: Organization @connection(name: "OrganizationSubscriptions")
  ...
}

Results in success.

@jordanranz @mikeparisstuff This is a major issue for us. It changes our deployment time from 8 minutes to 8 minutes times the number of connections, multiplied by the number of environments we have. This is a massive time sink. Is there at least a script we can run to manage the schema deployment without a refactoring on Amplify's end?

This is a limit in DynamoDB that we must work around. In general, we will be providing new ways to customize what indexes exist on a table and enabling you to use these indexes with @connection. This way you can change the name of the connection fields without impacting the underlying GSI itself.

For now, when making changes to @connection fields that will impact the underlying index, make the changes in two steps. First remove the existing @connection field and push your project. Then add back the connection field with the new configuration and push your project again.
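Using the Inventory type from the original question, that two-step migration might look like the sketch below (field names taken from the report above; treat this as illustrative, not exact CLI guidance):

```graphql
# Push 1: remove the @connection field that backs the old GSI.
type Inventory @model {
  id: ID!
  name: String!
  # items: [Item] @connection   <- removed for this push
  users: [InventoryMembership] @connection(name: "InventoryMembership_Inventory")
}

# Push 2: add the field back with its new configuration.
type Inventory @model {
  id: ID!
  name: String!
  items: [Item] @connection(name: "InventoryItems")
  users: [InventoryMembership] @connection(name: "InventoryMembership_Inventory")
}
```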

If you are stuck in a situation where a GSI cannot be created nor destroyed via CloudFormation, I would suggest going to the CloudFormation console to manually remove the GSI that you no longer need.
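Concretely, the GSI created by @connection lives in the GlobalSecondaryIndexes array of the table resource inside the model's nested stack template. A minimal sketch of the relevant fragment is below; the resource, index, and attribute names are illustrative and will differ per project, so check your own template before deleting anything:

```json
"ItemTable": {
  "Type": "AWS::DynamoDB::Table",
  "Properties": {
    "GlobalSecondaryIndexes": [
      {
        "IndexName": "gsi-InventoryItems",
        "KeySchema": [{ "AttributeName": "itemInventoryId", "KeyType": "HASH" }],
        "Projection": { "ProjectionType": "ALL" }
      }
    ]
  }
}
```

Deleting the unwanted entry from that array (plus any attribute definitions only it used) and updating the stack removes the index.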

I think this only prevents it from happening in the first place. If it has already happened, there is still no solution.

We’ve been actively looking into this issue and will have a short term solution out for it in 1-2 weeks. As @houmark mentioned in his "Short term" solution section, we’ll be adding a sanity check parser to check if any changes to your schema (basically a diff between whatever is deployed to the cloud and the changes in your local schema) would cause a CloudFormation push error and fail fast - before the CloudFormation push. This check would clearly mention why the sanity check failed and what should be your path forward from there. Here are the cases we’ve identified for the sanity check so far:

  1. You cannot change an existing GSI for a DynamoDB Table
  2. You cannot add and remove a GSI on a DynamoDB table at the same time
  3. Protect against the duplicate resolvers
  4. You cannot add more than one GSI at a time

Please let us know if we’ve missed any other use-cases/scenarios.
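As a rough illustration of the kind of sanity check described above (a hypothetical sketch, not the actual Amplify CLI implementation), one could diff the GSI names deployed in the cloud against those implied by the local schema and fail fast before the CloudFormation push:

```python
# Hypothetical sketch of a pre-push GSI sanity check. It enforces the
# DynamoDB limits listed above: no combined add+remove, and at most one
# GSI creation or deletion per update.

def check_gsi_update(deployed: set, local: set) -> None:
    """Raise ValueError if moving from `deployed` to `local` GSI names
    would require more than one index change in a single table update."""
    added = local - deployed
    removed = deployed - local
    if added and removed:
        raise ValueError(
            "Cannot add and remove a GSI in the same update: "
            f"adding {sorted(added)}, removing {sorted(removed)}")
    if len(added) > 1:
        raise ValueError(f"Cannot create more than one GSI per update: {sorted(added)}")
    if len(removed) > 1:
        raise ValueError(f"Cannot delete more than one GSI per update: {sorted(removed)}")
```

Note that renaming a connection (as in the original question) shows up as one addition plus one deletion, which is exactly the add-and-remove case that must be split across two pushes.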

This really needs a solution or better documentation on how to manually work around these types of issues.

I've hit this twice in a few days now and it got me stuck for hours. A simple change becomes a major headache, and you get into a slow death spiral where you feel like deleting the entire project. Every attempt takes 10-25 minutes to fail and then roll back.

If this isn't simple to solve in an automated way, then the CLI should at least warn; for example, when compiling the schema it could warn and wait for confirmation if a named connection has changed (it's a simple line diff). And if the push fails, more debug information could be provided so you know where to go to attempt a manual fix.

The platform I am working on is not live yet (it will be in a week) and I dread this happening once it's live, without really knowing what to do and without the option to delete. Doing amplify delete api is not that simple anymore, as I believe it will stop when it detects that other resources depend on it (which is a good thing), and once you re-create the API its dynamic name changes, which may affect code in the project.

Hey guys, we've published the sanity check feature. It basically prevents pushing invalid schema changes, gives the reason, and provides a way to work around the change.

I’m on the latest version and I just got the rollback error… again.

This issue just got me without any prior warning when pushing updates to the schema… fortunately in dev. Amplify v4.16.1.

Hi everyone, wanted to update on this issue. We got the sanity check/validations working as mentioned here - https://github.com/aws-amplify/amplify-cli/issues/922#issuecomment-509418653 - and we merged the PR for it (#1815). We'll be publishing a new version of the CLI (1.8.6) early next week with this change.

Update: We are adding tooling that will fail prior to the push when you try to push a migration that is known to fail, e.g. changing a GSI, adding and removing a GSI at the same time, changing a key schema, adding LSIs after table creation, etc. We are also adding a suite of tests covering these migration scenarios to prevent these issues going forward. You may track the progress here if interested: https://github.com/aws-amplify/amplify-cli/pull/1815.

If you are stuck in a situation where a GSI cannot be created nor destroyed via CloudFormation, I would suggest going to the CloudFormation console to manually remove the GSI that you no longer need.

@mikeparisstuff Is this as simple as going to the nested stack template for each of the respective models and removing the corresponding item in the GlobalSecondaryIndexes array? I would assume that’s accompanied by the removal of the @connection directive for each model within your repository’s schema file.

Could you confirm / elaborate on this?

Doing the same as @YikSanChan. I encountered this issue when going from an unnamed connection to a named connection. Good thing we are still in dev; this would be really frustrating if encountered later on.

+1. Need an alternative as well, since I cannot remove my API.

Isn't the solution, if this happens to a table, to delete all of its GSIs and then add them back one at a time? That doesn't seem complicated to me. It would be nice if there were an amplify push --resetGSI that did this automatically…

That would lead to downtime in production, where no data can be returned for queries that depend on an index while the indexes are backfilling; for large setups, backfilling can take quite some time, especially with many indexes on the table.

I am still hitting situations where the tooling does not detect the multiple-GSI error before starting the push, leading to a rollback, and it can be really hard to estimate how many GSI changes a fairly simple schema change results in. In addition, it's normal to make multiple changes in dev and then push them all to production, which then hits the same error on production. This means you have to manually comment out changes and ensure you do one GSI update per push on production, which is a painfully slow process prone to errors and downtime.

I don't understand why the CLI can't do several passes on GSIs by detecting how many changes are needed: do one, wait for it to finish, do the next, and so on, then finish the entire push.
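The multi-pass idea could be sketched as follows (hypothetical, not an Amplify feature): given the deployed GSI names and the target names, plan a sequence of intermediate states that each change exactly one index, deletions first, so every individual table update stays within DynamoDB's limit.

```python
# Hypothetical planner for the multi-pass approach: split an arbitrary GSI
# diff into a sequence of pushes that each create or delete a single index.

def plan_gsi_pushes(deployed: set, target: set) -> list:
    """Return the list of intermediate GSI-name sets, one index change per push."""
    current = set(deployed)
    pushes = []
    for name in sorted(set(deployed) - set(target)):  # delete obsolete indexes first
        current = current - {name}
        pushes.append(set(current))
    for name in sorted(set(target) - set(deployed)):  # then create new indexes
        current = current | {name}
        pushes.append(set(current))
    return pushes

# Renaming one index (the original poster's case) becomes two valid pushes
# instead of one failing push:
# plan_gsi_pushes({"inventoryItemsId"}, {"itemInventoryId"})
# → [set(), {"itemInventoryId"}]
```

Deleting before creating keeps each intermediate state valid, at the cost of a window where queries against the old index fail, which is the downtime concern raised above.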

This is still failing. It failed when changing my table column names to camelCase. This should not be closed.

Hey guys, we've published the sanity check feature. It basically prevents pushing invalid schema changes, gives the reason, and provides a way to work around the change.

Doesn’t this also mean that any table with >1 GSI keeps you from being able to create a new environment with multienv?

This only happens when updating an existing resource. When you create a new environment, all of its resources are created from scratch, so the limit does not apply.