amplify-cli: Error: Only one resolver is allowed per field
The following GraphQL schema leads to the error “Only one resolver is allowed per field” when pushed to the AWS cloud.
type TypeA @model {
  id: ID!
  typeB: [TypeB!]! @connection(name: "myCon")
}
type TypeB @model {
  id: ID!
  typeA: TypeA! @connection(name: "myCon")
}
The generated resolvers and CloudFormation look good to me. The resolvers seem to be unique per type and field.
We had to rename the non-collection property to something else so the resolver could be deployed:
type TypeA @model {
  id: ID!
  typeB: [TypeB!]! @connection(name: "myCon")
}
type TypeB @model {
  id: ID!
  typeBTypeA: TypeA! @connection(name: "myCon")
}
I don’t know if this is a CloudFormation or an Amplify CLI bug. There is a forum post (https://forums.aws.amazon.com/thread.jspa?messageID=884492) which I replied to, because this may be the same CloudFormation bug. But since I use the Amplify CLI, I wanted to bring this up here, even if it’s just for documentation.
About this issue
- State: closed
- Created 5 years ago
- Reactions: 24
- Comments: 47 (5 by maintainers)
It seems there is some path that leaves APIs in a state where a resolver is left dangling even after the field is removed from the schema. When this happens, you can get around the issue by going to the AppSync console, adding a field to the schema with the same name as the supposed conflict, selecting the resolver from the right half of the schema page, and clicking “Delete Resolver” on the resolver page.
I will try to reproduce and identify the underlying issue but it appears that it is not within the Amplify CLI itself.
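If you would rather script this cleanup than click through the console, a minimal boto3 sketch of the same idea might look like the following (the API id, type name, and field name are placeholders for whatever the error reports, and it assumes the type still exists in the schema):
```python
import boto3

appsync = boto3.client("appsync")

API_ID = "xxxxxxxxxxxxxxxxxxxxxxxxxx"  # placeholder: your AppSync API id
TYPE_NAME = "TypeB"                    # placeholder: the type named in the error
FIELD_NAME = "typeA"                   # placeholder: the field named in the error

# Delete the dangling resolver so the next push can create it cleanly.
appsync.delete_resolver(apiId=API_ID, typeName=TYPE_NAME, fieldName=FIELD_NAME)
```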
I found a solution for this error without losing the data.
Do the following actions for the affected types:
1. Change the type to: type Name @model(queries: null, mutations: null, subscriptions: null) {...}
2. amplify push
3. Change it back to: type Name @model {...}
4. amplify push
Good luck!
ABSOLUTELY DO NOT DO THIS IF YOU HAVE DATA IN YOUR DYNAMODB DATABASE.
This will delete all tables and remove all data. This can only work if your project is in a very early stage and does not have data already.
That’s the second time I ran into this issue (7 months later). A lot of time lost trying to find the correct resolver to delete (by pushing and waiting for rollbacks). @mikeparisstuff I guess this should probably be handled directly by the AppSync team, but could we expect some workaround in Amplify (like cleaning up dangling resolvers before running amplify push)? Or any ETA for the AppSync fix? In addition to the time it takes to figure out the right thing to do, this is really scary when thinking about CD, and what if this happens during a deployment in production…
This is definitely the kind of issue that prevents me from recommending Amplify to non-CloudFormation “experts”.
I ran into a similar situation
Only one resolver is allowed per field. Maybe this will help other developers out there. It seems like AppSync has a chicken-and-egg problem, and this should be reported to the AWS AppSync development team. We had a GraphQL type named ChangeInCondition, which had resolvers wired up. Somehow CloudFormation rolled back a deploy and removed the GraphQL type ChangeInCondition from AppSync, yet somehow kept its resolver data. So upon the next deploy, it would throw the error “Only one resolver is allowed per field”, due to the fact that it thought ChangeInCondition had a few resolvers bound and CF was trying to re-create them. Oddly, I wasn’t able to remove the resolvers through the aws appsync CLI since the type didn’t exist… So then I decided to try and create the missing type ChangeInCondition and see if I could then remove the resolver. Yep, that did the trick. Funny how creating the “missing type” worked. I thought it would throw an error at the API level. Here it is in Python using boto3…
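A rough sketch of that create-the-type-then-delete-the-resolver approach with boto3 (the type definition and field name below are illustrative placeholders, not the poster’s original snippet):
```python
import boto3

appsync = boto3.client("appsync")
API_ID = "xxxxxxxxxxxxxxxxxxxxxxxxxx"  # placeholder: your AppSync API id

# Re-create the "missing" type so AppSync lets us address its orphaned resolvers again.
appsync.create_type(
    apiId=API_ID,
    definition="type ChangeInCondition { id: ID! }",  # minimal stand-in definition
    format="SDL",
)

# With the type back in place, the orphaned resolver can be deleted by type and field name.
appsync.delete_resolver(
    apiId=API_ID,
    typeName="ChangeInCondition",
    fieldName="someField",  # placeholder: the field the error complains about
)
```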
I opted for a more brute-force approach to solve the same problem, “Only one resolver is allowed per field”:
1. amplify remove api
2. amplify push
3. amplify add api
4. amplify push
Works 100% of the time. Don’t forget to copy the schema.
I encountered the same problem. I was trying to replace the autogenerated resolvers with pipeline resolvers. For example, my schema creates the queries below and attaches autogenerated resolvers to them:
type Query {
  getFactory(pk: ID!, sk: ID!): Factory
  listFactorys(
    pk: ID,
    sk: ModelIDKeyConditionInput,
    filter: ModelFactoryFilterInput,
    limit: Int,
    nextToken: String,
    sortDirection: ModelSortDirection
  ): ModelFactoryConnection
}
Instead of the normal autogenerated resolvers, I wanted to convert them into pipeline resolvers by adding a new function to each query: my new function resolver -> autogenerated resolver -> result. What I tried: I changed the CustomResources.json file, added my function resolver to it, and then changed the autogenerated resolver to be a pipeline resolver.
(I copied the GetFactoryResolver resource from build/stacks and converted it into a pipeline resolver.)
```
"GetBusinessPermissions": {
  "Type": "AWS::AppSync::FunctionConfiguration",
  "Properties": {
    "ApiId": { "Ref": "AppSyncApiId" },
    "Name": "getBusinessPermissions",
    "DataSourceName": "FactoryTable",
    "FunctionVersion": "2018-05-29",
    "RequestMappingTemplateS3Location": {
      "Fn::Sub": [
        "s3://${S3DeploymentBucket}/${S3DeploymentRootKey}/resolvers/Query.getBusinessPermissions.req.vtl",
        {
          "S3DeploymentBucket": { "Ref": "S3DeploymentBucket" },
          "S3DeploymentRootKey": { "Ref": "S3DeploymentRootKey" }
        }
      ]
    },
    "ResponseMappingTemplateS3Location": {
      "Fn::Sub": [
        "s3://${S3DeploymentBucket}/${S3DeploymentRootKey}/resolvers/Query.getBusinessPermissions.res.vtl",
        {
          "S3DeploymentBucket": { "Ref": "S3DeploymentBucket" },
          "S3DeploymentRootKey": { "Ref": "S3DeploymentRootKey" }
        }
      ]
    }
  }
},
"GetFactoryResolver": {
  "Type": "AWS::AppSync::Resolver",
  "Properties": {
    "ApiId": { "Ref": "AppSyncApiId" },
    "Kind": "PIPELINE",
    "FieldName": "getFactory",
    "TypeName": "Query",
    "PipelineConfig": {
      "Functions": [
        { "Fn::GetAtt": [ "GetBusinessPermissions", "FunctionId" ] }
      ]
    },
    "RequestMappingTemplateS3Location": {
      "Fn::Sub": [
        "s3://${S3DeploymentBucket}/${S3DeploymentRootKey}/resolvers/${ResolverFileName}",
        {
          "S3DeploymentBucket": { "Ref": "S3DeploymentBucket" },
          "S3DeploymentRootKey": { "Ref": "S3DeploymentRootKey" },
          "ResolverFileName": { "Fn::Join": [ ".", [ "Query", "getFactory", "req", "vtl" ] ] }
        }
      ]
    },
    "ResponseMappingTemplateS3Location": {
      "Fn::Sub": [
        "s3://${S3DeploymentBucket}/${S3DeploymentRootKey}/resolvers/${ResolverFileName}",
        {
          "S3DeploymentBucket": { "Ref": "S3DeploymentBucket" },
          "S3DeploymentRootKey": { "Ref": "S3DeploymentRootKey" },
          "ResolverFileName": { "Fn::Join": [ ".", [ "Query", "getFactory", "res", "vtl" ] ] }
        }
      ]
    }
  }
}
```
I get the error:
CREATE_FAILED GetFactoryResolver AWS::AppSync::Resolver Fri Nov 15 2019 16:20:20 GMT+0100 (Central European Standard Time) Only one resolver is allowed per field. (Service: AWSAppSync; Status Code: 400; Error Code: BadRequestException;)
Is there any way to achieve this? I am trying to protect the autogenerated queries and mutations with some access-permission rules before they get invoked.
This happened to me in two different scenarios:
1. When I was trying to create a many-to-many relationship. I simply followed the docs.
2. When I updated schema.graphql. I removed the types from schema.graphql and did amplify push, which I expected should have removed the data sources and the resolvers. After that amplify push, I was expecting the API to not contain anything; basically, it should have started from “nothing”. The weird part is that when I went to the AppSync dashboard > Schema and looked for the resolver, the resolver doesn’t exist there.
I have had this problem when I create models with keys defined and connections within them.
If you leave the connections out and deploy first (amplify push), then once that’s done, add your connections and it works.
If you are facing the “Only one resolver is allowed per field…” issue when pushing an update relating to a connection in your schema, then maybe try the following:
Worked for me after that. Good luck!
I have the same error as @c0dingarchit3ct 😦 This is not the first time, and usually the best way to avoid the problem was to rebuild another API.
Anyway, my project is too big now, and I would like to know whether it is possible to solve this error once and for all!
This time, I encountered this error when I was experimenting on schema.graphql with the @key directive. Specifically, I was trying to change the primary key… but oops! I didn’t think it was so risky ^^’’ (I think that an error from the CLI on the push operation, before all the operations start, is really necessary). I’ve just tried to push a clean version of the GraphQL schema, but that doesn’t solve the problem. I can’t find anything clear in the documentation, and here on GitHub there are sooo many partial solutions, but I don’t know which is the best (if one exists).
EDIT: I solved it this way: push all tables without connections and relations. After that, push all the connections one by one. Now it is OK, but if a better solution exists… well, it would be great.
I had the same issue. Carefully read the log to see which resolver creation failed. Go to the Schema page in the AppSync console, look up the resolver in question on the right side, and delete it.
AppSync is not always perfectly in sync, I guess 😛
@yonatanganot @regenrek
My solution for my problem… without deleting data… was:
Another, better solution:
1. Comment out all connections or relationships with “#”
2. amplify push api
3. Uncomment everything commented above
4. amplify push api
@mikeparisstuff I tried to go to the AppSync console, but it turns out the specific resolver doesn’t even exist there, which makes me think that this resolver is in some kind of cache or something. I can’t find a pattern here either, as a different resolver name appears every time.
In my case, I deleted old models that were no longer needed in our codebase, but after running some tests we found out that we need to keep them until we release a new version of our services. We started to run into this kind of problem when we reverted our models and pushed again.
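When the resolver does not show up in the console, it can help to ask the API directly. A rough boto3 sketch (the API id is a placeholder) that lists every resolver attached to every type, so that one which no longer matches a field in schema.graphql stands out:
```python
import boto3

appsync = boto3.client("appsync")
API_ID = "xxxxxxxxxxxxxxxxxxxxxxxxxx"  # placeholder: your AppSync API id

# Walk every type the API knows about and print the resolvers attached to it.
next_token = None
while True:
    kwargs = {"apiId": API_ID, "format": "SDL"}
    if next_token:
        kwargs["nextToken"] = next_token
    page = appsync.list_types(**kwargs)
    for gql_type in page["types"]:
        for resolver in appsync.list_resolvers(apiId=API_ID, typeName=gql_type["name"])["resolvers"]:
            print(f'{resolver["typeName"]}.{resolver["fieldName"]}')
    next_token = page.get("nextToken")
    if not next_token:
        break
```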
I am facing the same issue and I am not sure how any of these solutions apply to me.
I am not trying to replace resolvers; I am trying to override the autogenerated resolver.
How in heaven do I keep getting this error?
I am hoping @mikeparisstuff’s answer does not apply here, as the basic definition of ‘overriding’ autogenerated resolvers is just that: ‘override’!
@joebri I had to create a different query type to make it work. In my case that is not the ideal solution I am aiming for, but it’s okay for now.
@adamup928 Can you provide more details on what you did to cause this? When replacing resolvers in CFN, it is generally recommended to create new fields & resolvers before removing old ones. This falls in line with GraphQL API evolution practices and prevents older clients from breaking (e.g. you might have native apps that users don’t update as often) while allowing newer clients to target the new fields.
As an example, one way to cause this issue would be trying to move from a stack with a resolver defined under one logical resource id (say “ResolverLogicalIdA”) to a stack that defines the same resolver under a different logical resource id (“ResolverLogicalIdB”).
Since the logical id of the resource block changed, CFN considers these distinct resources, and it is not guaranteed that the stack will delete the resource with logical id “ResolverLogicalIdA” before creating the resource with “ResolverLogicalIdB”. Normally you could use a CFN DependsOn to specify that a resource depends on another, but since you are removing “ResolverLogicalIdA” there is no way to depend on it. If the stack tries to create “ResolverLogicalIdB” before deleting resolver “ResolverLogicalIdA”, then you will see the error, as this clashes with AppSync’s guarantee that a field has at most one resolver.
If you were instead to create a stack with a new logical id on a new field, then you would not have this issue; adding a resolver for a new field would not result in the clash.
You would also not see this clash if the resource logical id were not being updated: since the logical resource id is the same, CFN will perform an UpdateResolver operation and the operation will succeed.
@pierremarieB wrote:
We’ve run into the same issue on more than one occasion when renaming a resolver. Attempting to deploy both the old and newly named resolvers, by pointing the old resolver at a black-hole field name, also did not work; CF appears to process creations before deletes and updates. Our current CI/CD approach is to deploy a delete and then deploy the creation of the resolver again under the new name.
I faced the problem too. I was trying to overwrite a resolver that got generated by Amplify for a @connection directive. Furthermore, I couldn’t fix it using Mike’s fix. Update: Fixed it. I had created an entry in CustomResources.json, which is only needed when you create new resolvers, not when you overwrite them.
Same problem here as well; it happened to me when I tried to rename a resolver. It seems like the renamed resolver is created by CF before the old resolver is removed, and therefore it raises this exception.
@sprucify’s solution works, but this needs to be fixed as it’s not viable in the long run.
Got the same error today; I also renamed my connection and got the error “Only one resolver is allowed per field”.
The only way I could correct it was to remove the two models which had the connection (after first making a backup of the data in DynamoDB), push so that everything was removed, then push the models without the connection so that the schema, resolvers, and tables were recreated, and after that re-create the connection.
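For the DynamoDB backup step mentioned above, one option is an on-demand backup via boto3 before removing the models (the table and backup names below are placeholders):
```python
import boto3

dynamodb = boto3.client("dynamodb")

# Create an on-demand backup before removing the model / deleting the table.
response = dynamodb.create_backup(
    TableName="TypeB-xxxxxxxxxxxx-dev",            # placeholder: the Amplify-generated table name
    BackupName="typeb-before-connection-rework",   # placeholder backup name
)
print(response["BackupDetails"]["BackupArn"])
```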