landing-zone-accelerator-on-aws: [v1.5 REGRESSION] new IdentityCenterAssignmentConfig principal syntax blows up OperationsStack
Describe the bug
With the change to IdentityCenterAssignmentConfig that deprecated principalId and principalType in favour of principals, there are serious practical regressions.
For one, when just replacing the old style syntax with the equivalent new style syntax on an existing assignment, deployment fails due to resource conflicts.
For another, the new style syntax creates a ton of extra resources that appear to be related to lookups (since principals can now be referenced by name instead of having to be referenced by ID). These lookups don’t seem to be created per principal, but for every permutation of principal, permission set, and permission set assignment.
This very quickly blows an OperationsStack up to well over 500 resources, even in a relatively new setup with few principals and role assignments.
In my case the functionally same assignments with the old style syntax create 95 resources in my Audit account’s OperationsStack, whereas with the new style syntax deployment errors out because it tries to deploy a CFN stack with over 600 resources. (example below is from a later deployment where I removed assignments from the config to try and get it below 500 again)
The worst part is that both errors only materialize during the Operations deployment step.
Unless this gets resolved, this will block us from upgrading to a future version, where the now deprecated syntax has been removed.
To Reproduce Set up an LZ with some accounts that have about half a dozen permission set assignments, using different permission sets across 3 or 4 principals, with the old config syntax. Then convert to the new config syntax.
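For illustration, here is a hedged sketch of the two assignment syntaxes as they would appear in the LZA iam-config; the field names follow the LZA configuration reference, while the permission set name, group name, and GUID are made-up placeholders:

```yaml
# Old (deprecated) syntax: principal referenced by a manually looked-up GUID
identityCenterAssignments:
  - name: AuditAdmins
    permissionSetName: AdministratorAccess   # hypothetical permission set
    principalId: "a1b2c3d4-..."              # SSO group GUID, looked up by hand
    principalType: GROUP
    deploymentTargets:
      accounts:
        - Audit

# New syntax: principals referenced by name, resolved by LZA at deploy time
identityCenterAssignments:
  - name: AuditAdmins
    permissionSetName: AdministratorAccess
    principals:
      - type: GROUP
        name: audit-admins                   # hypothetical group name
    deploymentTargets:
      accounts:
        - Audit
```

The name-based lookup in the second form is what triggers the extra custom resources described above.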
Expected behavior The deployment stays unchanged and doesn’t require more resources than before!
Please complete the following information about the solution:
- Version: v1.5.1
- Region: eu-central-1
- Was the solution modified from the version published on this repository?: NO
- Have you checked your service quotas for the services this solution uses?: YESN’T… the 500 resources per CloudFormation stack is a hard limit afaik, but that stack is synthesized by LZA, so 🤷🏼♀️
“Screenshots”
2023-11-27 19:01:24.997 | error | accelerator | Number of resources in stack
'AWSAccelerator-OperationsStack-ACCOUNTID-REGION':
519 is greater than allowed maximum of 500: AWS::SSM::Parameter (2),
Custom::SsmGetParameterValue (1), AWS::IAM::Role (110), AWS::Lambda::Function (107),
AWS::Logs::LogGroup (54), AWS::SSO::PermissionSet (10), AWS::IAM::Policy (107),
AWS::CloudFormation::CustomResource (53), AWS::SSO::Assignment (75)
About this issue
- Original URL
- State: open
- Created 7 months ago
- Reactions: 5
- Comments: 16
Hi @hemanth-m19 and @xp-versent ! We do have a fix for this that will essentially consolidate the account assignments and move these assignments over to another stack, which will be available in our 1.7.0 release (possibly early next month, but the release window is subject to change). As a short-term workaround, these can be created in the customizations layer of the LZA solution. We will keep you up to date on when the 1.7 release is official and provide an update in this thread.
@richardkeit When you look at the L1 construct for the account assignment, the targetId property only takes a single account ID. So for an organization that needs an assignment for a particular User/Group in 50 accounts, there will be 50 resource objects just for that one User/Group. If they’re using the dynamic lookup functionality (removing the old syntax of having to manually look up the principal ID and relying on LZA to do that lookup), multiple Lambda lookups are added to the stack as well. With the example I provided earlier, that one assignment block handling both a USER and a GROUP would result in just one custom resource object in the CFN stack. For your awareness, we’ve looked into nested stacks, but managing downstream dependencies gets difficult.
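The single-target limitation comes from the underlying CloudFormation resource type: AWS::SSO::Assignment binds exactly one principal to one permission set in one account, so accounts × principals × permission sets multiplies directly into resource count. A minimal sketch (the logical ID, ARNs, GUID, and account ID are placeholders):

```yaml
# One AWS::SSO::Assignment covers exactly one (principal, permission set, account)
# tuple; assigning the same group to 50 accounts requires 50 of these resources.
AuditAdminsAssignment:
  Type: AWS::SSO::Assignment
  Properties:
    InstanceArn: arn:aws:sso:::instance/ssoins-EXAMPLE
    PermissionSetArn: arn:aws:sso:::permissionSet/ssoins-EXAMPLE/ps-EXAMPLE
    PrincipalId: a1b2c3d4-...        # a single user/group GUID
    PrincipalType: GROUP
    TargetId: "111111111111"         # a single account ID only
    TargetType: AWS_ACCOUNT
```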
Hi Dominik, thank you for your patience. With the behavior you outlined, I’ve gone ahead and filed a bug report for this particular issue. I’ll keep this thread open to keep you up to date on when this is resolved. Please let us know if you have any other questions or concerns in the meantime.