Implementing a Multi-Account Environment with AWS Organizations and IAM Identity Center

Andres Solorzano
19 min read · Dec 13, 2022
Image from AWS Organizations Site

My previous tutorial discussed implementing Single Sign-On (SSO) using Amazon Cognito as an Identity Provider (IdP) for our Tasks Service. This service is implemented with the Java/Quarkus Framework for the backend and AWS Amplify with the Ionic/Angular Framework for the frontend. All these components were implemented in my previous tutorials using a single AWS account. Best practice, however, suggests using a different AWS account for each SDLC/Prod environment. So in this tutorial, we are creating 5 Organizational Units (OUs): Security, Sandbox, Deployments, Workloads (for development, testing, and production), and Suspended. Then, we’ll apply security restrictions to those OUs and their associated accounts. Finally, we’ll deploy the IdP Service in the Security account and the Tasks Service in the Sandbox account.

To complete this guide, you’ll need the following tools:

An AWS account with administrator access (to create the Organization).

The AWS CLI v2 (required for SSO support).

The AWS Copilot CLI (version 1.23.0 is used in this tutorial).

The AWS Amplify CLI.

Git and Docker (with Docker Compose) if you want to run the API service locally.

NOTE 1: You can download the project’s code base from my GitHub account to review the latest changes. Also, you can pull the docker image for the API service (backend) from my DockerHub and deploy it using Docker Compose.

NOTE 2: I’ve also created the “hiperium-sso-management” Git repository, which contains a helpful set of scripts to automate the tasks shown in the “Organization SCPs” and “IAM Identity Center” sections of this tutorial.

Project Improvements

The following are the significant changes I made in the Tasks Service project.

Using ARM64 as the default Arch Type.

I found it challenging to deploy the Tasks Service in AWS using an AMD64 arch while using an ARM64 chip on my computer. I did this initially because I found some AWS drawbacks when configuring the CI/CD Pipeline using the Copilot CLI (as mentioned in my previous tutorial), but it wasn’t productive for the project’s development. So I decided to return to the ARM64 architecture using the supported tools (more on this in the next section).

If you have an AMD64 chip, you only need to replace the word “-arm64” with an empty string in the configuration files, except for the Copilot Pipeline configuration, where I added comments indicating where to change the default architecture type to x86_64.
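If you prefer, a quick (and admittedly rough) way to do that from the project’s root directory is shown below; this is only a sketch, so review the modified files before committing them:

$ grep -rl -- "-arm64" . | xargs sed -i.bak 's/-arm64//g'
$ find . -name "*.bak" -delete     # Remove the backup files once you are happy with the changes.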

Using Java 11 as the default Project Version.

As I showed in my previous tutorial, the AWS CodeBuild service has the following supported image versions for the Java language:

So, as I decided to use ARM64 as the project’s architecture type, I needed to move the Java version to “corretto11,” which is supported by the AL2 ARM64 images in versions 1.0 and 2.0. Furthermore, the Tasks Service doesn’t use any Java 17-specific features, so the transition to Java 11 was transparent, and the CI/CD Pipeline configured with the Copilot CLI is now better supported.

Using Aurora Serverless 2 as a default Project Data Store.

A new feature with Copilot version 1.23.0 is the support for Aurora Serverless version 2 as a default DB Store.

So, I don’t need to write a custom CloudFormation template to create the Aurora Serverless v2 cluster manually, as we did with the API Gateway.
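For reference, this is roughly how the new data store can be requested with the Copilot CLI; the cluster name, engine, and database name below are illustrative placeholders, not the project’s exact values:

$ copilot storage init               \
--name tasks-db-cluster \
--storage-type Aurora \
--workload api \
--engine PostgreSQL \
--initial-db todo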

Using an App Load Balancer (ALB) for the Tasks Service API.

Initially, I deployed the Tasks Service API inside the ECS cluster, with each ECS task registered in the AWS Cloud Map service for later discovery through the AWS API Gateway. With that setup, there was no way to scale the number of ECS tasks up or down based on the transactional load. For this reason, I configured an internet-facing Application Load Balancer (ALB) using AWS Copilot.
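To illustrate the idea, here is a minimal, hypothetical excerpt of a Copilot “Load Balanced Web Service” manifest with autoscaling enabled; the path, port, and thresholds are assumptions, not the project’s exact values:

name: api
type: Load Balanced Web Service

http:
  path: '/'                # Route ALB traffic for this path to the service.
image:
  build: Dockerfile
  port: 8080
platform: linux/arm64      # Default architecture type used in this project.
count:
  range: 1-3               # Scale between 1 and 3 ECS tasks...
  cpu_percentage: 70       # ...based on average CPU utilization.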

You can read about this exciting topic in my article “Configuring an Application Load Balancer for an ECS cluster using the AWS Copilot CLI” for more details ;).

AWS Copilot Issues

When using version 1.24.0 of Copilot (the latest at this moment), the “copilot init” command didn’t recognize the platform declared in the “manifest.yml” file, which is “linux/arm64”. The command showed a warning message like the following:

The subsequent commands, “copilot env init” and “copilot env deploy,” work without problems. But when I executed the “copilot deploy” command, the ECS service never deployed the Tasks Service task; it remained in a “pending” state for a long time. I think this has to do with the architecture-type misconfiguration shown previously. For this reason, I used version 1.23.0 of the Copilot CLI.

The excellent news about version 1.23.0 is that it now supports the Amazon Aurora Serverless version 2 as a default relational Data Store using the Copilot CLI.

With all these in mind, we can continue with the tutorial.

1. AWS Organizations

Following the best practice mentioned in this AWS official article, it’s better to use the following organizational units for our projects:

Security: Used for security services. Create accounts for log archives, security read-only access, security tooling, and break-glass.

Sandbox: Holds AWS accounts that individual developers can use to experiment with AWS Services. Ensure that these accounts can be detached from internal networks and set up a process to cap spend to prevent overuse.

Workloads: Contains AWS accounts that host your external-facing application services. You should structure OUs under SDLC and Prod environments (similar to the foundational OUs) to isolate and tightly control production workloads.

Deployments: Contains AWS accounts meant for CI/CD deployments. You can create this OU if you have a different governance and operational model for CI/CD deployments as compared to accounts in the Workloads OUs (Prod and SDLC). Distribution of CI/CD helps reduce the organizational dependency on a shared CI/CD environment operated by a central team. For each set of SDLC/Prod AWS accounts for an application in the Workloads OU, create an account for CI/CD under Deployments OU.

Suspended: Contains AWS accounts that have been closed and are waiting to be deleted from the organization. Attach an SCP to this OU that denies all actions. Ensure that the accounts are tagged with details for traceability if they need to be restored.

As you can see at the end of the Deployments description, it’s better to create the same SDLC/Prod account structure in the Deployments OU as in the Workloads OU. We’re following this best practice too. So let’s go to the AWS Organizations console and create those OUs.
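If you prefer the command line over the console, the OUs can also be created with the AWS CLI. A minimal sketch, assuming you replace the root ID placeholder with your own value:

$ aws organizations create-organizational-unit     \
--parent-id "<org_root_id>" \
--name "Security"

Repeat the command for the Sandbox, Workloads, Deployments, and Suspended OUs, and use the parent OU’s ID as the “--parent-id” value when creating the nested SDLC/Prod OUs.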

Then, create the required AWS accounts. So in the Organizations console, click on the “Add an AWS account” button:

Create the suggested Sandbox account in the Sandbox OU. Enter the corresponding input data in the first two fields, and leave the third one with the proposed IAM role name. Later, you can create the rest of the accounts for your OUs.
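Alternatively, each account can be requested with the AWS CLI. A minimal sketch with placeholder values (use your own email address and account name):

$ aws organizations create-account     \
--email "sandbox@your-domain.com" \
--account-name "Sandbox"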

IMPORTANT: I’ll use the Sandbox account to perform all the initial tasks. We’re validating the corresponding use cases with this account. Later in the tutorial, we’ll apply all these configurations to the rest of the accounts created in this section.

The structure of the OUs could be something like the following:

When creating AWS accounts, you must use different email addresses. Every time you request a new account, you must accept a verification email sent by the AWS Organizations service:

When you log in to the AWS console for the first time, click on the “Forgot Password” link to specify a new password. Then, you can access the home page of the AWS account.

All the accounts you create will appear below the Root account in your Organizations console. So, move these accounts to the corresponding OU. Remember that only the production accounts will be placed in the Prod OU and the remaining ones in the Pre-Prod OU.
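If you have many accounts, this step can also be scripted. A hedged example with placeholder IDs:

$ aws organizations move-account            \
--account-id "<account_id>" \
--source-parent-id "<org_root_id>" \
--destination-parent-id "<ou_id>"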

NOTE: If you want to delete accounts from the Organizations console, you may get an error indicating a missing step in the signup process of that account. The workaround is logging into each account and deleting them manually from the Account settings section. Finally, move all these accounts to the Suspended OU.

The next step is to limit service operations in the accounts we created. Therefore, we need to create a policy that controls the permissions over the Organization’s accounts.

2. Organization SCPs (Service Control Policies)

We can configure service control policies (SCPs) for our OUs using two approaches: a deny list or an allow list. The configuration suggested by AWS is to use a “deny list”:

Deny statements require less maintenance, because you don’t need to update them when AWS adds new services. Deny statements usually use less space, thus making it easier to stay within the maximum size for SCPs. In a statement where the Effect element has a value of Deny, you can also restrict access to specific resources, or define conditions for when SCPs are in effect.

I like this point of view, so we will follow this approach. Thus, I created a JSON policy that must be applied to the required OUs through an SCP.
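As an illustration only (the complete policy lives in the repository referenced in the note below), a deny-list SCP has the following shape; the two statements are common AWS examples, not necessarily the project’s exact restrictions:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyLeaveOrganization",
      "Effect": "Deny",
      "Action": "organizations:LeaveOrganization",
      "Resource": "*"
    },
    {
      "Sid": "DenyRootUserActions",
      "Effect": "Deny",
      "Action": "*",
      "Resource": "*",
      "Condition": {
        "StringLike": {
          "aws:PrincipalArn": "arn:aws:iam::*:root"
        }
      }
    }
  ]
}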

NOTE: You can find the complete SCP in the “hiperium-sso-management” project in the “hiperium-scp-policy.json” file. You can also find many AWS examples of deny lists at this link. You can copy the ones that make the most sense for you.

Now, it’s time to use the CLI to create our organization SCP:

$ aws organizations create-policy                                         \
--name hiperium-access-policy \
--description "Deny list of services for the Hiperium organization." \
--content file://hiperium-scp-policy.json \
--type SERVICE_CONTROL_POLICY

In the IAM console, in the SCPs section, you should see our recently created Organization SCP:

Now, go to the Organizations console, and you will notice that SCPs are disabled by default:

So let’s enable it using the following command:

$ aws organizations enable-policy-type    \
--root-id "<org_root_id>" \
--policy-type SERVICE_CONTROL_POLICY
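The “<org_root_id>” placeholder is your organization’s root ID (it starts with “r-”). You can retrieve it with the following command:

$ aws organizations list-roots --query "Roots[0].Id" --output text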

Now, in the Policies section, you will see that SCPs are enabled:

Finally, we must attach our SCP to the Organizational Units we desire:

$ aws organizations attach-policy     \
--policy-id "<scp_id>" \
--target-id "<ou_id>"
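If you don’t have the policy and OU identifiers at hand, you can list them first:

$ aws organizations list-policies --filter SERVICE_CONTROL_POLICY
$ aws organizations list-organizational-units-for-parent --parent-id "<org_root_id>"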

If you go to the “Service control policies” section of the Organizations console, you should see, in the “Targets” tab of our created SCP, the OUs you decided to attach it to:

So far, with the Service Control Policies (SCPs), we’re giving our AWS accounts (associated with OUs) a deny list of services they don’t have permission to access. This is not the same as an IAM policy or role that we assign to users to access a specific service. We’ll talk about that topic in the following section.

3. IAM Identity Center (AWS SSO successor)

After you create the required AWS accounts in the OUs, it’s time to create the workforce users. From now on, we’ll be creating users in a company-oriented way. So go to the IAM Identity Center console and create a user:

After completing this information, your user will be created:

As usual, you will receive a confirmation email to complete the signup for your user. After that, you can access the login interface for your Organization:

Then, we need to create a group:

So the user is now assigned to a group. The following section shows how to create a policy that allows users to perform service operations in the Organization’s accounts.
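For reference, the group can also be created from the command line with the Identity Store API; a hedged sketch, where the identity store ID comes from the “list-instances” output:

$ aws sso-admin list-instances
$ aws identitystore create-group               \
--identity-store-id "<identity_store_id>" \
--display-name "Provisioners" \
--description "Users allowed to provision the project infrastructure."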

3.1. Creating a Permission-Set

As cited in the IAM Identity Center official documentation:

A permission set is a template that you create and maintain that defines a collection of one or more IAM policies. Permission sets simplify the assignment of AWS account access for users and groups in your organization. For example, you can create a Database Admin permission set that includes policies for administering AWS RDS, DynamoDB, and Aurora services, and use that single permission set to grant access to a list of target AWS accounts within your AWS Organization for your database administrators.

The idea is to create a Permission Set with distinct IAM policies that our users must assume as an IAM Role (created by the Permission Set) when accessing the different accounts in the Organization. Remember I was using the following IAM permissions to provision the Tasks Service:

With this in mind, I created a JSON policy file in the “/utils/aws/iam” directory with the values of the policies shown in the previous image. The objective is to use this file as an inline policy. The Permission Sets service supports four ways to attach custom policies: AWS managed policies, Customer managed policies, Inline policies, and Permissions boundaries.

I decided to use the Inline Policies approach given the following characteristics, as AWS mentioned:

When you deploy a permission set with an inline policy, IAM Identity Center creates an IAM policy in the AWS accounts where you assign your permission set. IAM Identity Center creates the policy when you assign the permission set to the account. The policy is then attached to the IAM role in your AWS account that your user assumes.

When you create an inline policy and assign your permission set, IAM Identity Center configures the policies in your AWS accounts for you. When you build your permission set with Customer managed policies, you must create the policies in your AWS accounts yourself before you assign the permission set.

So I don’t need to deploy the IAM access policy to each of our accounts in the AWS Organization; IAM Identity Center creates it for us. So, go to the “Permission sets” section to create our permission set.
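If you prefer, the same result can be achieved with the “sso-admin” CLI commands, roughly like this (the instance ARN, permission set ARN, and policy file path are placeholders):

$ aws sso-admin create-permission-set        \
--instance-arn "<sso_instance_arn>" \
--name "provisioners-permission-set"

$ aws sso-admin put-inline-policy-to-permission-set     \
--instance-arn "<sso_instance_arn>" \
--permission-set-arn "<permission_set_arn>" \
--inline-policy file://<your_iam_policy_file>.json

The console steps below show the same flow: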

In step 1, select “Custom permission set” and click next:

In step 2, copy and paste the content of our custom IAM access policy in the “Inline policy” section and click next:

In step 4, set the details for our Permission Set and then click next:

In the final step, click the “Create” button to finish the review step. Our permission set should now appear on the Permission Sets home page:

So far, we have created the permission set for the Provisioners, but we’ve not yet assigned it to any account or group. So, let’s do it.

3.2 Assigning Permission Sets

Please go to the “AWS accounts” section in the IAM Identity Center and select the accounts you want to assign our permission set to:

Click on the “Assign users or groups” button, and go to the “Group” tab on the next page. Select our Provisioners group and click next:

In the next step, select the “provisioners-permission-set” and click next:

Finally, review and submit the form in the next step. AWS redirects you to the “AWS accounts” home page with a successful message:

All the users assigned to the “Provisioners” group will have the Permission Set we’ve created. Remember that we assigned the permission set to the Sandbox account only. Later, we’ll add it to the rest of the accounts.
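The same assignment can also be scripted with the “sso-admin” CLI, which becomes handy once more accounts need it; a hedged sketch with placeholder identifiers:

$ aws sso-admin create-account-assignment        \
--instance-arn "<sso_instance_arn>" \
--target-id "<aws_account_id>" \
--target-type AWS_ACCOUNT \
--permission-set-arn "<permission_set_arn>" \
--principal-type GROUP \
--principal-id "<provisioners_group_id>"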

Let’s verify the configurations we’ve made using our Identity user created at the beginning of this section. So, please go to the Dashboard page of the IAM Identity Center and click on the “AWS access portal URL.” Try to log in with your Identity User credentials to access your home page:

The previous image showed the Sandbox account I assigned to the Permission Set. So click on the “Management console” link to see what happens:

You should be redirected to the home page of the Sandbox account. Notice in the upper right corner that we assumed the “provisioners-permission-set” role created by the Permission Sets service.

So far, so good. The next step is configuring the AWS CLI with SSO support to access our accounts, which is the Sandbox in my case.

3.3 SSO Login with AWS CLI

Now it’s time to access one of the accounts we assigned the Provisioners Permission Set to, but this time using the command line. So, let’s modify the “~/.aws/config” file by adding a named profile for SSO.
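A minimal example of what that profile can look like follows; the start URL and account ID are placeholders for your own values:

[profile sandbox]
sso_start_url = https://<your_subdomain>.awsapps.com/start
sso_region = us-east-1
sso_account_id = <sandbox_account_id>
sso_role_name = provisioners-permission-set
region = us-east-1
output = json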

In the “sso_account_id” parameter, write the ID of one of your Organization’s accounts. In my case, I’m using the ID of my Sandbox account. In the same way, write “provisioners-permission-set” in the “sso_role_name” parameter. Then, execute the following command:

$ aws sso login --profile sandbox

The “--profile” parameter tells the AWS CLI to use the named profile we defined previously. A new tab will open in your default browser, where you must authenticate with your organization user credentials and authorize your device’s access request:

Then try to create an S3 Bucket to verify if the command is using the correct account and if you have the proper permissions:

$ aws s3api create-bucket                  \
--bucket sandbox-test-20221116-bucket \
--region us-east-1 \
--profile sandbox

If the output message is successful, go to the S3 console of your selected profile account to see your newly created bucket:

If this is working as expected, try to log out using the following command:

$ aws sso logout

The next time you try to log in, you will be asked for the user’s credentials and a device authorization request. Then you can use the regular CLI commands, but remember to pass the “--profile” parameter to use the correct AWS account. Also, you can export the AWS_PROFILE variable.
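For example:

$ export AWS_PROFILE=sandbox
$ aws s3 ls          # No "--profile" parameter is needed while the variable is exported.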

In the next section, we’ll try to deploy the Tasks Service Application using an Organizational Unit account to see if the Permission-Set policy was correctly set.

3.4 Aliasing the Login Script (Optional)

Inside the “hiperium-sso-management” project, I created a file called login.sh, which automates some SSO configurations, like setting the AWS CLI credentials for our Organization’s accounts. You can add a shell alias in your “.bashrc” or “.zshrc” file as follows:

alias hiperium-login="~/hiperium-sso-management/identity-center/login.sh"

Then, execute the following command to update the previous config:

$ source ~/.zshrc          # Or ~/.bashrc, depending on your configuration.

With this last command, we can run our alias in any terminal window:

$ hiperium-login

As we saw in the previous section, you will be asked to authorize the SSO login. After you approve your device connection, the shell script configures your AWS credentials, including the access and secret keys.
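I won’t reproduce the whole script here, but a minimal sketch of the same idea, assuming the profile name is passed as the first argument, could look like this (it’s not the actual login.sh from the repository):

#!/bin/bash
# Minimal SSO login helper sketch.
PROFILE="${1:-sandbox}"

aws sso login --profile "$PROFILE"
aws sts get-caller-identity --profile "$PROFILE"     # Verify that the session works.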

4. Tasks Service App

Now, it’s time to deploy our Tasks Service App into the new structure of Organizational Units. For this purpose, I’ll be using the Sandbox and Security OUs. Remember that we’re using an SSO configured with Cognito as an Identity Provider for the Tasks service App. My previous article talks about that. For this reason, it’s a good idea to use the Security OU for the SSO and the Sandbox OU for the Tasks Service App.

IMPORTANT: Don’t forget to add the needed profiles to the “~/.aws/config” file and assign them the “Provisioners” Permission-Set.

4.1 Identity Provider (IdP)

I’ve created a Git repo called “hiperium-city-idp,” where you can find the Amplify commands to configure the IdP service. Remember that I’m not sharing my “.amplify” directory, so you must execute the following commands.

First, log in to the SSO service using our helper shell script, using your profile for the Identity Provider account:

$ hiperium-login

Then, navigate to the cloned “hiperium-city-idp” directory and create a new branch for the pre-production environment:

$ git checkout -b pre

Now, initialize the Amplify project for this new environment:

$ amplify init

These are the initial config values that I’m using:

Notice that “pre” is the Amplify environment name, and “idp-pre” is the AWS profile I’m using in this configuration. You can use your naming convention, but remember that this Amplify project is for the IdP “Pre-Production” account.

Now it’s time to configure our Auth module. I enabled Multi-Factor Authentication (MFA) for our IdP. If you want more details about this topic, please review my previous article where I wrote about it.

$ amplify add auth

These are the initial config values that I’m using:

NOTE: For the OAuth redirection, I’m using the “localhost” URI for our local environment testing. In the next section, we must update this property list to add the URI for our Amplify App deployed in the “Sandbox” account.

Finally, push these configurations to the “IdP-Pre” account on AWS:

$ amplify push

So far, we have deployed our IdP service on AWS. The next step is to deploy the Tasks Service.

4.2 Tasks Service App

Clone the source code of our Tasks Service and navigate to the root directory:

$ git clone https://github.com/hiperium/hiperium-city-tasks.git
$ cd hiperium-city-tasks

Log in to your Identity Store but use a different AWS Profile this time. I’m using my Sandbox profile to deploy the required infra to this OU account.

$ hiperium-login

I created an automated shell script to deploy our Tasks Service resources on AWS. So execute the following command:

$ ./run-scripts.sh

This shell script contains a menu with the ordered steps (1 to 5) needed to deploy the required infrastructure on AWS:

You can omit step 2 because it creates a CI/CD Pipeline, which is only required if you want to run some tests using this service. The rest of the steps are needed to deploy the Tasks Service as a cloud-native app.

At the end of the 1st step, you should see the successful rollout of the Tasks Service in the ECS service:

For the 3rd step, I used the following configuration:

In the 4th step, enter the AWS profile used to deploy the IdP Service. The shell script gets the required parameters to configure and deploy the API Gateway:

The 5th step starts similarly to the 4th because it needs to update the “environment.dev.ts” file of the Ionic/Angular application, which contains the backend API and IdP endpoints used to configure the application. Until this point, the Ionic/Angular app didn’t know these values:

After that, the 5th step also requires you to configure Amplify Hosting in the AWS console:

Since it’s a manual process, you need to configure the Git repository connection so that the Amplify service deploys each change it detects on the “dev” branch (in this case) through a CI/CD Pipeline:

After this configuration, the CI/CD Pipeline runs and should finish successfully:

At the end of the 5th step, when the Pipeline finishes successfully, the script shows you the Amplify URL that you must register in the IdP Service:

Remember that the first time we configured the IdP, we entered “http://localhost:8100/home/” as the OAuth redirection parameter. So now, we need to add this new URL to the IdP Service configuration:

$ amplify update auth

Select the “Add/Edit signin and signout redirect URIs” option and press enter:

Then add/update the Amplify Hosting endpoint with the Amplify App URL:

Finally, push these changes to the AWS Amplify service:

$ amplify push

You can go to the Cognito console in the IdP-Pre account on AWS, and in the web Client ID section, you should see the recent changes:

Now, open the Amplify Hosting endpoint in a new web browser tab to be redirected to the “home” page of the Tasks Service app:

Click the “Login” button to be redirected to the IdP service:

Create a user if you don’t have one yet. As we configured MFA support in our User Pool, we must configure this feature using our Authenticator App:

After that, try to log in to access the authenticated home page of our app:

As usual, try to create a new task:

Then, execute the following command to open a live log connection and see the latest logs from the ECS service:

$ copilot svc logs            \
--app hiperium-city-tasks \
--name api \
--env dev \
--since 30m \
--follow

As you can see, the last logs show the creation of our task. After a few minutes (in my case), the console will print the logs corresponding to the execution of the task:

So the Tasks Service is now deployed in the Sandbox account, and the IdP is deployed in the IdP-Pre account under the Security OU.

And that’s it!!! In the following article, I’ll write about deploying CI/CD Pipelines for each SDLC account inside the Workloads OU. So we’ll continue implementing software development best practices using our Tasks Service on AWS.

I hope this article was helpful, and I’ll see you in my next article.
