Epic Terraform AWS Provider for your Amazon Infrastructure

Published: 7 September 2021 - 12 min. read

If you plan to manage and work with Amazon Web Services (AWS), using the Terraform AWS provider is a must. It lets you interact with the many resources supported by AWS, such as Amazon S3, Elastic Beanstalk, Lambda, and many more.

In this ultimate guide, you’re going to learn, step-by-step, just about everything you need to know about the AWS provider and how to use this provider with Terraform to manage your Amazon Infrastructure.

Let’s do it!

Prerequisites

If you’d like to follow along in this tutorial, ensure you have the following in place:

  • A code editor – Even though you can use any text editor to work with Terraform configuration files, you should have one that understands the HCL Terraform language. Try out Visual Studio (VS) Code.
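  • An AWS account – The tutorial deploys real AWS resources, so you need an account you are allowed to create resources in.
  • Terraform – This tutorial uses the provider syntax for Terraform 0.13 and later.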

What is the Terraform AWS Provider?

Terraform depends on plugins to interact with cloud providers such as AWS, Google Cloud Platform (GCP), and Oracle. One of the most widely used providers is the AWS provider. This provider interacts with many resources supported by AWS, such as Amazon S3, Elastic Beanstalk, Lambda, and many more.

Terraform uses the AWS provider, along with proper credentials, to connect to Amazon and deploy, update, or manage dozens of AWS services.

The AWS provider is declared within the Terraform configuration file and includes various parameters, such as the provider version, endpoint URLs, and cloud regions.

Declaring the AWS Provider

When you refer to a provider, you do so by its local name: the name assigned to the provider inside the required_providers block. Local names are assigned when requiring a provider and must be unique per module.

When declaring the required_providers, you also need to declare the source parameter in Terraform 0.13 and later versions. The source parameter sets the source address where Terraform can download plugins.

Source addresses consist of three parts, as follows:

  • Hostname – The hostname of the Terraform registry that distributes the provider. The default hostname is registry.terraform.io.
  • Namespace – An organizational namespace within the specified registry.
  • Type – A short name for the platform or system the provider manages; the type must be unique within its namespace.

Below, you can see the source parameter’s declaration syntax, where the three parts of a source address are delimited by slashes (/).

## [<HOSTNAME>/]<NAMESPACE>/<TYPE> (the hostname part is optional)

# EXAMPLE USAGE OF SOURCE PARAMETER SYNTAX

# Declaring the source location/address where Terraform can download plugins
# The official AWS provider belongs to the hashicorp namespace on the 
# registry.terraform.io registry. So, the AWS provider's source address is hashicorp/aws
source  = "hashicorp/aws"

Now the Terraform configuration below declares the required provider’s name (aws), along with the source address, AWS provider version, and configures the provider’s region (us-east-2).

# Declaring the Provider Requirements when Terraform 0.13 and later is installed
terraform {
  # A provider requirement consists of a local name (aws), 
  # source location, and a version constraint. 
  required_providers {
    aws = {     
      # Declaring the source location/address where Terraform can download plugins
      source  = "hashicorp/aws"
      # Pinning the aws provider version: ~> 3.0 allows any 3.x release
      version = "~> 3.0"  
    }
  }
}

# Configuring the AWS Provider in us-east-2 region
provider "aws" {
  region = "us-east-2"
}

Authenticating an AWS Account with Hard-Coded Credentials

Now that you have a basic understanding of how to declare the AWS provider, let’s go over how to authenticate an AWS account.

You can authenticate the AWS provider via a few different methods, such as declaring environment variables and storing credentials in a named profile. But the quickest way to authenticate to an AWS account is by hard-coding the credentials within your AWS Provider.

Although hard-coded credentials are not recommended as they are prone to leak, you can still declare hard-coded credentials in Terraform configuration to test any AWS resource quickly. But read on and you’ll later learn a better way to authenticate an AWS account by declaring environment variables.

The configuration below declares a provider with the local name aws, along with the provider region (us-east-2). You can see the AWS provider block also declares an access key and secret key to authenticate an AWS account.

# Declaring an AWS provider named aws
provider "aws" {
  # Declaring the provider region
  region = "us-east-2"
  # Declaring the access_key and secret_key
  access_key = "access-key"
  secret_key = "secret-key"
}

Securing Credentials by Declaring Environment Variables

You just learned that hard-coding static credentials to authenticate to AWS with Terraform is possible. But hard-coding is unsafe and only recommended for quickly testing code in a test environment.

Is there a safer way to handle credentials? Yes: declare them as environment variables. Environment variables are name/value pairs whose values are set outside the Terraform configuration file, and you can declare as many as you need.

Run the series of export commands below to export each environment variable. Exporting the variables makes them available to any program run from that shell session, including Terraform.

# Exporting variable AWS_ACCESS_KEY_ID
export AWS_ACCESS_KEY_ID="access-key"
# Exporting AWS_SECRET_ACCESS_KEY
export AWS_SECRET_ACCESS_KEY="secret-key"
# Exporting AWS_DEFAULT_REGION
export AWS_DEFAULT_REGION="us-east-2"
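
With these variables exported, the provider block itself needs no credential arguments, since Terraform reads them from the environment at run-time. Below is a minimal sketch of what the configuration shrinks to.

# A minimal sketch: no credentials in the code. Terraform reads
# AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_DEFAULT_REGION
# from the environment variables exported above.
provider "aws" {}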

Storing Multiple Credentials in a Named Profile

Both hard-coding and declaring credentials as environment variables let you authenticate an AWS account one at a time. But what if you need to store multiple credentials and use them when needed? Storing credentials in a named profile is the ideal option.

The code below creates a named profile (Myprofile) that contains an access key and secret key.

Named profiles live inside a single credentials file, whose default location is $HOME/.aws/credentials on Linux and macOS, or %USERPROFILE%\.aws\credentials on Windows. Replace Myprofile with the actual name of your named profile.

# Creating the Named Profile called 'Myprofile'
[Myprofile]
aws_access_key_id = AKIAVWOJMI5836154yRW31
aws_secret_access_key = vIaGmx2bJCAK90hQbpNhPV2k5wlW7JsVrP1bm9Ft

Once you’ve created a named profile in ~/.aws/credentials, you can then reference that profile in your Terraform configuration using the profile attribute. Below, you’re referencing the named profile called Myprofile.

# Configuring the AWS Provider named 'aws' in us-east-2 region
provider "aws" {
  region = "us-east-2"
  # Declaring named profile (Myprofile)
  profile = "Myprofile"
}
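
If you keep the credentials file somewhere other than the default path, the provider can point at it explicitly. Below is a sketch; the file path is an assumption, so adjust it to your setup.

# Sketch: referencing a named profile from a non-default credentials file
provider "aws" {
  region  = "us-east-2"
  profile = "Myprofile"
  # Overrides the default ~/.aws/credentials location (path is an example)
  shared_credentials_file = "/path/to/credentials"
}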

Declaring the Assume Role in AWS Provider

You’ve just learned to configure an AWS provider by declaring hard-coded credentials before running Terraform. But perhaps you want to obtain credentials at run-time. If so, the AssumeRole API is what you need. AssumeRole returns temporary credentials consisting of an access key ID, a secret access key, and a security token, which Terraform uses to connect to AWS.

The code below declares the provider named aws with an assume_role block that contains a role ARN and a session name. To configure AssumeRole access, you must define an IAM role that specifies both the privileges it grants and which entities can assume it, as sketched after the provider block below.

# Declaring AWS Provider named 'aws'
provider "aws" {
  # Declaring AssumeRole
  assume_role {
    # The role_arn is the Amazon Resource Name (ARN) of the IAM role to assume.
    # An ARN is a unique identifier assigned to every resource in an AWS account.
    role_arn     = "arn:aws:iam::ACCOUNT_ID:role/ROLE_NAME"
    # Declaring a session name that labels this session in AWS CloudTrail logs
    session_name = "SESSION_NAME"
  }
}
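
For reference, here is a minimal, hypothetical sketch of such an IAM role. The trust policy (assume_role_policy) controls which principals are allowed to call AssumeRole on it; ACCOUNT_ID, the role name, and the principal are placeholders to adapt.

# Hypothetical sketch of the IAM role referenced by role_arn above
resource "aws_iam_role" "terraform_role" {
  name = "ROLE_NAME"
  # The trust policy: which entities may assume this role
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      # Allow principals in this account to assume the role
      Principal = { AWS = "arn:aws:iam::ACCOUNT_ID:root" }
    }]
  })
}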

Declaring Multiple AWS Providers

By now, you’ve learned how to declare and configure an AWS provider in Terraform that works with a single region. But what if you need to manage your infrastructure or AWS services in multiple cloud regions? In that case, you need to declare an alias.

An alias lets you define multiple configurations for the same provider and select which one to use on a per-resource or per-module basis, for example, to support multiple regions.

The code below declares the default AWS Provider named aws with the region set to us-east-2. And then declares an additional AWS Provider with the same name, but with a declared alias named west and a region set to us-west-2.

Declaring the alias allows you to create resources in us-east-2 by default, or in the us-west-2 region by selecting the provider aws.west, depending on the requirement.

# The default provider configuration; resources that don't specify a provider use it.
provider "aws" {
  # Declaring us-east-2 region for AWS provider named 'aws'
  region = "us-east-2" 
}

# Additional provider configuration for west region. 
# Resources can reference this as aws.west.
provider "aws" {
  alias  = "west"
  # Declaring us-west-2 region for AWS provider referenced as 'west'
  region = "us-west-2"
}

# Declaring the resource using the additional provider in the west region
# (ami and instance_type omitted for brevity)
resource "aws_instance" "west-region" {
  # aws.west references the AWS provider named 'aws' with the alias 'west'
  provider = aws.west
}

# Declaring the resource using the default provider
resource "aws_instance" "east-region" {
}
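
The per-module selection mentioned earlier works the same way, through the providers argument. Below is a minimal sketch, assuming a hypothetical child module in ./app.

# Hypothetical sketch: handing the aliased provider to a child module
module "west_app" {
  source = "./app"
  # Inside the module, plain 'aws' references resolve to aws.west
  providers = {
    aws = aws.west
  }
}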

Customizing the AWS Provider’s Endpoint Configuration

Customizing endpoint configuration is handy when connecting to non-default AWS service endpoints, such as AWS Snowball or performing local testing.

To configure the Terraform AWS provider to use customized endpoints, declare the endpoints configuration block within the provider block, as shown below.

The below configuration allows you to access the AWS S3 service on local port 4572 as if you were accessing the S3 service in an AWS account. Similarly, the configuration lets you access DynamoDB locally on port 4569. DynamoDB is a NoSQL database service that provides fast performance with seamless scalability.

Check the list of customized endpoints that Terraform AWS Provider allows.

# Declaring AWS provider named 'aws'
provider "aws" {
  # Declaring endpoints
  endpoints {
    # Declaring the DynamoDB endpoint on localhost port 4569
    dynamodb = "http://localhost:4569"
    # Declaring the S3 endpoint on localhost port 4572
    s3 = "http://localhost:4572"
  }
}
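
Once the endpoints are in place, ordinary resources route to them transparently. As a minimal sketch, the bucket below (a hypothetical name) would be created against http://localhost:4572, for example a LocalStack container, rather than the real S3 API.

# Sketch: this bucket is created against the local S3 endpoint above
resource "aws_s3_bucket" "local_test" {
  bucket = "endpoint-demo-bucket"
}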

Adding Tags

Earlier, you learned how an AWS provider is declared with configurations such as region, source location, and so on. But to better manage your resources, you can also add tags at the provider level.

Tags are labels consisting of user-defined keys and values. Tags are handy when you need to check the billing, ownership, automation, access control, and many other use cases in the AWS account.

Instead of adding tags to each resource individually, let’s learn how to add tags to all resources at the provider level, which saves a lot of code and time.

The code below configures an AWS provider with tags defined inside a default_tags block. The benefit of declaring tags within the provider is that they are automatically added to every resource created with this provider; the VPC below, for example, ends up with the Environment and Owner tags from the provider plus its own Name tag.

# Configuring the AWS Provider named 'aws'
provider "aws" {
  # Adding the defaults tags Environment and Owner at the Provider Level
  default_tags {
    # Declaring tags value
    tags = {
      Environment = "Production"
      Owner       = "shanky"
    }
  }
}
# Creating the Resource 'aws_vpc' tagged with Name=MyVPC
resource "aws_vpc" "myinstance" {
  # A CIDR block is required for a VPC; this range is only an example
  cidr_block = "10.0.0.0/16"
  tags = {
    Name = "MyVPC"
  }
}

Ignoring Tags

Enabling tags at the provider level helps apply tags throughout the environment. But at times, you need Terraform to ignore certain tags, such as tags that external systems (Kubernetes, a security scanner, and so on) apply to your resources. Let’s check it out!

The code below configures an AWS provider with ignore_tags defined inside the provider block. With ignore_tags in place, Terraform leaves the matching tags alone instead of treating them as configuration drift.

In the below code, when you create any resource using the aws provider, all resources ignore the LastScanned tag and any tag whose key starts with kubernetes.io.

# Configuring the AWS Provider named aws
provider "aws" {
  # Ignore the listed tag keys and key prefixes across all resources
  # managed by this provider (aws)
  ignore_tags {
    key_prefixes = ["kubernetes.io"]
    keys         = ["LastScanned"]
  }
}

Creating an AWS S3 Bucket

By now, you have learned how to declare and configure the AWS provider in-depth. But just declaring the AWS provider does nothing until you manage AWS resources with it, such as provisioning an AWS S3 bucket or an EC2 instance. So, let’s learn how to create an AWS S3 bucket!

1. Create a folder named ~/terraform-s3-demo, then change (cd) the working directory to that folder. The ~/terraform-s3-demo folder will contain your configuration file and all associated files that Terraform will create.

mkdir ~/terraform-s3-demo
cd ~/terraform-s3-demo

2. Copy and paste the configuration below in your favorite code editor, and save it as main.tf in the ~/terraform-s3-demo directory.

The main.tf file creates a few necessary resources:

  • Provider requirement: A provider requirement consists of a local name, source location, and a version constraint.
  • Encryption key: A KMS key that ensures all new objects are encrypted when stored in the bucket. Encryption keys are created with the aws_kms_key resource in Terraform.
  • Configuring AWS Provider: Declaring the provider name (aws) along with the region us-east-2.
  • Bucket: The configuration creates a bucket named terraformdemobucket. Because force_destroy is set to false, Terraform refuses to destroy this bucket while it still contains objects.
  • Versioning: Versioning in Amazon S3 means keeping multiple versions of an object in the same bucket.

# Declaring the Provider Requirements
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"
    }
  }
}

# Configuring the AWS Provider (aws) with region set to 'us-east-2'.
# A module may contain only one default (un-aliased) configuration per
# provider, so the ignore_tags settings share this block.
provider "aws" {
  region = "us-east-2"
  # Ignore the listed tag keys and key prefixes across all resources
  ignore_tags {
    key_prefixes = ["kubernetes.io/"]
    keys         = ["LastScanned"]
  }
}

# Configuring the bucket's public access settings (public access is not blocked)
resource "aws_s3_bucket_public_access_block" "publicaccess" {
  bucket = aws_s3_bucket.demobucket.id
  block_public_acls = false
  block_public_policy = false
}

# Creating the encryption key which will encrypt the bucket objects
resource "aws_kms_key" "mykey" {
  deletion_window_in_days = "20"
}

# Creating the bucket named terraformdemobucket
resource "aws_s3_bucket" "demobucket" {
  bucket = "terraformdemobucket"
  force_destroy = false
  server_side_encryption_configuration {
    rule {
        apply_server_side_encryption_by_default {
        kms_master_key_id = aws_kms_key.mykey.arn
        sse_algorithm = "aws:kms"
      }
    }
  }
  # Keeping multiple versions of an object in the same bucket
  versioning {
    enabled = true
  }
}

3. Now, run the commands below to navigate to the ~/terraform-s3-demo directory and initialize Terraform. Initializing downloads the plugins and providers required to work with the resources.

Terraform typically uses a three-command workflow, run in sequential order: terraform init, terraform plan, and terraform apply.

cd ~/terraform-s3-demo        # Change to the ~/terraform-s3-demo directory
terraform init                # Initialize Terraform
terraform plan                # Preview the changes Terraform will make
terraform apply -auto-approve # Provision the AWS S3 bucket

Creating AWS EC2 Instances and IAM Users

In the previous section, you learned how to create a single object (an AWS S3 bucket) using Terraform with the AWS provider. But you can also create multiple objects of the same kind with the for_each meta-argument, as you’ll see below.

1. Create a folder named ~/terraform-ec2-iam-demo, then navigate into it.

2. Open your favorite code editor, copy/paste the configuration below, and save the file as main.tf in the ~/terraform-ec2-iam-demo directory.

The code below creates two EC2 instances, named ec21a and ec21b via the tag_ec2 variable (declared in the next step), with t2.micro and t2.medium instance types, then creates IAM users with four different names. The ami declared in the code is an Amazon Machine Image (AMI), which provides the information required to launch an instance, such as the type of OS and which software to install.

You can find Linux AMIs using the Amazon EC2 console.

# Creating two EC2 instances with the instance types t2.micro and t2.medium
resource "aws_instance" "my-machine" {
  # Declaring the AMI
  ami = "ami-0a91cd140a1fc148a"
  # Mapping each instance name (from var.tag_ec2 in vars.tf) to an instance type
  for_each = {
    (var.tag_ec2[0]) = "t2.micro"
    (var.tag_ec2[1]) = "t2.medium"
  }
  instance_type = each.value
  # key_name = "my-keypair" # optionally set to an existing EC2 key pair
  tags = {
    Name = each.key
  }
}
# Creating the IAM users with four different names
resource "aws_iam_user" "accounts" {
  for_each = toset( ["Account1", "Account2", "Account3", "Account4"] )
  name     = each.key
}

3. Next, create another file, copy/paste the code below, and save the file as vars.tf in the ~/terraform-ec2-iam-demo directory.

The code below declares the tag_ec2 variable that the main.tf file refers to. When you execute the Terraform code, the values ec21a and ec21b become the names (and Name tags) of the two EC2 instances defined in the main.tf file.

variable "tag_ec2" {
  type = list(string)
  default = ["ec21a","ec21b"]
}
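
If you’d like different instance names without editing vars.tf, a terraform.tfvars file (a hypothetical addition, not required for this tutorial) can override the default values.

# terraform.tfvars (optional): overriding the default instance names
tag_ec2 = ["web1", "web2"]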

4. Create another Terraform configuration file called output.tf in the ~/terraform-ec2-iam-demo directory, then copy/paste the code below to the output.tf file.

After successful execution of the terraform apply command, you should see the EC2 instance IDs and IAM user names printed at the end of the command’s output.

The code below tells Terraform to output the IDs and names of the aws_instance and aws_iam_user resources defined in the main.tf configuration file.

output "aws_instance" {
   value = "${aws_instance.my-machine.*.id}"
}
output "aws_iam_user" {
   value = "${aws_iam_user.accounts.*.name}"
}

5. Create one more configuration file in the ~/terraform-ec2-iam-demo directory called provider.tf, and paste the code below into the provider.tf file. The below provider.tf defines the Terraform AWS provider so that Terraform knows how to interact with all of the AWS resources you’ve defined in the earlier steps.

provider "aws" {
   region = "us-east-2"
 }

6. Now, verify all of the required files below are contained in the ~/terraform-ec2-iam-demo folder by running the tree command.

Showing all the Terraform configuration files required

7. Run the commands below in sequential order to initialize Terraform and create AWS EC2 Instances and IAM Users.

terraform init
terraform plan
terraform apply
Terraform apply command executed successfully.

Finally, navigate to the AWS Management Console, and then hop over to the AWS EC2 service and IAM console.

In the following screenshots, you can verify that the EC2 instances and IAM users exist.

Verifying the two EC2 instances that got created using Terraform
Verifying the four IAM users that got created using Terraform

Conclusion

With this ultimate guide, you now have the knowledge you need to work with the AWS provider, from declaring it to putting it to work within Terraform. You also learned the many ways the AWS provider lets you securely declare credentials.

Now which AWS service do you have in mind to manage with AWS Provider and Terraform?
