How to configure IaC
This document is intended for System Integrators as an overview of the env.hcl file used to configure the IaC for a SmoothGlue environment. It also outlines how to modify the env.hcl file by defining common variables for the specific environment configuration.
Prerequisites / Requirements
The IaC should be deployed from a Bastion Host with the following tools present in the PATH:
- Terraform
- Terragrunt
- AWS CLI
This document assumes the Setup Directory Structure has been followed to generate the necessary config files for the SmoothGlue IaC.
Using IaC to configure the cluster
The behavior of Terraform can be modified by providing different variables to Terraform modules and inputs to Terragrunt. The SmoothGlue IaC is set up so that most of the required environment-specific configuration changes can be made in a single env.hcl file. Example configuration files for SmoothGlue Build environments and SmoothGlue Run environments can be found in the infra-iac/envs/ directory of the SmoothGlue IaC bundle.
An env.hcl file can be provided to Terragrunt by setting the TERRAGRUNT_ENV_FILE environment variable. Note that if a relative path is used, it must be relative to each Terragrunt module, so for best results, use an absolute path. If the TERRAGRUNT_ENV_FILE environment variable is not set, Terragrunt will search recursively through the module's parent directories for a file called env-default.hcl and source it. In the IaC repository, there is a symlink located at infra-iac/env-default.hcl that may be re-pointed to the desired environment file.
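For example, the environment file can be selected either via the environment variable or by re-pointing the symlink. This is a minimal sketch; the paths below are placeholders for your actual checkout location:

```shell
# Point Terragrunt at a specific env file; an absolute path avoids the
# per-module relative-path pitfall described above.
# (Placeholder path -- substitute your real checkout location.)
export TERRAGRUNT_ENV_FILE="/opt/smoothglue/infra-iac/envs/example-org-run/env.hcl"

# Alternatively, re-point the env-default.hcl symlink to the desired file:
# ln -sfn envs/example-org-run/env.hcl /opt/smoothglue/infra-iac/env-default.hcl
```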
The locals and inputs within the env.hcl file are sourced in each Terragrunt module's terragrunt.hcl file. When a variable under locals is also listed within inputs, it is passed on to all Terraform modules as an input variable (e.g., compatibility_mode = false set within the locals section of the env.hcl will be passed to all Terraform modules when inputs = {compatibility_mode = "${local.compatibility_mode}"} is specified).
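As a sketch of the sourcing described above (the shipped terragrunt.hcl may differ in detail), a module's terragrunt.hcl can load the environment file with Terragrunt's built-in read_terragrunt_config, get_env, and find_in_parent_folders functions:

```hcl
# Sketch only -- the actual terragrunt.hcl in the IaC bundle may differ.
locals {
  # Load env.hcl from TERRAGRUNT_ENV_FILE, falling back to env-default.hcl
  # found in a parent directory.
  env = read_terragrunt_config(
    get_env("TERRAGRUNT_ENV_FILE", find_in_parent_folders("env-default.hcl"))
  )
}

inputs = {
  # Re-export a value from the sourced locals so the Terraform module receives it
  compatibility_mode = local.env.locals.compatibility_mode
}
```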
Example env.hcl
The env.hcl for the SmoothGlue IaC consists of three main sections/blocks as follows:
- Locals: named values that can be used throughout your Terraform configuration, promoting code reusability and readability. They are only accessible within the module where they are defined. Locals help avoid repeating the same values or expressions multiple times in a configuration, making it easier to maintain.
- Remote State: stores Terraform state (which tracks your infrastructure) in a shared, remote location instead of locally. The state file can be stored securely and access-controlled. Remote backends provide state locking mechanisms to prevent conflicts during concurrent operations.
- Inputs: allow values to be passed into a module from outside, making the module reusable and configurable. Inputs are accessible within the module where they are defined, but their values are set from outside the module. Inputs can be set from command-line arguments, environment variables, or variable files.
When creating a new cluster, the locals and remote_state sections in the env.hcl file should be reviewed and updated using values appropriate to your AWS account and VPC.
Locals
locals {
  k8s_distro = "eks-cluster"

  # Specifies the type of cluster (run or build)
  cluster_type = "run"

  # Required AWS Configuration
  aws_region = "us-east-2"
  vpc_id     = "vpc-123456789abcdef01"
  vpc_subnet_ids = [
    "subnet-123456789abcdef01",
    "subnet-123456789abcdef02",
    "subnet-123456789abcdef03"
  ]
  allowed_security_group_ids = [
    "sg-123456789abcdef01"
  ]

  # If true, this flag disables some AWS features which are not available in all AWS partitions/regions.
  compatibility_mode = false

  # If enabled, this flag blocks all public access to S3 buckets
  block_public_access = true

  # Optionally toggle add-on modules on/off
  # cluster_type = "build" automatically toggles all of these on
  # modules = {
  #   confluence = false
  #   console    = false
  #   gitlab     = false
  #   jira       = false
  #   keycloak   = true
  #   loki       = true
  #   mattermost = true
  #   sonarqube  = true
  #   velero     = true
  # }

  # Optionally provide add-on specific configs directly
  # See each add-on's documentation for available values
  # confluence_inputs = {}
  # console_inputs    = {}
  # gitlab_inputs     = {}
  # jira_inputs       = {}
  # keycloak_inputs   = {}
  # loki_inputs       = {}
  # mattermost_inputs = {}
  # sonarqube_inputs  = {}
  # velero_inputs     = {}

  # Cluster specific configuration
  # See eks module documentation for available values
  cluster_inputs = {
    cluster_name = "run-cluster"

    # If you want to provide a cluster IAM role, define `cluster_iam_role` with the existing IAM role name.
    # If it is left empty, SmoothGlue will create the role to be used.
    cluster_iam_role = ""

    # Install Custom Root CAs
    root_cas = [
      {
        name = "Amazon Root CA 1"
        cert = "aws_root_cert_goes_here"
      },
      {
        name = "Amazon RSA 2048 M01"
        cert = "aws_rsa_cert_goes_here"
      }
    ]
  }
}
Remote State
# Terragrunt will create the buckets and dynamodb tables if they do not exist
remote_state {
  config = {
    # Ensure you provide a globally unique bucket name
    bucket                    = "smoothglue-terraform-state"
    dynamodb_table            = "smoothglue-terraform-lock"
    accesslogging_bucket_name = "smoothglue-logging"
  }
  ...
}
Inputs
# This is the global config; compatibility_mode and persistent should be set here to protect tool resources
inputs = {
  aws_region          = "${local.aws_region}"
  compatibility_mode  = "${local.compatibility_mode}"
  block_public_access = "${local.block_public_access}"
  vpc_id              = "${local.vpc_id}"
}
Helpful Reference
For more information on recommended values for the env.hcl, particularly values suitable for a high-availability production cluster, please refer to the Config Reference documentation.
Configure Terragrunt S3 Backend
Terragrunt will typically be configured to use an S3 backend to store its state information. Generally, this will be configured in the env.hcl file for your environment using a block such as the following:
remote_state {
  backend = "s3"
  config = {
    bucket         = "BUCKET_NAME"
    dynamodb_table = "DYNAMODB_TABLE"
    key            = "${basename(get_terragrunt_dir())}.tfstate"
  }
}
Each Terragrunt module will source this remote_state configuration from the env.hcl file so that they share a common configuration, but because of the dynamically-generated key name, if multiple Terragrunt modules are run, they will each have independent states.
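To illustrate how the dynamic key keeps states separate: basename resolves to the module's directory name, so each module writes to its own state object. The module paths below are hypothetical:

```shell
# basename yields the final path component of each (hypothetical) module
# directory, which becomes that module's .tfstate key in the bucket.
basename /path/to/infra-iac/eks        # eks    -> key "eks.tfstate"
basename /path/to/infra-iac/gitlab     # gitlab -> key "gitlab.tfstate"
```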
Terragrunt is able to configure the backend S3 bucket and DynamoDB table for you automatically if they do not yet exist, and you may see the following message when you run terragrunt init or terragrunt run-all init:
Remote state S3 bucket BUCKET_NAME does not exist or you don't have permissions to access it. Would you like Terragrunt to create it? (y/n)
This can be expected if this is the first time Terragrunt was run with the configured backend settings; if this is the case, select y to have Terragrunt automatically create and configure the S3 bucket and DynamoDB table for the backend.
If the bucket already exists or the AWS IAM Role being used doesn't have permission to create buckets, a permission error will be presented. Ensure the bucket doesn't already exist or that appropriate permissions to access the bucket are provided to the IAM Role.
Execute Terragrunt
Depending upon whether your cluster is destined to become a SmoothGlue Run environment or a SmoothGlue Build environment, the procedure for executing Terragrunt differs somewhat: the build environment add-ons require some additional infrastructure (RDS databases, S3 buckets, etc.), which is contained in individual Terragrunt modules so that it can be independently deployed. For a SmoothGlue Run environment, the only IaC component that needs to be deployed is the EKS cluster itself. The following example shows the steps required to deploy the infrastructure for a SmoothGlue Run environment:
cd /path/to/example-org-run
source .env
terragrunt init
terragrunt workspace select -or-create $WORKSPACE_NAME
terragrunt plan
terragrunt apply
For a more complete deployment guide, please see the SmoothGlue Run Deploy Guide or the SmoothGlue Build Deploy Guide.
Outputs
Terragrunt will store several useful values as outputs; these can be viewed using the terragrunt output command. For example, to obtain the generated EKS cluster name and use it to configure your workstation's kubeconfig for local access, you may run the following:
cd REPO_PATH/infra-iac/
CLUSTER_NAME=$(terragrunt run-all output -json | jq -nr 'reduce inputs as $i ({}; . * $i) | .cluster_name.value')
aws eks update-kubeconfig --name ${CLUSTER_NAME}
This will allow you to run kubectl commands in the context of the cluster.
If multiple Terragrunt modules have been deployed, as for a build environment, the outputs for all of them can be combined using a command such as the following:
cd REPO_PATH/infra-iac/
terragrunt run-all output -json | jq -n 'reduce inputs as $i ({}; . * $i)' > outputs.json
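Since the combined outputs file is plain JSON, individual values can be read back with jq. In this sketch a minimal example file is fabricated to show the shape; the key name is illustrative, and a real file contains the actual Terraform output values:

```shell
# Fabricate a minimal outputs file to illustrate its JSON shape
# (real files are produced by the terragrunt run-all output command above).
cat > /tmp/outputs-example.json <<'EOF'
{"cluster_name": {"value": "run-cluster"}}
EOF

# Read a single value back out; prints: run-cluster
jq -r '.cluster_name.value' /tmp/outputs-example.json
```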
Additionally, Terragrunt will create files (by default in the infra-iac/outputs directory) matching several of the Terraform outputs. These will be consumed during the deployment of the SmoothGlue package.