
SmoothGlue Run Deploy Guide

This document is intended to guide the System Integrator through the process of deploying a SmoothGlue Run into their AWS account.

Before starting, please review the SmoothGlue Prerequisites and ensure they have all been met.

danger

The following requires valid AWS credentials. If session-based AWS credentials are being used, please ensure the session duration is at least an hour long. It is recommended to obtain fresh AWS credentials before deploying SmoothGlue packages. If the AWS credentials expire during deployment of the SmoothGlue IaC package, the Terraform state files will become locked and may require manual intervention to unlock, potentially leading to orphaned cloud resources.
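As a hedge against mid-deploy expiry, it can help to check how much session time remains before starting. The `AWS_CREDENTIAL_EXPIRATION` variable below is an assumption — some credential helpers export an ISO-8601 expiry timestamp under this or a similar name; substitute whatever your tooling actually provides:

```shell
# Sketch: warn if less than an hour of session time remains.
# AWS_CREDENTIAL_EXPIRATION is assumed to hold an ISO-8601 expiry;
# for illustration, set one two hours from now (GNU date syntax).
AWS_CREDENTIAL_EXPIRATION=$(date -u -d '+2 hours' +%Y-%m-%dT%H:%M:%SZ)

now=$(date -u +%s)
exp=$(date -u -d "$AWS_CREDENTIAL_EXPIRATION" +%s)
remaining=$(( (exp - now) / 60 ))

if [ "$remaining" -lt 60 ]; then
  echo "refresh credentials: only ${remaining}m of session time left"
else
  echo "ok: ${remaining}m of session time remaining"
fi
```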

Initial Setup

It is important to have a common pattern for deploying SmoothGlue. A common pattern will simplify installing and maintaining multiple SmoothGlue environments.

Please complete the steps in Setup Directory Structure before continuing. After completing the steps, you should have a directory with the SmoothGlue Artifacts and all the configuration files necessary for installing and maintaining a SmoothGlue environment.

note

For the remainder of this guide, all referenced files and example commands will be in the context of the setup directory.

Configuring SmoothGlue IaC for Target AWS Account

Before deploying the SmoothGlue IaC, the System Integrator should first configure the IaC for the target AWS account. Within env.hcl, the System Integrator should update the following with details based on the SmoothGlue Prerequisites:

note

Most of these options should already be defined in env.hcl. However, some options will need to be added manually. Please verify all of the keys defined in the example below are updated for your environment. Perform a sanity check on all other values that exist in the example env.hcl staged in previous steps.

locals {
  aws_region = "us-east-2"
  vpc_id     = "vpc-011e4e60acb44853d"
  # VPC Private Subnets
  vpc_subnet_ids = [
    "subnet-0574f69f186bcdda5",
    "subnet-06f3fa7a8ba3ae686",
    "subnet-0272f89562d5c7a3d"
  ]
  domain = "example.com"
  cluster_inputs = {
    # Make Load Balancer Public-Facing
    application_nlb_internal = false
    # VPC Public Subnets
    application_nlb_subnets = [
      "subnet-0a5446ced834f266a",
      "subnet-0dd33191ce13213e0",
      "subnet-0cbe9e89835b3baa3"
    ]
    # Configure Public-Facing Ingress Rules
    application_nlb_ingress_rules = {
      "http_web_traffic" = {
        from_port   = 80
        to_port     = 80
        ip_protocol = "tcp"
        description = "HTTP traffic from anywhere"
        cidr_ipv4   = "0.0.0.0/0" # This could be a single IP for a VPN; alternatively it could be the System Integrator's public IP during testing
      }
      "https_web_traffic" = {
        from_port   = 443
        to_port     = 443
        ip_protocol = "tcp"
        description = "HTTPS traffic from anywhere"
        cidr_ipv4   = "0.0.0.0/0" # This could be a single IP for a VPN; alternatively it could be the System Integrator's public IP during testing
      }
    }
    access_entries = {
      admin = {
        # Update to Bastion Host's IAM Role
        principal_arn = "arn:aws:iam::171179903432:role/aws-reserved/sso.amazonaws.com/us-east-2/AWSReservedSSO_AdministratorAccess-4hr_c7d56a998f605e6d"
        policy_associations = {
          cluster_admin = {
            policy_arn = "arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy"
            access_scope = {
              type = "cluster"
            }
          }
        }
      }
    }
  }
}

remote_state {
  backend = "s3"
  config = {
    bucket         = "example-aws-iac-terraform-state" # Replace; Terragrunt will create this if it doesn't exist
    dynamodb_table = "example-aws-iac-terraform-lock"  # Replace; Terragrunt will create this if it doesn't exist
    encrypt        = true
    key            = "${basename(get_terragrunt_dir())}.tfstate"
    region         = "${local.aws_region}"

    # Optional
    # accesslogging_bucket_name   = "your-logging-bucket"
    # accesslogging_target_prefix = "s3logs/TFStateLogs/"
  }
  disable_init = tobool(get_env("TERRAGRUNT_DISABLE_INIT", "false"))
  generate = {
    path      = "backend.tf"
    if_exists = "overwrite_terragrunt"
  }
}
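Note the key expression: each Terragrunt module gets its own state object, named after the module's directory. The HCL `basename()` behaves like the shell utility of the same name, so for a module under infra-iac/eks-cluster the state key works out as follows:

```shell
# Illustration of how the state key is derived: basename() in the HCL
# expression mirrors the shell's basename, keeping only the last path segment.
dir="infra-iac/eks-cluster"
echo "$(basename "$dir").tfstate"   # -> eks-cluster.tfstate
```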

Deploying SmoothGlue IaC

The SmoothGlue IaC uses Terragrunt and Terraform to manage the deployed cloud resources. Each SmoothGlue environment should keep track of certain variables to differentiate one SmoothGlue environment from another. Those environment variables should be stored in .env. When following the Setup Directory Structure guide, a .env should have been created. Start by loading the environment variables for the SmoothGlue environment:

source .env
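As a rough sketch, the .env for one environment might look like the following. WORKSPACE_NAME and ZARF_CONFIG are the variables this guide relies on; the exact values shown are placeholders, and the file itself should already exist from the Setup Directory Structure steps (it is created here only so the example is self-contained):

```shell
# Hypothetical .env for a single SmoothGlue environment.
# WORKSPACE_NAME uniquely identifies this environment's Terraform workspace;
# ZARF_CONFIG points Zarf at the compiled configuration (used later).
cat > .env <<'EOF'
export WORKSPACE_NAME=my-run-deployment
export ZARF_CONFIG=compiled/zarf-config.yaml
EOF

. ./.env
echo "workspace: $WORKSPACE_NAME"
```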
danger

Now is a good time to refresh AWS credentials if using session-based credentials

Before deploying the SmoothGlue IaC, the IaC modules need to be initialized:

terragrunt run-all init

Terraform keeps track of all the resources it creates and associates them with a workspace. The .env includes a workspace name that should be unique for this SmoothGlue environment. Select the workspace:

terragrunt run-all workspace select -or-create $WORKSPACE_NAME

Now the SmoothGlue IaC can be deployed.

terragrunt run-all apply -input=false

The SmoothGlue IaC generates config files for Zarf and Big Bang. These config files are important as they will configure SmoothGlue to use the cloud resources created by the SmoothGlue IaC. The config files will be located in the infra-iac/outputs directory. Config files that start with zarf or bigbang will be used later during the install. Feel free to review what options are being configured.

Configure DNS to Route Traffic

In order to access applications on the cluster, DNS must be configured to route users to the public-facing load balancer.

Load balancers in AWS get their own DNS name. Run the following command to retrieve it from the IaC state:

terragrunt --working-dir="infra-iac/eks-cluster" output app_nlb_dns_name

Example output:

"dt-my-run-deployme-zpl-apps-9f4b8e6829a6001f.elb.us-east-2.amazonaws.com"

With the load balancer DNS name, please configure a wildcard record, *.example.com for example, in your DNS provider for routing public traffic to the load balancer.

Verify that your domain name resolves to the IP addresses of the load balancer. In the following example, dig will be used to resolve DNS names to IP addresses. Start by retrieving the IP addresses for the load balancer:

dig +noall +answer dt-my-run-deployme-zpl-apps-9f4b8e6829a6001f.elb.us-east-2.amazonaws.com

Example output:

dt-my-run-deployme-zpl-apps-9f4b8e6829a6001f.elb.us-east-2.amazonaws.com. 60 IN A 3.14.48.245
dt-my-run-deployme-zpl-apps-9f4b8e6829a6001f.elb.us-east-2.amazonaws.com. 60 IN A 18.188.105.108
dt-my-run-deployme-zpl-apps-9f4b8e6829a6001f.elb.us-east-2.amazonaws.com. 60 IN A 3.21.115.163

Ensure that your domain resolves to the same IP addresses:

dig +noall +answer argocd.example.com
argocd.example.com. 60 IN A 3.14.48.245
argocd.example.com. 60 IN A 18.188.105.108
argocd.example.com. 60 IN A 3.21.115.163
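The comparison can also be scripted. The sketch below uses the IP sets from the example output above as static data; in practice, populate each variable with `dig +short` against the respective name:

```shell
# Compare the load balancer's IPs against the app domain's IPs.
# Static example data; in practice use: dig +short <name> | sort
nlb_ips=$(printf '%s\n' 3.14.48.245 18.188.105.108 3.21.115.163 | sort)
app_ips=$(printf '%s\n' 3.14.48.245 18.188.105.108 3.21.115.163 | sort)

if [ "$nlb_ips" = "$app_ips" ]; then
  echo "DNS OK: domain resolves to the load balancer"
else
  echo "DNS mismatch: check the wildcard record"
fi
```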

Connecting EKS Cluster

In order to install SmoothGlue, the System Integrator will need access to the cluster. Getting access is simple. Start by getting the cluster name:

export CLUSTER_NAME=$(terragrunt --working-dir="infra-iac/eks-cluster" output -raw cluster_name)

Then configure the kubeconfig with the cluster credentials:

aws eks update-kubeconfig --name $CLUSTER_NAME

Test access to the cluster by checking for nodes in the cluster:

zarf tools kubectl get nodes

Example output:

NAME                                         STATUS   ROLES    AGE   VERSION
ip-10-32-14-126.us-east-2.compute.internal   Ready    <none>   30m   v1.30.8-eks-aeac579
ip-10-32-14-64.us-east-2.compute.internal    Ready    <none>   30m   v1.30.8-eks-aeac579
ip-10-32-17-191.us-east-2.compute.internal   Ready    <none>   30m   v1.30.8-eks-aeac579
ip-10-32-19-58.us-east-2.compute.internal    Ready    <none>   30m   v1.30.8-eks-aeac579
ip-10-32-34-148.us-east-2.compute.internal   Ready    <none>   30m   v1.30.8-eks-aeac579
ip-10-32-45-0.us-east-2.compute.internal     Ready    <none>   30m   v1.30.8-eks-aeac579

Initializing Zarf on SmoothGlue

Since SmoothGlue is built for air-gap environments, SmoothGlue uses a custom Zarf package for deploying all required containers for the applications in the cluster. As a result, Zarf will need to be installed and initialized into the cluster before the SmoothGlue package can be deployed.

warning

Please ensure a compatible version of Zarf is being used. SmoothGlue Release Notes will indicate what version of Zarf should be used with a given SmoothGlue version.

The infra-iac/outputs/zarf-init-config.yaml file will be required when initializing Zarf. Run the following command to initialize Zarf into the cluster, pressing y when prompted:

ZARF_CONFIG=infra-iac/outputs/zarf-init-config.yaml zarf init --components git-server --architecture=amd64

Configuring SmoothGlue Package for Target AWS Account

There are three configuration files that the System Integrator can use to modify the behavior of the SmoothGlue Package. They are:

  • bigbang-values.yaml
  • bigbang-secrets.yaml
  • zarf-config.yaml

The Big Bang files can be used to configure applications deployed by Big Bang. For the purposes of an initial setup, the System Integrator shouldn't need to worry about these files. However, the System Integrator needs to know they exist in case they want to provide custom configuration to Big Bang.

Some additional configuration is required to incorporate some of the SmoothGlue Prerequisites. The System Integrator should edit the zarf-config.yaml file to update the following options:

package:
  deploy:
    set:
      DOMAIN: example.com
      CERT_PATH: /path/to/server-cert.pem
      KEY_PATH: /path/to/server-key.pem
      CA_CERT_PATH: /path/to/ca-cert.pem
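Before deploying, it can be worth sanity-checking that the certificate and key handed to zarf-config.yaml actually pair up. The sketch below generates a throwaway self-signed pair purely for illustration; with real files, point CERT and KEY at the paths from zarf-config.yaml instead:

```shell
# Verify a certificate and private key belong together by comparing
# their public keys. The throwaway pair here is only for illustration;
# substitute the real CERT_PATH and KEY_PATH files.
CERT=server-cert.pem
KEY=server-key.pem
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=*.example.com" -keyout "$KEY" -out "$CERT" 2>/dev/null

cert_pub=$(openssl x509 -in "$CERT" -noout -pubkey)
key_pub=$(openssl pkey -in "$KEY" -pubout 2>/dev/null)

if [ "$cert_pub" = "$key_pub" ]; then
  echo "certificate and key match"
else
  echo "MISMATCH: certificate and key do not pair"
fi
```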
note

If certificate options are not supplied, SmoothGlue will generate self-signed certificates for the configured domain.

info

Please see How To Configure SmoothGlue Package for more information on configuring the SmoothGlue Package.

Several configuration files were generated during the deployment of the SmoothGlue IaC. They configure SmoothGlue to use the cloud resources deployed by the SmoothGlue IaC. They are located in infra-iac/outputs/. The configuration files above are meant to be combined with the SmoothGlue generated ones. The compile-config.sh script will take care of finding the appropriate files and combining them with the user-managed counterparts:

./compile-config.sh

There should now be a bigbang-values.yaml, bigbang-secrets.yaml, and zarf-config.yaml file within the compiled directory.

Deploying SmoothGlue Package

Ensure the environment variables for the SmoothGlue environment have been loaded. An important one for this step sets the ZARF_CONFIG variable to use the compiled/zarf-config.yaml file:

source .env

Deploy the SmoothGlue package, updating the version tag as needed:

zarf package deploy zarf-package-smoothglue-amd64-v6.10.0.tar.zst --confirm
note

Deploying the package may take some time, as it has to upload all of the container images and then configure and deploy the applications into the cluster. Ensure /tmp has at least 20 GB of free space.
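The free-space requirement can be checked up front:

```shell
# Confirm /tmp has at least 20 GB free before deploying the package.
need_kb=$((20 * 1024 * 1024))
avail_kb=$(df -Pk /tmp | awk 'NR==2 {print $4}')

if [ "$avail_kb" -ge "$need_kb" ]; then
  echo "/tmp has enough free space ($((avail_kb / 1024 / 1024)) GB)"
else
  echo "/tmp has only $((avail_kb / 1024 / 1024)) GB free; need 20 GB"
fi
```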

Accessing Applications on SmoothGlue

Public facing applications within SmoothGlue are registered via Istio Virtual Services. Applications within SmoothGlue come pre-configured with Virtual Services. To list available host names for applications in the cluster use the following command:

zarf tools kubectl get virtualservices -A

Example output:

NAMESPACE    NAME                                      GATEWAYS                  HOSTS                          AGE
argocd       argocd-argocd-server                      ["istio-system/public"]   ["argocd.example.com"]         119m
kiali        kiali                                     ["istio-system/public"]   ["kiali.example.com"]          119m
monitoring   monitoring-grafana-grafana                ["istio-system/public"]   ["grafana.example.com"]        119m
monitoring   monitoring-monitoring-kube-alertmanager   ["istio-system/public"]   ["alertmanager.example.com"]   121m
monitoring   monitoring-monitoring-kube-prometheus     ["istio-system/public"]   ["prometheus.example.com"]     121m
neuvector    neuvector-neuvector                       ["istio-system/public"]   ["neuvector.example.com"]      118m

From the output above, note that all of the applications use the configured example.com domain. To verify access to the cluster is working, try using curl or visiting one of the applications directly in the browser:

# use -k if a self-signed certificate is being used or the Bastion Host doesn't trust the configured certificate
curl -v -k https://argocd.example.com

Example output:

> GET / HTTP/2
> Host: argocd.example.com
> User-Agent: curl/8.7.1
...
< HTTP/2 200
...
<!doctype html><html lang="en"><head><meta charset="UTF-8"><title>Argo CD</title><base href="/"><meta name="viewport" content="width=device-width,initial-scale=1"><link rel="icon" type="image/png" href="assets/favicon/favicon-32x32.png" sizes="32x32"/><link rel="icon" type="image/png" href="assets/favicon/favicon-16x16.png" sizes="16x16"/><link href="assets/fonts.css" rel="stylesheet"><script defer="defer" src="main.67d3d35d60308e91d5f4.js"></script></head><body><noscript><p>Your browser does not support JavaScript. Please enable JavaScript to view the site. Alternatively, Argo CD can be used with the <a href="https://argoproj.github.io/argo-cd/cli_installation/">Argo CD CLI</a>.</p></noscript><div id="app"></div></body><script defer="defer" src="extensions.js"></script></html>
info

After the initial install is complete, consider looking in Configurations for guides on implementing some common configurations.