Version: Next

SmoothGlue Build Deploy Guide

This document guides the System Integrator through deploying a SmoothGlue Build into their AWS account.

Before starting, please review the SmoothGlue Prerequisites and ensure they have all been met.

danger

The following requires valid AWS credentials. If session-based AWS credentials are being used, please ensure the session duration is at least an hour. It is recommended to obtain fresh AWS credentials before deploying SmoothGlue packages: if the credentials expire during deployment of the SmoothGlue IaC package, the Terraform state files will become locked and may require manual intervention to unlock (for example, Terraform's force-unlock command with the lock ID from the error message), potentially leaving orphaned cloud resources.

Initial Setup

It is important to have a common pattern for deploying SmoothGlue: a common pattern simplifies installing and maintaining multiple SmoothGlue environments.

Please complete the steps in Setup Directory Structure and ensure the CLUSTER_TYPE is set to build before continuing. After completing the steps, you should have a directory with the SmoothGlue Artifacts and all the configuration files necessary for installing and maintaining a SmoothGlue environment.

note

For the remainder of this guide, all referenced files and example commands will be in the context of the setup directory.

Configuring SmoothGlue IaC for Target AWS Account

Before deploying the SmoothGlue IaC, the System Integrator should first configure the IaC for the target AWS account. Within env.hcl, the System Integrator should update the following with details based on the SmoothGlue Prerequisites:

note

Most of these options should already be defined in env.hcl. However, some options will need to be added manually.

locals {
  aws_region = "us-east-2"
  vpc_id     = "vpc-011e4e60acb44853d"
  # VPC Private Subnets
  vpc_subnet_ids = [
    "subnet-0574f69f186bcdda5",
    "subnet-06f3fa7a8ba3ae686",
    "subnet-0272f89562d5c7a3d"
  ]
  domain = "example.com"
  cluster_inputs = {
    # Make Load Balancer Public-Facing
    application_nlb_internal = false
    # VPC Public Subnets
    application_nlb_subnets = [
      "subnet-0a5446ced834f266a",
      "subnet-0dd33191ce13213e0",
      "subnet-0cbe9e89835b3baa3"
    ]
    # Configure Public-Facing Ingress Rules
    application_nlb_ingress_rules = {
      "http_web_traffic" = {
        from_port   = 80
        to_port     = 80
        ip_protocol = "tcp"
        description = "HTTP traffic from anywhere"
        cidr_ipv4   = "0.0.0.0/0" # This could be a single IP for a VPN; alternatively it could be the System Integrator's public IP during testing
      }
      "https_web_traffic" = {
        from_port   = 443
        to_port     = 443
        ip_protocol = "tcp"
        description = "HTTPS traffic from anywhere"
        cidr_ipv4   = "0.0.0.0/0" # This could be a single IP for a VPN; alternatively it could be the System Integrator's public IP during testing
      }
    }
    access_entries = {
      admin = {
        # Update to Bastion Host's IAM Role
        principal_arn = "arn:aws:iam::171179903432:role/aws-reserved/sso.amazonaws.com/us-east-2/AWSReservedSSO_AdministratorAccess-4hr_c7d56a998f605e6d"
        policy_associations = {
          cluster_admin = {
            policy_arn = "arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy"
            access_scope = {
              type = "cluster"
            }
          }
        }
      }
    }
  }
}

remote_state {
  backend = "s3"
  config = {
    bucket         = "example-aws-iac-terraform-state" # Replace; Terragrunt will create this if it doesn't exist
    dynamodb_table = "example-aws-iac-terraform-lock"  # Replace; Terragrunt will create this if it doesn't exist
    encrypt        = true
    key            = "${basename(get_terragrunt_dir())}.tfstate"
    region         = "${local.aws_region}"

    # Optional
    # accesslogging_bucket_name   = "your-logging-bucket"
    # accesslogging_target_prefix = "s3logs/TFStateLogs/"
  }
  disable_init = tobool(get_env("TERRAGRUNT_DISABLE_INIT", "false"))
  generate = {
    path      = "backend.tf"
    if_exists = "overwrite_terragrunt"
  }
}
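The key expression above names each module's state file after its Terragrunt working directory. As a quick illustration of the resulting naming, using the basename of a module directory (the eks-cluster module is used later in this guide):

```shell
# Mimics key = "${basename(get_terragrunt_dir())}.tfstate" for one
# of the IaC module directories.
module_dir="infra-iac/eks-cluster"
echo "$(basename "$module_dir").tfstate"
# prints: eks-cluster.tfstate
```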

In addition to the config above, the following options are unique to a SmoothGlue Build environment and will need to be configured:

locals {
  # This list of security groups is allowed inbound access to the cluster and
  # Terragrunt-created resources. If you are using a deploy box or GitLab
  # runner, the security group of the runner must be added here in order
  # for Terragrunt to be able to connect to the RDS databases (if applicable).
  allowed_security_group_ids = [
    "sg-08161e26094b7e0c6"
  ]

  # This is a map of VPC availability zone to private subnets.
  # These mount targets are used for EFS in the Neuvector, Jira and Confluence modules.
  mount_targets = {
    "us-east-2a" = {
      subnet_id = "subnet-0574f69f186bcdda5"
    }
    "us-east-2b" = {
      subnet_id = "subnet-06f3fa7a8ba3ae686"
    }
    "us-east-2c" = {
      subnet_id = "subnet-0272f89562d5c7a3d"
    }
  }

  modules = {
    # Enable or disable terragrunt iac modules explicitly here
    nexus = true # If you don't have a license for NexusRepositoryManager, disable this.
  }

  nexus_inputs = {
    nexus_pro_version_enabled = true # If you don't have a license for NexusRepositoryManager, disable this.
    nexus_iq_enabled          = true # If you don't have a license for NexusIQ, disable this.
  }
}

Deploying SmoothGlue IaC

The SmoothGlue IaC uses Terragrunt and Terraform to manage the deployed cloud resources. Each SmoothGlue environment should keep track of certain variables to differentiate one SmoothGlue environment from another. Those environment variables should be stored in .env. When following the Setup Directory Structure guide, a .env should have been created. Start by loading the environment variables for the SmoothGlue environment:

source .env
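The variable names below are those referenced elsewhere in this guide; the values shown are placeholders, as the real file is generated during the Setup Directory Structure steps. A hypothetical .env might look like:

```shell
# Hypothetical .env contents; actual values are generated by the
# Setup Directory Structure step and are unique per environment.
export WORKSPACE_NAME="build-example-org"      # placeholder; must be unique per SmoothGlue environment
export ZARF_CONFIG="compiled/zarf-config.yaml" # used later when deploying the SmoothGlue package
```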
danger

Now is a good time to refresh AWS credentials if session-based credentials are being used.

Before deploying the SmoothGlue IaC, the IaC modules need to be initialized:

terragrunt run-all init

Terraform keeps track of all the resources it creates and associates them to a workspace. The .env includes a workspace name that should be unique for this SmoothGlue environment. Select the workspace:

terragrunt run-all workspace select -or-create $WORKSPACE_NAME

Now the SmoothGlue IaC can be deployed.

terragrunt run-all apply -input=false
note

If you see the following error when applying the SmoothGlue IaC, please ensure the correct security group is being configured for allowed_security_group_ids.

Error: Error connecting to PostgreSQL server build-example-org-wch-confluence.cluster-cuo12cxwsyp7.us-east-2.rds.amazonaws.com (scheme: postgres): dial tcp 10.32.37.26:5432: connect: connection timed out

The SmoothGlue IaC generates config files for Zarf and Big Bang. These config files are important as they will configure SmoothGlue to use the cloud resources created by the SmoothGlue IaC. The config files will be located in the infra-iac/outputs directory. Config files that start with zarf or bigbang will be used later during the install. Feel free to review what options are being configured.

Configure DNS to Route Traffic

warning

If DNS is not publicly resolvable, SmoothGlue will fail to self-configure SSO for many of the applications and the deployment will subsequently fail. In this case, please enable the internal-facing Route53 module.

In order to access applications on the cluster, DNS must be configured to route users to the public-facing load balancer.

Load balancers in AWS get their own DNS name. Run the following command to retrieve it from the IaC state:

terragrunt --working-dir="infra-iac/eks-cluster" output app_nlb_dns_name

Example output:

"build-example-org-wch-apps-a2d50bfcd8494a7c.elb.us-east-2.amazonaws.com"

With the load balancer DNS name, please configure a wildcard record, *.example.com for example, in your DNS provider for routing public traffic to the load balancer.
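In zone-file terms, such a wildcard record would look like the following (the TTL is illustrative; the target is the load balancer DNS name retrieved above):

```
*.example.com.  300  IN  CNAME  build-example-org-wch-apps-a2d50bfcd8494a7c.elb.us-east-2.amazonaws.com.
```

The exact record type and syntax will depend on your DNS provider; some providers offer ALIAS/ANAME records at the zone apex instead of CNAME.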

Verify that your domain name resolves to the IP addresses of the load balancer. In the following example, dig will be used to resolve DNS names to IP addresses. Start by retrieving the IP addresses for the load balancer:

dig +noall +answer build-example-org-wch-apps-a2d50bfcd8494a7c.elb.us-east-2.amazonaws.com

Example output:

build-example-org-wch-apps-a2d50bfcd8494a7c.elb.us-east-2.amazonaws.com. 60 IN A 3.14.48.245
build-example-org-wch-apps-a2d50bfcd8494a7c.elb.us-east-2.amazonaws.com. 60 IN A 18.188.105.108
build-example-org-wch-apps-a2d50bfcd8494a7c.elb.us-east-2.amazonaws.com. 60 IN A 3.21.115.163

Ensure that your domain resolves to the same IP addresses:

dig +noall +answer argocd.example.com
argocd.example.com. 60 IN A 3.14.48.245
argocd.example.com. 60 IN A 18.188.105.108
argocd.example.com. 60 IN A 3.21.115.163
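This comparison can be scripted. A minimal sketch, using the IPs from the example output above as static stand-ins for real `dig +short` results (DNS records are unordered, so both sets are sorted before comparing):

```shell
# Compare two sets of A records; the IPs below are stand-ins for
# `dig +short <name>` output in a real environment.
nlb_ips=$(printf '%s\n' 3.14.48.245 18.188.105.108 3.21.115.163 | sort)
app_ips=$(printf '%s\n' 3.21.115.163 3.14.48.245 18.188.105.108 | sort)
if [ "$nlb_ips" = "$app_ips" ]; then
  echo "DNS records match"
else
  echo "DNS records differ"
fi
# prints: DNS records match
```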

Connecting EKS Cluster

In order to install SmoothGlue into the cluster, the System Integrator will need access to the cluster. Getting access is simple. Start by getting the cluster name:

export CLUSTER_NAME=$(terragrunt --working-dir="infra-iac/eks-cluster" output -raw cluster_name)

Then configure the kubeconfig with the cluster credentials:

aws eks update-kubeconfig --name $CLUSTER_NAME

Test access to the cluster by checking for nodes in the cluster:

zarf tools kubectl get nodes

Example output:

NAME                                         STATUS   ROLES    AGE   VERSION
ip-10-32-14-126.us-east-2.compute.internal   Ready    <none>   30m   v1.30.8-eks-aeac579
ip-10-32-14-64.us-east-2.compute.internal    Ready    <none>   30m   v1.30.8-eks-aeac579
ip-10-32-17-191.us-east-2.compute.internal   Ready    <none>   30m   v1.30.8-eks-aeac579
ip-10-32-19-58.us-east-2.compute.internal    Ready    <none>   30m   v1.30.8-eks-aeac579
ip-10-32-34-148.us-east-2.compute.internal   Ready    <none>   30m   v1.30.8-eks-aeac579
ip-10-32-45-0.us-east-2.compute.internal     Ready    <none>   30m   v1.30.8-eks-aeac579

Initializing Zarf on SmoothGlue

Since SmoothGlue is built for air-gapped environments, it uses a custom Zarf package to deploy all required containers for the applications in the cluster. As a result, Zarf must be installed and initialized in the cluster before the SmoothGlue package can be deployed.

warning

Please ensure a compatible version of Zarf is being used. SmoothGlue Release Notes will indicate what version of Zarf should be used with a given SmoothGlue version.

The infra-iac/outputs/zarf-init-config.yaml file is required when initializing Zarf. Run the following command to initialize Zarf in the cluster, pressing y when prompted:

ZARF_CONFIG=infra-iac/outputs/zarf-init-config.yaml zarf init --components git-server --architecture=amd64

Configuring SmoothGlue Package for Target AWS Account

There are three configuration files that the System Integrator can use to modify the behavior of the SmoothGlue Package. They are:

  • bigbang-values.yaml
  • bigbang-secrets.yaml
  • zarf-config.yaml

The Big Bang files can be used to configure applications deployed by Big Bang. For an initial setup, the System Integrator shouldn't need to modify these files; however, the System Integrator should know they exist in case they want to provide custom configuration to Big Bang.

Some additional configuration is required to incorporate some of the SmoothGlue Prerequisites. The System Integrator should edit the zarf-config.yaml file to update the following options:

package:
  deploy:
    set:
      DOMAIN: example.com
      CERT_PATH: /path/to/server-cert.pem
      KEY_PATH: /path/to/server-key.pem
      CA_CERT_PATH: /path/to/ca-cert.pem
note

If certificate options are not supplied, SmoothGlue will generate self-signed certificates for the configured domain.

info

Please see How To Configure SmoothGlue Package for more information on configuring the SmoothGlue Package.

Several configuration files were generated during the deployment of the SmoothGlue IaC. They configure SmoothGlue to use the cloud resources deployed by the SmoothGlue IaC. They are located in infra-iac/outputs/. The configuration files above are meant to be combined with the SmoothGlue generated ones. The compile-config.sh script will take care of finding the appropriate files and combining them with the user-managed counterparts:

./compile-config.sh

There should now be a bigbang-values.yaml, bigbang-secrets.yaml, and zarf-config.yaml file within the compiled directory.
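A quick sanity check for the step above; this sketch simply reports whether each of the three expected compiled files is present (run it from the setup directory):

```shell
# Confirm compile-config.sh produced the expected compiled files;
# prints one "found"/"missing" line per file.
for f in compiled/bigbang-values.yaml compiled/bigbang-secrets.yaml compiled/zarf-config.yaml; do
  if [ -f "$f" ]; then
    echo "found: $f"
  else
    echo "missing: $f"
  fi
done
```

Any "missing" line means compile-config.sh did not complete successfully and should be re-run before continuing.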

Deploying SmoothGlue Package

Ensure the environment variables for the SmoothGlue environment have been loaded. An important one for this step sets the ZARF_CONFIG variable to use the compiled/zarf-config.yaml file:

source .env

Deploying the SmoothGlue package is very easy:

zarf package deploy zarf-package-smoothglue-amd64-v6.10.0.tar.zst --confirm
note

Deploying the package may take some time, as it must upload all of the container images and then configure and deploy the applications into the cluster.

Accessing Applications on SmoothGlue

Public-facing applications within SmoothGlue are registered via Istio VirtualServices, and applications come pre-configured with them. To list the available host names for applications in the cluster, use the following command:

zarf tools kubectl get virtualservices -A

Example output:

NAMESPACE    NAME                                      GATEWAYS                  HOSTS                          AGE
confluence   confluence                                ["istio-system/public"]   ["confluence.example.com"]     102m
console      console                                   ["istio-system/public"]   ["console.example.com"]        102m
gitlab       gitlab                                    ["istio-system/public"]   ["gitlab.example.com"]         98m
gitlab       gitlab-registry                           ["istio-system/public"]   ["registry.example.com"]       98m
jira         jira                                      ["istio-system/public"]   ["jira.example.com"]           102m
keycloak     keycloak                                  ["istio-system/public"]   ["keycloak.example.com"]       99m
kiali        kiali                                     ["istio-system/public"]   ["kiali.example.com"]          99m
mattermost   mattermost                                ["istio-system/public"]   ["chat.example.com"]           96m
monitoring   monitoring-grafana-grafana                ["istio-system/public"]   ["grafana.example.com"]        100m
monitoring   monitoring-monitoring-kube-alertmanager   ["istio-system/public"]   ["alertmanager.example.com"]   101m
monitoring   monitoring-monitoring-kube-prometheus     ["istio-system/public"]   ["prometheus.example.com"]     101m
neuvector    neuvector-neuvector                       ["istio-system/public"]   ["neuvector.example.com"]      97m
sonarqube    sonarqube-sonarqube                       ["istio-system/public"]   ["sonarqube.example.com"]      97m

From the output above, note that all of the applications are configured to use the configured example.com domain. To verify access to the cluster is working, try using curl or visiting one of the applications directly in the browser:

# use -k if a self-signed certificate is being used or the Bastion Host doesn't trust the configured certificate
curl -v -k https://gitlab.example.com
note

GitLab redirects unauthenticated users to sign in.

Example output:

> GET / HTTP/2
> Host: gitlab.example.com
> User-Agent: curl/8.7.1
...
< HTTP/2 302
...
<html><body>You are being <a href="https://gitlab.example.com/users/sign_in">redirected</a>.</body></html>
info

After the initial install is complete, consider looking in Configurations for guides on implementing some common configurations.