Cloud Deployment
1 - AWS Deployment
Architecture
Deployment Prerequisites
In order to get started, your Atolio support team will do the following on your behalf:
- Grant your AWS account access to the Client ECR repos (for pulling Docker images).
- Add your Deployment Engineer as a collaborator to the Atolio GitHub repository (lumen-infra), which contains:
- Deployment documentation
- Terraform for the Atolio stack infrastructure
- Configuration files for Atolio services
- Maintenance scripts
The following deployment prerequisites will help streamline your deployment process.
Determine AWS account
You can choose to deploy Atolio into either an existing AWS account or a new one. Atolio also supports deploying to your own AWS Virtual Private Cloud (VPC). When the account is available, share the AWS account number with your Atolio support team.
We recommend:
- Ensuring that Service Quotas within your AWS account allow for a minimum of 64 vCPU for On-Demand Standard instances.
- Raising any other organizational AWS policies / restrictions (e.g. networking, containers) with your Atolio support team ahead of the deployment call.
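As a sketch, the current On-Demand Standard vCPU quota can be checked with the AWS CLI (this assumes credentials for the target account are configured; L-1216C47A is the quota code for Running On-Demand Standard instances):

```shell
# Check the "Running On-Demand Standard (A, C, D, H, I, M, R, T, Z) instances"
# vCPU quota in the target account.
aws service-quotas get-service-quota \
  --service-code ec2 \
  --quota-code L-1216C47A \
  --query 'Quota.Value'
```

If the value is below 64, request an increase via `aws service-quotas request-service-quota-increase` or the Service Quotas console.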
Determine Deployment Model
We offer both Atolio managed and customer managed deployment models for you to choose from. Please review the comparison and requirements for each approach on our Deployment Model Overview page and inform your Atolio support team which method you’d like to use for your deployment.
Atolio Managed Deployment Prerequisites
The exact permissions being delegated will be presented to the engineer running the script prior to executing. The IAM policies included are:
arn:aws:iam::aws:policy/PowerUserAccess
arn:aws:iam::aws:policy/IAMFullAccess
Access will be limited to a client support machine that is only accessible to Atolio support engineers and that has a static IP of 52.43.209.253 assigned to aid in identifying activity from Atolio’s team.
If you opt to allow Atolio’s deployment support team to manage the deployment on your behalf, the steps to enable this for your AWS account are as follows:
Clone our lumen-infra GitHub repository
git clone git@github.com:atolio/lumen-infra.git
Run our AWS support role script against the AWS account you’d like us to deploy into
./lumen-infra/deploy/terraform/aws/scripts/atolio-support-role.sh
Review the output from the script and provide the Role ARN to your Atolio support team
Operation completed successfully. Role ARN: arn:aws:iam::123456789012:role/AtolioDeploymentAccess
Determine Atolio DNS name
Before the deployment call, you may want to decide on your desired Atolio web location. Create an AWS Route 53 hosted zone in the AWS account for hosting the Atolio stack (e.g. search.example.com.): this will be the DNS name (without the trailing dot) for the Atolio web application (e.g. https://search.example.com):
aws route53 create-hosted-zone --name search.example.com --caller-reference "atolio-initial-provision"
This hosted zone allows the deployment (i.e. the External DNS controller) to add records linking host names (e.g. search.example.com, feed.search.example.com, and relay.search.example.com) to the load balancer created by the AWS ALB controller.
For the remainder of this document, we will use https://search.example.com in the examples; replace it with your own DNS name.
Determine Cloud Networking Options
By default, Atolio’s Terraform code will create a VPC. However, you may choose to use an existing VPC and subnets within your AWS account. In this case, set create_vpc to false. Then, configure all VPC-related variables. See the sample below:
// Uncomment these lines and update the values in case you want to deploy in a
// pre-existing VPC (by default a new VPC will be created).
//
// Note that automatic subnet discovery for the ALB controller will only work
// if the subnets are tagged correctly as documented here:
// https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.7/deploy/subnet_discovery/
// create_vpc = false
// vpc_id = "vpc-000"
// vpc_cidr_block = "10.0.0.0/16"
// vpc_private_subnet_ids = ["subnet-1111", "subnet-2222"]
// vpc_public_subnet_ids = ["subnet-3333", "subnet-4444"]
// vespa_az = "us-west-2a"
// vespa_private_subnet_id = "subnet-1111"
Additional notes regarding existing VPC usage:
- As per the above sample, subnets must be tagged correctly as documented in subnet discovery.
- When specifying vespa_private_subnet_id, the referenced subnet ID must also be in the vpc_private_subnet_ids array.
- In terms of VPC sizing, the default (10.0.0.0/16) is currently oversized. For reference, VPC subnet IP addresses are primarily allocated to the EKS cluster and ALB, with AWS reserving several for internal services. We recommend a subnet of /24 (256 IPs) as the minimum to ensure enough available IP addresses for Kubernetes to assign to pods.
- Ensure specified subnets have available IPv4 addresses.
If custom networking configuration is necessary, be sure to provide these details to the engineer performing the deployment.
Delegate responsibility for Atolio subdomain
The customer’s parent domain (e.g. example.com) needs to delegate traffic to the new Atolio subdomain (search.example.com). This is achieved by adding an NS record to the parent domain with the 4 name servers copied from the new subdomain (similar to what is described here).
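As a sketch, the name servers can be read from the new hosted zone and added to the parent zone with a Route 53 change batch. The zone IDs and name server values below are placeholders:

```shell
# Look up the four name servers assigned to the subdomain's hosted zone:
# aws route53 get-hosted-zone --id Z_SUBDOMAIN_ZONE_ID --query 'DelegationSet.NameServers'

# Change batch adding the NS record to the PARENT zone (values are placeholders):
cat > ns-delegation.json <<'EOF'
{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "search.example.com",
      "Type": "NS",
      "TTL": 300,
      "ResourceRecords": [
        {"Value": "ns-1.awsdns-01.org"},
        {"Value": "ns-2.awsdns-02.com"},
        {"Value": "ns-3.awsdns-03.net"},
        {"Value": "ns-4.awsdns-04.co.uk"}
      ]
    }
  }]
}
EOF
# aws route53 change-resource-record-sets --hosted-zone-id Z_PARENT_ZONE_ID \
#   --change-batch file://ns-delegation.json
```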
Setup authentication
Atolio supports single sign-on (SSO) authentication through Okta, Microsoft Entra ID, and Google using the OpenID Connect (OIDC) protocol.
Refer to Configuring Authentication for more details on the steps to complete in your desired SSO provider in order to obtain the necessary OIDC configuration values.
Local environment setup
Finally, if an engineer from your team will be performing the deployment, ensure they have the following utilities installed:
- Setup Terraform on your local machine as described on the HashiCorp docs site - we require v1.5.0 at a minimum.
- Install the AWS Command Line Interface
- Install kubectl
- Install Helm
- Install atolioctl
Provide Deployment Engineer with Configuration
At this point if you’re proceeding with an Atolio managed deployment you’ll need to provide the information from these prerequisite steps to your Atolio deployment support team. Otherwise please ensure the information is provided to the engineer from your organization who will be performing the deployment with Atolio’s support.
To recap, provide these details:
If you’re opting to have Atolio manage the deployment, you can disregard the remainder of this documentation, as these steps will be performed by your Atolio deployment engineer. Otherwise, be sure to share this documentation with the engineer from your organization who will be performing the deployment so they can familiarize themselves with the steps required.
Create Cloud Infrastructure
The Terraform configuration requires an external (S3) bucket to store state. A script is available to automate the whole process (including running Terraform). Before running the script, create a config.hcl file based on the provided config.hcl.template:
cd deploy/terraform/aws
cp ./config.hcl.template config.hcl
Update the copied file with appropriate values. At a minimum, it should look something like this:
# Domain name for Atolio stack (same as hosted zone name without trailing ".")
lumen_domain_name = "search.example.com"
Then copy the Helm template and update the values with the appropriate OIDC settings. You will also likely modify lumenImageTag to specify the version of Atolio you’d like to deploy. Note: the OIDC settings are necessary for the Helm release to succeed (the Marvin service depends on these settings for validating authentication).
cp ./templates/values-lumen-admin.yaml values-lumen.yaml
lumenImageTag: "4.9.0"
# Path to your company logo to be shown in the Atolio UI
searchUi:
publicLogoPath: "https://search.example.com/yourLogo.svg"
jwtSecretKey: "256-bit-secret-key-for-sign-jwts"
# See also scripts/config-oidc.sh helper script to obtain some of the values below
oidc:
provider: "add-your-provider-here"
endpoint: "add-your-endpoint-here"
clientId: "add-your-id-here"
clientSecret: "add-your-secret-here"
# If running behind a reverse proxy, this should be set to the URL the end user will
# use to access the product.
reverseProxyUrl: ""
For jwtSecretKey, any 256-bit (32-character) string can be used. It is used to sign the JWT tokens used by the web application and the atolioctl tool. It should be a well-guarded secret that is unique to the deployment.
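One way to generate a suitable 32-character value (an illustration; any securely generated random 32-character string works):

```shell
# 24 random bytes base64-encode to exactly 32 characters.
JWT_SECRET=$(openssl rand -base64 24)
echo "$JWT_SECRET"
```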
If your users will be accessing the web interface via a reverse proxy (such as StrongDM), be sure to set the reverseProxyUrl field to reflect the URL they will actually enter into their browser to access Atolio, which will differ from the hostname defined in lumen_domain_name. Leave this field empty if not using a reverse proxy.
You should have all variables within the OIDC block configured. Now you can create the infrastructure and deploy the Kubernetes cluster. From the deploy/terraform/aws directory:
./scripts/create-infra.sh --name=deployment-name
This will create the infrastructure in the us-west-2 AWS region. If you want to deploy in another region (e.g. us-east-1), an additional parameter can be provided:
./scripts/create-infra.sh --name=deployment-name --region=us-east-1
The deployment-name argument is used to generate a deployment name for tagging resources and naming the Kubernetes cluster, S3 buckets, and other resources, so make sure it is globally unique across all deployments. Typically this is named after the customer for which the Atolio app is deployed or a particular deployment flavour (e.g. acmecorp or engtest).
The script automates the following steps (parameterized based on the provided deployment name):
- Create an S3 bucket to store Terraform state
- Create a terraform.tfvars file for Terraform
- Run terraform init
- Run terraform apply (using input variables in the generated terraform.tfvars)
With the infrastructure created, run update-kubeconfig so an updated context is added to your local configuration:
aws --profile {atolio profile} eks update-kubeconfig --region us-west-2 --name lumen-{deployment-name}
At this point you should be able to interact with the kubernetes cluster, e.g.
kubectl get po -n atolio-svc
Note: Atolio-specific services run in the following namespaces:
- atolio-svc (Services)
- atolio-db (Database)
When you have validated that the infrastructure is available, the next step is to configure sources.
2 - Azure Deployment
Architecture
Deployment Prerequisites
In order to get started, your Atolio support team will do the following on your behalf:
- Grant access to Client ACR repos (for pulling Docker images) to your Azure subscription and provide image pull secrets.
- Add your Deployment Engineer as a collaborator to the Atolio GitHub repository (lumen-infra), which contains:
- Deployment documentation
- Terraform for the Atolio stack infrastructure
- Configuration files for Atolio services
- Maintenance scripts
The following deployment prerequisites will help streamline your deployment process.
Determine Azure subscription
You can choose to deploy Atolio into either an existing Azure subscription or a new one. Atolio will deploy into a new Azure Resource Group (RG), with another RG created automatically by Azure Kubernetes Service (AKS) for the cluster. When the subscription & RG are available, share the details with your Atolio support team.
We recommend:
- Ensuring that Service Quotas within your Azure subscription allow for a minimum of 64 vCPU under the Total Regional vCPUs quota.
- Raising any other organizational Azure policies / restrictions (e.g. networking, containers) with your Atolio support team ahead of the deployment call.
Determine Deployment Model
We offer both Atolio managed and customer managed deployment models for you to choose from. Please review the comparison and requirements for each approach on our Deployment Model Overview page and inform your Atolio support team which method you’d like to use for your deployment.
Atolio Managed Deployment Prerequisites
The Microsoft Role policy document we provision with our support role script is as follows:
{
"Name": "Atolio Support Access",
"Description": "Custom role for Atolio support engineers with minimal required permissions including AKS credential access",
"AssignableScopes": [
"/subscriptions/your-subscription-id"
],
"Actions": [
"Microsoft.ContainerService/managedClusters/read",
"Microsoft.ContainerService/managedClusters/write",
"Microsoft.ContainerService/managedClusters/delete",
"Microsoft.ContainerService/managedClusters/agentPools/read",
"Microsoft.ContainerService/managedClusters/agentPools/write",
"Microsoft.ContainerService/managedClusters/agentPools/delete",
"Microsoft.ContainerService/managedClusters/listClusterAdminCredential/action",
"Microsoft.ContainerService/managedClusters/listClusterUserCredential/action",
"Microsoft.ContainerService/managedClusters/accessProfiles/listCredential/action",
"Microsoft.Network/virtualNetworks/read",
"Microsoft.Network/virtualNetworks/write",
"Microsoft.Network/virtualNetworks/delete",
"Microsoft.Network/virtualNetworks/subnets/read",
"Microsoft.Network/virtualNetworks/subnets/write",
"Microsoft.Network/virtualNetworks/subnets/delete",
"Microsoft.Network/virtualNetworks/subnets/join/action",
"Microsoft.Network/publicIPAddresses/read",
"Microsoft.Network/publicIPAddresses/write",
"Microsoft.Network/publicIPAddresses/delete",
"Microsoft.Network/applicationGateways/read",
"Microsoft.Network/applicationGateways/write",
"Microsoft.Network/applicationGateways/delete",
"Microsoft.Network/dnsZones/read",
"Microsoft.Network/dnsZones/write",
"Microsoft.Network/dnsZones/delete",
"Microsoft.Network/dnsZones/SOA/read",
"Microsoft.Storage/storageAccounts/read",
"Microsoft.Storage/storageAccounts/write",
"Microsoft.Storage/storageAccounts/delete",
"Microsoft.Storage/storageAccounts/listKeys/action",
"Microsoft.Storage/storageAccounts/fileServices/read",
"Microsoft.Storage/storageAccounts/fileServices/shares/read",
"Microsoft.Storage/storageAccounts/blobServices/read",
"Microsoft.Storage/storageAccounts/blobServices/write",
"Microsoft.Storage/storageAccounts/blobServices/containers/read",
"Microsoft.Storage/storageAccounts/blobServices/containers/write",
"Microsoft.Storage/storageAccounts/blobServices/containers/delete",
"Microsoft.Compute/disks/read",
"Microsoft.Compute/disks/write",
"Microsoft.Compute/disks/delete",
"Microsoft.ManagedIdentity/userAssignedIdentities/read",
"Microsoft.ManagedIdentity/userAssignedIdentities/write",
"Microsoft.ManagedIdentity/userAssignedIdentities/delete",
"Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials/read",
"Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials/write",
"Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials/delete",
"Microsoft.Authorization/roleAssignments/read",
"Microsoft.Authorization/roleAssignments/write",
"Microsoft.Authorization/roleAssignments/delete",
"Microsoft.Resources/subscriptions/resourceGroups/read",
"Microsoft.Resources/subscriptions/resourcegroups/write",
"Microsoft.Resources/subscriptions/resourcegroups/delete"
],
"NotActions": [
"Microsoft.Authorization/elevateAccess/Action",
"Microsoft.Authorization/roleDefinitions/write",
"Microsoft.Authorization/roleDefinitions/delete"
],
"DataActions": [],
"NotDataActions": [],
"condition": "@iPAddress() matches '^52.43.209.253$'",
"conditionVersion": "2.0"
}
Access will be limited to a client support machine that is only accessible to Atolio support engineers and that has a static IP of 52.43.209.253 assigned to aid in identifying activity from Atolio’s team.
If you opt to allow Atolio’s deployment support team to manage the deployment on your behalf, the steps to enable this for your Azure subscription are as follows:
Clone our lumen-infra GitHub repository
git clone git@github.com:atolio/lumen-infra.git
Run our Azure support role script against the Azure subscription you’d like us to deploy into
./lumen-infra/deploy/terraform/azure/scripts/atolio-support-role.sh
Review the output from the script and provide the details to your Atolio support team
Operation completed successfully. Please securely share the following details with your Atolio support engineer:
Tenant ID: <tenant ID>
Subscription ID: <subscription ID>
Application (Client) ID: <client app ID>
Client Secret: <client secret>
Secret Expiry Date: <expiry date>
These credentials grant access to the specified Azure subscription with the custom role 'Atolio Support Access'. The client secret will expire on <expiry date>.
Determine Atolio DNS name
Before the deployment call, you may want to decide on your desired Atolio web location. Create an Azure DNS Zone in the Azure subscription for hosting the Atolio stack (e.g. search.example.com.): this will be the DNS name (without the trailing dot) for the Atolio web application (e.g. https://search.example.com).
For the remainder of this document, we will use https://search.example.com in the examples; replace it with your own DNS name.
Obtain a certificate for SSL
For the previously defined DNS name, you will need to obtain a certificate that can be used for SSL. This certificate will need to be installed in the application gateway in a later step.
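Application Gateway expects the certificate in PFX (PKCS#12) form. As an illustration only, the commands below create a self-signed certificate and bundle it; in production, use a certificate issued for your DNS name and treat the export password as a secret:

```shell
# Self-signed certificate purely for illustration (not for production use).
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout search.example.com.key -out search.example.com.crt \
  -subj "/CN=search.example.com"
# Bundle key + certificate into the PFX format Application Gateway expects.
openssl pkcs12 -export -out atolio.pfx \
  -inkey search.example.com.key -in search.example.com.crt \
  -passout pass:changeit
```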
Setup Authentication
Atolio supports single sign-on (SSO) authentication through Okta, Microsoft Entra ID, and Google using the OpenID Connect (OIDC) protocol.
Refer to Configuring Authentication for more details on the steps to complete in your desired SSO provider in order to obtain the necessary OIDC configuration values.
The oidc_client_id and oidc_client_secret will be the respective values created and saved during Azure AD - Create New App Registration.
Determine Cloud Networking Options
By default, Atolio’s Terraform code will create a VNet in your Azure subscription. However, you may choose to use an existing VNet and subnets within your Azure subscription. In this case, set create_vnet to false in the config.hcl file during the deployment. Then, configure all VNet-related variables. See this example from config.hcl:
// Uncomment these lines and update the values in case you want to deploy in a
// pre-existing VNet (by default a new VNet will be created).
//
// create_vnet = false
// vnet_id = "/subscriptions/{tenantID}/resourceGroups/{rgID}/providers/Microsoft.Network/virtualNetworks/{vnetName}"
// vnet_private_subnet_id = "/subscriptions/{tenantID}/resourceGroups/{rgID}/providers/Microsoft.Network/virtualNetworks/{vnetName}/subnets/{privateSubnetName}"
// vnet_public_subnet_id = "/subscriptions/{tenantID}/resourceGroups/{rgID}/providers/Microsoft.Network/virtualNetworks/{vnetName}/subnets/{publicSubnetName}"
Optionally, when create_vnet is not set to false, you can customize the default VNet and subnet CIDR blocks if you have specific requirements. See this example from config.hcl:
// Uncomment these lines and update the values if you want to customize the
// default CIDR blocks for the VNet and Subnets created automatically.
//
// Note: this only applies for when create_vnet hasn't been set to false and
// any values modified here will be ignored when create_vnet=false.
//
// vnet_cidr = "10.42.0.0/20"
// pod_subnet_block = "10.42.0.0/22"
// appgw_subnet_block = "10.42.4.0/24"
If custom networking configuration is necessary, be sure to provide these details to the engineer performing the deployment.
Setup local environment
Finally, if an engineer from your team will be performing the deployment, ensure they have the following utilities installed:
- Setup Terraform on your local machine as described on the HashiCorp docs site - we require v1.5.0 at a minimum.
- Install the Azure Command Line Interface
- Install kubectl
- Install Helm
- Install atolioctl
Note: If you are running on Windows, you may also need to install the Windows Subsystem for Linux.
Provide Deployment Engineer with Configuration
At this point if you’re proceeding with an Atolio managed deployment you’ll need to provide the information from these prerequisite steps to your Atolio deployment support team. Otherwise please ensure the information is provided to the engineer from your organization who will be performing the deployment with Atolio’s support.
To recap, provide these details:
If you’re opting to have Atolio manage the deployment, you can disregard the remainder of this documentation, as these steps will be performed by your Atolio deployment engineer. Otherwise, be sure to share this documentation with the engineer from your organization who will be performing the deployment so they can familiarize themselves with the steps required.
Create Cloud Infrastructure
Note: Atolio requires an Azure region with 3 availability zones. You can check which regions include support for multiple availability zones here.
The Terraform configuration requires an Azure storage account to store state. A script is available to automate the whole process (including running Terraform). Before running the script, create a config.hcl file based on the provided config.hcl.template:
cd deploy/terraform/azure
cp ./config.hcl.template config.hcl
Atolio Domain Name and Image Pull Secrets
Update the copied file with appropriate values. At a minimum, it should look something like this:
// Domain name for Atolio stack (same as hosted zone name without trailing ".")
lumen_domain_name = "search.example.com"
// The registry in which to obtain containers for services
container_registry = "atolioimages.azurecr.io"
image_pull_username = "provided-by-atolio"
image_pull_password = "provided-by-atolio"
Your Atolio support team will share the appropriate values for image_pull_username and image_pull_password.
Application Helm Value Options
Then copy the Helm template and update the values with the appropriate OIDC settings and repository values. You will also likely modify lumenImageTag to specify the version of Atolio you’d like to deploy. Note: the OIDC settings are necessary for the Helm release to succeed (the Marvin service depends on these settings for validating authentication).
cp ./templates/values-lumen-admin.yaml values-lumen.yaml
cp ./templates/values-vespa-admin.yaml values-vespa.yaml
lumenImageTag: "4.11.2"
# Path to your company logo to be shown in the Atolio UI
searchUi:
publicLogoPath: "https://search.example.com/yourLogo.svg"
jwtSecretKey: "256-bit-secret-key-for-sign-jwts"
# See also scripts/config-oidc.sh helper script to obtain some of the values below
oidc:
provider: "add-your-provider-here"
endpoint: "add-your-endpoint-here"
clientId: "add-your-id-here"
clientSecret: "add-your-secret-here"
# If running behind a reverse proxy, this should be set to the URL the end user will
# use to access the product.
reverseProxyUrl: ""
For jwtSecretKey, any 256-bit (32-character) string can be used. It is used to sign the JWT tokens used by the web application and the atolioctl tool. It should be a well-guarded secret that is unique to the deployment.
If your users will be accessing the web interface via a reverse proxy (such as StrongDM), be sure to set the reverseProxyUrl field to reflect the URL they will actually enter into their browser to access Atolio, which will differ from the hostname defined in lumen_domain_name. Leave this field empty if not using a reverse proxy.
Deployment with create-infra.sh script
Once you have all variables configured to your environment’s requirements you can create the infrastructure and deploy the k8s cluster. From the ‘deploy/terraform/azure’ directory:
./scripts/create-infra.sh --name=deployment-name
This will create the infrastructure in your default Azure region. If you want to deploy in another region (e.g. eastus), an additional parameter can be provided:
./scripts/create-infra.sh --name=deployment-name --region=eastus
The deployment-name argument is used to define a deployment name for collecting resources into an Azure Resource Group containing the Kubernetes cluster, networking, storage, etc. We recommend using a globally unique deployment name. Typically this is named after the customer for which the Atolio app is deployed or a particular deployment flavour (e.g. acmecorp or engtest).
The script automates the following steps (parameterized based on the provided deployment name):
- Create an Azure Blob Storage to store Terraform state
- Create a terraform.tfvars file for Terraform
- Run terraform init
- Run terraform apply (using input variables in the generated terraform.tfvars)
With the infrastructure created, retrieve an updated context using (this is also output by Terraform as update_kubeconfig_command):
az aks get-credentials --name=lumen-{deployment-name} --resource-group lumen-{deployment-name}
At this point you should be able to interact with the kubernetes cluster, e.g.
kubectl get po -n atolio-svc
Note: Atolio-specific services run in the following namespaces:
- atolio-svc (Services)
- atolio-db (Database)
When you have validated that the infrastructure is available, the next step is to configure sources.
3 - Atolio Managed vs. Customer Managed Deployment Models
Atolio Managed vs. Customer Managed Deployment
Atolio offers two deployment models to best suit your organization’s needs and security requirements. Regardless of which deployment model you choose, your data always remains within your cloud environment and under your complete control. This ensures maximum security and compliance while still allowing you to benefit from Atolio’s expertise and support.
Atolio Managed Deployment
In this model, an Atolio support engineer handles the initial infrastructure deployment on your behalf, working exclusively within your cloud environment. This option is recommended if you want to:
- Accelerate the initial deployment process
- Leverage Atolio’s expertise with the infrastructure stack
- Focus your team’s efforts on configuration and usage rather than deployment
To enable Atolio managed deployment:
- Have a cloud engineer with sufficient privileges run our support role script for your Cloud Provider
- The script will create the IAM resource with the necessary permissions and output details
- Provide these details to your Atolio deployment support team. They will use this delegated role to:
- Deploy the initial infrastructure
- Configure core services
- Validate the deployment
- Hand off the deployment to your team
After the initial deployment is complete and validated, you can either:
- Remove the Atolio support role to revoke access
- Maintain the role for future support needs
- Transfer infrastructure management responsibilities to your internal team
Customer Managed Deployment
In this model, your organization’s engineers handle the deployment using the documentation we provide and with the guidance of an Atolio deployment engineer via screensharing. This option is recommended if you want to:
- Maintain complete control over the deployment process
- Integrate the deployment into your existing infrastructure-as-code practices
- Have your team gain in-depth knowledge of the infrastructure stack
To proceed with a customer managed deployment:
- Follow the prerequisites and setup instructions in the section specific to your cloud provider
- Use the provided Terraform configurations and scripts to deploy the infrastructure
- Work with Atolio support for guidance and troubleshooting as needed via screensharing
Regardless of the deployment model chosen, Atolio provides ongoing support for the application itself and can assist with infrastructure-related questions and issues. In both models, your data and infrastructure remain securely within your cloud environment, with all processing and storage occurring within your controlled perimeter.
4 - Operations Best Practices
Troubleshooting
Current configuration and service status can be monitored in the admin: https://search.example.com/admin.
With the appropriate Kubernetes context set, port forwarding to particular pods is a common way to query select APIs. For example, you may wish to query the Vespa document cluster directly. You can do this by port forwarding to the container node (with a valid AWS profile set in context):
kubectl port-forward -n atolio-db pod/vespa-container-0 8080
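With the port-forward active, Vespa's standard status API can then be queried locally, for example the health endpoint:

```shell
# Vespa exposes /state/v1/health on the container node's port 8080.
curl -s http://localhost:8080/state/v1/health
```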
Additionally, the Feeder service provides gRPC APIs which are used by various services and tools. As the pod name is not sticky, it is recommended to port forward the service:
kubectl port-forward -n atolio-svc service/feeder 8889
To observe the possible APIs, use grpcurl to describe and explore:
grpcurl -plaintext 127.0.0.1:8889 describe
Note there are two namespaces used in an Atolio deployment: atolio-svc (for all services) and atolio-db for Vespa (database and search).
Storing Deployment Artifacts
This completes the initial deployment of the Atolio stack. Please make sure to store the following artifacts created by the deployment process in a safe place for future use:
- Deployment specific Terraform settings (terraform.tfvars and values.yaml)
- Initial configuration (config.hcl), which is needed to redeploy from scratch (this generates terraform.tfvars)
- Google credential files (Client OAuth and Directory API keys)
These will be needed to make future changes and provide access to the Atolio stack for maintenance.
Additionally, there is a hidden .terraform directory with Terraform internal state that is needed to re-run Terraform without reconfiguration.
Deploying Updates
The Atolio microservices of the Atolio stack (i.e. Marvin, Search UI, Source Fleet, and Feeder) will be updated by Atolio. This is done by pushing updated Docker images to the Docker repositories (ECR) hosted by Atolio.
Atolio, under normal circumstances, will not replace pushed images. We follow a typical major/minor/patch versioning model and any changes, including hot fixes, will be pushed under their relevant version.
This means that to update services, simply amend lumenImageTag in both the values-lumen.yaml and values-vespa.yaml files with the desired version. If using the image tag for a lumen-infra release, then you do not need to update this value.
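Applying the new tag is then a matter of upgrading the Helm releases with the updated values files. The release and chart names below are assumptions; substitute the ones used in your deployment:

```shell
# Re-apply the charts with the updated values files (names are placeholders).
helm upgrade lumen ./charts/lumen -n atolio-svc -f values-lumen.yaml
helm upgrade vespa ./charts/vespa -n atolio-db -f values-vespa.yaml
```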