Configuring AI Services
Overview
Atolio lets users ask natural-language questions against your organization’s knowledge using retrieval‑augmented generation (RAG). Atolio supports multiple LLM providers, starting with OpenAI and Azure OpenAI.
Supported providers
- OpenAI
- Azure OpenAI
Prerequisites
- A healthy Atolio deployment
- kubectl access to the cluster
- atolioctl downloaded from the latest release
- jq installed (for extracting values from Kubernetes secrets)
High‑level configuration workflow
1) Port‑forward the Feeder service
kubectl -n atolio-db port-forward service/feeder 8889
Leave this running in a separate terminal while you configure keys.
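Before running any KV commands, you can verify that the tunnel is actually accepting connections. This is a local-only sketch using bash's built-in /dev/tcp pseudo-device; nothing in it is Atolio-specific:

```shell
# Probe the forwarded port; the redirect succeeds only if something
# is listening on 127.0.0.1:8889.
if (exec 3<>/dev/tcp/127.0.0.1/8889) 2>/dev/null; then
  echo "feeder reachable on :8889"
else
  echo "port-forward not active"
fi
```

If the check reports the port-forward is not active, restart the kubectl port-forward before continuing.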
2) Generate a JWT token for KV operations
# Get the JWT Secret (used to sign the token)
export jwtSecret=$(kubectl -n atolio-svc get secret lumen-secrets -o json | jq -r .data.jwtSecretKey | base64 -d)
# Change this to reflect your Atolio deployment's domain name
export domainName="https://search.example.com"
export JWT_TOKEN=$(atolioctl connector create-jwt --raw \
--jwt-audience-sdk=${domainName} \
--jwt-issuer-sdk=${domainName} \
--jwt-secret-sdk=${jwtSecret} \
"atolio:*:*:*")
Confirm JWT_TOKEN is set to a JWT value.
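If you want to double-check the token before using it, the payload (the second dot-separated segment of any JWT) is base64url-encoded JSON and can be decoded locally. A small sketch, assuming bash plus the same coreutils already required above:

```shell
# Extract the payload segment of the JWT.
seg=$(printf '%s' "$JWT_TOKEN" | cut -d. -f2)
# base64url drops padding; restore it before decoding.
case $(( ${#seg} % 4 )) in
  2) seg="${seg}==" ;;
  3) seg="${seg}=" ;;
esac
# Map the base64url alphabet back to standard base64, then decode.
printf '%s\n' "$seg" | tr '_-' '/+' | base64 -d
```

The decoded JSON should show the audience and issuer you set via ${domainName}.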
3) Set the model slug and credentials in the KV store
Follow your provider guide to set the model slug and credentials in the KV store:
- OpenAI: set /lumen/system/ask_model_slug to openai-gpt-4o and store the OpenAI API key.
- Azure OpenAI: set /lumen/system/ask_model_slug to azure-openai and store the API key, endpoint, and deployment name.
4) Restart the Marvin service
kubectl -n atolio-svc delete pod -l app=marvin
Kubernetes will recreate the pod with the new configuration.
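To confirm the rollout finished, you can block until the recreated pod reports Ready (same namespace and label selector as the delete command above):

```shell
# Wait up to two minutes for the new Marvin pod to become Ready.
kubectl -n atolio-svc wait --for=condition=Ready pod -l app=marvin --timeout=120s
```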
Troubleshooting
- Startup errors: re-check the stored keys and endpoint URLs.
- Rate limits (Azure): increase Tokens per Minute (TPM) on the deployment.
- Connection issues: ensure port‑forwarding is active during KV operations.
1 - OpenAI
Overview
Use OpenAI as the LLM provider for Atolio.
Prerequisites
- A healthy Atolio deployment
- kubectl access to the cluster
- atolioctl downloaded from the latest release
- An OpenAI API key
- jq installed (used below)
Step 1: Create an OpenAI API key
Create an API key in the OpenAI dashboard (https://platform.openai.com/api-keys) and keep it handy; you will store it in the KV store below.
Step 2: Port‑forward Atolio Feeder (for KV operations)
kubectl -n atolio-db port-forward service/feeder 8889
Leave this running in a separate terminal while you configure keys.
Step 3: Generate a JWT token
# Get the JWT Secret. Needed for JWT generation
export jwtSecret=$(kubectl -n atolio-svc get secret lumen-secrets -o json | jq -r .data.jwtSecretKey | base64 -d)
# Change this to reflect your Atolio deployment's domain name
export domainName="https://search.example.com"
export JWT_TOKEN=$(atolioctl connector create-jwt --raw \
--jwt-audience-sdk=${domainName} \
--jwt-issuer-sdk=${domainName} \
--jwt-secret-sdk=${jwtSecret} \
"atolio:*:*:*")
Confirm JWT_TOKEN is set to a JWT value.
Step 4: Set provider and credentials in the KV store
atolioctl --feeder-address :8889 --jwt-token-sdk ${JWT_TOKEN} kv set /lumen/system/ask_model_slug openai-gpt-4o
atolioctl --feeder-address :8889 --jwt-token-sdk ${JWT_TOKEN} kv set /lumen/system/ask_api_key_openai {OPENAI_API_KEY}
Replace {OPENAI_API_KEY} with your key.
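Optionally, you can sanity-check the key against the OpenAI API itself before relying on it; GET /v1/models is a lightweight authenticated endpoint. A sketch assuming curl is installed and OPENAI_API_KEY holds the key you stored:

```shell
# 200 means the key is valid (it can list models); 401 means it is not.
status=$(curl -s --max-time 10 -o /dev/null -w '%{http_code}' \
  -H "Authorization: Bearer ${OPENAI_API_KEY}" \
  https://api.openai.com/v1/models || true)
echo "OpenAI API returned HTTP ${status}"
```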
Step 5: Restart the Marvin service
kubectl -n atolio-svc delete pod -l app=marvin
Kubernetes will recreate the pod with the new configuration.
Verification
- In your Atolio UI, open the homepage and run a test question.
- If you see errors, re-check the KV entries and that the correct model slug is set.
2 - Azure OpenAI
Overview
Use Azure OpenAI as the LLM provider for Atolio.
Prerequisites
- A healthy Atolio deployment
- kubectl access to the cluster
- atolioctl downloaded from the latest release
- Azure OpenAI API key, Endpoint URL, and Deployment Name
- jq installed (used below)
Prepare Azure OpenAI resources (if needed)
Create an Azure OpenAI service
- Navigate to the Azure AI Foundry Portal at https://ai.azure.com/ and create a new Azure AI Foundry resource.
- Choose subscription, resource group, region, and resource name.
- Create the resource.
Create a model deployment
- From the service homepage, open Deployments from the left-hand menu.
- Create a new base model deployment for a supported model (e.g., gpt-4o).
- Note the Deployment Name.
Copy connection details
- On the model deployment page, under Endpoint settings, copy the API key and the base Endpoint URL.
- The base Endpoint URL will be in the format https://<your-resource>.openai.azure.com/
- From the deployment info, copy the Deployment Name.
Step 1: Port‑forward Atolio Feeder (for KV operations)
kubectl -n atolio-db port-forward service/feeder 8889
Leave this running in a separate terminal while you configure keys.
Step 2: Generate a JWT token
# Get the JWT Secret. Needed for JWT generation
export jwtSecret=$(kubectl -n atolio-svc get secret lumen-secrets -o json | jq -r .data.jwtSecretKey | base64 -d)
# Change this to reflect your Atolio deployment's domain name
export domainName="https://search.example.com"
export JWT_TOKEN=$(atolioctl connector create-jwt --raw \
--jwt-audience-sdk=${domainName} \
--jwt-issuer-sdk=${domainName} \
--jwt-secret-sdk=${jwtSecret} \
"atolio:*:*:*")
Confirm JWT_TOKEN is set to a JWT value.
Step 3: Set provider and credentials in the KV store
atolioctl --feeder-address :8889 --jwt-token-sdk ${JWT_TOKEN} kv set /lumen/system/ask_model_slug azure-openai
atolioctl --feeder-address :8889 --jwt-token-sdk ${JWT_TOKEN} kv set /lumen/system/ask_api_key_azure {AZURE_OPENAI_API_KEY}
atolioctl --feeder-address :8889 --jwt-token-sdk ${JWT_TOKEN} kv set /lumen/system/ask_azure_base_uri {AZURE_OPENAI_ENDPOINT}
atolioctl --feeder-address :8889 --jwt-token-sdk ${JWT_TOKEN} kv set /lumen/system/ask_azure_deployment_name {AZURE_OPENAI_DEPLOYMENT_NAME}
Replace the placeholders with your values:
- {AZURE_OPENAI_ENDPOINT} is the base Endpoint URL, e.g. https://<your-resource>.openai.azure.com/
- {AZURE_OPENAI_DEPLOYMENT_NAME} is the exact deployment name you created
- {AZURE_OPENAI_API_KEY} is the API key you copied from the deployment page
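Copy-paste errors in the endpoint value are a common cause of startup failures, so a quick shape check before storing it can help. The endpoint variable below is purely illustrative, not a name Atolio reads:

```shell
# Example value; substitute the base Endpoint URL copied from the portal.
endpoint="https://my-resource.openai.azure.com/"
case "$endpoint" in
  (https://*.openai.azure.com/) echo "endpoint format looks right" ;;
  (*) echo "unexpected endpoint format: ${endpoint}" ;;
esac
```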
Step 4: Restart the Marvin service
kubectl -n atolio-svc delete pod -l app=marvin
Kubernetes will recreate the pod with the new configuration.
Verification
- In your Atolio UI, open the homepage and run a test question.
- If rate-limited, adjust your Azure OpenAI deployment’s TPM limits.