OpenAI

Overview

Use OpenAI as the LLM provider for Atolio.

Prerequisites

  • A healthy Atolio deployment
  • kubectl access to the cluster
  • atolioctl downloaded from the latest release
  • An OpenAI API key
  • jq installed (used in Step 3 to extract the JWT secret); a quick tool check follows this list
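
A quick way to confirm the command-line tools are on your PATH (adjust the atolioctl path if you run it from the download directory):

# Report any missing tools before starting
for tool in kubectl atolioctl jq; do
  command -v "$tool" >/dev/null || echo "missing: $tool"
done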

Step 1: Create an OpenAI API key

If you do not already have one, create an API key in the OpenAI platform dashboard and keep it on hand; you will store it in Atolio's KV store in Step 4.

Step 2: Port‑forward Atolio Feeder (for KV operations)

kubectl -n atolio-db port-forward service/feeder 8889

Leave this running in a separate terminal while you complete the remaining configuration steps.
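
To confirm the forward is active before moving on, you can check that the local port is listening; for example, with netcat installed:

# Check that local port 8889 is accepting connections
nc -z localhost 8889 && echo "feeder port-forward is up"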

Step 3: Generate a JWT token

# Get the JWT Secret. Needed for JWT generation
export jwtSecret=$(kubectl -n atolio-svc get secret lumen-secrets -o json | jq -r .data.jwtSecretKey | base64 -d)

# Change this to reflect your Atolio deployment's domain name
export domainName="https://search.example.com"

export JWT_TOKEN=$(atolioctl connector create-jwt --raw \
  --jwt-audience-sdk=${domainName} \
  --jwt-issuer-sdk=${domainName} \
  --jwt-secret-sdk=${jwtSecret} \
  "atolio:*:*:*")

Confirm that JWT_TOKEN is now set to a JWT value.
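
A well-formed JWT has three dot-separated segments, so a quick sanity check is:

# Count the dot-separated segments; a JWT should have exactly three
echo "${JWT_TOKEN}" | awk -F. '{ if (NF == 3) print "looks like a JWT"; else print "unexpected format" }'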

Step 4: Configure Atolio to use OpenAI

atolioctl --feeder-address :8889 --jwt-token-sdk ${JWT_TOKEN} kv set /lumen/system/ask_model_slug openai-gpt-4o
atolioctl --feeder-address :8889 --jwt-token-sdk ${JWT_TOKEN} kv set /lumen/system/ask_api_key_openai {OPENAI_API_KEY}

Replace {OPENAI_API_KEY} with the API key you created in Step 1.
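
To keep the key out of your shell history, one option is to read it into an environment variable first and pass that to the same kv set command:

# Prompt for the key without echoing it, then store it with the same kv set call
read -s -p "OpenAI API key: " OPENAI_API_KEY && echo
atolioctl --feeder-address :8889 --jwt-token-sdk ${JWT_TOKEN} \
  kv set /lumen/system/ask_api_key_openai "${OPENAI_API_KEY}"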

Step 5: Restart the Marvin service

kubectl -n atolio-svc delete pod -l app=marvin

Kubernetes will recreate the pod with the new configuration.
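
To confirm the replacement pod starts cleanly, watch it come back up using the same label:

# Watch the Marvin pod until it reaches Running (Ctrl+C to stop)
kubectl -n atolio-svc get pods -l app=marvin -w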

Verification

  • In your Atolio UI, open the homepage and run a test question.
  • If you see errors, re-check that both KV entries from Step 4 are set correctly, including the model slug (openai-gpt-4o); the log check below can also help.
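
If the UI does not surface a useful error, the Marvin logs are a good place to look:

# Tail recent logs from the Marvin pod(s)
kubectl -n atolio-svc logs -l app=marvin --tail=100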