Helm platform chart

The platform chart provisions resources that surround the Bulwark Mail application: secrets injected via external-secrets.io, and observability plumbing (ServiceMonitor, Grafana dashboards, PrometheusRules). It is optional — install the base chart alone for a minimal deployment.

Values reference: helm/platform/values.yaml

Install

    helm install bulwark-mail-platform ./helm/platform \
      --set externalSecrets.enabled=true \
      --set observability.serviceMonitor.enabled=true

Everything is disabled by default so the chart is safe to install into clusters that don't have external-secrets, Prometheus, or Grafana operators present. Enabling a feature without its required CRDs causes helm install to fail early with a clear error, rather than silently creating Custom Resources nothing is reconciling.
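One way to implement that fail-early guard — a sketch only, assuming the chart checks API availability at the top of each conditional template — is Helm's built-in .Capabilities lookup combined with fail:

```yaml
# Hypothetical guard in a template such as templates/externalsecret.yaml.
# Aborts rendering with a readable message when the ESO CRDs are absent,
# instead of creating an ExternalSecret that nothing reconciles.
{{- if .Values.externalSecrets.enabled }}
{{- if not (.Capabilities.APIVersions.Has "external-secrets.io/v1beta1/ExternalSecret") }}
{{- fail "externalSecrets.enabled=true but the external-secrets.io CRDs are not installed" }}
{{- end }}
{{- end }}
```

Note that `helm template` run offline has no cluster to query, so .Capabilities checks like this only fire during a real install or upgrade.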

Matching the base chart

The ServiceMonitor below needs to select the base chart's pods. Set global.baseInstance to the base chart's release name so the selector matches both app.kubernetes.io/name and app.kubernetes.io/instance:

    global:
      baseInstance: bulwark-mail  # release name used for `helm install <name> ./helm/base`

Leave it empty to match by name only — acceptable when only one release of the app runs in the cluster.
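Under those assumptions, the rendered ServiceMonitor selector would look roughly like this (label values illustrative; the chart's actual template may differ):

```yaml
# Illustrative rendered selector when global.baseInstance is set
selector:
  matchLabels:
    app.kubernetes.io/name: bulwark-mail
    app.kubernetes.io/instance: bulwark-mail  # omitted when global.baseInstance is empty
```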

PodDisruptionBudget lives in the base chart, not here — it's tightly coupled to the Deployment's lifecycle.

ExternalSecrets

Populates a Kubernetes Secret from an upstream store (Vault, AWS Secrets Manager, 1Password, etc.) via an ExternalSecret resource. The base chart's envFrom then pulls credentials from this Secret.

Prerequisites

  • The External Secrets Operator (external-secrets.io) installed in the cluster, including its CRDs
  • A SecretStore or ClusterSecretStore configured with credentials for your backing store, referenced below via storeRef

Configuration

    externalSecrets:
      enabled: true
      refreshInterval: 1h
      storeRef:
        name: cluster-secret-store
        kind: ClusterSecretStore
      data:
        - secretKey: SESSION_SECRET
          remoteKey: bulwark-mail/session
          property: secret
        - secretKey: OAUTH_CLIENT_ID
          remoteKey: bulwark-mail/oauth
          property: client-id
        - secretKey: OAUTH_CLIENT_SECRET
          remoteKey: bulwark-mail/oauth
          property: client-secret
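For orientation, values like these map onto an ExternalSecret resource roughly as follows — a sketch using the standard ESO v1beta1 schema; the resource name and exact template are assumptions about the chart's internals:

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: bulwark-mail
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: cluster-secret-store
    kind: ClusterSecretStore
  target:
    name: bulwark-mail  # the Kubernetes Secret ESO creates and keeps in sync
  data:
    - secretKey: SESSION_SECRET        # key in the generated Secret
      remoteRef:
        key: bulwark-mail/session      # path in the upstream store
        property: secret               # field within that path
```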

To pull every key under a remote path instead of mapping them individually, use dataFrom:

    externalSecrets:
      enabled: true
      dataFrom:
        - extract:
            key: bulwark-mail/config

Value transformation

Use target.template to synthesize new secret values from the retrieved ones (e.g. compose a URL from parts). See the ESO templating guide:

    externalSecrets:
      target:
        template:
          type: Opaque
          data:
            JMAP_URL: "https://{{ `{{ .host }}` }}/.well-known/jmap"

The backtick escape ({{ `{{ ... }}` }}) is needed because Helm renders the values file first; the outer action emits the inner braces literally, so they reach ESO untouched.
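Concretely, after Helm templating the ExternalSecret's template stanza contains a plain Go-template action for ESO to evaluate against the fetched secret data:

```yaml
# What ESO receives after Helm renders the chart — the backtick escape
# has collapsed into a bare {{ .host }} action that ESO resolves itself
target:
  template:
    type: Opaque
    data:
      JMAP_URL: "https://{{ .host }}/.well-known/jmap"
```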

Wiring back into the base chart

The generated Secret is named after the application (bulwark-mail). Reference it from the base chart's envFrom:

    # helm/base values override
    envFrom:
      - secretRef:
          name: bulwark-mail

See the base chart injecting-secrets section for which variables to map.

Observability

ServiceMonitor

Scrapes the container's /metrics endpoint via the Prometheus Operator.

    observability:
      serviceMonitor:
        enabled: true
        additionalLabels:
          release: kube-prometheus-stack  # match your Prometheus instance selector
        port: http
        path: /metrics
        interval: 30s
        scrapeTimeout: 10s

Requires the Prometheus Operator CRDs (monitoring.coreos.com/v1).

Grafana dashboard

Deploys dashboards as a ConfigMap with the Grafana sidecar label. The sidecar picks them up and imports them into Grafana automatically.

    observability:
      grafanaDashboard:
        enabled: true
        folderLabel: "Applications"

Dashboards are loaded from helm/platform/dashboards/*.json. Each JSON file becomes a key in the ConfigMap and is run through Helm's tpl function, so you can reference chart helpers inside the JSON (use backtick-quoted arguments, since JSON-escaped quotes break inside Go template actions). Drop additional dashboards into the directory and they ship alongside the default.
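The rendered ConfigMap would look roughly like this — a sketch; the label and annotation names shown are the Grafana sidecar's defaults and may be configured differently in your stack, and the resource name is illustrative:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: bulwark-mail-platform-dashboards
  labels:
    grafana_dashboard: "1"        # sidecar's default discovery label
  annotations:
    grafana_folder: "Applications"  # maps to observability.grafanaDashboard.folderLabel
data:
  # one key per file in helm/platform/dashboards/, dashboard JSON truncated here
  bulwark-mail.json: |
    {"title": "Bulwark Mail"}
```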

Requires Grafana running with the dashboard sidecar enabled.

PrometheusRules

Ships alert rules for error rate and latency. The group definitions live under observability.prometheusRules.groups and are passed through tpl, so you can reference chart helpers and release context inside expressions:

    observability:
      prometheusRules:
        enabled: true
        groups:
          - name: '{{ include "platform.name" . }}.rules'
            rules:
              - alert: HighErrorRate
                expr: |
                  sum(rate(http_requests_total{service="{{ include "platform.name" . }}", status=~"5.."}[5m])) > 0.05
                for: 5m
                labels:
                  severity: warning
                annotations:
                  summary: "High error rate"

Override the whole list to replace the built-in alerts, or append to extend them. Requires the Prometheus Operator CRDs.
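A latency rule in the same shape might look like the following — a sketch only; the quantile, threshold, and metric name are assumptions to adjust for your workload:

```yaml
- alert: HighRequestLatency
  expr: |
    histogram_quantile(0.95,
      sum by (le) (rate(http_request_duration_seconds_bucket{service="bulwark-mail"}[5m]))
    ) > 0.5
  for: 10m
  labels:
    severity: warning
  annotations:
    summary: "p95 request latency above 500ms"
```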

Heads up: the built-in alerts assume http_requests_total / http_request_duration_seconds_bucket metric names. If Bulwark Mail's upstream doesn't expose those (or exposes OpenTelemetry-naming variants), the alerts won't fire — replace groups with alerts based on metrics the app actually emits, or switch to kube-state-metrics based checks (kube_pod_container_status_restarts_total, up == 0) for a generic baseline.

What this chart does NOT install

  • The parent Gateway resource — provided by a gateway chart (istio-gateway or similar)
  • TLS certificates — expected to be attached to the Gateway's HTTPS listener
  • The mail server itself — Bulwark Mail only provides the webmail UI; Stalwart runs separately