This guide explains how to self-host Stellarbridge on a Kubernetes cluster using the official Helm chart. You will install the application in your own cluster and connect it to your PostgreSQL, Redis, and S3-compatible storage (or use in-cluster/minimal options where supported).
Prerequisites
- Kubernetes cluster: 1.26 or later (any conformant cluster: on-premises, AWS EKS, Google GKE, DigitalOcean DOKS, etc.)
- kubectl: Configured for your cluster
- Helm: 3.12 or later
- Image access: Ability to pull the Stellarbridge app image (e.g. from a private registry or the image credentials provided to you)
- External dependencies (recommended for production): PostgreSQL, Redis, and S3-compatible object storage; or use in-cluster Postgres/Redis/MinIO if your distribution supports it
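Before proceeding, you can sanity-check the client tooling against the minimums above (output formats vary slightly by version):

```bash
# Verify client tooling and cluster access
kubectl version --client   # expect a v1.26+ client
helm version --short       # expect v3.12+
kubectl get nodes          # confirms the kubeconfig points at the intended cluster
```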
Architecture Overview
The Helm chart deploys:
- Stellarbridge app: Deployment and Service (port 8080)
- ConfigMap / Secrets: Application configuration and credentials
- Optional: Ingress, NetworkPolicies, PodDisruptionBudget, and optional sidecars or portal (if included in your chart)
The application expects:
- PostgreSQL: Connection string or host/port/user/password/database
- Redis: Connection string for caching and sessions
- S3-compatible storage: AWS or MinIO-style endpoint and credentials
Step 1: Obtain the Helm Chart
Use the Stellarbridge Helm chart provided for self-hosting. It may be delivered as a tarball or a Git repository. Typical layout:
```
stllr-app/
  Chart.yaml
  values.yaml
  values-dev.yaml
  templates/
  charts/        # optional dependencies
```

If the chart has dependencies (e.g. subcharts), update them:

```bash
cd stllr-app
helm dependency update
```

Step 2: Configure Values
Create a custom values file (e.g. my-values.yaml) so you do not edit the default values.yaml directly. Override at least the following.
Image and pull secrets
Use the image and tag provided to you. If the image is in a private registry, create a Kubernetes secret and reference it in the chart:
```yaml
# my-values.yaml
image:
  repository: ghcr.io/epyklab/stellarbridge/app # or your registry
  tag: v2.2.0
  pullPolicy: IfNotPresent
imageCredentials:
  name: stllr-registry-secret
  # content: base64-encoded Docker config JSON (set via --set or secrets manager)
```

Create the pull secret (example for Docker config):
```bash
kubectl create namespace stllr
kubectl create secret docker-registry stllr-registry-secret \
  --namespace stllr \
  --docker-server=ghcr.io \
  --docker-username=YOUR_USER \
  --docker-password=YOUR_TOKEN
```

If the chart expects `imageCredentials.content` (a base64-encoded Docker config), set it at install time and do not commit it:
```bash
helm upgrade --install stllr ./stllr-app \
  -n stllr --create-namespace \
  -f my-values.yaml \
  --set imageCredentials.content="$(cat $HOME/.docker/config.json | jq -c '{ auths: .auths }' | base64 -w 0)"
```

Application configuration
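If you prefer not to reuse your local `~/.docker/config.json`, the same base64 payload can be built from scratch. A minimal local sketch (`YOUR_USER`/`YOUR_TOKEN` are placeholders; assumes GNU `base64` for `-w 0`):

```bash
# Build the base64-encoded Docker config JSON that imageCredentials.content expects
AUTH=$(printf '%s:%s' "YOUR_USER" "YOUR_TOKEN" | base64 -w 0)
CONFIG=$(printf '{"auths":{"ghcr.io":{"auth":"%s"}}}' "$AUTH")
CONTENT=$(printf '%s' "$CONFIG" | base64 -w 0)
echo "$CONTENT"

# Round trip: decoding CONTENT should return the original JSON
printf '%s' "$CONTENT" | base64 -d
```

Pass `$CONTENT` to `--set imageCredentials.content=...` instead of the `cat`/`jq` pipeline.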
Chart values typically map to environment variables for the app. Configure database, Redis, S3, and app identity.
Database (external):
```yaml
# my-values.yaml
app:
  stllrDbHost: your-postgres-host
  stllrDbPort: 5432
  stllrDbDatabase: stellarbridge
  stllrDbUsername: stellarbridge
  stllrDbSchema: public
  stllrDbSslMode: "require"
  # Or provide a connection string via a secret (see chart templates)
```

Sensitive values (passwords, connection strings) are usually provided via Secrets. The chart may expect a secret key such as `STLLR_POSTGRES_CONN_STRING` or separate DB fields; check the chart's values.yaml and templates/ for the exact key names.
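To rule out networking or credential issues early, you can test database connectivity from inside the cluster with a throwaway pod (a sketch; the image tag and connection details are placeholders, and you will be prompted for the password):

```bash
# One-off pod: confirm PostgreSQL is reachable from the cluster network
kubectl run psql-check --rm -it --restart=Never -n stllr \
  --image=postgres:16 -- \
  psql "postgresql://stellarbridge@your-postgres-host:5432/stellarbridge?sslmode=require" \
  -c 'SELECT 1;'
```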
Redis:
```yaml
redis:
  enabled: true
  connString: "rediss://default:YOUR_PASSWORD@your-redis-host:25061"
  healthCheck:
    enabled: true
```

Or set the Redis URL in the same secret as the app env.
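A quick in-cluster check of the Redis endpoint (a sketch with placeholder credentials; Redis replies `PONG` to a successful `PING`):

```bash
# One-off pod: confirm Redis is reachable and the credentials work
kubectl run redis-check --rm -it --restart=Never -n stllr \
  --image=redis:7 -- \
  redis-cli -u "rediss://default:YOUR_PASSWORD@your-redis-host:25061" ping
```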
S3-compatible storage:
Configure endpoint and region; credentials should come from a secret. Example pattern:
```yaml
app:
  stllrAwsRegion: us-east-1
  stllrAwsEndpointUrlS3: https://your-s3-or-minio-endpoint
  # stllrAwsEndpointUrlIam if required by your provider
```

Ensure the app container receives `STLLR_AWS_ACCESS_KEY_ID`, `STLLR_AWS_SECRET_ACCESS_KEY`, and optionally `STLLR_S3_BUCKET` via the chart's secret/config mechanism.
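You can verify the bucket and credentials from inside the cluster before wiring them into the chart (a sketch; bucket, endpoint, and keys are placeholders):

```bash
# One-off pod: list the target bucket against your S3-compatible endpoint
kubectl run s3-check --rm -it --restart=Never -n stllr \
  --image=amazon/aws-cli \
  --env AWS_ACCESS_KEY_ID=YOUR_KEY \
  --env AWS_SECRET_ACCESS_KEY=YOUR_SECRET \
  -- s3 ls s3://your-bucket \
  --endpoint-url https://your-s3-or-minio-endpoint --region us-east-1
```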
App identity and domain:
```yaml
app:
  stllrAppEnv: prod
  stllrDomain: "https://files.yourcompany.com"
  stllrSupportEmail: "support@yourcompany.com"
```

Secrets: Store `STLLR_COOKIE_SECRET`, `STLLR_JWT_HMAC_SECRET`, the database password, the Redis password, and the S3 credentials in a Kubernetes Secret. The chart may use a generic secret or an external-secrets integration; see the chart's README and templates/ for the exact secret name and keys.
Example (generic secret):
```bash
kubectl create secret generic stllr-app-secrets -n stllr \
  --from-literal=STLLR_POSTGRES_CONN_STRING='postgresql://user:pass@host:5432/db?sslmode=require' \
  --from-literal=STLLR_REDIS_CONN_STRING='rediss://default:pass@host:25061' \
  --from-literal=STLLR_COOKIE_SECRET='...' \
  --from-literal=STLLR_JWT_HMAC_SECRET='...' \
  --from-literal=STLLR_AWS_ACCESS_KEY_ID='...' \
  --from-literal=STLLR_AWS_SECRET_ACCESS_KEY='...'
```

Then in my-values.yaml (or equivalent), ensure the deployment consumes this secret as `env` or `envFrom`.
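The `STLLR_COOKIE_SECRET` and `STLLR_JWT_HMAC_SECRET` placeholders should be strong random values. One common way to generate them (a sketch; any cryptographically secure generator works):

```bash
# 32 random bytes, base64-encoded, per secret
STLLR_COOKIE_SECRET=$(openssl rand -base64 32)
STLLR_JWT_HMAC_SECRET=$(openssl rand -base64 32)
echo "$STLLR_COOKIE_SECRET"
```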
Step 3: Ingress and TLS
To expose the app over HTTPS, enable Ingress and configure your ingress class and TLS. We recommend the Caddy ingress controller for automatic TLS; if your cluster uses the NGINX Ingress controller instead, an equivalent pattern is shown after the Caddy example.
Example (Caddy Ingress controller):
```yaml
# my-values.yaml (if chart supports these keys)
ingress:
  enabled: true
  className: caddy
  host: files.yourcompany.com
  tls:
    enabled: true
    secretName: stllr-tls
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
```

Alternative (NGINX Ingress controller):
```yaml
ingress:
  enabled: true
  className: nginx
  host: files.yourcompany.com
  tls:
    enabled: true
    secretName: stllr-tls
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
```

Create the TLS secret with cert-manager (or let Caddy manage certificates), or upload your own certificate. Ensure `STLLR_DOMAIN` matches the public URL (e.g. https://files.yourcompany.com).
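Once DNS points at the ingress, you can confirm the certificate and response from outside the cluster (a sketch; the hostname is the example domain used above):

```bash
# Expect an HTTP status code and "TLS verify 0" once the certificate is issued;
# connection or verification errors indicate DNS or TLS problems
curl -sS -o /dev/null -w 'HTTP %{http_code}, TLS verify %{ssl_verify_result}\n' \
  https://files.yourcompany.com/
```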
Step 4: Install or Upgrade
Install the release in a dedicated namespace:
```bash
helm upgrade --install stllr ./stllr-app \
  -n stllr \
  --create-namespace \
  -f my-values.yaml \
  --set imageCredentials.content="$(cat $HOME/.docker/config.json | jq -c '{ auths: .auths }' | base64 -w 0)" \
  --wait \
  --timeout 10m
```

For upgrades:
```bash
helm upgrade stllr ./stllr-app -n stllr -f my-values.yaml --wait
```

Step 5: Verify Deployment
Pods: All app pods should be Running and Ready.

```bash
kubectl get pods -n stllr
kubectl get svc -n stllr
```

Logs: Check application logs for startup and connection errors.

```bash
kubectl logs -n stllr -l app=stllr --tail=100
```

Ingress: Resolve the ingress host and open it in a browser; confirm TLS and that the app loads.

Health: The chart usually configures HTTP liveness/readiness probes on port 8080 (e.g. path `/`). Use `kubectl describe pod` or your monitoring to confirm the probes are passing.
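To test the app without going through the ingress, you can port-forward to the Service (a sketch; `svc/stllr` is an assumed Service name, confirm it with `kubectl get svc -n stllr`):

```bash
# Forward local port 8080 to the Service and probe the app directly
kubectl port-forward -n stllr svc/stllr 8080:8080 &
sleep 2
curl -i http://localhost:8080/
kill %1   # stop the port-forward
```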
Scaling and High Availability
- Replicas: Increase the replica count in values (e.g. `scaling.count: 3`) for multiple app pods. Ensure your PostgreSQL and Redis support the resulting number of concurrent connections.
- PodDisruptionBudget: The chart may define a PDB; if not, add one so that voluntary disruptions do not take down all replicas at once.
- Resource limits: Set `resources.requests` and `resources.limits` in the chart values to avoid overcommit and ensure scheduling.
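If the chart does not manage autoscaling itself, a HorizontalPodAutoscaler is one option (a sketch; `deployment/stllr` is an assumed workload name, and an HPA requires metrics-server plus `resources.requests` to be set):

```bash
# Scale between 3 and 6 replicas, targeting 70% average CPU utilization
kubectl autoscale deployment stllr -n stllr --min=3 --max=6 --cpu-percent=70
kubectl get hpa -n stllr
```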
Optional: Monitoring and Observability
If the chart supports OpenTelemetry or Prometheus, you can enable tracing and metrics by setting the appropriate endpoints and labels in values. This is optional for self-hosting.
Troubleshooting
- ImagePullBackOff: Verify `imageCredentials` and that the secret exists in the same namespace as the release. Check registry authentication and network policies.
- CrashLoopBackOff: Inspect `kubectl logs` and `kubectl describe pod`. Common causes: a wrong database or Redis connection string, missing or incorrect S3 credentials, or an invalid `STLLR_COOKIE_SECRET`/`STLLR_JWT_HMAC_SECRET`.
- Not ready / 503: Confirm the readiness probe path and port; ensure the database and Redis are reachable from the cluster and that migrations have run (usually on first app startup).
- Network policy blocking traffic: If the chart enables network policies, allow egress to your PostgreSQL, Redis, and S3 endpoints and to DNS (UDP/TCP 53).
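To check whether a network policy is blocking egress, run DNS and TCP probes from a pod in the release namespace (a sketch; hosts and ports are the placeholder values used earlier in this guide):

```bash
# One-off pod: verify DNS resolution and TCP reachability of the dependencies
kubectl run net-check --rm -it --restart=Never -n stllr --image=busybox:1.36 -- sh -c '
  nslookup your-postgres-host &&
  nc -zv -w 3 your-postgres-host 5432 &&
  nc -zv -w 3 your-redis-host 25061
'
```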
For more on security and operations, see Security and Managing Your Organization.