Security
This how-to shows administrators how to forward Stellarbridge logs to a SIEM using a small Python shipper. You will learn which logs are available, how to read them, and how to send them to a generic HTTPS endpoint, Splunk HEC, or Elasticsearch/OpenSearch.
Notes up front:
- Cloud-hosted customers: contact support if you need an export. We manage retention and can provide organization-scoped exports on request.
- Self-hosted customers: the steps below show how to stream container stdout access logs and other app log lines to your SIEM.
Available log sources:
- HTTP access logs (stdout)
  - Format: one JSON object per line with the fields timestamp (UTC), status, latency, method, path, ip, port, bytesSent, and bytesReceived. A short parsing sketch follows this list.
  - Example line:

        {"timestamp":"2025-09-09T13:30:00-0000","status":200,"latency":"1.2ms","method":"GET","path":"/api/ping","ip":"203.0.113.10","port":"443","bytesSent":123,"bytesReceived":0}

- Security/audit signals (structured app logs)
  - Login success/failure events are recorded with timestamp, actor (email), result, IP, and organization context.
  - Transfer receipts and error signals are also logged asynchronously.

Tip: see overview details in Audit logging, Security at Stellarbridge, and Security architecture.
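The access-log fields are easy to work with programmatically. Here is a minimal parsing sketch; the field names come from the format above, while the status >= 400 filter is only an example:

    import json

    def parse_access_line(raw: str) -> dict | None:
        """Parse one stdout access-log line; returns None for non-JSON lines."""
        try:
            return json.loads(raw)
        except json.JSONDecodeError:
            return None

    line = ('{"timestamp":"2025-09-09T13:30:00-0000","status":200,"latency":"1.2ms",'
            '"method":"GET","path":"/api/ping","ip":"203.0.113.10","port":"443",'
            '"bytesSent":123,"bytesReceived":0}')
    rec = parse_access_line(line)
    if rec and rec["status"] >= 400:  # example filter: keep only failed requests
        print(rec["timestamp"], rec["method"], rec["path"], rec["status"])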
Make sure that:
- You can read the application logs (stdout) from your deployment:
  - Kubernetes: kubectl logs -f deployment/stellarbridge -c app
  - Docker/Compose: docker logs -f stellarbridge
  - Systemd or another process manager: ensure logs are written to a file or journal you can read
- You have Python 3.10+ and pip
- Your SIEM provides an HTTPS ingestion endpoint and token (examples below); allow egress to it from wherever you run the shipper
- Time on your nodes is synchronized (UTC recommended) for accurate correlation
Option A: ship logs with the Python shipper
Use this option when you can pipe logs (stdin) or read from a local log file. The shipper batches lines and sends JSON to a generic HTTPS endpoint, Splunk HEC, or the Elasticsearch/OpenSearch bulk API.
- Install the dependency:

    pip install requests

- Save the shipper as shipper.py:
    import argparse
    import json
    import os
    import queue
    import sys
    import threading
    import time

    import requests

    BATCH_SIZE = int(os.getenv("BATCH_SIZE", "100"))
    FLUSH_INTERVAL = float(os.getenv("FLUSH_INTERVAL", "2.0"))
    SIEM_URL = os.getenv("SIEM_URL", "")
    SIEM_TOKEN = os.getenv("SIEM_TOKEN", "")
    SIEM_KIND = os.getenv("SIEM_KIND", "generic").lower()  # generic|splunk|elastic
    SIEM_INDEX = os.getenv("SIEM_INDEX", "stellarbridge-logs")  # Elastic/OpenSearch index
    VERIFY_TLS = os.getenv("VERIFY_TLS", "true").lower() != "false"
    SOURCE = os.getenv("SOURCE", "stellarbridge-access")
    APP = os.getenv("APP", "stellarbridge")
    ENV = os.getenv("ENV", "prod")

    qlines = queue.Queue(maxsize=10000)

    def reader_from_stdin():
        """Queue non-empty lines read from stdin."""
        for line in sys.stdin:
            if line.strip():
                qlines.put(line.rstrip())

    def reader_from_file(path):
        """Tail a file from its current end, like tail -f."""
        with open(path, "r") as f:
            f.seek(0, os.SEEK_END)  # start at the end, like tail -f
            while True:
                line = f.readline()
                if not line:
                    time.sleep(0.2)
                    continue
                qlines.put(line.rstrip())

    def fmt_generic(batch):
        # Send the batch as a JSON array of objects.
        return json.dumps(batch), {"Content-Type": "application/json"}

    def fmt_splunk(batch):
        # Splunk HEC: one JSON event per line in the request body.
        lines = []
        now = int(time.time())
        for rec in batch:
            lines.append(json.dumps({
                "time": now,
                "sourcetype": "_json",
                "source": SOURCE,
                "event": rec,
            }))
        body = "\n".join(lines)
        headers = {"Content-Type": "application/json",
                   "Authorization": f"Splunk {SIEM_TOKEN}"}
        return body, headers

    def fmt_elastic(batch):
        # Elastic/OpenSearch bulk API: an action/metadata line, then a source line.
        lines = []
        for rec in batch:
            lines.append(json.dumps({"index": {"_index": SIEM_INDEX}}))
            lines.append(json.dumps(rec))
        body = "\n".join(lines) + "\n"  # bulk bodies must end with a newline
        headers = {"Content-Type": "application/x-ndjson"}
        return body, headers

    def make_record(raw):
        """Parse a line as JSON, falling back to a plain message field."""
        try:
            rec = json.loads(raw)
        except Exception:
            rec = {"message": raw}
        rec.setdefault("source", SOURCE)
        rec.setdefault("app", APP)
        rec.setdefault("env", ENV)
        return rec

    def send_batch(batch):
        if not SIEM_URL:
            print("ERROR: SIEM_URL not set", file=sys.stderr)
            return
        records = [make_record(x) for x in batch]
        if SIEM_KIND == "splunk":
            body, headers = fmt_splunk(records)
        elif SIEM_KIND == "elastic":
            body, headers = fmt_elastic(records)
        else:
            body, headers = fmt_generic(records)
        # Splunk already sets its own Authorization header; add a Bearer
        # token for the other kinds only, so it is not clobbered.
        if SIEM_TOKEN and "Authorization" not in headers:
            headers["Authorization"] = f"Bearer {SIEM_TOKEN}"
        for attempt in range(5):
            try:
                resp = requests.post(SIEM_URL, data=body, headers=headers,
                                     timeout=10, verify=VERIFY_TLS)
                if 200 <= resp.status_code < 300:
                    return
                if 400 <= resp.status_code < 500 and resp.status_code != 429:
                    # Client errors will not succeed on retry; drop the batch.
                    print(f"ERROR: SIEM rejected batch ({resp.status_code})",
                          file=sys.stderr)
                    return
                time.sleep(min(2 ** attempt, 10))  # back off on server errors
            except Exception:
                time.sleep(min(2 ** attempt, 10))  # back off on network errors
        print("ERROR: dropping batch after 5 failed attempts", file=sys.stderr)

    def main():
        p = argparse.ArgumentParser(description="Ship Stellarbridge logs to a SIEM")
        src = p.add_mutually_exclusive_group(required=True)
        src.add_argument("--stdin", action="store_true", help="read from stdin")
        src.add_argument("--file", help="path to log file to tail")
        args = p.parse_args()
        if args.stdin:
            threading.Thread(target=reader_from_stdin, daemon=True).start()
        else:
            threading.Thread(target=reader_from_file, args=(args.file,),
                             daemon=True).start()
        batch = []
        last = time.time()
        while True:
            try:
                batch.append(qlines.get(timeout=0.5))
            except queue.Empty:
                pass
            now = time.time()
            if len(batch) >= BATCH_SIZE or (batch and now - last >= FLUSH_INTERVAL):
                send_batch(batch)
                batch = []
                last = now

    if __name__ == "__main__":
        main()
- Run it:
  - Pipe container logs from Kubernetes to the shipper (generic webhook example):

        kubectl logs -f deployment/stellarbridge -c app | \
          SIEM_URL=https://logs.example.com/ingest \
          SIEM_TOKEN=your-token \
          python3 shipper.py --stdin

  - Tail a local file and send to Splunk HEC:

        export SIEM_KIND=splunk
        export SIEM_URL=https://splunk.example.com:8088/services/collector/event
        export SIEM_TOKEN=hec-xxxx
        python3 shipper.py --file /var/log/stellarbridge/access.log

  - Send logs to the Elasticsearch/OpenSearch bulk API:

        export SIEM_KIND=elastic
        export SIEM_URL=https://elastic.example.com:9200/_bulk
        export SIEM_INDEX=stellarbridge-access-logs
        python3 shipper.py --stdin < /var/log/containers/stellarbridge.log
Notes:
- For Splunk, ensure the HEC token is enabled and that sourcetype _json accepts JSON payloads.
- For Elasticsearch/OpenSearch, create the index up front or rely on auto-create, and consider index lifecycle policies for retention. A sketch for creating the index up front follows.
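If you create the index up front, a minimal sketch is below. SIEM_URL_BASE is an illustrative variable (the shipper itself reads only SIEM_URL), and the Bearer header is an assumption; Elasticsearch/OpenSearch deployments commonly use Basic or ApiKey auth instead:

    import os
    import requests

    base = os.getenv("SIEM_URL_BASE", "https://elastic.example.com:9200")
    index = os.getenv("SIEM_INDEX", "stellarbridge-access-logs")

    # Create the index with minimal settings; adjust mappings and lifecycle
    # (ILM/ISM) policies to match your retention requirements.
    resp = requests.put(
        f"{base}/{index}",
        json={"settings": {"number_of_shards": 1, "number_of_replicas": 1}},
        headers={"Authorization": f"Bearer {os.getenv('SIEM_TOKEN', '')}"},
        timeout=10,
    )
    # 200 means created; a 400 resource_already_exists_exception means it exists.
    print(resp.status_code, resp.text[:200])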
Option B: read logs via the Kubernetes API
If you prefer to use the Kubernetes API directly, install the client and stream logs from pods that match a label (for example, app=stellarbridge).
- Install:

    pip install kubernetes requests

- Example snippet (prints pod names; you can feed each log line into the shipper functions above):
    from kubernetes import client, config, watch

    config.load_kube_config()  # or config.load_incluster_config() if running in a Pod
    v1 = client.CoreV1Api()
    w = watch.Watch()
    # Watch pods carrying the app=stellarbridge label; this minimal
    # illustration only prints each pod's name as it is seen.
    for ev in w.stream(v1.list_namespaced_pod, namespace="default",
                       label_selector="app=stellarbridge"):
        pod = ev["object"]
        name = pod.metadata.name
        print(f"Pod seen: {name}")
        # From here you can follow each pod's logs; see the sketch below.
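Following a pod's logs is the natural next step. A minimal sketch, assuming the container is named app (as in the kubectl example earlier) and reusing the qlines queue from shipper.py; follow_pod_logs is a hypothetical helper, not part of the client library:

    # Hypothetical helper: stream one pod's logs and enqueue lines for the shipper.
    def follow_pod_logs(v1, name, namespace="default"):
        resp = v1.read_namespaced_pod_log(
            name=name, namespace=namespace, container="app",  # container name assumed
            follow=True, _preload_content=False)
        for chunk in resp.stream():
            for line in chunk.decode("utf-8", errors="replace").splitlines():
                if line.strip():
                    qlines.put(line)  # qlines as defined in shipper.py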
For production, prefer a DaemonSet (e.g., Fluent Bit) for cluster-wide shipping. The Python approach is best for targeted exports or bespoke pipelines.
Option C: pull audit events via the API (cloud-hosted)
Cloud-hosted organizations can pull audit events directly via the API instead of tailing logs. This is useful for periodic exports to a SIEM.
- Endpoint: GET /api/v1/dashboard/organization/get-events-in-org
- Auth: include the JWT cookie named “stellarbridge” from a logged-in session. For programmatic use, authenticate once (POST /api/v1/auth/login-handler) to obtain the cookie, then reuse it until expiry.
- Response: JSON object with a data array of event objects (LogWrapper shape: time, level, msg, and message{ timestamp, actor, action, target, result{ title, description, code }, remote{ ip, port }, sender{ file, email }, extra, org }).
- Rate limits: default ~30 requests per 15 seconds per client.
Example (Python requests):
    import os
    import requests

    BASE = os.getenv("BASE_URL", "https://your-tenant.stellarbridge.app")
    COOKIE = os.getenv("STELLARBRIDGE_COOKIE")  # the value of the 'stellarbridge' cookie

    s = requests.Session()
    s.cookies.set("stellarbridge", COOKIE, domain=os.getenv("COOKIE_DOMAIN", None))
    resp = s.get(f"{BASE}/api/v1/dashboard/organization/get-events-in-org", timeout=15)
    resp.raise_for_status()
    data = resp.json().get("data", [])
    print(f"fetched {len(data)} events")
    # Forward each event to your SIEM using the shipper from Option A;
    # a flattening sketch follows below.
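Because each event nests its details under message (the LogWrapper shape described above), you may want to flatten records before indexing. A minimal sketch; the flattened field names are illustrative, not a required schema:

    def flatten_event(ev: dict) -> dict:
        """Flatten a LogWrapper event (time/level/msg plus nested message)."""
        msg = ev.get("message") or {}
        result = msg.get("result") or {}
        remote = msg.get("remote") or {}
        return {
            "time": ev.get("time"),
            "level": ev.get("level"),
            "msg": ev.get("msg"),
            "timestamp": msg.get("timestamp"),
            "actor": msg.get("actor"),
            "action": msg.get("action"),
            "target": msg.get("target"),
            "result_code": result.get("code"),
            "result_title": result.get("title"),
            "remote_ip": remote.get("ip"),
            "remote_port": remote.get("port"),
            "org": msg.get("org"),
        }

    flat = [flatten_event(e) for e in data]  # 'data' from the example above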
Example (curl):
    curl -s -H 'Cookie: stellarbridge=YOUR_JWT_COOKIE' \
      "$BASE_URL/api/v1/dashboard/organization/get-events-in-org" | jq
Note:
- The cookie is HttpOnly and set by the server on login; prefer running this job where a service account can authenticate and store the cookie securely (a login sketch follows below).
- If you need a token string for other tooling, you can call /api/v1/auth/token while logged in to retrieve it, but the events endpoint expects the cookie.
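To obtain the cookie programmatically, authenticate once against the login endpoint named above and let requests capture the Set-Cookie on the session. The JSON field names in the payload below ("email", "password") are assumptions; verify the body your login-handler expects:

    import os
    import requests

    BASE = os.getenv("BASE_URL", "https://your-tenant.stellarbridge.app")

    s = requests.Session()
    # NOTE: the payload field names are assumptions; check your deployment.
    resp = s.post(
        f"{BASE}/api/v1/auth/login-handler",
        json={"email": os.getenv("SB_USER"), "password": os.getenv("SB_PASS")},
        timeout=15,
    )
    resp.raise_for_status()
    # On success the server sets the HttpOnly 'stellarbridge' cookie on the session.
    assert "stellarbridge" in s.cookies, "login did not set the expected cookie"
    # Reuse the session for event pulls, staying under the ~30 requests
    # per 15 seconds rate limit noted above.
    events = s.get(f"{BASE}/api/v1/dashboard/organization/get-events-in-org",
                   timeout=15)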
Security notes:
- Use HTTPS endpoints and validate TLS (VERIFY_TLS=true) when sending to your SIEM.
- Treat tokens as secrets; inject them via environment variables or a secrets manager.
- Logs do not include passwords or MFA secrets; avoid adding sensitive payloads to custom logs you forward.
- Allow outbound egress to the SIEM only from trusted networks.
Troubleshooting:
- 4xx from the SIEM: check the token, endpoint path, and required payload shape (Splunk HEC vs. Elastic bulk). To isolate the problem, try the local test listener sketched below.
- Time skew: ensure nodes are synchronized (NTP), and remember that Stellarbridge logs use UTC timestamps.
- Empty stream: verify that your log source command produces lines and that your account has permission to read logs.
- Throughput: increase BATCH_SIZE or reduce FLUSH_INTERVAL; consider running multiple shippers.
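To rule out network and SIEM-side issues, point the shipper at a throwaway local listener and confirm batches arrive intact. A minimal sketch using only the standard library; the /ingest path and port 8080 are arbitrary:

    # Run this, then start the shipper with SIEM_URL=http://127.0.0.1:8080/ingest
    # and SIEM_KIND=generic to see exactly what is being sent.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class IngestHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            length = int(self.headers.get("Content-Length", 0))
            body = self.rfile.read(length)
            print(f"received {length} bytes on {self.path}:")
            print(body.decode("utf-8", errors="replace")[:500])
            self.send_response(200)
            self.end_headers()

    HTTPServer(("127.0.0.1", 8080), IngestHandler).serve_forever()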
Responsibilities at a glance:

| Admin action | Self-hosted | Cloud-hosted | Notes |
|---|---|---|---|
| Access to application logs (stdout) | Yes | Managed | Cloud customers request exports via support |
| Run and maintain the Python shipper | Yes | Not needed | Optional; use your existing log pipeline instead if preferred |
| Configure SIEM endpoint and token | Yes | Managed | Ensure HTTPS egress from the shipper host/pod |
| Decide retention and index policy | Yes | Managed | Align with your org’s retention requirements |
| Time sync (UTC) across nodes | Yes | Managed | Consistent timestamps for correlation |
Related:
- Audit overview
- Security summary
- Architecture