Enterprise Security FAQ
Security posture and trust boundaries for gpufirewall.io.
Does GPU Firewall require kubeconfig or run kubectl or helm against customer clusters?

No. GPU Firewall does not store kubeconfig and does not run kubectl or helm against customer clusters. Customers install the agent themselves (via GitOps, CI, or Helm), and the agent connects outbound over HTTPS only.
Does GPU Firewall need inbound network access to customer clusters?

No. There is no inbound connectivity from GPU Firewall to customer clusters; outbound HTTPS from the cluster to the control plane is sufficient.
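For illustration, a minimal sketch of this outbound-only model in Python. The ingest URL, payload shape, and bearer-token scheme below are hypothetical assumptions, not the real agent API:

```python
# Sketch of outbound-only connectivity; endpoint and payload are hypothetical.
import requests  # third-party HTTPS client; the agent opens no listening socket

INGEST_URL = "https://ingest.gpufirewall.io/v1/heartbeat"  # hypothetical

def send_heartbeat(cluster_id: str, token: str) -> None:
    # One outbound HTTPS POST; the control plane never dials back in.
    resp = requests.post(
        INGEST_URL,
        headers={"Authorization": f"Bearer {token}"},
        json={"cluster_id": cluster_id},
        timeout=10,
    )
    resp.raise_for_status()
```

The key property is that the agent only ever dials out: no listening port, ingress rule, or VPN is required on the customer side.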
What telemetry does the agent collect?

Telemetry includes cluster_id, namespace inventory and labels, node inventory (optional), GPU usage signals (GPU-seconds and GPU model), and health heartbeat timestamps. No application payloads or secrets are collected, and no access to customer data stores is required.
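A hypothetical payload shape matching the categories above; the field names are illustrative, not the real schema:

```python
# Hypothetical telemetry payload; only the field categories come from this FAQ.
telemetry = {
    "cluster_id": "c-7f3a",
    "namespaces": [{"name": "ml-train", "labels": {"gfw.ai/managed": "true"}}],
    "nodes": [{"name": "gpu-node-1"}],                 # optional node inventory
    "gpu_usage": [{"model": "A100", "gpu_seconds": 5400}],
    "heartbeat_ts": "2024-01-01T00:00:00Z",
}
# Note what is absent: no pod logs, no secrets, no application data.
```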
How is resource deletion confirmed?

Deletion is cloud-confirmed: control plane events from AWS, GCP, and Azure (with a reconciler fallback) set cloud_status=not_found. Telemetry staleness never implies deletion.
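A sketch of that rule, with assumed function and field names; only the decision logic comes from this FAQ:

```python
# Sketch of the deletion rule: only a cloud control plane event or the
# reconciler may confirm deletion; stale telemetry may not.
from typing import Optional

def cloud_status(event: Optional[str], reconciler_found: Optional[bool],
                 telemetry_stale: bool) -> str:
    if event == "resource_deleted" or reconciler_found is False:
        return "not_found"   # cloud-confirmed deletion
    if telemetry_stale:
        return "unknown"     # staleness never implies deletion
    return "active"
```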
Which namespaces can GPU Firewall enforce against?

Enforcement is limited to explicitly managed namespaces (for example, those labeled gfw.ai/managed=true) with explicit budgets. Unmanaged namespaces are always skipped.
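A sketch of the scoping check, assuming namespace labels are available as a dict; the helper names are illustrative:

```python
# Sketch of enforcement scoping; helper names are hypothetical.
def is_managed(namespace_labels: dict) -> bool:
    # Only namespaces explicitly opted in are eligible for enforcement.
    return namespace_labels.get("gfw.ai/managed") == "true"

def eligible_for_enforcement(namespace_labels: dict, has_budget: bool) -> bool:
    # Both an explicit opt-in label and an explicit budget are required;
    # unmanaged namespaces are always skipped.
    return is_managed(namespace_labels) and has_budget
```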
What happens when pricing is unknown?

GPU Firewall does not enforce destructive actions when pricing is unknown; it records usage and warns only.
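The guard can be sketched as follows, with hypothetical names; a known price is a precondition for any destructive action:

```python
# Sketch of the pricing guard; function and return labels are hypothetical.
from typing import Optional

def decide(price_per_gpu_hour: Optional[float], over_budget: bool) -> str:
    if price_per_gpu_hour is None:
        return "warn_only"   # never enforce on unknown pricing
    return "enforce" if over_budget else "record"
```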
Can we supply our own pricing?

Yes. Finance teams can define rate cards per provider, region, and GPU model, including via CSV import. Customer overrides take precedence over public list prices.
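A sketch of the lookup precedence, using assumed data shapes and illustrative prices:

```python
# Sketch of rate-card precedence: a customer override keyed by
# (provider, region, gpu_model) wins over the public list price.
from typing import Optional

PUBLIC_LIST = {("aws", "us-east-1", "A100"): 4.10}   # illustrative price
OVERRIDES = {("aws", "us-east-1", "A100"): 3.25}     # e.g. from CSV import

def rate(provider: str, region: str, gpu_model: str) -> Optional[float]:
    key = (provider, region, gpu_model)
    return OVERRIDES.get(key, PUBLIC_LIST.get(key))  # override checked first
```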
How is API access authenticated?

API access uses scoped bearer tokens (admin_ui, telemetry, node_sync, cloud_events). Tokens are hashed at rest, revocable, and auditable; no session cookies are required.
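A sketch of scoped-token verification, assuming SHA-256 hashing at rest; the actual hashing scheme and storage are not specified in this FAQ:

```python
# Sketch of hashed-at-rest, scoped tokens; scheme is an assumption.
import hashlib

TOKEN_STORE = {  # hash -> scope; plaintext tokens are never stored
    hashlib.sha256(b"example-token").hexdigest(): "telemetry",
}

def authorize(presented_token: str, required_scope: str) -> bool:
    digest = hashlib.sha256(presented_token.encode()).hexdigest()
    return TOKEN_STORE.get(digest) == required_scope
```

A production system would also use a salted or keyed hash and constant-time comparison; the sketch only shows that plaintext tokens never need to be stored.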
Is there an audit trail?

Yes. Actions and configuration changes are logged with timestamps, actor identity (the token used), inputs, pricing source, and decision outputs.
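A hypothetical audit record mirroring those fields; the names and values are illustrative:

```python
# Hypothetical audit record; only the field categories come from this FAQ.
audit_entry = {
    "timestamp": "2024-01-01T12:00:00Z",
    "actor": "token:admin_ui:ab12",  # token identity, not a human session
    "action": "update_budget",
    "inputs": {"namespace": "ml-train", "budget_usd": 500},
    "pricing_source": "override_csv",
    "decision": "applied",
}
```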
How do we uninstall or troubleshoot the agent?

Agent uninstall is customer-controlled (helm uninstall). For Kubernetes installs, the node-sync component is required for pricing and cluster actions; the webhook is optional. The control plane surfaces remediation commands for onboarding failures and provides status reports for discovery and pricing.