# Flux Helm Usage Guide

## Helm and Flux Integration

Using kube-prometheus-stack as the example, this guide follows the Flux HelmRelease workflow.

### Install with Helm
```shell
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install kube-prometheus prometheus-community/kube-prometheus-stack \
  -n monitoring --create-namespace --version 82.15.0
```

```
NAME: kube-prometheus
LAST DEPLOYED: Fri Mar 27 12:54:18 2026
NAMESPACE: monitoring
STATUS: deployed
REVISION: 1
DESCRIPTION: Install complete
TEST SUITE: None
NOTES:
kube-prometheus-stack has been installed. Check its status by running:
  kubectl --namespace monitoring get pods -l "release=kube-prometheus"

Get Grafana 'admin' user password by running:
  kubectl --namespace monitoring get secrets kube-prometheus-grafana -o jsonpath="{.data.admin-password}" | base64 -d ; echo

Access Grafana local instance:
  export POD_NAME=$(kubectl --namespace monitoring get pod -l "app.kubernetes.io/name=grafana,app.kubernetes.io/instance=kube-prometheus" -oname)
  kubectl --namespace monitoring port-forward $POD_NAME 3000

Get your grafana admin user password by running:
  kubectl get secret --namespace monitoring -l app.kubernetes.io/component=admin-secret -o jsonpath="{.items[0].data.admin-password}" | base64 --decode ; echo

Visit https://github.com/prometheus-operator/kube-prometheus for instructions on how to create & configure Alertmanager and Prometheus instances using the Operator.
```

```shell
PASSWORD=$(kubectl --namespace monitoring get secret kube-prometheus-grafana -o jsonpath="{.data.admin-password}" | base64 -d)
echo "$PASSWORD"

POD_NAME=$(kubectl --namespace monitoring get pod -l "app.kubernetes.io/name=grafana,app.kubernetes.io/instance=kube-prometheus" -o name)
kubectl --namespace monitoring port-forward "$POD_NAME" 3000 &
FWD_PID=$!
sleep 2
curl -s -u "admin:${PASSWORD}" http://localhost:3000/api/datasources | python3 -m json.tool
kill "$FWD_PID"
```

```typescript
#!/usr/bin/bun
const NS = "monitoring";
const RELEASE = "kube-prometheus";
const PROXY_PORT = 8001;

async function main() {
  const proxy = kubectlProxy(PROXY_PORT);
  await Bun.sleep(1000);
  try {
    const password = await grafanaPassword();
    const podName = await grafanaPod();
    const fwd = portForward(podName, 3000);
    await Bun.sleep(2000);
    try {
      const res = await fetch("http://localhost:3000/api/datasources", {
        headers: { Authorization: `Basic ${btoa(`admin:${password}`)}` },
      });
      console.log(await res.json());
    } finally {
      fwd.kill();
    }
  } finally {
    proxy.kill();
  }
}

function kubectlProxy(port: number) {
  return Bun.spawn(["kubectl", "proxy", "--port", String(port)], {
    stdout: "ignore",
    stderr: "ignore",
  });
}

function k8s(path: string) {
  return fetch(`http://localhost:${PROXY_PORT}${path}`).then((r) => r.json());
}

async function grafanaPassword() {
  const secret = await k8s(`/api/v1/namespaces/${NS}/secrets/${RELEASE}-grafana`);
  return atob(secret.data["admin-password"]);
}

async function grafanaPod() {
  const label =
    "app.kubernetes.io/name=grafana,app.kubernetes.io/instance=" + RELEASE;
  const pods = await k8s(
    `/api/v1/namespaces/${NS}/pods?labelSelector=${encodeURIComponent(label)}`,
  );
  const name = pods.items[0]?.metadata?.name;
  if (!name) throw new Error("No grafana pod found");
  return name;
}

function portForward(pod: string, localPort: number) {
  return Bun.spawn([
    "kubectl",
    "--namespace",
    NS,
    "port-forward",
    `pod/${pod}`,
    String(localPort),
  ]);
}

await main();
```
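The `base64 -d` in those password pipelines exists because Kubernetes stores Secret `data` values base64-encoded. A self-contained sketch of just that decode step (the password below is made up, not the chart's real default):

```shell
#!/bin/sh
# kubectl's jsonpath returns the base64-encoded Secret value,
# so the final step is always a decode. Round-trip a fake password
# to show what that step does.
plain='hunter2-example'                       # made-up value
encoded=$(printf '%s' "$plain" | base64)      # what the Secret stores
decoded=$(printf '%s' "$encoded" | base64 -d) # what the pipeline recovers
echo "$decoded"
```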
### Experiment
```shell
kubectl get pods -n monitoring
kubectl get svc -n monitoring
kubectl port-forward -n monitoring svc/kube-prometheus-grafana 3000:80

# tweak and re-apply:
helm upgrade kube-prometheus prometheus-community/kube-prometheus-stack \
  -n monitoring -f values.yaml
```

Set a memory limit on Prometheus: it is a metrics database that stores time series, so memory is the resource most worth bounding.
```shell
helm upgrade kube-prometheus prometheus-community/kube-prometheus-stack \
  -n monitoring \
  --set prometheus.prometheusSpec.resources.requests.memory=512Mi \
  --set prometheus.prometheusSpec.resources.limits.memory=1Gi

kubectl get pod \
  -n monitoring \
  -l app.kubernetes.io/name=prometheus \
  -o jsonpath='{.items[0].spec.containers[0].resources}' |
  python3 -m json.tool
```

```typescript
#!/usr/bin/bun
const NS = "monitoring";
const RELEASE = "kube-prometheus";
const PROXY_PORT = 8001;

async function main() {
  const proxy = kubectlProxy(PROXY_PORT);
  await Bun.sleep(1000);
  try {
    helmUpgrade();
    waitForRollout();
    const resources = await prometheusPodResources();
    console.log(JSON.stringify(resources, null, 2));
  } finally {
    proxy.kill();
  }
}

function kubectlProxy(port: number) {
  return Bun.spawn(["kubectl", "proxy", "--port", String(port)], {
    stdout: "ignore",
    stderr: "ignore",
  });
}

function k8s(path: string, init?: RequestInit) {
  return fetch(`http://localhost:${PROXY_PORT}${path}`, init).then((r) =>
    r.json(),
  );
}

function helmUpgrade() {
  const result = Bun.spawnSync(
    [
      "helm",
      "upgrade",
      RELEASE,
      "prometheus-community/kube-prometheus-stack",
      "-n",
      NS,
      "--set",
      "prometheus.prometheusSpec.resources.requests.memory=512Mi",
      "--set",
      "prometheus.prometheusSpec.resources.limits.memory=1Gi",
    ],
    { stdout: "inherit", stderr: "inherit" },
  );
  if (result.exitCode !== 0) throw new Error("helm upgrade failed");
}

function waitForRollout() {
  const result = Bun.spawnSync(
    [
      "kubectl",
      "rollout",
      "status",
      "statefulset",
      `prometheus-${RELEASE}-kube-prome-prometheus`,
      "-n",
      NS,
      "--timeout=120s",
    ],
    { stdout: "inherit", stderr: "inherit" },
  );
  if (result.exitCode !== 0) throw new Error("rollout timed out");
}

async function prometheusPodResources() {
  const label = "app.kubernetes.io/name=prometheus";
  const pods = await k8s(
    `/api/v1/namespaces/${NS}/pods?labelSelector=${encodeURIComponent(label)}`,
  );
  const containers = pods.items[0]?.spec?.containers;
  if (!containers) throw new Error("No prometheus pod found");
  const prometheus = containers.find((c: any) => c.name === "prometheus");
  return prometheus?.resources;
}

await main();
```

```shell
# The actual volumes themselves
# i.e. what volumes we have available in the cluster
# e.g. the pizza shop has these pizzas ready to go
kubectl get pv
```

```shell
# Get persistent volume claims for the `monitoring` namespace
# i.e. what volumes a pod is requesting to use (filtered by namespace)
# e.g. I ordered a Supreme with extra olives, can I have it please
kubectl get pvc -n monitoring

# No output
```
### Export what worked

```shell
# The `-a` flag gives everything, which is not very useful for us here
# helm get values kube-prometheus -n monitoring -a -o yaml > values.exported.yaml
helm get values kube-prometheus -n monitoring -o yaml > /tmp/values.exported.yaml
```
### Uninstall the trial

```shell
helm uninstall kube-prometheus -n monitoring
```
### Generate Flux manifests

```shell
mkdir -p infrastructure/base/monitoring/

flux create source helm prometheus-community \
  --url=https://prometheus-community.github.io/helm-charts \
  --interval=1h \
  --export > infrastructure/base/monitoring/helmrepository.yaml

flux create helmrelease kube-prometheus \
  --source=HelmRepository/prometheus-community \
  --chart=kube-prometheus-stack --chart-version=82.15.0 \
  --release-name=kube-prometheus --target-namespace=monitoring \
  --interval=5m --values=values.exported.yaml \
  --export > infrastructure/base/monitoring/helmrelease.yaml
```
### Commit and let Flux take over

```shell
git add infrastructure/base/monitoring/
git commit -m "Add kube-prometheus-stack via Flux"
git push
flux reconcile source git flux-system
flux get helmreleases
```
Flux’s create commands support `--export` to print YAML, and HelmRelease supports `releaseName`, `targetNamespace`, and `storageNamespace` to match Helm release identity. (Flux)
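A trimmed HelmRelease sketch showing just those identity fields (other spec fields omitted for brevity):

```yaml
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: kube-prometheus
  namespace: flux-system
spec:
  # These three fields make the Flux-managed release line up with the
  # identity of `helm install kube-prometheus ... -n monitoring`.
  releaseName: kube-prometheus   # the Helm release name
  targetNamespace: monitoring    # namespace the workloads are installed into
  storageNamespace: monitoring   # namespace where Helm keeps its release secrets
```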
## Trial install with Helm

Add the repo and install the chart:

```shell
helm repo add <repo-alias> https://charts.example.com
helm repo update

helm install <release-name> <repo-alias>/<chart-name> \
  -n <target-namespace> \
  --create-namespace \
  --version <chart-version> \
  -f values.yaml
```

If you have no values file yet, omit `-f values.yaml`. Helm can show the computed values and rendered manifests for the release. (Helm)
It can be helpful to look at the YAML rendered from the Helm chart to see which key-value pairs are available:
```shell
# List the pods
kubectl get pods -n monitoring -l app.kubernetes.io/instance=kube-prometheus

# View the YAML
helm get manifest kube-prometheus -n monitoring
```

## Inspect and save what worked
Once the test install works, capture the settings before you remove it:

```shell
helm list -A
helm status <release-name> -n <target-namespace>
helm get values <release-name> -n <target-namespace> -a -o yaml > values.exported.yaml
helm get manifest <release-name> -n <target-namespace> > rendered.exported.yaml
```

Flux needs values.exported.yaml (drop `-a` if you only want your own overrides rather than every computed default). rendered.exported.yaml is for reference and diffing. Flux manages the chart declaratively as a HelmRelease, not by tracking raw rendered manifests. (Flux)
## Uninstall the trial release

```shell
helm uninstall <release-name> -n <target-namespace>
```

Remove the trial deployment so Flux can recreate it from Git. (Helm)
## Generate the Flux source YAML

Create the HelmRepository manifest:

```shell
flux create source helm <repo-alias> \
  --url=https://charts.example.com \
  --interval=1h \
  --export > helmrepository.yaml
```

`flux create source helm` generates a HelmRepository; `--export` prints YAML instead of applying it. (Flux)
## Generate the Flux HelmRelease YAML

Generate the HelmRelease:

```shell
flux create helmrelease <release-name> \
  --source=HelmRepository/<repo-alias> \
  --chart=<chart-name> \
  --chart-version=<chart-version> \
  --release-name=<release-name> \
  --target-namespace=<target-namespace> \
  --interval=15m \
  --values=values.exported.yaml \
  --export > helmrelease.yaml
```

helm-controller reconciles the resulting HelmRelease. (Flux)
## Put the YAML in Git

Commit:

- helmrepository.yaml
- helmrelease.yaml
Values can be embedded in the generated manifest or kept in a separate file, depending on how you structure the repo.
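If you go the separate-file route, one common pattern (a sketch; the ConfigMap name here is hypothetical) is to reference the values through the HelmRelease `valuesFrom` field instead of inlining them:

```yaml
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: kube-prometheus
  namespace: flux-system
spec:
  # chart, interval, and release identity omitted for brevity
  valuesFrom:
    - kind: ConfigMap
      name: kube-prometheus-values  # hypothetical ConfigMap generated from values.exported.yaml
      valuesKey: values.yaml        # key inside the ConfigMap holding the YAML
```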
## Wire Flux to the infrastructure path

Flux bootstrap only creates the flux-system Kustomization. It watches clusters/local/ but ignores everything outside that path unless you add pointers. Create a Flux Kustomization that tells it where the monitoring manifests live:

```yaml
# clusters/local/infrastructure.yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: infrastructure
  namespace: flux-system
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: flux-system
  path: ./infrastructure/base/monitoring
  prune: true
```

If the sync path changed (e.g. you renamed clusters/dev to clusters/local), apply the updated sync config first:
```shell
kubectl apply -f clusters/local/flux-system/gotk-sync.yaml
```

Then uninstall the imperative Helm release so Flux can recreate it:
```shell
helm uninstall kube-prometheus -n monitoring
```

Commit and push:

```shell
git add clusters/local/infrastructure.yaml
git commit -m "Add infrastructure Kustomization for Flux"
git push
```

Force an immediate reconcile or wait for the interval:
```shell
flux reconcile source git flux-system
flux reconcile kustomization flux-system
```

The chain: the flux-system Kustomization reads clusters/local/, finds infrastructure.yaml, follows its path to infrastructure/base/monitoring/, and applies the HelmRepository and HelmRelease.
Verify:

```shell
flux get kustomizations
flux get helmreleases -A
kubectl get pods -n monitoring
```

```
NAME             REVISION            SUSPENDED  READY  MESSAGE
flux-system      main@sha1:99348924  False      True   Applied revision: main@sha1:99348924
infrastructure   main@sha1:99348924  False      True   Applied revision: main@sha1:99348924

NAMESPACE    NAME             REVISION  SUSPENDED  READY  MESSAGE
flux-system  kube-prometheus  82.15.0   False      True   Helm install succeeded for release monitoring/kube-prometheus.v1 with chart kube-prometheus-stack@82.15.0

NAME                                                     READY  STATUS   RESTARTS     AGE
alertmanager-kube-prometheus-kube-prome-alertmanager-0   1/2    Error    5 (90s ago)  3m3s
kube-prometheus-grafana-d6bcf544f-q4qkj                  3/3    Running  0            3m4s
kube-prometheus-kube-prome-operator-6857665c6-8cp6t      1/1    Running  0            3m4s
kube-prometheus-kube-state-metrics-d949d4c54-6hbnk       1/1    Running  0            3m4s
kube-prometheus-prometheus-node-exporter-x7cxw           1/1    Running  0            3m4s
prometheus-kube-prometheus-kube-prome-prometheus-0       2/2    Running  0            3m3s
```

This shows an error for the Alertmanager pod. We can investigate it with:
```shell
kubectl logs -n monitoring alertmanager-kube-prometheus-kube-prome-alertmanager-0 --all-containers
```

```
ts=2026-03-27T05:46:05.30694616Z level=error caller=/workspace/cmd/prometheus-config-reloader/main.go:225 msg="Failed to run" err="too many open files\ncreate watcher\ngithub.com/thanos-io/thanos/pkg/reloader.(*watcher).addPath\n\t/go/pkg/mod/github.co...
```

The config reloader has hit the host's inotify limits, so we can fix this temporarily with:

```shell
echo 1024 | sudo tee /proc/sys/fs/inotify/max_user_instances
sudo sysctl fs.inotify.max_user_watches=10000000
```

To make this permanent, add the values to a file under sysctl.d. Check `man sysctl.d(5)` and `man sysctl(8)` for the correct drop-in path on your distribution. On Fedora Atomic, /etc/sysctl.d/ is a persistent overlay and already writable:
```shell
# Check what already exists
ls /etc/sysctl.d/
cat /etc/sysctl.d/40-max-user-watches.conf

# Append the missing setting (watches may already be set)
echo "fs.inotify.max_user_instances=512" | sudo tee -a /etc/sysctl.d/40-max-user-watches.conf
```

Apply without rebooting:
```shell
sudo sysctl --system
```

The alertmanager pod is in a crash loop, so it will pick up the new limit on its next restart. To force it immediately:
```shell
kubectl delete pod -n monitoring alertmanager-kube-prometheus-kube-prome-alertmanager-0
```

Wait for it to come back and confirm the error is gone:

```shell
kubectl get pods -n monitoring -l app.kubernetes.io/name=alertmanager
kubectl logs -n monitoring alertmanager-kube-prometheus-kube-prome-alertmanager-0 --all-containers
```

## Add Volumes
### Preservation of Data

#### Overview

In normal operation, data stored through a PersistentVolumeClaim should survive:
- pod restarts
- Deployment rollouts
- StatefulSet pod recreation
- node reboots
- Helm upgrades
- Flux reconciliations
That statement is only true as long as the PersistentVolumeClaim still exists
and the backing storage has not been deleted.
The important boundaries are:
- Restarting a pod does not delete the claim or the volume.
- Re-applying manifests with Helm or Flux does not delete the claim unless the manifest or chart change removes it.
- Deleting a namespace usually deletes the namespaced PersistentVolumeClaim objects.
- Deleting the cluster may or may not delete the underlying disk, depending on the platform and storage backend.
#### Reclaim Policy

The reclaim policy is a property of the PersistentVolume, not the PersistentVolumeClaim.
For dynamically provisioned volumes, the default is commonly Delete. In that
mode, deleting the claim usually leads to the backing volume being deleted as
well. With Retain, deleting the claim releases the volume but leaves the
underlying storage asset behind for manual recovery.
```shell
kubectl get storageclass -o custom-columns=NAME:.metadata.name,RECLAIM:.reclaimPolicy
```

```
NAME         RECLAIM
local-path   Delete
```

#### Using Retain to protect specific volumes

If you want Flux to keep managing the application, but you do not want a storage cleanup mistake to destroy the underlying disk, set the bound PersistentVolume's reclaim policy to Retain.
This is the important distinction:
- Flux manages Git state.
- Helm manages the release resources.
- Kubernetes manages the PersistentVolumeClaim and the bound PersistentVolume.
- The reclaim policy lives on the PersistentVolume, not on the claim and not in Flux.
With reclaim policy Delete, removing the claim usually removes the backing
volume too. With reclaim policy Retain, deleting the claim releases the
volume but leaves the underlying storage asset behind for manual recovery.
That means Retain does not stop Flux from deleting the claim. It protects
the data after the claim has been removed.
Typical use cases:
- Prometheus data you do not want to lose during Git refactors.
- Grafana storage you want to recover after an accidental uninstall.
- Any stateful workload where “delete the app” should not also mean “delete the disk”.
#### How to protect one specific volume

First, let Kubernetes create and bind the claim as usual. Then find the bound volume and patch its reclaim policy:

```shell
kubectl get pvc -n monitoring
kubectl get pv
kubectl patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
```

Verify the result:

```shell
kubectl get pv <pv-name> -o custom-columns=NAME:.metadata.name,RECLAIM:.spec.persistentVolumeReclaimPolicy,CLAIM:.spec.claimRef.name
```

Example:

```shell
kubectl patch pv pvc-1234abcd-5678-efgh-9012-ijklmnopqrst \
  -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
```

After that:
- If Flux or Helm deletes the PersistentVolumeClaim, the claim is gone.
- The PersistentVolume moves to the Released state instead of being destroyed.
- The data remains on disk.
- Re-attaching it is now a manual recovery task.
#### What recovery looks like

If a claim was deleted but the volume was retained, the normal recovery flow is:

- Inspect the retained PersistentVolume.
- Clear or update its claimRef if needed.
- Create a new PersistentVolumeClaim that matches that volume’s class, size, and access mode.
- Re-bind the workload to the recovered claim.
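For the replacement claim, a hedged sketch (the names and size are hypothetical and must match the retained volume's actual class, capacity, and access mode):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: grafana-recovered        # hypothetical name for the new claim
  namespace: monitoring
spec:
  accessModes:
    - ReadWriteOnce              # must match the retained PV
  resources:
    requests:
      storage: 10Gi              # must not exceed the PV's capacity
  storageClassName: local-path   # must match the PV's storageClassName
  volumeName: <retained-pv-name> # pin the claim to that exact volume
```

Setting volumeName pins the claim to that one PV; if the PV still carries a stale claimRef, clear it first so the volume returns to Available.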
This is less convenient than automatic deletion, but it is much safer for data you care about.
#### When Retain is the right choice

Use Retain for selected volumes when:
- the data has value beyond the lifetime of the release
- you are still iterating on Flux manifests and want protection from accidental Git deletions
- you prefer manual cleanup over silent data loss
Leave the default Delete behavior in place when:
- the data is disposable
- the environment is short-lived
- automatic cleanup is more important than recovery
#### Important limitation

Retain protects the backing volume, but it does not make Flux “storage-aware”.
Flux will still reconcile deleted files. If a Helm release or PVC-producing
manifest is removed from Git, Flux can still remove the claim. Retain simply
turns that situation from “claim deleted and data destroyed” into “claim deleted
but data recoverable”.
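One partial mitigation for claims Flux applies directly (not PVCs created by a Helm chart): kustomize-controller skips garbage collection for objects annotated as prune-disabled, so such a claim survives even if its manifest is removed from Git. A sketch:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: precious-data            # hypothetical claim managed directly by Flux
  namespace: monitoring
  annotations:
    # Excludes this object from kustomize-controller pruning.
    kustomize.toolkit.fluxcd.io/prune: disabled
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```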
### Creating a PersistentVolume by creating a PersistentVolumeClaim

Here the PersistentVolumeClaim is created first, and the storage class's dynamic provisioner creates the matching PersistentVolume automatically.
```yaml
# infrastructure/base/monitoring/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - grafana-secret.yaml
  - helmrepository.yaml
  - helmrelease.yaml
```

And the storage settings in infrastructure/base/monitoring/helmrelease.yaml:
The change, as a diff:

```diff
diff --git a/infrastructure/base/monitoring/helmrelease.yaml b/infrastructure/base/monitoring/helmrelease.yaml
index 2eaff72..6a0f6c1 100644
--- a/infrastructure/base/monitoring/helmrelease.yaml
+++ b/infrastructure/base/monitoring/helmrelease.yaml
@@ -18,9 +18,24 @@ spec:
   storageNamespace: monitoring
   targetNamespace: monitoring
   values:
+    alertmanager:
+      alertmanagerSpec:
+        storage:
+          volumeClaimTemplate:
+            spec:
+              accessModes:
+                - ReadWriteOnce
+              resources:
+                requests:
+                  storage: 2Gi
     grafana:
       admin:
         existingSecret: grafana-admin
+      persistence:
+        enabled: true
+        accessModes:
+          - ReadWriteOnce
+        size: 10Gi
     prometheus:
       prometheusSpec:
         resources:
@@ -28,3 +43,11 @@ spec:
           memory: 1Gi
         requests:
           memory: 512Mi
+        storageSpec:
+          volumeClaimTemplate:
+            spec:
+              accessModes:
+                - ReadWriteOnce
+              resources:
+                requests:
+                  storage: 20Gi
```
The full file:

```yaml
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: kube-prometheus
  namespace: flux-system
spec:
  chart:
    spec:
      chart: kube-prometheus-stack
      reconcileStrategy: ChartVersion
      sourceRef:
        kind: HelmRepository
        name: prometheus-community
      version: 82.15.0
  interval: 5m0s
  releaseName: kube-prometheus
  storageNamespace: monitoring
  targetNamespace: monitoring
  values:
    alertmanager:
      alertmanagerSpec:
        storage:
          volumeClaimTemplate:
            spec:
              accessModes:
                - ReadWriteOnce
              resources:
                requests:
                  storage: 2Gi
    grafana:
      admin:
        existingSecret: grafana-admin
      persistence:
        enabled: true
        accessModes:
          # ReadWriteOnce means the PVC can be mounted read-write by one node at a time.
          # This is the common mode for a single Grafana replica backed by one disk.
          - ReadWriteOnce
        # Requested PVC capacity for Grafana data.
        # Kubernetes does not support an "unlimited" PVC size; a concrete storage request is required.
        # If omitted, Helm falls back to the chart's default value instead of allowing unbounded growth.
        size: 10Gi
    prometheus:
      prometheusSpec:
        resources:
          limits:
            memory: 1Gi
          requests:
            memory: 512Mi
        storageSpec:
          volumeClaimTemplate:
            spec:
              accessModes:
                - ReadWriteOnce
              resources:
                requests:
                  storage: 20Gi
```
### Declaring a PersistentVolume first, then binding a PersistentVolumeClaim

The previous example relies on dynamic provisioning: the PVC is created first, and the storage class provisions the backing volume for you.
Sometimes you want the opposite model:
- create the PersistentVolume yourself
- set its reclaim policy to Retain
- bind a specific PersistentVolumeClaim to that exact volume
This is useful when you want more explicit control over the disk lifecycle.
For a learning cluster, a minimal example looks like this:
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: grafana-static-pv
spec:
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: ""
  hostPath:
    path: /var/lib/k8s-static/grafana
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: grafana-static-pvc
  namespace: monitoring
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: ""
  volumeName: grafana-static-pv
```

Important details:

- persistentVolumeReclaimPolicy: Retain is set on the PersistentVolume
- volumeName: grafana-static-pv tells the claim to bind to that exact volume
- storageClassName: "" avoids accidentally using dynamic provisioning
- the PVC request must be compatible with the PV’s size and access mode
If the claim is later deleted:
- the PVC is removed
- the PV moves to Released
- the data under the backing path is not automatically deleted
For this specific example, hostPath is acceptable for a local or single-node
lab, but it is not the pattern you would normally use for a production cluster.
To verify the binding:
```shell
kubectl get pv grafana-static-pv
kubectl get pvc -n monitoring grafana-static-pvc
```

To use that claim in a workload, reference grafana-static-pvc from the pod or chart values instead of creating a new volume claim template.
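With the Grafana subchart used earlier, for example, that reference typically goes through the persistence values; a sketch (verify the keys against the chart's own values.yaml):

```yaml
grafana:
  persistence:
    enabled: true
    # Use the pre-created claim instead of having the chart provision its own PVC.
    existingClaim: grafana-static-pvc
```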
## Apply via Flux

Once you commit and push under the path Flux reconciles, Flux installs the release. Check status:

```shell
flux get helmreleases
```

Trigger a reconcile manually if needed:

```shell
flux reconcile helmrelease <release-name>
```

Both commands are part of Flux’s current CLI. (Flux)
## Practical rule

For a chart you have not committed to yet:
```mermaid
graph LR
  A[helm install] --> B[test]
  B --> C[helm get values]
  C --> D[helm uninstall]
  D --> E["flux create ... --export"]
  E --> F[commit & push]
```
This beats trying to have Flux “adopt” a release you installed imperatively, even though Flux can reconcile releases when releaseName and namespaces match. (Flux)
## Copy-paste template

```shell
# Trial
helm repo add demo https://charts.example.com
helm repo update

helm install myapp demo/thechart \
  -n myapp \
  --create-namespace \
  --version 1.2.3 \
  -f values.yaml

# Inspect and save
helm status myapp -n myapp
helm get values myapp -n myapp -a -o yaml > values.exported.yaml
helm get manifest myapp -n myapp > rendered.exported.yaml

# Remove trial
helm uninstall myapp -n myapp

# Generate Flux YAML
flux create source helm demo \
  --url=https://charts.example.com \
  --interval=1h \
  --export > helmrepository.yaml

flux create helmrelease myapp \
  --source=HelmRepository/demo \
  --chart=thechart \
  --chart-version=1.2.3 \
  --release-name=myapp \
  --target-namespace=myapp \
  --interval=15m \
  --values=values.exported.yaml \
  --export > helmrelease.yaml
```

One caution: if the chart installed CRDs or left behind cluster-scoped resources, uninstall may not return the cluster to a blank state. This is chart-specific, not Flux-specific. HelmRelease supports install, upgrade, uninstall, rollback, and drift correction once you move under Flux. (Flux)