The `deckhouse-commander` module is deprecated.

We have renamed the module: previously it was called `deckhouse-commander`, but now it is simply `commander`. All updates will be provided to the module under the new name. This does not affect the user experience: the interface remains accessible at the same address, and the database continues to be used as before. However, technically, users will need to disable the old module and enable the new one.
You can migrate from the old module to the new one without losing any data, but if you are currently using the PostgreSQL installation bundled with the module, a few requirements must be met. The configuration with an external database is covered separately; that case is straightforward.
If you are using your own PostgreSQL installation
If you are using your own PostgreSQL installation, you need to complete a few steps:
- Create a backup of the database (recommended)
- Enable the `commander` module
- Switch to new dependencies in existing clusters
Ensure you are using an external database
In the module configuration, the `External` mode should be selected under the `postgres` section.
```shell
kubectl get mc deckhouse-commander -oyaml
```

```yaml
apiVersion: deckhouse.io/v1alpha1
kind: ModuleConfig
metadata:
  name: deckhouse-commander
  ...
spec:
  enabled: true
  settings:
    postgres:
      ...
      mode: External # <------- database mode
  version: 1
```
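If you only need the mode, it can also be read directly from the ModuleConfig with a one-line check (a minimal sketch using kubectl's JSONPath output):

```shell
# Prints "External" when the module is configured to use an external PostgreSQL database
kubectl get mc deckhouse-commander -o jsonpath='{.spec.settings.postgres.mode}'
```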
Step 1: creating a backup (recommended)
Read the official documentation.
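For example, a logical backup with `pg_dump` could look like this (a minimal sketch; the host, port, user, and database name below are placeholders for your external PostgreSQL connection settings):

```shell
# Placeholder connection parameters; substitute the values of your external PostgreSQL instance
pg_dump -v -Fc -b -h postgres.example.com -p 5432 -U commander -d commander > commander.dump
```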
Step 2: enabling the commander module
Enable the `commander` module.

The migration will be performed automatically: the module settings will be transferred from the old `ModuleConfig/deckhouse-commander`. Upon completion of the migration, the `deckhouse-commander` module will be disabled. The new module will be available at the same address and deployed in the same `d8-commander` namespace as the old one.
```shell
cat <<EOF | kubectl create -f -
apiVersion: deckhouse.io/v1alpha1
kind: ModuleConfig
metadata:
  name: commander
spec:
  enabled: true
EOF
```
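After applying the manifest, you can check that the migration has finished, for example by making sure the old module config is disabled and the workloads in the `d8-commander` namespace are running:

```shell
# Both module configs should be listed: the old one disabled, the new one enabled
kubectl get mc | grep commander

# The new module runs in the same d8-commander namespace
kubectl -n d8-commander get pods
```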
Step 3: switching to new dependencies in existing clusters
As a result, two modules will be replaced:

- `deckhouse-commander-agent` → `commander-agent`
- `deckhouse-admin` → `console`

To do this, it is enough to enable the `commander-agent` module. What will happen as a result:
- The `commander-agent` module will automatically turn off the `deckhouse-commander-agent` and `deckhouse-admin` modules, and turn on the `console` module.
- The Administration tab in Commander will work again.
This can be done either manually or using a template.
Option 1: through the template
In the templates used, add the `commander-agent` manifest to the Resources tab and update all clusters to this template.
```yaml
apiVersion: deckhouse.io/v1alpha1
kind: ModuleConfig
metadata:
  name: commander-agent
  labels:
    heritage: deckhouse-commander
spec:
  enabled: true
  version: 1
  settings:
    commanderUrl: "https://{{ .dc_domain }}/agent_api/{{ .dc_clusterUUID }}"
```
Option 2: manually
Enable the `commander-agent` module in application clusters. You need to transfer the settings from `deckhouse-commander-agent` to it without changes. The only difference from the previous option is that this has to be done manually in every cluster.
```yaml
apiVersion: deckhouse.io/v1alpha1
kind: ModuleConfig
metadata:
  name: commander-agent
  labels:
    heritage: deckhouse-commander
spec:
  enabled: true
  version: 1
  settings:
    commanderUrl: "https://....from deckhouse-commander-agent ...."
```
Recovery in case of data loss
If the database was corrupted during the migration for some reason, it can be restored into the new `commander` module without rolling back to the old one.
To restore the data, use the backup from step 1.
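With an external database, the restore is performed with your own PostgreSQL tooling, for example (a minimal sketch; the host, port, user, and database name below are placeholders for your external PostgreSQL connection settings):

```shell
# Placeholder connection parameters; substitute the values of your external PostgreSQL instance
pg_restore -v -c --if-exists -Fc -h postgres.example.com -p 5432 -U commander -d commander commander.dump
```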
If you use an internal PostgreSQL database
If you are using PostgreSQL from the `operator-postgres` module, you need to complete a few steps:
- Update the `deckhouse-commander` module to at least version `1.4.3`
- Create a backup of the database
- Enable the `commander` module
- Switch to new dependencies in existing clusters
Ensure you are using an internal database
In the module configuration, the `Internal` mode should be selected under the `postgres` section.
```shell
kubectl get mc deckhouse-commander -oyaml
```

```yaml
apiVersion: deckhouse.io/v1alpha1
kind: ModuleConfig
metadata:
  name: deckhouse-commander
  ...
spec:
  enabled: true
  settings:
    postgres:
      ...
      mode: Internal # <------- database mode
  version: 1
```
Step 1: updating the module
Update the `deckhouse-commander` module to version `1.4.3` or higher. In this release, the annotation `helm.sh/resource-policy: keep` has been added for the PostgreSQL resources.
If the module update policy is set to manual, use the command:
```shell
kubectl annotate modulerelease deckhouse-commander-v1.4.8 modules.deckhouse.io/approved=true
```
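The release name in the command above is an example; to see which releases exist and which are pending approval in your installation, you can list them:

```shell
kubectl get modulereleases | grep deckhouse-commander
```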
Step 2: creating a backup
To back up the internal database, use the command:
```shell
kubectl -n d8-commander exec -t commander-postgres-0 -- su - postgres -c "pg_dump -v -Fc -b -d commander" > commander.dump
```
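If PostgreSQL client tools are available on the machine where the dump was saved, a quick sanity check that the custom-format dump is readable could look like this:

```shell
# Lists the dump's table of contents without restoring anything
pg_restore -l commander.dump | head
```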
Step 3: enabling the commander module
Enable the `commander` module.

The migration will be performed automatically: the module settings will be transferred from the old `ModuleConfig/deckhouse-commander`. Upon completion of the migration, the `deckhouse-commander` module will be disabled. The new module will be available at the same address and deployed in the same `d8-commander` namespace as the old one.
```shell
cat <<EOF | kubectl create -f -
apiVersion: deckhouse.io/v1alpha1
kind: ModuleConfig
metadata:
  name: commander
spec:
  enabled: true
EOF
```
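After the new module is enabled, you can check that the PostgreSQL resources survived in the `d8-commander` namespace (they are kept thanks to the `helm.sh/resource-policy: keep` annotation added in step 1):

```shell
# The commander-postgres-0 pod and its PVCs should still be present
kubectl -n d8-commander get pods,pvc
```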
Step 4: switching to new dependencies in existing clusters
As a result, two modules will be replaced:

- `deckhouse-commander-agent` → `commander-agent`
- `deckhouse-admin` → `console`

To do this, it is enough to enable the `commander-agent` module. What will happen as a result:
- The `commander-agent` module will automatically turn off the `deckhouse-commander-agent` and `deckhouse-admin` modules, and turn on the `console` module.
- The Administration tab in Commander will work again.
This can be done either manually or using a template.
Option 1: through the template
In the templates used, add the `commander-agent` manifest to the Resources tab and update all clusters to this template.
```yaml
apiVersion: deckhouse.io/v1alpha1
kind: ModuleConfig
metadata:
  name: commander-agent
  labels:
    heritage: deckhouse-commander
spec:
  enabled: true
  version: 1
  settings:
    commanderUrl: "https://{{ .dc_domain }}/agent_api/{{ .dc_clusterUUID }}"
```
Option 2: manually
Enable the `commander-agent` module in application clusters. You need to transfer the settings from `deckhouse-commander-agent` to it without changes. The only difference from the previous option is that this has to be done manually in every cluster.
```yaml
apiVersion: deckhouse.io/v1alpha1
kind: ModuleConfig
metadata:
  name: commander-agent
  labels:
    heritage: deckhouse-commander
spec:
  enabled: true
  version: 1
  settings:
    commanderUrl: "https://....from deckhouse-commander-agent ...."
```
Recovery in case of data loss
If the database was corrupted during the migration for some reason, it can be restored into the new `commander` module without rolling back to the old one.
To restore the data, use the backup from step 2 and the command:
```shell
kubectl -n d8-commander exec -it commander-postgres-0 -- su - postgres -c "pg_restore -v -c --if-exists -Fc -d commander" < commander.dump
```
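As a quick sanity check after the restore, you can list the tables in the restored database:

```shell
kubectl -n d8-commander exec -t commander-postgres-0 -- su - postgres -c "psql -d commander -c '\dt'"
```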