This page is under active development and may contain incomplete information. Below is an overview of the Deckhouse installation process. For more detailed instructions, we recommend visiting the Getting Started section, where step-by-step guides are available.
The Deckhouse installer is available as a container image and is based on the dhctl utility, which is responsible for:
- Creating and configuring cloud infrastructure objects using Terraform;
- Installing necessary OS packages on nodes (including Kubernetes packages);
- Installing Deckhouse;
- Creating and configuring nodes for the Kubernetes cluster;
- Maintaining the cluster state according to the defined configuration.
Deckhouse installation options:
- In a supported cloud. The `dhctl` utility automatically creates and configures all necessary resources, including virtual machines, deploys the Kubernetes cluster, and installs Deckhouse. A full list of supported cloud providers is available in the Platform integration with infrastructure section.
- On bare-metal servers or in unsupported clouds. In this option, `dhctl` configures the server or virtual machine, deploys the Kubernetes cluster with a single master node, and installs Deckhouse. Additional nodes can be added to the cluster using pre-existing setup scripts.
- In an existing Kubernetes cluster. If a Kubernetes cluster is already deployed, `dhctl` installs Deckhouse and integrates it with the existing infrastructure.
Preparing the Infrastructure
Before installation, ensure the following:
- For bare-metal clusters and unsupported clouds: the server is running an operating system from the supported OS list (or a compatible version) and is accessible via SSH using a key.
- For supported clouds: the necessary quotas are available for resource creation, and access credentials to the cloud infrastructure are prepared (these depend on the specific provider).
- For all installation options: access to the container registry with Deckhouse images (`registry.deckhouse.io` or `registry.deckhouse.ru`) is configured.
Preparing the Configuration
Before starting the Deckhouse installation, you need to prepare the configuration YAML file. This file contains the main parameters for configuring Deckhouse, including information about cluster components, network settings, and integrations, as well as a description of resources to be created after installation (node settings and Ingress controller).
Make sure that the configuration files meet the requirements of your infrastructure and include all the necessary parameters for a correct deployment.
Installation config
The installation configuration YAML file contains parameters for several resources (manifests); a combined example is shown after the list below:
- InitConfiguration — initial parameters for Deckhouse configuration, necessary for the proper startup of Deckhouse after installation.
  Key settings specified in this resource:
  - Component placement parameters;
  - The StorageClass (storage parameters);
  - Access parameters for the container registry;
  - Template for DNS names;
  - Other essential parameters required for Deckhouse to function correctly.
- ClusterConfiguration — general cluster parameters, such as control plane version, network settings, CRI parameters, etc.
  This resource is needed only when the installer also deploys a Kubernetes cluster; it is not required when Deckhouse is installed into an already existing cluster.
- StaticClusterConfiguration — parameters for Kubernetes clusters deployed on bare-metal servers or virtual machines in unsupported clouds.
  This resource is needed only when the installer also deploys a Kubernetes cluster; it is not required when Deckhouse is installed into an already existing cluster.
- <CLOUD_PROVIDER>ClusterConfiguration — a set of resources containing configuration parameters for supported cloud providers. These include:
  - Cloud infrastructure access settings (authentication parameters);
  - Resource placement scheme type and parameters;
  - Network settings;
  - Node group creation settings.
  List of cloud provider configuration resources:
  - AWSClusterConfiguration — Amazon Web Services;
  - AzureClusterConfiguration — Microsoft Azure;
  - GCPClusterConfiguration — Google Cloud Platform;
  - OpenStackClusterConfiguration — OpenStack;
  - VsphereClusterConfiguration — VMware vSphere;
  - VCDClusterConfiguration — VMware Cloud Director;
  - YandexClusterConfiguration — Yandex Cloud;
  - ZvirtClusterConfiguration — zVirt.
- ModuleConfig — a set of resources containing configuration parameters for Deckhouse built-in modules.
  If the cluster is initially created with nodes dedicated to specific types of workloads (e.g., system nodes or monitoring nodes), it is recommended to explicitly set the `nodeSelector` parameter in the configuration of modules that use persistent storage volumes. For example, for the `prometheus` module, this is set via the `nodeSelector` parameter of the module configuration.
- IngressNginxController — deploying the Ingress controller.
- NodeGroup — creating additional node groups.
- InstanceClass — adding configuration resources.
- ClusterAuthorizationRule, User — setting up roles and users.
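For reference, below is a minimal sketch of a combined config.yml for a static cluster. All values are illustrative; consult the Getting Started section for complete, provider-specific examples:
# Initial Deckhouse parameters.
apiVersion: deckhouse.io/v1
kind: InitConfiguration
deckhouse:
  imagesRepo: registry.deckhouse.io/deckhouse/ce
  registryDockerCfg: <BASE64>
---
# General cluster parameters.
apiVersion: deckhouse.io/v1
kind: ClusterConfiguration
clusterType: Static
podSubnetCIDR: 10.111.0.0/16
serviceSubnetCIDR: 10.222.0.0/16
kubernetesVersion: "Automatic"
clusterDomain: "cluster.local"
---
# Parameters for a cluster on bare metal.
apiVersion: deckhouse.io/v1
kind: StaticClusterConfiguration
internalNetworkCIDRs:
  - 192.168.1.0/24
---
# Example of configuring a built-in module.
apiVersion: deckhouse.io/v1alpha1
kind: ModuleConfig
metadata:
  name: global
spec:
  version: 1
  settings:
    modules:
      publicDomainTemplate: "%s.example.com"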
Post-bootstrap script
After Deckhouse installation is complete, the installer offers the option to run a custom script on one of the master nodes. This script can be used for:
- Performing additional cluster configurations;
- Collecting diagnostic information;
- Integrating with external systems or other tasks.
The path to the post-bootstrap script can be specified using the `--post-bootstrap-script-path` parameter during the installation process.
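For illustration, here is a minimal sketch of such a script that collects basic diagnostics. It assumes kubectl is available and configured on the master node, which is the case after bootstrap; the file name is hypothetical:
#!/usr/bin/env bash
# Post-bootstrap diagnostics (illustrative): print node and Deckhouse pod state.
set -euo pipefail
kubectl get nodes -o wide
kubectl -n d8-system get pods -o wide
Assuming the script is mounted into the installer container as /post-bootstrap.sh, it can then be passed as --post-bootstrap-script-path=/post-bootstrap.sh.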
Installing Deckhouse
When installing a commercial edition of Deckhouse Kubernetes Platform from the official container registry `registry.deckhouse.io`, you must first log in with your license key:
docker login -u license-token registry.deckhouse.io
The command to pull the installer container from the Deckhouse public registry and run it looks as follows:
docker run --pull=always -it [<MOUNT_OPTIONS>] registry.deckhouse.io/deckhouse/<DECKHOUSE_REVISION>/install:<RELEASE_CHANNEL> bash
Where:
- `<DECKHOUSE_REVISION>` — the Deckhouse edition, such as `ee` for Enterprise Edition, `ce` for Community Edition, etc.;
- `<MOUNT_OPTIONS>` — parameters for mounting files into the installer container, such as:
  - SSH access keys;
  - Configuration file;
  - Resource file, etc.
- `<RELEASE_CHANNEL>` — the release channel in kebab-case format:
  - `alpha` — for the Alpha release channel;
  - `beta` — for the Beta release channel;
  - `early-access` — for the Early Access release channel;
  - `stable` — for the Stable release channel;
  - `rock-solid` — for the Rock Solid release channel.
Here is an example of a command to run the installer container for Deckhouse CE:
docker run -it --pull=always \
-v "$PWD/config.yaml:/config.yaml" \
-v "$PWD/dhctl-tmp:/tmp/dhctl" \
-v "$HOME/.ssh/:/tmp/.ssh/" registry.deckhouse.io/deckhouse/ce/install:stable bash
Deckhouse installation is performed within the installer container using the `dhctl` utility:
- To start the installation of Deckhouse with the deployment of a new cluster (for all cases except installing into an existing cluster), use the `dhctl bootstrap` command.
- To install Deckhouse into an already existing cluster, use the `dhctl bootstrap-phase install-deckhouse` command.
Run `dhctl bootstrap -h` to learn more about the available parameters.
Example of running the Deckhouse installation with cloud cluster deployment:
dhctl bootstrap \
--ssh-user=<SSH_USER> --ssh-agent-private-keys=/tmp/.ssh/id_rsa \
--config=/config.yml
Where:
- `/config.yml` — the installation configuration file;
- `<SSH_USER>` — the username for SSH connection to the server;
- `--ssh-agent-private-keys` — the private SSH key file for SSH connection.
Pre-Installation Checks
List of checks performed by the installer before starting Deckhouse installation:
- General checks:
  - The values of the PublicDomainTemplate and clusterDomain parameters do not match.
  - The authentication data for the container image registry specified in the installation configuration is correct.
  - The host name meets the following requirements:
    - The length does not exceed 63 characters;
    - It consists only of lowercase letters;
    - It does not contain special characters (hyphens `-` and periods `.` are allowed, but they cannot be at the beginning or end of the name).
  - The server (VM) has a supported container runtime (`containerd`) installed.
  - The host name is unique within the cluster.
  - The server's system time is correct.
  - The address spaces for Pods (`podSubnetCIDR`) and services (`serviceSubnetCIDR`) do not intersect.
- Checks for static and hybrid cluster installation:
  - Only one `--ssh-host` parameter is specified. For static cluster configuration, only one IP address can be provided for configuring the first master node.
  - SSH connection is possible using the specified authentication data.
  - SSH tunneling to the master node server (or VM) is possible.
  - The server (VM) selected for the master node installation meets the minimum system requirements:
    - at least 4 CPU cores;
    - at least 8 GB of RAM;
    - at least 60 GB of disk space with 400+ IOPS performance;
    - Linux kernel version 5.8 or newer;
    - one of the package managers installed: `apt`, `apt-get`, `yum`, or `rpm`;
    - access to standard OS package repositories.
  - Python is installed on the master node server (VM).
  - The container image registry is accessible through a proxy (if proxy settings are specified in the installation configuration).
  - Required installation ports are free on the master node server (VM) and the installer host.
  - DNS resolves `localhost` to IP address 127.0.0.1.
  - The user has `sudo` privileges on the server (VM).
  - Required ports for the installation are open:
    - port 22/TCP between the host running the installer and the server;
    - no port conflicts with those used by the installation process.
  - The server (VM) has the correct time.
  - The user `deckhouse` does not exist on the server (VM).
  - The address spaces for Pods (`podSubnetCIDR`), services (`serviceSubnetCIDR`), and the internal network (`internalNetworkCIDRs`) do not intersect.
- Checks for cloud cluster installation:
- The configuration of the virtual machine for the master node meets the minimum requirements.
- The cloud provider API is accessible from the cluster nodes.
- For Yandex Cloud deployments with NAT Instance, the NAT Instance configuration is verified.
Aborting the installation
If the installation was interrupted or issues occurred during the installation process in a supported cloud, there might be leftover resources created during the installation. To remove them, use the `dhctl bootstrap-phase abort` command within the installer container.
The configuration file provided through the `--config` parameter when running the installer must be the same one used during the initial installation.
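For example, for a cluster bootstrapped with the command shown earlier, the abort call might look like this (the flags mirror the bootstrap example; adjust them to match your installation):
dhctl bootstrap-phase abort \
  --ssh-user=<SSH_USER> --ssh-agent-private-keys=/tmp/.ssh/id_rsa \
  --config=/config.yml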
Air-Gapped environment, working via proxy and using external registries
Installing Deckhouse Kubernetes Platform from an external registry
Available in the following editions: BE, SE, SE+, EE, CSE Lite (1.67), CSE Pro (1.67).
DKP supports only the Bearer token authentication scheme for container registries.
A number of popular container registries, including Nexus and Harbor (covered below), have been tested and are officially supported.
During installation, DKP can be configured to work with an external registry (e.g., a proxy registry in an air-gapped environment).
Set the following parameters in the InitConfiguration resource:
- `imagesRepo: <PROXY_REGISTRY>/<DECKHOUSE_REPO_PATH>/ee` — the path to the DKP EE image in the external registry. Example: `imagesRepo: registry.deckhouse.ru/deckhouse/ee`;
- `registryDockerCfg: <BASE64>` — base64-encoded Docker config with access credentials to the external registry.
If anonymous access to DKP images is allowed in the external registry, the `registryDockerCfg` should look like this:
{"auths": { "<PROXY_REGISTRY>": {}}}
The provided value must be Base64-encoded.
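For example, the value for anonymous access can be generated with a one-liner like this (the registry address is a placeholder):
echo -n '{"auths": { "registry.mycompany.com": {}}}' | base64 -w0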
If authentication is required to access DKP images in the external registry, the `registryDockerCfg` should look like this:
{"auths": { "<PROXY_REGISTRY>": {"username":"<PROXY_USERNAME>","password":"<PROXY_PASSWORD>","auth":"<AUTH_BASE64>"}}}
where:
- `<PROXY_USERNAME>` — the username for authenticating to `<PROXY_REGISTRY>`;
- `<PROXY_PASSWORD>` — the password for authenticating to `<PROXY_REGISTRY>`;
- `<PROXY_REGISTRY>` — the address of the external registry in the format `<HOSTNAME>[:PORT]`;
- `<AUTH_BASE64>` — a Base64-encoded string of `<PROXY_USERNAME>:<PROXY_PASSWORD>`.
The final value for `registryDockerCfg` must also be Base64-encoded.
You can use the following script to generate the `registryDockerCfg`:
declare MYUSER='<PROXY_USERNAME>'
declare MYPASSWORD='<PROXY_PASSWORD>'
declare MYREGISTRY='<PROXY_REGISTRY>'
MYAUTH=$(echo -n "$MYUSER:$MYPASSWORD" | base64 -w0)
MYRESULTSTRING=$(echo -n "{\"auths\":{\"$MYREGISTRY\":{\"username\":\"$MYUSER\",\"password\":\"$MYPASSWORD\",\"auth\":\"$MYAUTH\"}}}" | base64 -w0)
echo "$MYRESULTSTRING"
Custom external registry configuration
To support non-standard configurations of external registries, the InitConfiguration resource provides two additional parameters:
- `registryCA` — a root certificate to validate the registry's certificate (used if the registry uses self-signed certificates);
- `registryScheme` — the protocol used to access the registry (`HTTP` or `HTTPS`). Defaults to `HTTPS`.
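For example, a fragment of the InitConfiguration resource using these parameters might look as follows (the registry address and certificate are placeholders):
apiVersion: deckhouse.io/v1
kind: InitConfiguration
deckhouse:
  imagesRepo: registry.mycompany.com/deckhouse/ee
  registryDockerCfg: <BASE64>
  registryScheme: HTTPS
  registryCA: |
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----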
Nexus configuration notes
When interacting with a `docker`-type repository in Nexus (e.g., when running `docker pull` or `docker push`), you must specify the address in the format `<NEXUS_URL>:<REPOSITORY_PORT>/<PATH>`.
Using the `URL` value from the Nexus repository settings is not supported.
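For example, with a hypothetical Nexus host nexus.company.my and a repository connector listening on port 8123, pulling the installer image would look like this:
docker pull nexus.company.my:8123/deckhouse/ee/install:stable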
When using the Nexus repository manager, the following requirements must be met:
- A proxy Docker repository must be created (`Administration` → `Repository` → `Repositories`):
  - Set the `Maximum metadata age` parameter to `0`.
- Access control must be configured:
  - Create a role named Nexus (`Administration` → `Security` → `Roles`) with the following privileges:
    - `nx-repository-view-docker-<repository>-browse`
    - `nx-repository-view-docker-<repository>-read`
  - Create a user (`Administration` → `Security` → `Users`) and assign them the Nexus role.
Setup Steps:
- Create a proxy Docker repository (`Administration` → `Repository` → `Repositories`) that points to the Deckhouse registry. Fill out the repository creation form with the following values:
  - `Name`: the desired repository name, e.g., `d8-proxy`.
  - `Repository Connectors / HTTP` or `HTTPS`: a dedicated port for the new repository, e.g., `8123`.
  - `Remote storage`: must be set to `https://registry.deckhouse.ru/`.
  - `Auto blocking enabled` and `Not found cache enabled`: can be disabled for debugging; otherwise, enable them.
  - `Maximum Metadata Age`: must be set to `0`.
  - If using a commercial edition of Deckhouse Kubernetes Platform, enable the `Authentication` checkbox and fill in the following:
    - `Authentication Type`: `Username`;
    - `Username`: `license-token`;
    - `Password`: your Deckhouse Kubernetes Platform license key.
- Configure Nexus access control to allow DKP to access the created repository:
  - Create a Nexus role (`Administration` → `Security` → `Roles`) with the privileges `nx-repository-view-docker-<repository>-browse` and `nx-repository-view-docker-<repository>-read`.
  - Create a user (`Administration` → `Security` → `Users`) and assign them the role created above.
As a result, DKP images will be available at a URL like: `https://<NEXUS_HOST>:<REPOSITORY_PORT>/deckhouse/ee:<d8s-version>`.
Harbor configuration notes
Use the Harbor Proxy Cache feature.
- Configure the registry:
  - Go to `Administration` → `Registries` → `New Endpoint`.
  - `Provider`: `Docker Registry`.
  - `Name`: arbitrary value of your choice.
  - `Endpoint URL`: `https://registry.deckhouse.ru`.
  - Set `Access ID` and `Access Secret` (your Deckhouse Kubernetes Platform license key).
- Create a new project:
  - Navigate to `Projects` → `New Project`.
  - `Project Name` will be part of the URL. Choose any name, e.g., `d8s`.
  - `Access Level`: `Public`.
  - Enable `Proxy Cache` and select the registry created in the previous step.
As a result, DKP images will be available at a URL like: `https://your-harbor.com/d8s/deckhouse/ee:{d8s-version}`.
Manual loading of Deckhouse Kubernetes Platform images, vulnerability scanner DB, and DKP modules into a private registry
The `d8 mirror` utility is not available for use with the Community Edition (CE) and Basic Edition (BE).
You can check the current status of versions in the release channels at releases.deckhouse.ru.
- Download DKP images to a dedicated directory using the `d8 mirror pull` command.
  By default, `d8 mirror pull` downloads only the current versions of DKP, vulnerability scanner databases (if included in the DKP edition), and officially delivered modules. For example, for Deckhouse Kubernetes Platform 1.59, only version 1.59.12 will be downloaded, as it is sufficient for upgrading the platform from 1.58 to 1.59.
  Run the following command (specify the edition code and license key) to download the current version images:
  d8 mirror pull \
    --source='registry.deckhouse.ru/deckhouse/<EDITION>' \
    --license='<LICENSE_KEY>' \
    /home/user/d8-bundle
  where:
  - `<EDITION>` — the Deckhouse Kubernetes Platform edition code (e.g., `ee`, `se`, `se-plus`). By default, the `--source` parameter refers to the Enterprise Edition (`ee`) and can be omitted;
  - `<LICENSE_KEY>` — the Deckhouse Kubernetes Platform license key;
  - `/home/user/d8-bundle` — the directory where the image packages will be placed. It will be created if it does not exist.
  If the image download is interrupted, rerunning the command will resume the download, provided no more than one day has passed since the interruption.
  You can also use the following command options:
  - `--no-pull-resume` — force the download to start from the beginning;
  - `--no-platform` — skip downloading the Deckhouse Kubernetes Platform image package (`platform.tar`);
  - `--no-modules` — skip downloading module packages (`module-*.tar`);
  - `--no-security-db` — skip downloading the vulnerability scanner database package (`security.tar`);
  - `--include-module=name[@Major.Minor]` (or `-i=name[@Major.Minor]`) — download only a specific set of modules using a whitelist (and, if needed, their minimum versions). Use multiple times to add more modules to the whitelist. These flags are ignored if used with `--no-modules`.
    The following syntax options are supported for specifying module versions:
    - `module-name@1.3.0` — pulls versions with a semver ^ constraint (^1.3.0), including v1.3.0, v1.3.3, v1.4.1;
    - `module-name@~1.3.0` — pulls versions with a semver ~ constraint (>=1.3.0 <1.4.0), including only v1.3.0, v1.3.3;
    - `module-name@=v1.3.0` — pulls the exact tag v1.3.0, publishing it to all release channels;
    - `module-name@=bobV1` — pulls the exact tag "bobV1", publishing it to all release channels;
  - `--exclude-module=name` (or `-e=name`) — skip downloading a specific set of modules using a blacklist. Use multiple times to add more modules to the blacklist. Ignored if `--no-modules` or `--include-module` is used;
  - `--modules-path-suffix` — change the suffix of the module repository path in the main DKP registry. The default suffix is `/modules` (e.g., the full path to the module repository will be `registry.deckhouse.ru/deckhouse/EDITION/modules`);
  - `--since-version=X.Y` — download all DKP versions starting from the specified minor version. This option is ignored if the specified version is higher than the version on the Rock Solid update channel. Cannot be used with `--deckhouse-tag`;
  - `--deckhouse-tag` — download only the specific DKP version (regardless of update channels). Cannot be used with `--since-version`;
  - `--gost-digest` — calculate the checksum of the final DKP image bundle using the GOST R 34.11-2012 (Streebog) algorithm. The checksum will be displayed and written to a `.tar.gostsum` file in the folder containing the image tarball;
  - `--source` — specify the source image registry address (default: `registry.deckhouse.ru/deckhouse/ee`);
    - use the `--license` parameter with a valid license key to authenticate with the official DKP image registry;
    - use the `--source-login` and `--source-password` parameters to authenticate with an external image registry;
  - `--images-bundle-chunk-size=N` — set the maximum file size (in GB) to split the image archive into. As a result, instead of one image archive, a set of `.chunk` files will be created (e.g., `d8.tar.NNNN.chunk`). To upload images from such a set, use the file name without the `.NNNN.chunk` suffix (e.g., `d8.tar` for files `d8.tar.NNNN.chunk`);
  - `--tmp-dir` — path to a directory for temporary files used during image download and upload. All processing is done in this directory. It must have enough free disk space to hold the entire image bundle. Defaults to the `.tmp` subdirectory in the image bundle directory.
  Additional configuration parameters for the `d8 mirror` command family are available as environment variables:
  - `HTTP_PROXY` / `HTTPS_PROXY` — proxy server URL for HTTP(S) requests to hosts not listed in the `$NO_PROXY` variable;
  - `NO_PROXY` — comma-separated list of hosts to exclude from proxying. Each entry can be an IP address (`1.2.3.4`), a CIDR (`1.2.3.4/8`), a domain, or a wildcard (`*`). IP addresses and domains may include a port (`1.2.3.4:80`). A domain matches itself and all subdomains. A domain starting with a `.` matches only subdomains. For example, `foo.com` matches `foo.com` and `bar.foo.com`; `.y.com` matches `x.y.com` but not `y.com`. The `*` disables proxying;
  - `SSL_CERT_FILE` — path to an SSL certificate. If set, system certificates are not used;
  - `SSL_CERT_DIR` — colon-separated list of directories to search for SSL certificate files. If set, system certificates are not used;
  - `MIRROR_BYPASS_ACCESS_CHECKS` — set this variable to `1` to disable credential validation for the registry.
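  For example, to run `d8 mirror pull` through a corporate proxy with a custom CA certificate, the environment might be prepared as follows (the proxy address and certificate path are illustrative):
  export HTTPS_PROXY='http://user:password@proxy.company.my:3128'
  export NO_PROXY='127.0.0.1,.company.my'
  export SSL_CERT_FILE='/etc/ssl/certs/corp-ca.pem'
  d8 mirror pull --license='<LICENSE_KEY>' /home/user/d8-bundle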
  Example command to download all DKP EE versions starting from version 1.59 (specify your license key):
  d8 mirror pull \
    --license='<LICENSE_KEY>' \
    --since-version=1.59 \
    /home/user/d8-bundle
  Example command to download the current DKP SE versions (specify your license key):
  d8 mirror pull \
    --license='<LICENSE_KEY>' \
    --source='registry.deckhouse.ru/deckhouse/se' \
    /home/user/d8-bundle
  Example command to download DKP images from an external image registry:
  d8 mirror pull \
    --source='corp.company.com:5000/sys/deckhouse' \
    --source-login='<USER>' --source-password='<PASSWORD>' \
    /home/user/d8-bundle
  Example command to download the vulnerability scanner database package:
  d8 mirror pull \
    --license='<LICENSE_KEY>' \
    --no-platform --no-modules \
    /home/user/d8-bundle
  Example command to download all available additional module packages:
  d8 mirror pull \
    --license='<LICENSE_KEY>' \
    --no-platform --no-security-db \
    /home/user/d8-bundle
  Example command to download the `stronghold` and `secrets-store-integration` module packages:
  d8 mirror pull \
    --license='<LICENSE_KEY>' \
    --no-platform --no-security-db \
    --include-module stronghold \
    --include-module secrets-store-integration \
    /home/user/d8-bundle
  Example command to download the `stronghold` module with a semver `^` constraint from version 1.2.0:
  d8 mirror pull \
    --license='<LICENSE_KEY>' \
    --no-platform --no-security-db \
    --include-module stronghold@1.2.0 \
    /home/user/d8-bundle
  Example command to download the `secrets-store-integration` module with a semver `~` constraint from version 1.1.0:
  d8 mirror pull \
    --license='<LICENSE_KEY>' \
    --no-platform --no-security-db \
    --include-module secrets-store-integration@~1.1.0 \
    /home/user/d8-bundle
  Example command to download the exact version 1.2.5 of the `stronghold` module and publish it to all release channels:
  d8 mirror pull \
    --license='<LICENSE_KEY>' \
    --no-platform --no-security-db \
    --include-module stronghold@=v1.2.5 \
    /home/user/d8-bundle
- Copy the downloaded DKP image bundle and install the Deckhouse CLI on the host that has access to the target image registry.
- Upload the DKP images to the registry using the `d8 mirror push` command.
  The `d8 mirror push` command uploads images from all packages located in the specified directory. If you only want to push specific packages, you can either run the command separately for each `.tar` image bundle by specifying the direct path to it (see the example below), or temporarily remove the `.tar` extension from unwanted files or move them out of the directory.
  Example command to upload image packages from the `/mnt/MEDIA/d8-images` directory (provide authentication data if required):
  d8 mirror push /mnt/MEDIA/d8-images 'corp.company.com:5000/sys/deckhouse' \
    --registry-login='<USER>' --registry-password='<PASSWORD>'
  Before uploading the images, make sure that the target path in the image registry exists (in the example above, `/sys/deckhouse`) and that the account used has write permissions.
  If you're using Harbor, you won't be able to upload images to the root of a project. Use a dedicated repository within the project to store DKP images.
- After uploading the images to the registry, you can proceed with installing DKP. Use the Quick Start Guide.
  When running the installer, use the address of your own image registry (where the images were uploaded earlier) instead of the official public DKP registry. For the example above, the installer image address will be `corp.company.com:5000/sys/deckhouse/install:stable` instead of `registry.deckhouse.ru/deckhouse/ee/install:stable`.
  In the InitConfiguration resource during installation, also use your registry address and authorization data (the imagesRepo and registryDockerCfg parameters; see Step 3 of the Quick Start Guide).
Creating a cluster and running DKP without using update channels
This method should only be used if your isolated private registry does not contain images with update channel metadata.
If you need to install DKP with automatic updates disabled:
- Use the installer image tag corresponding to the desired version. For example, to install release `v1.44.3`, use the image `your.private.registry.com/deckhouse/install:v1.44.3`.
. - Specify the appropriate version number in the deckhouse.devBranch parameter of the InitConfiguration resource.
Do not specify the deckhouse.releaseChannel parameter in the InitConfiguration resource.
If you want to disable automatic updates in an already running Deckhouse installation (including patch updates), remove the releaseChannel parameter from the `deckhouse` module configuration.
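For example, if the `deckhouse` module configuration looks like the sketch below (your settings may differ), removing the marked line disables automatic updates:
apiVersion: deckhouse.io/v1alpha1
kind: ModuleConfig
metadata:
  name: deckhouse
spec:
  version: 1
  settings:
    bundle: Default
    releaseChannel: Stable # Remove this parameter to disable automatic updates.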
Using a proxy server
Available in the following editions: BE, SE, SE+, EE, CSE Lite (1.67), CSE Pro (1.67).
To configure DKP to use a proxy, use the proxy parameter in the ClusterConfiguration resource.
Example:
apiVersion: deckhouse.io/v1
kind: ClusterConfiguration
clusterType: Cloud
cloud:
provider: OpenStack
prefix: main
podSubnetCIDR: 10.111.0.0/16
serviceSubnetCIDR: 10.222.0.0/16
kubernetesVersion: "Automatic"
cri: "Containerd"
clusterDomain: "cluster.local"
proxy:
httpProxy: "http://user:password@proxy.company.my:3128"
httpsProxy: "https://user:password@proxy.company.my:8443"
Automatic proxy variable loading for users in CLI
Starting from DKP v1.67, the `/etc/profile.d/d8-system-proxy.sh` file is no longer configured to set proxy variables for users.
To automatically load proxy variables for users in CLI, use the NodeGroupConfiguration resource:
apiVersion: deckhouse.io/v1alpha1
kind: NodeGroupConfiguration
metadata:
  name: profile-proxy.sh
spec:
  bundles:
    - '*'
  nodeGroups:
    - '*'
  weight: 99
  content: |
    {{- if .proxy }}
      {{- if .proxy.httpProxy }}
    export HTTP_PROXY={{ .proxy.httpProxy | quote }}
    export http_proxy=${HTTP_PROXY}
      {{- end }}
      {{- if .proxy.httpsProxy }}
    export HTTPS_PROXY={{ .proxy.httpsProxy | quote }}
    export https_proxy=${HTTPS_PROXY}
      {{- end }}
      {{- if .proxy.noProxy }}
    export NO_PROXY={{ .proxy.noProxy | join "," | quote }}
    export no_proxy=${NO_PROXY}
      {{- end }}
    bb-sync-file /etc/profile.d/profile-proxy.sh - << EOF
    export HTTP_PROXY=${HTTP_PROXY}
    export http_proxy=${HTTP_PROXY}
    export HTTPS_PROXY=${HTTPS_PROXY}
    export https_proxy=${HTTPS_PROXY}
    export NO_PROXY=${NO_PROXY}
    export no_proxy=${NO_PROXY}
    EOF
    {{- else }}
    rm -rf /etc/profile.d/profile-proxy.sh
    {{- end }}
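Assuming the manifest above is saved as profile-proxy.yaml (the file name is arbitrary), apply it like any other Kubernetes resource:
kubectl apply -f profile-proxy.yaml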