## Configuring the module to work with Deckhouse Stronghold

To automatically configure the secrets-store-integration module to work with Deckhouse Stronghold, enable the Stronghold module beforehand.

Next, apply the ModuleConfig:
```yaml
apiVersion: deckhouse.io/v1alpha1
kind: ModuleConfig
metadata:
  name: secrets-store-integration
spec:
  enabled: true
  version: 1
```
The `connectionConfiguration` parameter is optional and is set to `DiscoverLocalStronghold` by default.
## Configuring the module to work with the external secret store
The module requires a pre-configured secret vault compatible with HashiCorp Vault. An authentication path must be preconfigured in the vault. An example of how to configure the secret vault is provided in Setting up the test environment.
To ensure that each API request is encrypted, delivered to, and answered by the correct recipient, a valid public Certificate Authority certificate for the CA used by the secret store is required. The `caCert` parameter in the module configuration must contain such a CA certificate in PEM format.
The following is an example module configuration for a Vault-compatible secret store running at `secretstoreexample.com` on the standard TLS port (443). Note that you will need to replace the parameter values in the configuration with the ones that match your environment.
```yaml
apiVersion: deckhouse.io/v1alpha1
kind: ModuleConfig
metadata:
  name: secrets-store-integration
spec:
  version: 1
  enabled: true
  settings:
    connection:
      url: "https://secretstoreexample.com"
      authPath: "main-kube"
      caCert: |
        -----BEGIN CERTIFICATE-----
        MIIFoTCCA4mgAwIBAgIUX9kFz7OxlBlALMEj8WsegZloXTowDQYJKoZIhvcNAQEL
        ................................................................
        WoR9b11eYfyrnKCYoSqBoi2dwkCkV1a0GN9vStwiBnKnAmV3B8B5yMnSjmp+42gt
        o2SYzqM=
        -----END CERTIFICATE-----
```
It is strongly recommended to set the `caCert` parameter. Otherwise, the module will use the system CA certificates.
## Setting up the test environment

First of all, you will need a root (or similar) token and the Stronghold address. You can obtain such a root token while initializing a new secrets store.

All subsequent commands assume that these settings are specified in environment variables:
```shell
export VAULT_TOKEN=xxxxxxxxxxx
export VAULT_ADDR=https://secretstoreexample.com
```
This guide covers two ways to interact with the store:

- using the console version of Stronghold (see Get stronghold cli);
- using curl to make direct requests to the secrets store API.
Before proceeding with the secret injection instructions in the examples below, do the following:

- Create a kv2-type secret at `demo-kv/myapp-secret` in Stronghold and store the `DB_USER` and `DB_PASS` keys there.
- If necessary, add an authentication path (`authPath`) for authentication and authorization in Stronghold via the Kubernetes API of the remote cluster.
- Create a policy named `myapp-ro-policy` in Stronghold that allows reading secrets from `demo-kv/myapp-secret`.
- Create a `myapp-role` role in Stronghold for the `myapp-sa` service account in the `myapp-namespace` namespace and bind the policy created earlier to it.
- Create a `myapp-namespace` namespace in the cluster.
- Create a `myapp-sa` service account in the created namespace.
Example commands to set up the environment:
- Enable and create the Key-Value store:

  ```shell
  stronghold secrets enable -path=demo-kv -version=2 kv
  ```

  The same command as a curl HTTP request:

  ```shell
  curl \
    --header "X-Vault-Token: ${VAULT_TOKEN}" \
    --request POST \
    --data '{"type":"kv","options":{"version":"2"}}' \
    ${VAULT_ADDR}/v1/sys/mounts/demo-kv
  ```
- Set the database username and password as the value of the secret:

  ```shell
  stronghold kv put demo-kv/myapp-secret DB_USER="username" DB_PASS="secret-password"
  ```

  The curl equivalent of the above command:

  ```shell
  curl \
    --header "X-Vault-Token: ${VAULT_TOKEN}" \
    --request PUT \
    --data '{"data":{"DB_USER":"username","DB_PASS":"secret-password"}}' \
    ${VAULT_ADDR}/v1/demo-kv/data/myapp-secret
  ```
- Double-check that the password has been saved successfully:

  ```shell
  stronghold kv get demo-kv/myapp-secret
  ```

  The curl equivalent of the above command:

  ```shell
  curl \
    --header "X-Vault-Token: ${VAULT_TOKEN}" \
    ${VAULT_ADDR}/v1/demo-kv/data/myapp-secret
  ```
- By default, the method of authentication in Stronghold via the Kubernetes API of the cluster on which Stronghold itself is running is enabled and configured under the name `kubernetes_local`. If you want to configure access from remote clusters, set the authentication path (`authPath`) and enable authentication and authorization in Stronghold via the Kubernetes API for each cluster:

  ```shell
  stronghold auth enable -path=remote-kube-1 kubernetes
  ```

  The curl equivalent of the above command:

  ```shell
  curl \
    --header "X-Vault-Token: ${VAULT_TOKEN}" \
    --request POST \
    --data '{"type":"kubernetes"}' \
    ${VAULT_ADDR}/v1/sys/auth/remote-kube-1
  ```
- Set the Kubernetes API address for each cluster (in this case, the Kubernetes API server address):

  ```shell
  stronghold write auth/remote-kube-1/config \
    kubernetes_host="https://api.kube.my-deckhouse.com"
  ```

  The curl equivalent of the above command:

  ```shell
  curl \
    --header "X-Vault-Token: ${VAULT_TOKEN}" \
    --request PUT \
    --data '{"kubernetes_host":"https://api.kube.my-deckhouse.com"}' \
    ${VAULT_ADDR}/v1/auth/remote-kube-1/config
  ```
- Create a policy in Stronghold called `myapp-ro-policy` that allows reading of the `myapp-secret` secret:

  ```shell
  stronghold policy write myapp-ro-policy - <<EOF
  path "demo-kv/data/myapp-secret" {
    capabilities = ["read"]
  }
  EOF
  ```

  The curl equivalent of the above command:

  ```shell
  curl \
    --header "X-Vault-Token: ${VAULT_TOKEN}" \
    --request PUT \
    --data '{"policy":"path \"demo-kv/data/myapp-secret\" {\n  capabilities = [\"read\"]\n}\n"}' \
    ${VAULT_ADDR}/v1/sys/policies/acl/myapp-ro-policy
  ```
- Create a role and bind it to the `myapp-sa` ServiceAccount in the `myapp-namespace` namespace and to the `myapp-ro-policy` policy:

  Important! In addition to the Stronghold-side settings, you must configure the authorization permissions of the ServiceAccount used in the Kubernetes cluster. See the "How to allow a ServiceAccount to log in to Stronghold?" section for details.

  ```shell
  stronghold write auth/kubernetes_local/role/myapp-role \
    bound_service_account_names=myapp-sa \
    bound_service_account_namespaces=myapp-namespace \
    policies=myapp-ro-policy \
    ttl=10m
  ```

  The curl equivalent of the above command:

  ```shell
  curl \
    --header "X-Vault-Token: ${VAULT_TOKEN}" \
    --request PUT \
    --data '{"bound_service_account_names":"myapp-sa","bound_service_account_namespaces":"myapp-namespace","policies":"myapp-ro-policy","ttl":"10m"}' \
    ${VAULT_ADDR}/v1/auth/kubernetes_local/role/myapp-role
  ```
- Repeat the same for the rest of the clusters, specifying a different authentication path:

  ```shell
  stronghold write auth/remote-kube-1/role/myapp-role \
    bound_service_account_names=myapp-sa \
    bound_service_account_namespaces=myapp-namespace \
    policies=myapp-ro-policy \
    ttl=10m
  ```

  The curl equivalent of the above command:

  ```shell
  curl \
    --header "X-Vault-Token: ${VAULT_TOKEN}" \
    --request PUT \
    --data '{"bound_service_account_names":"myapp-sa","bound_service_account_namespaces":"myapp-namespace","policies":"myapp-ro-policy","ttl":"10m"}' \
    ${VAULT_ADDR}/v1/auth/remote-kube-1/role/myapp-role
  ```
  Important! The recommended TTL of the Kubernetes token is 10m.

  These settings allow any pod within the `myapp-namespace` namespace in both Kubernetes clusters that uses the `myapp-sa` ServiceAccount to authenticate, get authorized, and read secrets from Stronghold according to the `myapp-ro-policy` policy.
- Create the namespace and then the ServiceAccount in it:

  ```shell
  kubectl create namespace myapp-namespace
  kubectl -n myapp-namespace create serviceaccount myapp-sa
  ```
## How to allow a ServiceAccount to log in to Stronghold?

To log in to Stronghold, a Kubernetes pod uses a token generated for its ServiceAccount. In order for Stronghold to be able to check the validity of the ServiceAccount data provided by the service, Stronghold must have permission to `get`, `list`, and `watch` the `tokenreviews.authentication.k8s.io` and `subjectaccessreviews.authorization.k8s.io` endpoints. You can also use the `system:auth-delegator` clusterRole for this.
Stronghold can use different credentials to make requests to the Kubernetes API:

- The token of the application that is trying to log in to Stronghold. In this case, each service that logs in to Stronghold must have the `system:auth-delegator` clusterRole (or the API permissions listed above) granted to the ServiceAccount it uses.
- A static token created specifically for a Stronghold ServiceAccount that has the necessary permissions. Setting up Stronghold for this case is described in detail in the Vault documentation.
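For the first option, the binding can look like the following sketch (the ClusterRoleBinding name is an assumption for illustration; the ServiceAccount name and namespace are the example values used throughout this guide):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: myapp-sa-auth-delegator   # assumed name, pick your own
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: myapp-sa
  namespace: myapp-namespace
```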
## Injecting environment variables

### How it works

When the module is enabled, a mutating webhook becomes available in the cluster. If a pod has the `secrets-store.deckhouse.io/role` annotation, the webhook modifies the pod manifest by adding an injector. An init container is added to the pod; its mission is to copy a statically compiled injector binary from a service image into a temporary directory shared by all containers in the pod. In the other containers, the original startup commands are replaced with a command that starts the injector. The injector fetches the required data from a Vault-compatible store using the application's service account, sets these variables in the process environment, and then issues an execve system call, invoking the original command.

If a container has no startup command in the pod manifest, the image manifest is retrieved from the image registry, and the command is taken from it. The credentials from `imagePullSecrets` specified in the pod manifest are used to retrieve the manifest from a private image registry.
The following annotations are available to modify the injector behavior:

| Annotation | Default value | Function |
|---|---|---|
| `secrets-store.deckhouse.io/role` | | Sets the role to be used to connect to the secret store |
| `secrets-store.deckhouse.io/env-from-path` | | Specifies the path to the secret in the vault to retrieve all keys from and add them to the environment |
| `secrets-store.deckhouse.io/ignore-missing-secrets` | `false` | Runs the original application if an attempt to retrieve a secret from the store fails |
| `secrets-store.deckhouse.io/client-timeout` | `10s` | Timeout to use for secrets retrieval |
| `secrets-store.deckhouse.io/mutate-probes` | `false` | Injects environment variables into the probes |
| `secrets-store.deckhouse.io/log-level` | `info` | Logging level |
| `secrets-store.deckhouse.io/enable-json-log` | `false` | Log format (string or JSON) |
The injector allows you to specify env templates instead of values in the pod manifests. At container startup, they are replaced with the values from the store.

For example, here is how you can retrieve the `DB_PASS` key from the kv2 secret at `demo-kv/myapp-secret` in the Vault-compatible store:
```yaml
env:
- name: PASSWORD
  value: secrets-store:demo-kv/data/myapp-secret#DB_PASS
```
The example below retrieves version `4` of the `DB_PASS` key from the kv2 secret at `demo-kv/myapp-secret`:
```yaml
env:
- name: PASSWORD
  value: secrets-store:demo-kv/data/myapp-secret#DB_PASS#4
```
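The two examples above follow the template format `secrets-store:<path>#<key>[#<version>]`. A small illustrative parser (not the module's actual code) shows how such a reference decomposes:

```python
def parse_secret_ref(ref: str):
    """Split a 'secrets-store:<path>#<key>[#<version>]' reference into parts.

    Illustrative only: this mirrors the template format described in the
    documentation, not the module's actual implementation.
    """
    prefix = "secrets-store:"
    if not ref.startswith(prefix):
        return None  # a plain value, not a template
    path, _, rest = ref[len(prefix):].partition("#")
    key, _, version = rest.partition("#")
    return path, key, version or None

print(parse_secret_ref("secrets-store:demo-kv/data/myapp-secret#DB_PASS#4"))
# -> ('demo-kv/data/myapp-secret', 'DB_PASS', '4')
```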
The template can also be stored in a ConfigMap or a Secret and hooked up using `envFrom`:

```yaml
envFrom:
- secretRef:
    name: app-secret-env
- configMapRef:
    name: app-env
```
The actual secrets from the Vault-compatible store will be injected at the application startup; the Secret and ConfigMap will only contain the templates.
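For example, a Secret carrying only templates might look like the following sketch (the name `app-secret-env` comes from the `envFrom` example above; the key name is an assumption for illustration):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: app-secret-env
  namespace: myapp-namespace
stringData:
  # Only the template is stored here; the real value is fetched at startup.
  DB_PASS: secrets-store:demo-kv/data/myapp-secret#DB_PASS
```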
### Setting environment variables by specifying the path to the secret in the vault to retrieve all keys from

The following is the specification of a pod named `myapp1`. In it, all values are retrieved from the store at the `demo-kv/data/myapp-secret` path and exposed as environment variables:
```yaml
kind: Pod
apiVersion: v1
metadata:
  name: myapp1
  namespace: myapp-namespace
  annotations:
    secrets-store.deckhouse.io/role: "myapp-role"
    secrets-store.deckhouse.io/env-from-path: demo-kv/data/myapp-secret
spec:
  serviceAccountName: myapp-sa
  containers:
  - image: alpine:3.20
    name: myapp
    command:
    - sh
    - -c
    - while printenv; do sleep 5; done
```
Apply it:

```shell
kubectl create --filename myapp1.yaml
```

Check the pod logs after it has started successfully. You should see all the values from `demo-kv/data/myapp-secret`:

```shell
kubectl -n myapp-namespace logs myapp1
```

Delete the pod:

```shell
kubectl -n myapp-namespace delete pod myapp1 --force
```
### Explicitly specifying the values to be retrieved from the vault and used as environment variables

Below is the spec of a test pod named `myapp2`. The pod will retrieve the required values from the vault according to the templates and turn them into environment variables:
```yaml
kind: Pod
apiVersion: v1
metadata:
  name: myapp2
  namespace: myapp-namespace
  annotations:
    secrets-store.deckhouse.io/role: "myapp-role"
spec:
  serviceAccountName: myapp-sa
  containers:
  - image: alpine:3.20
    name: myapp
    env:
    - name: DB_USER
      value: secrets-store:demo-kv/data/myapp-secret#DB_USER
    - name: DB_PASS
      value: secrets-store:demo-kv/data/myapp-secret#DB_PASS
    command:
    - sh
    - -c
    - while printenv; do sleep 5; done
```
Apply it:

```shell
kubectl create --filename myapp2.yaml
```

Check the pod logs after it has started successfully. You should see the values from `demo-kv/data/myapp-secret` matching those in the pod specification:

```shell
kubectl -n myapp-namespace logs myapp2
```

Delete the pod:

```shell
kubectl -n myapp-namespace delete pod myapp2 --force
```
## Retrieving a secret from the vault and mounting it as a file in a container

Use the `SecretsStoreImport` CustomResource to deliver secrets to the application.

This example uses the `myapp-sa` ServiceAccount and the `myapp-namespace` namespace created in the Setting up the test environment step.

Create a SecretsStoreImport CustomResource named `myapp-ssi` in the cluster:
```yaml
apiVersion: deckhouse.io/v1alpha1
kind: SecretsStoreImport
metadata:
  name: myapp-ssi
  namespace: myapp-namespace
spec:
  type: CSI
  role: myapp-role
  files:
  - name: "db-password"
    source:
      path: "demo-kv/data/myapp-secret"
      key: "DB_PASS"
```
Create a test pod in the cluster named `myapp3`. It will retrieve the required values from the vault and mount them as a file:
```yaml
kind: Pod
apiVersion: v1
metadata:
  name: myapp3
  namespace: myapp-namespace
spec:
  serviceAccountName: myapp-sa
  containers:
  - image: alpine:3.20
    name: backend
    command:
    - sh
    - -c
    - while cat /mnt/secrets/db-password; do echo; sleep 5; done
    volumeMounts:
    - name: secrets
      mountPath: "/mnt/secrets"
  volumes:
  - name: secrets
    csi:
      driver: secrets-store.csi.deckhouse.io
      volumeAttributes:
        secretsStoreImport: "myapp-ssi"
```
Once these resources have been applied, a pod will be created, and a container named `backend` will be started in it. The container's filesystem will have a `/mnt/secrets` directory with the `secrets` volume mounted to it. The directory will contain a `db-password` file with the database password (`DB_PASS`) from the Stronghold key-value store.
Check the pod logs after it has started successfully (you should see the contents of the `/mnt/secrets/db-password` file):

```shell
kubectl -n myapp-namespace logs myapp3
```

Delete the pod:

```shell
kubectl -n myapp-namespace delete pod myapp3 --force
```
## The autorotation feature

The autorotation feature of the secrets-store-integration module is enabled by default. Every two minutes, the module polls Stronghold and synchronizes the secrets in the mounted file if they have changed.

There are two ways to keep track of changes to the secret file in a pod. The first is to watch the modification time (mtime) of the mounted file and react when it changes. The second is to use the inotify API, which is part of the Linux kernel and provides a mechanism for subscribing to file system events. Once a change is detected, there are many ways to respond to it, depending on the application architecture and the programming language used. The simplest one is to force Kubernetes to restart the pod by failing the liveness probe.
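A minimal sketch of the mtime-based approach (the class name is an assumption; the polling loop and restart logic are left to the application):

```python
import os


class MtimeWatcher:
    """Remembers a file's modification time; poll() reports whether it changed."""

    def __init__(self, path):
        self.path = path
        self._last = os.stat(path).st_mtime

    def poll(self):
        """Return True if the file was modified since the previous call."""
        current = os.stat(self.path).st_mtime
        changed = current != self._last
        self._last = current
        return changed


# Example usage: check the mounted secret file periodically.
# watcher = MtimeWatcher("/mnt/secrets/db-password")
# if watcher.poll(): reload the credentials or fail the liveness probe
```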
Here is how you can use inotify in a Python application leveraging the `inotify` Python package:

```python
#!/usr/bin/python3

import inotify.adapters


def _main():
    i = inotify.adapters.Inotify()
    i.add_watch('/mnt/secrets-store/db-password')

    for event in i.event_gen(yield_nones=False):
        (_, type_names, path, filename) = event
        if 'IN_MODIFY' in type_names:
            print("file modified")


if __name__ == '__main__':
    _main()
```
Sample code to detect whether a password has been changed within a Go application using the `inotify` Go package:

```go
watcher, err := inotify.NewWatcher()
if err != nil {
    log.Fatal(err)
}

err = watcher.Watch("/mnt/secrets-store/db-password")
if err != nil {
    log.Fatal(err)
}

for {
    select {
    case ev := <-watcher.Event:
        if ev.Mask&inotify.InModify != 0 {
            log.Println("file modified")
        }
    case err := <-watcher.Error:
        log.Println("error:", err)
    }
}
```
### Secret rotation limitations

A container that uses a `subPath` volume mount will not receive secret updates when the secret is rotated:
```yaml
volumeMounts:
- mountPath: /app/settings.ini
  name: app-config
  subPath: settings.ini
...
volumes:
- name: app-config
  csi:
    driver: secrets-store.csi.deckhouse.io
    volumeAttributes:
      secretsStoreImport: "python-backend"
```
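A common workaround (a general Kubernetes pattern, not a feature specific to this module) is to mount the whole directory instead of using `subPath` and point the application at the file inside it:

```yaml
volumeMounts:
- mountPath: /app/config   # the application reads /app/config/settings.ini
  name: app-config
...
volumes:
- name: app-config
  csi:
    driver: secrets-store.csi.deckhouse.io
    volumeAttributes:
      secretsStoreImport: "python-backend"
```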
## Get stronghold cli

On the cluster's master node, run the following commands as `root`:

```shell
mkdir -p $HOME/bin
cp /proc/$(pidof stronghold)/root/usr/bin/stronghold $HOME/bin/ && chmod a+x $HOME/bin/stronghold
export PATH=$PATH:$HOME/bin
```

As a result, the `stronghold` command is ready to be used.