Cluster Emergency Credentials
Problem Statement
We have started using our emergency credentials controller to rotate emergency credentials for clusters.
We need a way to store and retrieve those emergency credentials. We must be able to retrieve them in an emergency in which we have lost access to our internal systems.
VSHN uses passbolt to store secrets independently of our internal systems. Passbolt uses asymmetric encryption to store secrets.
To share secrets via passbolt you need to be in possession of a private key that can decrypt the secret. Since passbolt login and decryption keys can't currently be separated or rotated, we would need to protect the private keys very carefully.
High Level Goals
- Clusters should be able to write their emergency credentials automatically and securely
- Clusters must not be able to read other clusters' emergency credentials
- Clusters shouldn't be able to read other clusters' metadata
- VSHNeers should be able to read the emergency credentials from passbolt without any VSHN infrastructure
- VSHNeers leaving the company shouldn't be able to retain access to the (refreshed) emergency credentials
Implementation
The implementation consists of three parts:
- The emergency credentials controller gets a new S3 integration that encrypts the credentials with a list of public keys and pushes them to S3.
- A pipeline in the VSHN GitLab exports the public keys of all users in the On-Call group and checks them into the commodore defaults repository.
- A utility tool downloads the encrypted credentials for a cluster from S3 and decrypts them using the passbolt private key.
Emergency Credentials Controller
The controller can be configured to use the new S3 integration, which encrypts the credentials with a list of public keys and then pushes them to S3:
tokenStores:
  - name: s3
    type: s3
    s3:
      pathTemplate: '{{ env "CLUSTER_ID" | sha256sum }}-{{ now | unixEpoch }}.gpg' (1)
      endpoint:
        bucketnames: bucket_name1, bucket_name2
        endpoint: s3.endpoint.com
        region: s3_region
        access_key_id: s3_access_key_id
        secret_access_key: s3_secret_access_key
        http_config: {}
        s3forcepathstyle: true
      encryption:
        type: gpg
        publicKeys: [] (2)
        publicKeysFile: "" (3)
(1) Sprig template used to generate the path of the uploaded credentials file. The cluster ID is SHA-256 hashed to avoid leaking cluster metadata when multiple clusters share a bucket.
(2) Inline list of public keys. The list is concatenated with the keys read from publicKeysFile.
(3) A concatenated list of public keys will be read from this file.
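As a rough sketch of what the store does with this configuration, the shell below reproduces the path template with coreutils and shows multi-recipient encryption with GnuPG. The controller itself uses GopenPGP (see the message headers in the next section); the cluster ID, token, and recipient IDs here are placeholders.

# Placeholder values for the example.
CLUSTER_ID=c-example-1234
TOKEN=some-emergency-token

# Equivalent of the pathTemplate above: hex SHA-256 of the cluster ID,
# a dash, and the current unix epoch.
path="$(printf '%s' "$CLUSTER_ID" | sha256sum | cut -d' ' -f1)-$(date +%s).gpg"

# Encrypting to several recipients (-r can be repeated) produces a single
# PGP message that any one of the matching private keys can decrypt.
printf '%s' "$TOKEN" | gpg --encrypt --armor \
    -r oncall-1@vshn.example -r oncall-2@vshn.example > "$path"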
Uploaded File Format
The uploaded file will be a JSON file with the following format:
{
  "secrets": [
    {
      "data": "-----BEGIN PGP MESSAGE-----\nVersion: GopenPGP 2.7.4\nComment: https://gopenpgp.org\n\n[...]\n-----END PGP MESSAGE-----"
    },
    {
      "data": "-----BEGIN PGP MESSAGE-----\nVersion: GopenPGP 2.7.4\nComment: https://gopenpgp.org\n\n[...]\n-----END PGP MESSAGE-----"
    }
  ]
}
The data field contains the encrypted secret.
A client should try to decrypt the secret with all available private keys until one succeeds.
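One way to implement that client loop, sketched with jq and GnuPG (assuming the file was downloaded as credentials.json and your private key is in the local keyring):

# Try each entry until one decrypts; gpg automatically tries every
# private key in the local keyring against the message.
count=$(jq '.secrets | length' credentials.json)
for i in $(seq 0 $((count - 1))); do
    jq -r ".secrets[$i].data" credentials.json | gpg --decrypt && break
done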
Public Key Sync
A scheduled pipeline in the VSHN GitLab will export the public keys of all users in the On-Call group and check them into the commodore defaults repository.
The pipeline authenticates as a passbolt user that has no access to any secrets; every passbolt user is allowed to export every other user's public key.
We currently compile all cluster repositories daily, so the public keys will be updated daily.
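A minimal sketch of such a pipeline job. export-oncall-keys is a hypothetical helper standing in for the actual passbolt key export, and the repository URL and target path are placeholders:

#!/bin/sh -e
# Hypothetical helper: authenticates as the secret-less passbolt user and
# prints the armored public keys of all members of the On-Call group.
export-oncall-keys --group On-Call > public_keys.asc

# Check the concatenated keys into the commodore defaults repository.
git clone "$COMMODORE_DEFAULTS_REPO" defaults   # URL from a CI variable (placeholder)
cp public_keys.asc defaults/oncall_public_keys.asc   # target path is made up
cd defaults
git add oncall_public_keys.asc
if ! git diff --staged --quiet; then
    git commit -m "Update On-Call public keys"
    git push
fi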
Utility Tool
A utility tool is provided to download the encrypted credentials for a cluster from S3 and decrypt them using the passbolt private key.
The tool receives the secret location and any bucket access keys from passbolt. It takes the cluster ID as a parameter and hashes and formats it to match the path template used by the emergency credentials controller. It then downloads the file from S3 and tries to decrypt it using the passbolt private key. After decryption it prints the secrets to stdout and optionally writes a kubeconfig file for the cluster.
$ cat > ~/.retrieve-emergency-credentials/config.yaml << YAML
passbolt:
  url: "https://passbolt.vshn.net"
  key: |
    ASCII ARMORED PUBLIC KEY
  privateKey: |
    ASCII ARMORED PRIVATE KEY
YAML
$ retrieve-emergency-credentials cluster-id
Token: [...]
Kubeconfig exported to /tmp/kubeconfig-XXXXXX. You can access the cluster with:
export KUBECONFIG=/tmp/kubeconfig-XXXXXX
kubectl cluster-info
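Because the tool only wraps an S3 download and a PGP decryption, the credentials stay accessible with generic tooling if the tool itself is unavailable. A hedged sketch with the AWS CLI and GnuPG; the bucket name is a placeholder, and decryption works as shown in the file format section above:

# Import your passbolt private key into a local GnuPG keyring.
gpg --import passbolt_private.asc

# Recompute the object prefix from the cluster ID (matches the pathTemplate).
prefix="$(printf '%s' "$CLUSTER_ID" | sha256sum | cut -d' ' -f1)"

# List the uploads for this cluster and fetch the newest one (bucket is a placeholder).
aws s3 ls "s3://bucket_name1/" | grep "$prefix"
aws s3 cp "s3://bucket_name1/${prefix}-<timestamp>.gpg" credentials.json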