Kubernetes Secrets Syncing #7364
Hi, yes. Is this intended to be a two-way sync, or just syncing from Vault into Kube? I was looking to write exactly this (seemingly) soonish. Just something primitive to sync one way from Vault into Kube. Our goal is to use Vault as the source of truth, but the convenience of using secrets natively with Kube makes it really appealing. So my ideal workflow is: people interact with Vault explicitly to manage secrets, and Kubernetes gets a read-only view of a subset of keys. What feedback are you looking for here? Happy to help. |
We did exactly that: we wrote our own syncer and blogged about it. Please have a look at https://github.com/postfinance/vault-kubernetes and https://itnext.io/effective-secrets-with-vault-and-kubernetes-9af5f5c04d06. It is a pragmatic approach: the blast radius is limited because secret paths can be specified, and secrets can be centrally managed in Vault. Apart from that, @sethvargo says it is a terrible idea to sync from Vault to K8s secrets, as they are inherently insecure. You need an extra KMS provider to encrypt K8s secrets at rest, for instance. I envision a standardized interface, which would be a great way to plug Vault into K8s. Maybe CSI is the right one, maybe a higher-level secrets management interface (SMI?) would be better. In this scenario, Vault would take over the role of storing and providing K8s secrets without breaking existing APIs. |
Objectively, that’s true. But I don’t think that’s a concern in all use cases. If we treat everything running in Kubernetes as trusted, it’s the same risk tolerance. We are more concerned about mutations of secrets than reads. I agree though, and in our case it’s only a subset of things anyway that falls under this bucket. Kube secrets can also be restricted with RBAC, I believe. Granted, it uses different rules than Vault, but anything can at least be restricted at that level. A CSI would be nice as well and could potentially make it work. Right now, a lot of our work relies on secrets being injected as environment variables, which AFAIK is not something that can be done with a CSI? If it can be, that’d be great. If not, we’d need to read them from disk and populate them into env vars at boot time. |
The question of course is: runtime, or at rest? The issue with syncing into Kube secrets is that it's (generally, unless you use a KMS provider) not encrypted at rest, so you've gone from storing secrets at rest in an encrypted location to an unencrypted one. |
In my case, and I believe this is the default, syncing to the filesystem in a Pod goes into tmpfs, so it isn't persisted at rest within a Pod unless you do something out of the ordinary and copy files over. edit: Oh, are you referring to how Kubernetes stores its secrets internally? I guess that's a good question. I had assumed Kubernetes encrypts the storage itself, but I never looked into it. 🤔 edit2: Looking at https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/, you can configure them to be encrypted at the application level. Our disks themselves are already encrypted at rest through Google, but this is effectively something we'll weaken by using Kube secrets unless we configure the EncryptionConfiguration on top of that. This is good information! |
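For readers who land here looking for that EncryptionConfiguration: below is a rough Go sketch that just prints the documented shape of the config (which resources to encrypt, an aescbc provider, and an identity fallback). The structs are hand-written to mirror the schema from the encrypt-data doc rather than imported from a Kubernetes package, and the key material is a placeholder.

```go
// Sketch: print the shape of an EncryptionConfiguration that makes the API
// server encrypt Secret objects before they reach etcd. The structs below are
// hand-written to mirror the documented schema (apiserver.config.k8s.io/v1),
// not imported from a Kubernetes package; the AES key is a placeholder.
package main

import (
	"fmt"

	"sigs.k8s.io/yaml"
)

type key struct {
	Name   string `json:"name"`
	Secret string `json:"secret"` // base64-encoded 32-byte key
}

type aescbc struct {
	Keys []key `json:"keys"`
}

type provider struct {
	AESCBC   *aescbc   `json:"aescbc,omitempty"`
	Identity *struct{} `json:"identity,omitempty"`
}

type resourceRule struct {
	Resources []string   `json:"resources"`
	Providers []provider `json:"providers"`
}

type encryptionConfig struct {
	APIVersion string         `json:"apiVersion"`
	Kind       string         `json:"kind"`
	Resources  []resourceRule `json:"resources"`
}

func main() {
	cfg := encryptionConfig{
		APIVersion: "apiserver.config.k8s.io/v1",
		Kind:       "EncryptionConfiguration",
		Resources: []resourceRule{{
			Resources: []string{"secrets"},
			Providers: []provider{
				// The first provider is used for writes; AES-CBC with a local key.
				{AESCBC: &aescbc{Keys: []key{{Name: "key1", Secret: "<base64-encoded 32-byte key>"}}}},
				// Identity fallback keeps previously written, unencrypted Secrets readable.
				{Identity: &struct{}{}},
			},
		}},
	}
	out, err := yaml.Marshal(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```

The identity provider at the end is what keeps already-written, unencrypted Secrets readable while they get rewritten under the new key.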
Currently we use ServiceAccounts for services to authenticate to Vault. This (via templated policies) gives services specific access to a number of paths inside Vault to manage whatever secrets they need (ssh, aws, and static secrets). Having a system running inside Kubernetes that has super-power access to Vault would be really suboptimal for us. As it stands, our services do not need to interact with Vault directly; instead they use an init container + sidecar to fetch various secrets and keep them fresh. These containers are managed by tooling, and the secrets the services desire are described in a |
I would second the route of a CSI for a number of reasons:
For applications that are not Vault-aware, a sidecar can provide the authentication to the CSI and populate secrets in the filesystem. IMHO, passing secrets in an environment variable is a dubious idea. |
Hi everyone -- you may be interested in Pentagon which was built specifically to sync vault data with k8s secrets. Full disclosure: my team and I wrote Pentagon. |
I love the idea of a vault->k8s syncer, please do it 👍
|
I agree with this. What I would like to see:
Essentially, I want k8s dealing with everything related to pod runtime with no external platform dependencies. It is nice to have all of the "inputs" to a pod statically declared in the manifest without having to jump through a ton of hoops to figure out where items within a pod are coming from (it is even clearer if you are running with a readonly root file system). Another bonus: How this can be used:
I'm not aligned with the CSI approach: it feels like it adds a ton of extra moving parts to pod runtime, with questionable value depending on how your cluster is set up. |
@jingweno and I have been looking at scenarios where secrets will need to be shared across multiple Kubernetes namespaces. In the case of two deployments, a unique namespace for each, and a shared secret (basic auth, for example), Vault would need to sync to a k8s secret in each namespace. I’m on board with leveraging k8s secrets since it’s a simple interface and it allows the community to develop against it now and add a Vault syncer later down the road. |
Currently, we are using Vault Agent and Consul Template in various scenarios as the standard tooling, including in k8s as sidecar containers to provide the required secrets and renew leases. It would be great to have a managed injection of these as init containers (to enable a clean startup of applications) and then as sidecars (to manage renewal of tokens & leases). |
Funny, I am just hitting this problem for the first time and am trying to decide how best to solve this particular issue. Here's my use case:
My preferred solution (which I was looking to write over the next month or two) would be to create a controller with two CRDs to implement the syncing solution. The first CRD would be cluster-scoped and provide the configuration to access Vault. At this point, the secrets would all be synchronized to the cluster, but they still would not trigger a new deployment rollout to ensure that the application had the correct set of credentials cached. Consequently, I envision using an annotation to indicate which secret the application pulls from the namespace. The secret would need a unique identifier that changes each time it is rotated. The controller would then update all deployments/statefulsets/etc. that reference the secret specified in the annotation with the new secret name, thus triggering a rollout. Again, this is how I'd go about solving the problem. I am not sold on the idea of using a CSI plugin, as it requires the application to be aware of changes to the file mounted into the pod (at least, that's my understanding of how CSI works). I'm not going to say it's not something that should be investigated or even implemented; I'm just saying I don't foresee it solving my use case. I'm more a fan of the k8s syncer method for those of us running legacy applications that are fairly dumb and don't really pay attention to whether the secrets change after their initial boot. |
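A rough sketch of just the rollout-trigger piece described above, assuming the secret has already been synced: hash the Secret's data into a pod-template annotation on every Deployment that opts in, which makes the Deployment controller roll new pods. The annotation keys (example.com/...) are made up for illustration, and error handling is minimal.

```go
// Sketch: trigger a Deployment rollout when a synced Secret changes by stamping
// a hash of the Secret's data into the pod template annotations. The annotation
// keys used here are illustrative only.
package main

import (
	"context"
	"crypto/sha256"
	"fmt"
	"sort"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

const (
	watchAnnotation = "example.com/vault-secret"      // which Secret a Deployment follows (hypothetical)
	revAnnotation   = "example.com/vault-secret-hash" // where the content hash is stamped (hypothetical)
)

func rolloutOnSecretChange(ctx context.Context, cs kubernetes.Interface, namespace, secretName string) error {
	secret, err := cs.CoreV1().Secrets(namespace).Get(ctx, secretName, metav1.GetOptions{})
	if err != nil {
		return err
	}

	// Hash the secret's data in a stable key order so the annotation only
	// changes when the data actually changes.
	keys := make([]string, 0, len(secret.Data))
	for k := range secret.Data {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	h := sha256.New()
	for _, k := range keys {
		h.Write([]byte(k))
		h.Write(secret.Data[k])
	}
	rev := fmt.Sprintf("%x", h.Sum(nil))

	deps, err := cs.AppsV1().Deployments(namespace).List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for i := range deps.Items {
		d := &deps.Items[i]
		if d.Annotations[watchAnnotation] != secretName {
			continue // this Deployment doesn't reference the secret
		}
		if d.Spec.Template.Annotations == nil {
			d.Spec.Template.Annotations = map[string]string{}
		}
		if d.Spec.Template.Annotations[revAnnotation] == rev {
			continue // already rolled out for this revision
		}
		// Changing the pod template triggers a new rollout.
		d.Spec.Template.Annotations[revAnnotation] = rev
		if _, err := cs.AppsV1().Deployments(namespace).Update(ctx, d, metav1.UpdateOptions{}); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := rolloutOnSecretChange(context.Background(), cs, "default", "my-synced-secret"); err != nil {
		panic(err)
	}
}
```

Stamping a content hash rather than a timestamp means a no-op sync doesn't churn pods.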
https://github.com/DaspawnW/vault-crd can already do this, but there are some key features missing. For example, anyone who can provision the Vault resource can access all secrets that Vault-CRD has access to. It would be nice if this were namespace-aware, or if one could provision different CRD controllers with specific access to Vault. It also doesn't support dynamic secrets. Allowing for multiple sync controllers that can be configured with different Vault permissions, each watching a specific set of namespaces, would be important, I think. As for Kubernetes secrets being unsafe: many still use ServiceAccounts for Vault auth, which are stored as secrets. With other auth mechanisms you will still rely on some kind of secret for identity towards Vault. One can achieve encryption at rest for secrets by using etcd with TLS and encrypted storage. |
We implemented https://github.com/tuenti/secrets-manager at Tuenti. It is basically a custom controller which reconciles a CRD called SecretDefinition.
secrets-manager uses Vault AppRole authentication to connect to Vault and currently supports the kv1 and kv2 engines. I'm looking into the possibility of implementing other Vault backends (dynamic secrets will be a killer feature) and match them as ... With secrets-manager you can "compose" secrets, so a single Kubernetes secret can map to multiple Vault path/keys, with support for base64 encoding (this is important since Kubernetes will store them in base64). |
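For readers unfamiliar with the project, here is an approximate shape of such a "composed" definition as a Go type sketch: one Kubernetes Secret assembled from several Vault path/key pairs, with an encoding hint per entry. The field names are illustrative and are not secrets-manager's actual API.

```go
// Sketch: approximate shape of a "composed" secret definition, where one
// Kubernetes Secret is assembled from several Vault path/key pairs.
// Illustrative only; NOT the actual secrets-manager schema.
package main

import (
	"encoding/json"
	"fmt"
)

// VaultDataSource points at one value inside Vault.
type VaultDataSource struct {
	Path     string `json:"path"`     // e.g. "secret/data/team-a/db"
	Key      string `json:"key"`      // key inside that KV entry
	Encoding string `json:"encoding"` // "text" or "base64" as stored in Vault
}

// ComposedSecretSpec maps Kubernetes Secret keys to Vault locations.
type ComposedSecretSpec struct {
	Name    string                     `json:"name"`    // resulting Kubernetes Secret name
	Type    string                     `json:"type"`    // e.g. "Opaque" or "kubernetes.io/tls"
	KeysMap map[string]VaultDataSource `json:"keysMap"` // Secret key -> Vault path/key
}

func main() {
	spec := ComposedSecretSpec{
		Name: "app-credentials",
		Type: "Opaque",
		KeysMap: map[string]VaultDataSource{
			"username": {Path: "secret/data/team-a/db", Key: "user", Encoding: "text"},
			"password": {Path: "secret/data/team-a/db", Key: "pass", Encoding: "text"},
			"tls.crt":  {Path: "secret/data/team-a/tls", Key: "cert", Encoding: "base64"},
		},
	}
	out, _ := json.MarshalIndent(spec, "", "  ")
	fmt.Println(string(out))
}
```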
Tools that have cluster-wide access to Vault secrets (like https://github.com/tuenti/secrets-manager) are a non-starter for us. Also, the further we go down the road of dynamic credentials (databases, aws, etc.), the more important it becomes that tooling only cycles to new credentials when the lease is about to expire; comparing dynamic credentials to the ones kept inside of Kubernetes will just cause new credentials to be constantly created, which isn't really appropriate. |
Hi @james-atwill-hs! As I said, secrets-manager uses AppRole, and we use policies to control which paths secrets-manager is allowed to read. As for dynamic secrets: it is neither implemented nor designed yet, but I think it's feasible to implement the logic to re-read the secret when its TTL is close to expiring, the same way we do token renewal. |
@fcgravalos I believe James's point is not that secrets-manager has to have permissions in Vault, but that secrets-manager needs cluster-wide access to secrets to be able to install them in all the required namespaces. I'm assuming that secrets-manager is able to install secrets in multiple namespaces. If it's not, I'd totally be willing to help get support for both namespacing and dynamic credential backends implemented. I've been wanting to dig into writing operators for a while now anyway, and I'm at a point where I need it. I'd also suggest adding support for configuring secrets-manager using a namespaced CRD so that each namespace could have its own isolated identity, allowing for more granular control over secrets synchronization into namespaces. |
Same requirement for us: each application must have its own Vault access, ensuring it can only access the secrets it is allowed to. This is also very important for Vault auditing, to identify each application's secret usage. Regarding dynamic credentials, I think leasing is the important part, e.g. databases:
Of course, pki secrets will unfortunately not work that way. |
Hey @thefirstofthe300! Glad to see you are open to contribute! We could check if there's a way a controller can reconcile objects in a given namespace but I am not sure if that would be possible tbh. |
@thefirstofthe300 unfortunately CRDs are not namespace-local objects; an option to support namespace-local CRDs was discussed in kubernetes/kubernetes#65551 and, although there was interest, it was not implemented (perhaps in the future 🙏). So, although you could install multiple secrets-managers, the CRD objects are still going to be cluster-wide. @tstoermer, the kind of requirement you are suggesting sounds like a sidecar approach; perhaps something like https://github.com/uswitch/vault-creds will work for you. A similar approach is discussed in https://www.hashicorp.com/blog/whats-next-for-vault-and-kubernetes under "Injecting Vault secrets into Pods via a sidecar". secrets-manager was designed to reconcile secrets in a more centralized way; dynamic credentials are indeed an interesting capability to add. |
I'd much rather have something that injects init/sidecar containers into pods (like https://github.com/uswitch/vault-creds / https://github.com/hootsuite/vault-ctrl-tool) than one central controller that has more access than it needs, regardless of namespace scoping. To be honest, I'd rather investigate a Secrets Provider and Secrets Provider Controller (a la Ingress / Ingress Controller); that's a huge undertaking though. |
If you install multiple secrets-managers, you can specify per secrets-manager in which namespace to look for SecretDefinition objects. And this can be enforced through RBAC. RBAC would also control in which namespaces a controller could read/write Secret objects. |
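To make the RBAC point concrete, here's a minimal sketch of a namespace-scoped Role for one controller instance, built with client-go's rbac/v1 types. The namespace, CRD group, and resource names are placeholders; a matching RoleBinding to the controller's ServiceAccount would complete the picture.

```go
// Sketch: a namespace-scoped Role for a per-namespace sync controller, built
// with client-go's rbac/v1 types. It limits the controller to reading/writing
// Secret objects (and its own CRD instances) inside one namespace; the CRD
// group/resource names are placeholders.
package main

import (
	"context"

	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	role := &rbacv1.Role{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "secrets-syncer",
			Namespace: "team-a", // the only namespace this controller instance may touch
		},
		Rules: []rbacv1.PolicyRule{
			{
				// Manage the synced Secret objects in this namespace only.
				APIGroups: []string{""},
				Resources: []string{"secrets"},
				Verbs:     []string{"get", "list", "watch", "create", "update"},
			},
			{
				// Read the (hypothetical) secret-definition CRD instances here.
				APIGroups: []string{"example.com"},
				Resources: []string{"secretdefinitions"},
				Verbs:     []string{"get", "list", "watch"},
			},
		},
	}

	if _, err := cs.RbacV1().Roles(role.Namespace).Create(context.Background(), role, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```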
@james-atwill-hs a sidecar approach is appealing, together with an admission controller to inject the sidecar into the pods (much like Istio does with its proxy sidecar). However, you still need to define which secrets your applications (pods) need and where to store them, etc., so I suppose that logic would need to live in the admission controller.
Here, I'm confused 😄 since what you are describing is what secrets-manager does: the SecretDefinitions are equivalent to Ingress objects (and are namespaced, etc.), and secrets-manager is the controller that watches these objects and updates the secrets accordingly. Ingress Controller has a concept of an ingress |
Sort of; my understanding is that secrets-manager has enough access to Vault to provide secrets for multiple pods. If there are different Vault policies in place for service1 vs service2, secrets-manager would need access to the superset. This means having a second set of policies in place in secrets-manager to prevent service2 from requesting paths that are only supposed to be accessible to service1. It also means trusting secrets-manager with all the secrets for both service1 and service2. Right now with init/sidecar, secrets come straight from Vault over TLS and are only materialized within the cgroup of the pod. The pod authenticates as itself, and the policies Vault gives it are managed inside Vault. ServiceAccount tokens are the de facto way for services to identify themselves in Kubernetes, and once we have bound service accounts it becomes even less perilous to send a ServiceAccount JWT to Vault. So, we use admission controllers and inject containers because that's all we have. What I mean by a Secrets Provider / Secrets Provider Controller is something that would limit the exposure of secrets available in Vault to all the surrounding infrastructure. Maybe it's something that visits pods and does the heavy lifting, maybe it's a Vault plugin that pushes encrypted Secrets into Kubernetes when a Kubernetes service authenticates and only that service has the decryption key. Dunno. What it's not (for us) is another single point of risk that has a superset of access to Vault. |
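For reference, the "pod authenticates as itself" flow described here is roughly the following, using the official Vault Go client and the Kubernetes auth method; the role name and secret path are placeholders.

```go
// Sketch: a pod authenticating to Vault as itself using its mounted
// ServiceAccount token and the Kubernetes auth method, then reading a secret.
// The role name and secret path are placeholders.
package main

import (
	"fmt"
	"os"

	vault "github.com/hashicorp/vault/api"
)

func main() {
	// VAULT_ADDR, VAULT_CACERT, etc. are read from the environment.
	client, err := vault.NewClient(vault.DefaultConfig())
	if err != nil {
		panic(err)
	}

	// The ServiceAccount JWT mounted into every pod.
	jwt, err := os.ReadFile("/var/run/secrets/kubernetes.io/serviceaccount/token")
	if err != nil {
		panic(err)
	}

	// Log in against the Kubernetes auth method; the Vault role binds this
	// ServiceAccount and namespace to policies that are managed inside Vault.
	login, err := client.Logical().Write("auth/kubernetes/login", map[string]interface{}{
		"role": "my-service", // placeholder role name
		"jwt":  string(jwt),
	})
	if err != nil {
		panic(err)
	}
	if login == nil || login.Auth == nil {
		panic("no auth info returned from login")
	}
	client.SetToken(login.Auth.ClientToken)

	// Read a KV v2 secret; the payload is nested under "data" for the v2 engine.
	secret, err := client.Logical().Read("secret/data/my-service/config") // placeholder path
	if err != nil {
		panic(err)
	}
	if secret == nil {
		panic("secret not found")
	}
	data, _ := secret.Data["data"].(map[string]interface{})
	fmt.Println("fetched", len(data), "keys")
}
```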
Well, if you use a sidecar, there are two options:
A vault plugin sounds appealing |
Right, I may have phrased that poorly. I understand that the CRD itself can't be namespaced: it's an extension of the API server, and consequently it isn't easy to isolate the object definition to a namespace since that's not how the API server is designed. I was referring to namespacing the Vault configuration that the controller would use. This would allow the controller to pull secrets from Vault into a namespace using an identity with an isolated set of permissions. Users who can create secret configurations in a namespace would only be able to use the identity associated with the namespace they are located in.
@james-atwill-hs But services never actually ask secrets-manager for their secrets; they pull them from the Secret objects that secrets-manager creates.
I would argue that this is no more secure than software using an identity that combines all the permissions granted to the service accounts inside that namespace. If I'm wrong about any of the below, feel free to correct me. There are a lot of pieces here and I could very easily be missing something.
Instead of granting each service account in the namespace access to Vault, why not have a set of identities that a controller can use to synchronize Vault secrets to k8s secrets? Each identity would be scoped to the superset of permissions needed for that namespace and only used when synchronizing the secrets for the secret definitions in that namespace. This provides a good UX for friendly users who just want to fetch the secrets they already have access to anyway, while keeping the effective attack surface essentially the same. |
@thefirstofthe300 regarding Vault configuration, I think this is exactly what we do when we deploy secrets-manager. We use a role whose policy rules allow a particular path in Vault. We have 10+ clusters, and every secrets-manager's Vault permissions are scoped to its cluster path in Vault. We could do the same with namespaces; we just need to make sure that if multiple secrets-managers are deployed per cluster, each one watches SecretDefinitions only in its own namespace. The ingress controller seems to be able to do it: https://kubernetes.github.io/ingress-nginx/user-guide/cli-arguments/ There's a |
I was thinking along the same lines as @Jogy with SMI (secret management interface). Could Kubernetes Secrets be rewritten with a pluggable store? It would default to etcd, but optionally use Vault through the SMI. |
@fcgravalos writes:
Containers running in a pod can share the same temporary filesystem. This is how the init container and sidecar interact with whatever service container you have. For vault-ctrl-tool, you can specify how you want your secrets output: everything from "just write out the Vault token and I'll do the rest" to regularly writing ~/.aws/credentials and filling in templated configuration files. The risk is that the secrets are "on disk" in the pod; the win is that, as a developer, if you can read a file, you can use Vault. @thefirstofthe300 writes:
With RBAC turned on, no one person should have that much power.
We believe that a ServiceAccount is the way to identify a service. So, this is where admission controllers come into play. We enforce a naming convention for service accounts and services so that service "A" cannot impersonate service "B" by including a service account token for "B".
If someone gains access to the material used to authenticate to Vault, they gain the ability to authenticate to Vault and therefore also gain access to the secrets that service has. This is true of any authentication process. If you are using dynamic credentials (which is what you should be striving for), then once the breach is detected you can quickly revoke the credentials and create new ones. If you're using static credentials, then you have a lot of manual work ahead of you. And yes, if you gain access to a node and can become root, you may be able to dump memory and sift through it to find secrets. Again, if your secrets are short-lived, your exposure is limited. @daneharrigan writes:
I agree, something like this is where we should be going. Currently Vault supports using ServiceAccount tokens to authenticate, so the scope of policies is per service account. I strongly think that's the right scope to be aiming for. Operators that have the access of multiple services (and pull a wide scope of secrets from Vault into Kubernetes) seem suboptimal to me. |
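To illustrate the init-container + sidecar layout discussed in this thread, here is a minimal pod sketch using client-go types: secrets land in an in-memory emptyDir shared only by the pod's containers, so nothing is written to the node's disk. The image names, mount path, and ServiceAccount are placeholders.

```go
// Sketch: init-container + sidecar layout with secrets materialized into an
// in-memory emptyDir (tmpfs) shared only by containers in this pod.
// Image names and paths are placeholders.
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func secretsPod() *corev1.Pod {
	secretsVol := corev1.Volume{
		Name: "vault-secrets",
		VolumeSource: corev1.VolumeSource{
			// Medium: Memory => tmpfs; secrets never touch the node's disk.
			EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
		},
	}
	mount := corev1.VolumeMount{Name: "vault-secrets", MountPath: "/var/run/secrets/vault"}

	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "my-service"},
		Spec: corev1.PodSpec{
			ServiceAccountName: "my-service", // identity used to authenticate to Vault
			Volumes:            []corev1.Volume{secretsVol},
			InitContainers: []corev1.Container{{
				// Fetches secrets once so the app starts with everything in place.
				Name:         "vault-init",
				Image:        "example/vault-fetcher:latest", // placeholder
				VolumeMounts: []corev1.VolumeMount{mount},
			}},
			Containers: []corev1.Container{
				{
					// Keeps tokens/leases fresh and rewrites files on rotation.
					Name:         "vault-sidecar",
					Image:        "example/vault-fetcher:latest", // placeholder
					VolumeMounts: []corev1.VolumeMount{mount},
				},
				{
					// The application only reads files; it never talks to Vault.
					Name:         "app",
					Image:        "example/app:latest", // placeholder
					VolumeMounts: []corev1.VolumeMount{mount},
				},
			},
		},
	}
}

func main() { _ = secretsPod() }
```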
This proposal would be extremely helpful in the "cluster per service" model which is becoming common now that it's easy to provision clusters on the various cloud providers. Bootstrapping the secrets is currently the most painful part of that process, in my opinion. Of course, there are security tradeoffs for other use cases which might make this approach not viable for them. They can use the other mechanisms instead. Thanks for all the links to tooling that already does this. I think the fact that the tooling exists in several forms shows the vault team the use cases exist. 👍 |
Just in case it sounds interesting to someone, we added a |
We currently use the following setup. Pod:
Vault Kubernetes Authenticator gets a Vault token based on a Kubernetes ServiceAccount and writes the token to Kubernetes mount 1. Consul Template takes the token and connects to Vault, fetches secrets such as database credentials and GCP service accounts, and writes those to Kubernetes mount 2. The main container picks up the secrets and uses them. One thing we experience is trouble when some pods/pets run longer than the Vault-defined max_ttl: Consul Template can then no longer renew its token, and there is no way to automatically kill the whole Kubernetes pod. So I am looking forward to an implementation that can re-authenticate using the Kubernetes service account when the max_ttl has expired. |
Talend Vault Sidecar Injector supports this use case (disclaimer: I am the author of this component). |
Looks like Vault 1.3 will do away with needing consul template as |
Oh great, thanks for pointing that out. Then I will be able to inject only one sidecar to do all the work! |
https://github.com/tuenti/secrets-manager/releases/tag/v1.0.2 |
How about the approach of a mutating webhook? It seems to allow for specifying the K8S secrets as Vault keys, while it defers the fetching of the actual secret until deployment time. I'm probably missing something here so feel free to let me know what that could be... |
First, I think this is a good and clever approach. One downside is rotation of secrets. The mutating webhook only runs when the pod is created. Dynamic credentials like TLS certs with short expiration would need an additional mechanism to handle refresh/renewal. |
We had a fair amount of negative experience and drama with the mutating webhook. Just a reminder: the mutating webhook will mutate any created pod. While in theory it's great for every pod to have its own credentials, it wasn't scalable for us, because:
Imo syncing to Kubernetes secrets sounds good (even if not the most optimal), and using a strategy like https://github.com/stakater/Reloader should solve the pod rotation issue. |
@estahn wrong joe |
FYI - we just announced a new Vault + Kubernetes integration that enables applications with no native HashiCorp Vault logic built in to leverage static and dynamic secrets sourced from Vault. I suspect this will be of interest to folks in here, since there was chat about init containers, sidecars, service accounts, etc. Blog: https://hashicorp.com/blog/injecting-vault-secrets-into-kubernetes-pods-via-a-sidecar |
This is just not true. MutatingWebhookConfiguration offers fine-grained control over which pods to mutate. |
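For what it's worth, here is a sketch of that fine-grained scoping using the admissionregistration/v1 types: a namespaceSelector so only opted-in namespaces are considered, plus an objectSelector so only labeled pods are mutated. The label keys, webhook name, and service reference are placeholders, and a real config also needs a CA bundle or URL.

```go
// Sketch: scope a mutating webhook so it only touches opted-in pods, using
// namespaceSelector and objectSelector. Label keys, webhook name, and the
// service reference are placeholders; admission settings beyond the selectors
// are kept minimal.
package main

import (
	admissionv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func webhookConfig() *admissionv1.MutatingWebhookConfiguration {
	failurePolicy := admissionv1.Ignore
	sideEffects := admissionv1.SideEffectClassNone
	path := "/mutate"

	return &admissionv1.MutatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "vault-secrets-injector"},
		Webhooks: []admissionv1.MutatingWebhook{{
			Name: "inject.vault.example.com", // placeholder
			ClientConfig: admissionv1.WebhookClientConfig{
				Service: &admissionv1.ServiceReference{
					Namespace: "vault-injector", // placeholder
					Name:      "vault-injector", // placeholder
					Path:      &path,
				},
			},
			Rules: []admissionv1.RuleWithOperations{{
				Operations: []admissionv1.OperationType{admissionv1.Create},
				Rule: admissionv1.Rule{
					APIGroups:   []string{""},
					APIVersions: []string{"v1"},
					Resources:   []string{"pods"},
				},
			}},
			// Only namespaces that opt in are considered at all.
			NamespaceSelector: &metav1.LabelSelector{
				MatchLabels: map[string]string{"vault-injection": "enabled"},
			},
			// And within those namespaces, only pods carrying this label.
			ObjectSelector: &metav1.LabelSelector{
				MatchLabels: map[string]string{"vault-inject": "true"},
			},
			FailurePolicy:           &failurePolicy,
			SideEffects:             &sideEffects,
			AdmissionReviewVersions: []string{"v1"},
		}},
	}
}

func main() { _ = webhookConfig() }
```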
I don’t recall the exact mechanism, but my primary point was about secret backends not being resilient to pod churn, especially situations with crash-looping pods which create thousands of users. We have used bank-vaults and contributed the mutating webhook to the project. Unfortunately we had to remove it due to the impact on our workload. |
I think both injection and replication are needed capabilities. Not every secret gets loaded into a pod/deployment. Here's another use case: Anthos CM, for example, has a CRD listening for RootSync objects, and the configuration requires a reference to the git credentials stored as a k8s secret. Therefore, the injector does not work. Yes, I could do an imperative step of |
|
We reserve github issues for bug reports and feature requests, which this doesn't appear to be. As such, I'm going to close this and suggest that you continue the discussion at https://discuss.hashicorp.com/c/vault/30. |
We are exploring the use case of integrating Vault with the Kubernetes Secrets mechanism via a syncer process. This syncer could periodically sync a subset of Vault secrets into Kubernetes, so that secrets stay up to date for users without them interacting with Vault directly.
We are seeking community feedback on this thread.
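For discussion purposes, here is a minimal sketch of what such a one-way syncer loop could look like in Go, assuming a KV v2 mount and an already-authenticated Vault client: read one Vault path on an interval and upsert a Kubernetes Secret with the same keys. Paths, names, and the interval are placeholders; a real implementation would also need scoped Vault policies, RBAC, and the change/lease handling discussed above.

```go
// Sketch: a minimal one-way Vault -> Kubernetes Secret syncer. Assumes a KV v2
// mount and an already-authenticated Vault client; paths, names, and the
// interval are placeholders.
package main

import (
	"context"
	"fmt"
	"time"

	vault "github.com/hashicorp/vault/api"
	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func syncOnce(ctx context.Context, vc *vault.Client, cs kubernetes.Interface, vaultPath, namespace, name string) error {
	read, err := vc.Logical().Read(vaultPath)
	if err != nil {
		return err
	}
	if read == nil {
		return fmt.Errorf("no secret at %s", vaultPath)
	}
	// KV v2 nests the payload under "data".
	payload, _ := read.Data["data"].(map[string]interface{})

	data := map[string][]byte{}
	for k, v := range payload {
		if s, ok := v.(string); ok {
			data[k] = []byte(s)
		}
	}

	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: name, Namespace: namespace},
		Type:       corev1.SecretTypeOpaque,
		Data:       data,
	}

	// Upsert: create, and fall back to an update if the Secret already exists.
	_, err = cs.CoreV1().Secrets(namespace).Create(ctx, secret, metav1.CreateOptions{})
	if apierrors.IsAlreadyExists(err) {
		existing, getErr := cs.CoreV1().Secrets(namespace).Get(ctx, name, metav1.GetOptions{})
		if getErr != nil {
			return getErr
		}
		existing.Data = data
		_, err = cs.CoreV1().Secrets(namespace).Update(ctx, existing, metav1.UpdateOptions{})
	}
	return err
}

func main() {
	vc, err := vault.NewClient(vault.DefaultConfig()) // VAULT_ADDR/VAULT_TOKEN from env
	if err != nil {
		panic(err)
	}
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	for range time.Tick(1 * time.Minute) { // placeholder interval
		if err := syncOnce(context.Background(), vc, cs,
			"secret/data/team-a/app", "team-a", "app-secrets"); err != nil {
			fmt.Println("sync failed:", err)
		}
	}
}
```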