
allow mounting extra files/folders and configurable env variables in docker container when connecting to the vcluster with background-proxy #2053

kbiyani33 opened this issue Aug 12, 2024

Is your feature request related to a problem?

My parent cluster, which hosts vclusters, runs inside an EKS cluster and requires AWS credentials for access (the kubeconfig executes the AWS CLI to authenticate). However, the background-proxy Docker container has neither the AWS CLI nor any AWS account configuration, and there is currently no way to add extra volume mounts to it.

Which solution do you suggest?

Suggested Solutions

1. Add the AWS CLI:
   a. Either include the AWS CLI in the vcluster-cli Docker image by default,
   b. or provide a flag indicating that the vcluster lives inside an EKS cluster on AWS, so the background proxy can start a container from a non-default Docker image that includes the AWS CLI.
2. Extend to other cloud providers: the flag from point 1.b could be extended to support other cloud providers as well.
3. Environment variables: allow setting environment variables inside the Docker container via the CLI.
4. Extra volume mounts: allow configuring extra volume mounts inside the Docker container for AWS credentials and other necessary files (see the CLI sketch below).

The changes could be made in the following file: configure.go.
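A rough sketch of what points 3 and 4 could look like on the command line follows. The --background-proxy-env and --background-proxy-mount flags are hypothetical names invented for illustration; vcluster does not currently support them:

```sh
# Hypothetical flags (not part of the current vcluster CLI), passing
# environment variables and extra bind mounts through to the proxy container.
vcluster connect <vcluster-name> -n <vcluster-ns> --background-proxy \
  --background-proxy-env AWS_PROFILE=<profile-name> \
  --background-proxy-mount "$HOME/.aws:/root/.aws:ro"
```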

Which alternative solutions exist?

No response

Additional context

Issue Summary

- Setup: my parent cluster (an EKS cluster) contains multiple virtual clusters (vclusters).
- Security requirement: access to the EKS cluster should be restricted to users with a specific IAM role. Even if someone obtains the KUBECONFIG, they should not be able to access the cluster without logging into the AWS account with the necessary role.
- KUBECONFIG: example configuration to enforce IAM role-based access:

```yaml
apiVersion: v1
clusters:
- cluster:
    server: https://<cluster-endpoint>
    certificate-authority-data: <base64-encoded-ca-cert>
  name: eks-cluster
contexts:
- context:
    cluster: eks-cluster
    user: eks-user
  name: eks-context
current-context: eks-context
kind: Config
preferences: {}
users:
- name: eks-user
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: aws
      args:
        - "eks"
        - "get-token"
        - "--cluster-name"
        - "<cluster-name>"
```

Problem

When I start the background-proxy Docker container to connect to a vcluster, it lacks both the AWS CLI and the necessary AWS credentials, which leads to authentication failures. I need a way to mount the AWS credentials and set environment variables (such as AWS_PROFILE) inside the container.
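To make the mechanism concrete, here is a minimal sketch of a manual docker run with the pieces the proxy container is missing; the image name is a placeholder for an image that bundles the AWS CLI, and the paths assume default AWS CLI locations:

```sh
# Minimal sketch (placeholder image and paths): start a container with the
# host's AWS credentials mounted read-only and AWS_PROFILE set, so that
# "aws eks get-token" can succeed inside it.
docker run -d \
  -v "$HOME/.aws:/root/.aws:ro" \
  -v "$HOME/.kube/config:/kube-config:ro" \
  -e AWS_PROFILE=<profile-name> \
  <proxy-image-with-aws-cli>
```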
Error Messages

When running vcluster connect <vcluster-name> -n <vcluster-ns> --background-proxy, I encounter the following errors:

```
08:03:32 warn Error retrieving vclusters: find vcluster: Get "https://<eks-host-endpoint>/apis/apps/v1/namespaces/<vcluster-ns>/statefulsets?labelSelector=app%3Dvcluster": getting credentials: exec: executable aws not found

It looks like you are trying to use a client-go credential plugin that is not installed.
To learn more about this feature, consult the documentation available at:
https://kubernetes.io/docs/reference/access-authn-authz/authentication/#client-go-credential-plugins
08:03:34 fatal couldn't find vcluster vcluster-ns
```

Steps to Reproduce

1. Create an EKS cluster protected by an IAM role.
2. Log in to the AWS account from your terminal with the necessary role.
3. Create a vcluster within the EKS cluster.
4. Run vcluster connect with the --background-proxy flag:

```sh
vcluster connect <vcluster-name> -n <vcluster-ns> --background-proxy
```

Current Workaround

Connecting via simple port-forwarding works fine; only the background-proxy Docker container fails, due to the missing AWS CLI and credentials.
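For comparison, the working invocation is the same command without the flag, since plain port-forwarding runs on the host, where the AWS CLI and credentials are available:

```sh
# Works today: port-forwarding executes the aws exec plugin on the host.
vcluster connect <vcluster-name> -n <vcluster-ns>
```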
Needed Solution

I need a way to configure extra volume mounts for AWS credentials and environment variables in the background proxy Docker container. This configuration could be added to the relevant file in the vcluster repository (configure.go, referenced above).
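Until such a feature exists, one conceivable direction for point 1.b above is a proxy image that bundles the AWS CLI. The base image name below is a placeholder, the sketch assumes an Alpine-based image, and vcluster currently offers no supported hook for swapping in a custom proxy image:

```sh
# Hedged sketch: build a proxy image that includes the AWS CLI.
# <background-proxy-base-image> is a placeholder; "apk" assumes an Alpine base.
cat > Dockerfile <<'EOF'
FROM <background-proxy-base-image>
RUN apk add --no-cache aws-cli
EOF
docker build -t background-proxy-aws .
```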
