Is your feature request related to a problem?
My parent cluster, which hosts vclusters, is an EKS cluster and requires AWS credentials for access (the kubeconfig executes the aws CLI to authenticate). However, the background proxy Docker container lacks the AWS account configuration and the AWS CLI. Additionally, it needs a way to add extra volume mounts.
Which solution do you suggest?
Suggested Solutions
1. Add AWS CLI:
a. Either include the AWS CLI in the vcluster-cli Docker image by default,
b. or provide a flag to indicate that the vcluster runs inside an EKS cluster on AWS, so the background proxy can start a container from a non-default Docker image that includes the AWS CLI.
2. Extend to Other Cloud Providers:
• The flag from point 1.b could be extended to support other cloud providers as well.
3. Environment Variables:
• Allow setting environment variables inside the Docker container via the CLI.
4. Extra Volume Mounts:
• Allow configuring extra volume mounts inside the Docker container for AWS credentials and other necessary files.
The changes could be made in the following file: configure.go. A rough sketch of what this might look like is shown below.
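For illustration, here is a minimal Go sketch of what points 3 and 4 could boil down to when the CLI assembles the docker run arguments for the background proxy. Everything in it is an assumption: buildProxyDockerArgs, the extraVolumes/extraEnv parameters, and the image name are hypothetical placeholders, not the actual code in configure.go.

```go
package main

import (
	"fmt"
	"os/exec"
)

// buildProxyDockerArgs assembles a `docker run` argument list for the
// background proxy container. extraVolumes and extraEnv are hypothetical
// stand-ins for whatever configure.go would expose once extra mounts and
// environment variables become configurable through the CLI.
func buildProxyDockerArgs(image string, extraVolumes, extraEnv []string) []string {
	args := []string{"run", "-d"}
	for _, v := range extraVolumes {
		// e.g. "/home/me/.aws:/root/.aws:ro" so the container can read the AWS config
		args = append(args, "-v", v)
	}
	for _, e := range extraEnv {
		// e.g. "AWS_PROFILE=my-profile" so the aws CLI picks the right profile
		args = append(args, "-e", e)
	}
	return append(args, image)
}

func main() {
	args := buildProxyDockerArgs(
		"vcluster-cli-with-awscli", // hypothetical image that bundles the AWS CLI (point 1)
		[]string{"/home/me/.aws:/root/.aws:ro"},
		[]string{"AWS_PROFILE=my-profile"},
	)
	// Print the resulting command instead of running it; the real CLI would execute it.
	fmt.Println(exec.Command("docker", args...).String())
}
```

A pair of CLI flags (for example --proxy-volume and --proxy-env, names purely hypothetical) could feed the two slices above.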
Which alternative solutions exist?
No response
Additional context
Issue Summary
• Setup: My parent cluster (EKS Cluster) contains multiple virtual clusters (vclusters).
• Security Requirement: Access to the EKS Cluster should be restricted to users with a specific IAM role. Even if someone obtains the KUBECONFIG, they shouldn't be able to access the cluster without logging into the AWS account with the necessary role.
• KUBECONFIG: Example configuration to enforce IAM role-based access:

```yaml
apiVersion: v1
clusters:
- cluster:
    server: https://<cluster-endpoint>
    certificate-authority-data: <base64-encoded-ca-cert>
  name: eks-cluster
contexts:
- context:
    cluster: eks-cluster
    user: eks-user
  name: eks-context
current-context: eks-context
kind: Config
preferences: {}
users:
- name: eks-user
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: aws
      args:
        - "eks"
        - "get-token"
        - "--cluster-name"
        - "<cluster-name>"
```
Problem
When I start a background proxy Docker container to connect to a vcluster, it doesn't have the necessary AWS credentials, which leads to authentication failures. It is also missing the AWS CLI. I need a way to mount AWS credentials and pass environment variables (like AWS_PROFILE) into the container.
Error Messages
When running vcluster connect <vcluster-name> -n <vcluster-ns> --background-proxy, I encounter the following errors:
08:03:32 warn Error retrieving vclusters: find vcluster: Get "https://<eks-host-endpoint>/apis/apps/v1/namespaces/<vcluster-ns>/statefulsets?labelSelector=app%3Dvcluster": getting credentials: exec: executable aws not found
It looks like you are trying to use a client-go credential plugin that is not installed.
To learn more about this feature, consult the documentation available at:
https://kubernetes.io/docs/reference/access-authn-authz/authentication/#client-go-credential-plugins
08:03:34 fatal couldn't find vcluster vcluster-ns
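For context, the first error is produced by client-go's exec credential plugin: the kubeconfig above tells it to run aws eks get-token, and that executable is simply not on the PATH inside the proxy container. Below is a minimal Go illustration of that check (not vcluster or client-go code, just the same PATH lookup the plugin performs):

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The exec credential plugin first resolves the kubeconfig `command`
	// ("aws") on PATH. Inside the background proxy container this lookup
	// fails, which surfaces as "executable aws not found".
	if _, err := exec.LookPath("aws"); err != nil {
		fmt.Println("getting credentials: exec:", err)
		return
	}
	// With the AWS CLI installed and ~/.aws (or AWS_PROFILE) available,
	// the plugin would then run: aws eks get-token --cluster-name <cluster-name>
	fmt.Println("aws CLI found; the exec credential plugin can authenticate")
}
```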
Steps to Reproduce
1. Create an EKS Cluster protected by an IAM role.
2. Log in to the AWS account from your terminal with the necessary role.
3. Create a vcluster within the EKS Cluster.
4. Run vcluster connect with the --background-proxy flag:

```sh
vcluster connect <vcluster-name> -n <vcluster-ns> --background-proxy
```
Current Workaround
Simple port-forwarding works fine, but connecting through the background proxy Docker container fails because of the missing AWS credentials.
Needed Solution
I need a way to configure extra volume mounts for AWS credentials and environment variables in the background proxy Docker container. This configuration might be added to the relevant script in the vcluster repository here.