
Add NodePoolGroupLimit CRD for limits that span NodePools #1747

Open
JacobHenner opened this issue Oct 11, 2024 · 4 comments
Labels
kind/feature Categorizes issue or PR as related to a new feature.

Comments

@JacobHenner

Description

What problem are you trying to solve?

Today, limits can only be specified on individual NodePools. While this is fine for simple situations, it is insufficient when multiple NodePools comprise a logical grouping of compute that should share a limit. This happens most often when variations in NodePool properties (beyond their requirements) mandate the use of multiple NodePools that are nonetheless related in some way relevant to limits (e.g. same department, team, application, or budget line item).

For example, an organization might group limits by team. A team might require nodes labelled in two distinct ways, necessitating two NodePools. Splitting the team's limit evenly between the NodePools might not be sufficient if the balance of nodes between them varies over time.

I propose a NodePoolGroupLimit CRD (or a similarly appropriate name) that would allow a defined limit to apply to the NodePools chosen by a selector. If multiple NodePoolGroupLimit objects select the same NodePool, the most restrictive limit should take precedence.

It might look something like this:

apiVersion: karpenter.sh/v1
kind: NodePoolGroupLimit
metadata:
  name: frontend-bingo
  labels:
    team: frontend
    service: bingo
spec:
  selector:
    # label selector for NodePool labels
    team: frontend
    service: bingo
  limits:
    cpu: "100"
    memory: 128Gi
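
To illustrate the proposed precedence rule, consider a hypothetical second object (the name and values here are illustrative, not part of the proposal) whose selector overlaps the one above:

apiVersion: karpenter.sh/v1
kind: NodePoolGroupLimit
metadata:
  name: frontend-all
spec:
  selector:
    # matches every NodePool labelled team: frontend, including those
    # matched by frontend-bingo above
    team: frontend
  limits:
    cpu: "200"
    memory: 64Gi

Under one plausible reading of "most restrictive wins", a NodePool selected by both objects would be capped at the minimum of each resource independently: cpu: 100 (from frontend-bingo) and memory: 64Gi (from frontend-all).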

How important is this feature to you?

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment
@JacobHenner JacobHenner added the kind/feature Categorizes issue or PR as related to a new feature. label Oct 11, 2024
@k8s-ci-robot k8s-ci-robot added the needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. label Oct 11, 2024
@k8s-ci-robot
Contributor

This issue is currently awaiting triage.

If Karpenter contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@njtran
Contributor

njtran commented Oct 14, 2024

Based on your request, it seems like it's common for teams to have multiple NodePools, with overarching org-wide constraints across the cluster. It also seems like you don't necessarily want a global limit across the whole cluster; you want something in between.

Are you willing to open up an RFC to talk about your proposed solution and alternatives to the solution that you've explored?

@njtran njtran removed the needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. label Oct 14, 2024
@JacobHenner
Author

Based on your request, it seems like it's common for teams to have multiple NodePools, with overarching org-wide constraints across the cluster.

Not quite - in my case there are team-level constraints that need to apply across the multiple NodePools belonging to each team. A single NodePool per team would be insufficient, as teams require several distinct configurations that cannot be expressed using a single NodePool (see the sketch at the end of this comment).

It also seems like you don't necessarily want a global limit across the whole cluster; you want something in between.

Correct

Are you willing to open up an RFC to talk about your proposed solution and alternatives to the solution that you've explored?

Yes
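
For concreteness, here is a hypothetical sketch of the scenario (the names, labels, and abbreviated NodePool specs are illustrative; required fields such as nodeClassRef are omitted): two differently-configured NodePools carrying the same team/service labels, so the NodePoolGroupLimit from the issue description would cap their combined usage.

apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: frontend-bingo-amd64
  labels:
    team: frontend
    service: bingo
spec:
  template:
    spec:
      requirements:
        - key: kubernetes.io/arch
          operator: In
          values: ["amd64"]
---
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: frontend-bingo-arm64
  labels:
    team: frontend
    service: bingo
spec:
  template:
    spec:
      requirements:
        - key: kubernetes.io/arch
          operator: In
          values: ["arm64"]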

@stevehipwell

@njtran as it is currently very unlikely that a single NodePool could represent even a basic group of compute (even support for AMD64 & ARM64, or for on-demand & spot, can't be handled by a single NodePool without significant effort), I'd suggest that this is required for almost all real-world scenarios where limits are needed.

If done correctly (supporting an empty {} selector), this approach could also work for #745.
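
For example, a hypothetical empty selector could express a cluster-wide limit along the lines the comment above suggests (the name and values are illustrative):

apiVersion: karpenter.sh/v1
kind: NodePoolGroupLimit
metadata:
  name: cluster-wide
spec:
  selector: {} # empty selector: matches every NodePool in the cluster
  limits:
    cpu: "1000"
    memory: 4Ti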
