We want to gather information about what users want from Watch capabilities
Currently, a user who wants to get change events for resources in multiple clusters needs to run an informer to list-watch those resources in each cluster.
The disadvantages of this approach are fairly obvious.
First, as the number of multi-cluster components grows, if each component list-watches every member cluster, the number of long-lived watch connections becomes uncontrollable and puts unpredictable pressure on the member apiservers.
In addition, a traditional informer caches resources in memory, so memory consumption grows with the number of member clusters, and every additional multi-cluster component further aggravates the memory pressure on the control plane.
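For reference, here is a minimal sketch of this current pattern using client-go, with one shared informer factory per member cluster. The cluster names and kubeconfig paths are placeholders, not anything Clusterpedia defines:

```go
package main

import (
	"fmt"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig paths, one per member cluster.
	memberKubeconfigs := map[string]string{
		"member-1": "/path/to/member-1.kubeconfig",
		"member-2": "/path/to/member-2.kubeconfig",
	}

	stopCh := make(chan struct{})
	defer close(stopCh)

	for name, kubeconfig := range memberKubeconfigs {
		clusterName := name // avoid capturing the loop variable in the handlers

		config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(config)

		// Each factory list-watches its member apiserver directly and caches
		// full objects in memory: N clusters => N watch connections + N caches.
		factory := informers.NewSharedInformerFactory(client, 0)
		podInformer := factory.Core().V1().Pods().Informer()
		podInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
			AddFunc:    func(obj interface{}) { fmt.Printf("[%s] pod added\n", clusterName) },
			UpdateFunc: func(oldObj, newObj interface{}) { fmt.Printf("[%s] pod updated\n", clusterName) },
			DeleteFunc: func(obj interface{}) { fmt.Printf("[%s] pod deleted\n", clusterName) },
		})
		factory.Start(stopCh)
	}

	<-stopCh
}
```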
Clusterpedia plans to provide a Watch capability that lets users observe resource changes across multiple clusters using the same principle as an Informer, replacing `<N Informers> => <N Member KubeAPIServers>` with `<1 Informer> => <1 Clusterpedia APIServer>` to receive resource change events from N member clusters.
By connecting to Clusterpedia instead of each member cluster, this avoids putting uncontrollable pressure on the member apiservers.
Note: event delivery may of course be slower than watching the member kube-apiserver directly, but the Informer itself can be seen as an eventually consistent design.
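To illustrate the proposed shape, here is a sketch of a single dynamic informer pointed at the Clusterpedia APIServer. The kubeconfig path is a placeholder, and appending `/apis/clusterpedia.io/v1beta1/resources` to the host follows the convention used for List/Get today; whether Watch is served on the same path is exactly what this proposal would define, so treat this as illustrative only:

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/dynamic/dynamicinformer"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig for the control plane hosting the Clusterpedia APIServer (placeholder path).
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/host-cluster.kubeconfig")
	if err != nil {
		panic(err)
	}
	// Route requests through Clusterpedia's aggregated resources endpoint
	// (assumed here; Watch semantics on this path are part of the proposal).
	config.Host += "/apis/clusterpedia.io/v1beta1/resources"

	client, err := dynamic.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// One informer means one watch connection to Clusterpedia,
	// instead of one connection per member cluster.
	factory := dynamicinformer.NewDynamicSharedInformerFactory(client, 0)
	gvr := schema.GroupVersionResource{Version: "v1", Resource: "pods"}
	informer := factory.ForResource(gvr).Informer()
	informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc:    func(obj interface{}) { fmt.Println("pod added in some member cluster") },
		UpdateFunc: func(oldObj, newObj interface{}) { fmt.Println("pod updated in some member cluster") },
		DeleteFunc: func(obj interface{}) { fmt.Println("pod deleted in some member cluster") },
	})

	stopCh := make(chan struct{})
	defer close(stopCh)
	factory.Start(stopCh)
	<-stopCh
}
```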
In addition, with the Watch capability, Clusterpedia will also consider providing a new Informer that works like the native one, except that instead of caching all the data in memory it keeps only the metadata needed for filtering and fetches the full resource from Clusterpedia when the user actually needs it.
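That Clusterpedia-specific Informer does not exist yet; as an analogy for the metadata-only caching idea, here is a sketch using client-go's existing metadata informer, which stores only PartialObjectMetadata in its cache. The kubeconfig path is a placeholder, and fetching the full object from Clusterpedia on demand is only indicated by a comment:

```go
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/metadata"
	"k8s.io/client-go/metadata/metadatainformer"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/host-cluster.kubeconfig")
	if err != nil {
		panic(err)
	}

	metaClient := metadata.NewForConfigOrDie(config)

	// The cache holds only PartialObjectMetadata (name, namespace, labels, ...),
	// not full object specs, which keeps memory usage low for large fleets.
	factory := metadatainformer.NewSharedInformerFactory(metaClient, 0)
	gvr := schema.GroupVersionResource{Version: "v1", Resource: "pods"}
	informer := factory.ForResource(gvr).Informer()
	informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			m := obj.(*metav1.PartialObjectMetadata)
			// When the full object is needed, fetch it on demand
			// (e.g. from Clusterpedia) instead of keeping it in the local cache.
			fmt.Printf("saw metadata for %s/%s\n", m.Namespace, m.Name)
		},
	})

	stopCh := make(chan struct{})
	defer close(stopCh)
	factory.Start(stopCh)
	<-stopCh
}
```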
We have already implemented the Watch capability in the memory storage and hope to provide a generic solution that brings Watch to most storage layers.
If multi-cluster Watch capabilities would be useful to you, please comment with your usage scenarios and suggestions to help drive the design and development of the Watch feature.
Hi @Iceber,
Thanks for opening an issue!
We will look into it as soon as possible.
Instructions for interacting with me using comments are available here.
If you have questions or suggestions related to my behavior, please file an issue against the gh-ci-bot repository.
We can also discuss the Watch implementation in "Implementations of Watch that can be used for most storage layers".