feat: implement filter and score #23
Conversation
type PlacementPolicyPodInfos map[types.UID]*PlacementPolicyPodInfo

type PlacementPolicyPodInfo struct {
We can use a cache instead of adding an annotation: add a new AddEventHandler on the podInformer and update the cache with the unassigned pods. An example can be found in the capacity scheduling plugin.
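For context, here is a minimal sketch of that suggestion: a client-go shared informer whose event handler maintains a cache of pods that have not been bound to a node yet. The unassignedPodCache type and the handler wiring are illustrative assumptions, not code from this repository or from capacity scheduling.

```go
package main

import (
	"fmt"
	"sync"
	"time"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

// unassignedPodCache tracks pods that have not been assigned to a node yet.
// (Hypothetical type for illustration only.)
type unassignedPodCache struct {
	mu   sync.RWMutex
	pods map[types.UID]*v1.Pod
}

func (c *unassignedPodCache) upsert(pod *v1.Pod) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if pod.Spec.NodeName == "" {
		c.pods[pod.UID] = pod // still unassigned, keep it in the cache
	} else {
		delete(c.pods, pod.UID) // pod got a node, drop it from the cache
	}
}

func (c *unassignedPodCache) delete(pod *v1.Pod) {
	c.mu.Lock()
	defer c.mu.Unlock()
	delete(c.pods, pod.UID)
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(config)
	podCache := &unassignedPodCache{pods: map[types.UID]*v1.Pod{}}

	// Register an event handler on the pod informer so the cache stays in
	// sync as pods are created, assigned to nodes, or deleted.
	factory := informers.NewSharedInformerFactory(clientset, 30*time.Second)
	podInformer := factory.Core().V1().Pods().Informer()
	podInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			if pod, ok := obj.(*v1.Pod); ok {
				podCache.upsert(pod)
			}
		},
		UpdateFunc: func(_, newObj interface{}) {
			if pod, ok := newObj.(*v1.Pod); ok {
				podCache.upsert(pod)
			}
		},
		DeleteFunc: func(obj interface{}) {
			if pod, ok := obj.(*v1.Pod); ok {
				podCache.delete(pod)
			}
		},
	})

	stopCh := make(chan struct{})
	defer close(stopCh)
	factory.Start(stopCh)
	cache.WaitForCacheSync(stopCh, podInformer.HasSynced)
	fmt.Println("pod cache synced")
	<-stopCh
}
```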
}

// nodeWithMatchingLabels is a group of nodes that have the same labels as defined in the placement policy
nodeWithMatchingLabels := groupNodesWithLabels(nodeList, pp.Spec.NodeSelector.MatchLabels)
What if we introduce an internal counter, let's call it currentTargetSize, and update it every time we assign a pod to a node?
Then filter the pods with the pp.PodSelector from (nodeList).Get(i).NodeInfo.Requested to know how many pods have already been scheduled, and compare that number with currentTargetSize, wdyt?
I'm trying to get rid of annotating every pod we schedule.
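To make the counter idea concrete, here is a rough sketch that counts already-scheduled pods matching the policy's pod selector and compares the count with a currentTargetSize value. The helpers countScheduledMatches and shouldPlaceOnMatchingNodes, and the use of framework.NodeInfo.Pods, are assumptions for illustration, not the plugin's actual fields or API.

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/labels"
	"k8s.io/kubernetes/pkg/scheduler/framework"
)

// countScheduledMatches counts pods already assigned to the given nodes whose
// labels match the placement policy's pod selector.
func countScheduledMatches(nodes []*framework.NodeInfo, podSelector *metav1.LabelSelector) (int, error) {
	selector, err := metav1.LabelSelectorAsSelector(podSelector)
	if err != nil {
		return 0, err
	}
	count := 0
	for _, node := range nodes {
		for _, pi := range node.Pods {
			if selector.Matches(labels.Set(pi.Pod.Labels)) {
				count++
			}
		}
	}
	return count, nil
}

// shouldPlaceOnMatchingNodes compares the running total against the target:
// as long as fewer matching pods are scheduled than currentTargetSize, the
// incoming pod can still go to the preferred (label-matching) nodes.
func shouldPlaceOnMatchingNodes(scheduled, currentTargetSize int) bool {
	return scheduled < currentTargetSize
}

func main() {
	// Toy example: one node already running a pod that matches the selector.
	pod := &v1.Pod{}
	pod.Labels = map[string]string{"app": "nginx"}
	pod.Spec.NodeName = "node-1"

	nodeInfo := framework.NewNodeInfo(pod)
	selector := &metav1.LabelSelector{MatchLabels: map[string]string{"app": "nginx"}}

	scheduled, err := countScheduledMatches([]*framework.NodeInfo{nodeInfo}, selector)
	if err != nil {
		panic(err)
	}
	fmt.Println("scheduled:", scheduled, "place on matching nodes:", shouldPlaceOnMatchingNodes(scheduled, 2))
}
```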
IMO, having the annotation on every single pod is more deterministic because we are annotating the required action on it. The annotation indicates where the plugin wanted the pod to end up, and we can use it to validate whether that was honored. Even if we maintain the internal counter, we're still making the same set of API calls to construct the state?
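As an illustration of the annotation-based approach, here is a hedged sketch that records the intended placement on the pod with a merge patch. The annotation keys and the annotatePodPlacement helper are hypothetical placeholders, not the keys or helpers this plugin actually uses.

```go
package main

import (
	"context"
	"encoding/json"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// annotatePodPlacement records the plugin's intended placement on the pod so
// the decision can be audited later. (Hypothetical helper and keys.)
func annotatePodPlacement(ctx context.Context, client kubernetes.Interface, namespace, name, policy, preference string) error {
	patch := map[string]interface{}{
		"metadata": map[string]interface{}{
			"annotations": map[string]string{
				"placement-policy.example.io/policy":          policy,     // hypothetical key
				"placement-policy.example.io/node-preference": preference, // hypothetical key
			},
		},
	}
	data, err := json.Marshal(patch)
	if err != nil {
		return err
	}
	// Merge-patch only the annotations, leaving the rest of the pod untouched.
	_, err = client.CoreV1().Pods(namespace).Patch(ctx, name, types.MergePatchType, data, metav1.PatchOptions{})
	return err
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	if err := annotatePodPlacement(context.Background(), client, "default", "my-pod", "pp-sample", "node-with-matching-labels"); err != nil {
		panic(err)
	}
	fmt.Println("pod annotated")
}
```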
I get the value of adding annotations for observability reasons, but I am opposed to depending on them in the scheduler's core logic.
Also, optimizing the queries we ask the apiserver to perform has positive weight, especially in a large cluster with many nodes and thousands of pods.
Signed-off-by: Anish Ramasekar <[email protected]>
helayoty left a comment
aramase left a comment
Merging this PR with #25 so we have core logic to use with integration and e2e tests.
This PR implements:
- PreFilter and PreScore to perform the required computation one time and annotate the pod with the node preference and placement policy used.
- Filter and Score to make a decision for the node.
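For readers unfamiliar with the scheduler framework, the following is a minimal sketch of what Filter and Score extension points look like for an out-of-tree plugin. The plugin name, marker label, and fixed score are placeholders under assumed names; the real decision logic is the one introduced in this PR.

```go
package main

import (
	"context"
	"os"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/kubernetes/cmd/kube-scheduler/app"
	"k8s.io/kubernetes/pkg/scheduler/framework"
)

// PlacementPolicySketch is an illustrative plugin skeleton, not the plugin in
// this repository.
type PlacementPolicySketch struct{}

var (
	_ framework.FilterPlugin = &PlacementPolicySketch{}
	_ framework.ScorePlugin  = &PlacementPolicySketch{}
)

const Name = "placement-policy-sketch" // hypothetical plugin name

func (p *PlacementPolicySketch) Name() string { return Name }

// Filter rejects nodes that do not carry a (hypothetical) marker label; a real
// implementation would read the decision recorded in PreFilter/PreScore.
func (p *PlacementPolicySketch) Filter(ctx context.Context, state *framework.CycleState, pod *v1.Pod, nodeInfo *framework.NodeInfo) *framework.Status {
	node := nodeInfo.Node()
	if node == nil {
		return framework.NewStatus(framework.Error, "node not found")
	}
	if _, ok := node.Labels["placement-policy.example.io/preferred"]; !ok {
		return framework.NewStatus(framework.Unschedulable, "node not preferred by placement policy")
	}
	return framework.NewStatus(framework.Success)
}

// Score returns a fixed score to keep the sketch self-contained; a real
// implementation would rank nodes based on the recorded placement preference.
func (p *PlacementPolicySketch) Score(ctx context.Context, state *framework.CycleState, pod *v1.Pod, nodeName string) (int64, *framework.Status) {
	return framework.MaxNodeScore, framework.NewStatus(framework.Success)
}

func (p *PlacementPolicySketch) ScoreExtensions() framework.ScoreExtensions { return nil }

// New is the factory the scheduler framework calls to construct the plugin.
func New(_ runtime.Object, _ framework.Handle) (framework.Plugin, error) {
	return &PlacementPolicySketch{}, nil
}

func main() {
	// Register the sketch plugin with an out-of-tree scheduler binary.
	cmd := app.NewSchedulerCommand(app.WithPlugin(Name, New))
	if err := cmd.Execute(); err != nil {
		os.Exit(1)
	}
}
```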