Change score logic from serial to parallel #3090
base: master
Conversation
Signed-off-by: Poor12 <[email protected]>
3cb8edc to da4272d (Compare)
}

// Option for the frameworkImpl.
type Option func(*frameworkOptions)

// WithParallelism sets parallelism for the scheduling frameworkImpl.
func WithParallelism(parallelism int) Option {
Where is this function used?
I think parallelism should be a configurable parameter, so I set aside this scaffolding method.
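For context, WithParallelism follows the standard functional-options pattern. Below is a minimal sketch, assuming a frameworkOptions struct with a parallelism field and a constructor that applies the options; the field and constructor names are illustrative assumptions, not the actual karmada code.

// Minimal sketch of the functional-options pattern (field names are assumptions).
type frameworkOptions struct {
	parallelism int
}

// Option for the frameworkImpl.
type Option func(*frameworkOptions)

// WithParallelism sets parallelism for the scheduling frameworkImpl.
func WithParallelism(parallelism int) Option {
	return func(o *frameworkOptions) {
		o.parallelism = parallelism
	}
}

A caller (or a future scheduler flag) could then opt in with something like NewFramework(registry, WithParallelism(16)), which is why keeping the scaffolding around makes sense even if nothing calls it yet.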
 	s, res := frw.runScorePlugin(ctx, p, placement, spec, cluster)
 	if !res.IsSuccess() {
-		return nil, framework.AsResult(fmt.Errorf("plugin %q failed with: %w", p.Name(), res.AsError()))
+		err := fmt.Errorf("plugin %q failed with: %w", p.Name(), res.AsError())
+		errCh.SendErrorWithCancel(err, cancel)
errCh pulls in new code; how about just using a plain channel, like:
errCh := make(chan error, 1)
...
errCh <- err
...
if len(errCh) > 0 {
xxx
}
In fact, Kubernetes also implements it this way.
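Spelling out the suggestion above as a sketch: a one-slot buffered channel plus a non-blocking send, with cancel() to stop the remaining workers. The pieces count and doScoreWork helper are placeholders for the per-index scoring in the diff, not names from the PR; this is also essentially what the ErrorChannel's SendErrorWithCancel does internally.

// One buffered slot is enough: only the first error matters.
errCh := make(chan error, 1)
ctx, cancel := context.WithCancel(ctx)
defer cancel()

frw.Parallelizer().Until(ctx, pieces, func(index int) {
	// doScoreWork stands in for the per-index scoring shown in the diff.
	if err := doScoreWork(ctx, index); err != nil {
		select {
		case errCh <- err:
			cancel() // first error wins; stop the remaining workers early
		default: // an error is already buffered; drop this one
		}
	}
})

if len(errCh) > 0 {
	return nil, framework.AsResult(<-errCh)
}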
// Run NormalizeScore method for each ScorePlugin in parallel.
frw.Parallelizer().Until(ctx, len(frw.scorePlugins), func(index int) {
	p := frw.scorePlugins[index]
	clusterScoreList := pluginToClusterScores[p.Name()]
do we need to check whether the key exists?
No need. Every score plugin has already scored the clusters by this point; if scoring fails, an error is reported and this code path is never reached.
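For reference, the guard being asked about would be the usual comma-ok lookup; per the reply above it is intentionally omitted. The snippet below is purely illustrative and not part of the PR.

// Hypothetical guard, not present in the PR:
clusterScoreList, ok := pluginToClusterScores[p.Name()]
if !ok {
	// A missing entry could only mean the plugin never scored, which the
	// earlier Score phase already reports as an error, so this is unreachable.
	return
}
// ... clusterScoreList is then normalized as in the snippet above ...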
Cool, almost identical to the Kubernetes code.
/lgtm
[APPROVALNOTIFIER] This PR is NOT APPROVED.
This pull-request has been approved by: Garrybest. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files. Approvers can indicate their approval by writing /approve in a comment.
Signed-off-by: Poor12 [email protected]
What type of PR is this?
/kind cleanup
What this PR does / why we need it:
Incorrect usage of variables:
pluginToClusterScores := make(framework.PluginToClusterScores, len(frw.filterPlugins)) => pluginToClusterScores := make(framework.PluginToClusterScores, len(frw.scorePlugins))
Change score logic from serial to parallel.
It will improve performance when the number of clusters is large or there are many custom scoring plugins.
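Under the hood, the Parallelizer().Until used above presumably follows kube-scheduler's parallelize package, i.e. a thin wrapper over client-go's workqueue.ParallelizeUntil, which runs pieces 0..n-1 across a bounded worker pool and stops early when the context is cancelled. The sketch below shows that shape; the struct and field names are assumptions, and the actual karmada implementation may differ.

package parallelize

import (
	"context"

	"k8s.io/client-go/util/workqueue"
)

// Parallelizer bounds how many score/normalize work items run at once.
type Parallelizer struct {
	parallelism int
}

// Until runs doWork(0..pieces-1) across at most p.parallelism workers and
// stops early if ctx is cancelled, e.g. after the first plugin error.
func (p Parallelizer) Until(ctx context.Context, pieces int, doWork func(piece int)) {
	workqueue.ParallelizeUntil(ctx, p.parallelism, pieces, doWork)
}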
Which issue(s) this PR fixes:
Fixes #
Special notes for your reviewer:
Does this PR introduce a user-facing change?:
None