From: Alex Shi <alex.shi <at> intel.com>
Subject: [PATCH v5 0/7] use runnable load avg in load balance
Newsgroups: gmane.linux.kernel
Date: Monday, 6 May 2013 01:45:04 UTC
This patchset is based on tip/sched/core.

It fixes a UP config bug, and the last patch changed: it inserts the
runnable load avg into effective_load(), so wake_affine() considers
the load average via effective_load().

I retested on Intel Core2, NHM, SNB, IVB, and 2- and 4-socket machines
with the benchmarks kbuild, aim7, dbench, tbench, hackbench, and
fileio-cfq (sysbench), and tested pthread_cond_broadcast on SNB.

The test results are similar to the last version, so the clear changes
are the same: on the SNB EP 4-socket machine, hackbench improved by
about 50% and the results became stable. On other machines, hackbench
improved by about 2~5%. There was no clear performance change on the
other benchmarks.

Since the change is small and my results are similar to the last
version, I guess Michael's and Morten's advantages still hold.

Anyway, you can find their results for the last version here:

https://lkml.org/lkml/2013/4/2/1022
Michael Wang had tested the previous version with pgbench on his box.

http://comments.gmane.org/gmane.linux.kernel/1463371
Morten tested the previous version with some benchmarks.
 
Thanks again for Peter's comments!

Regards!
Alex

 [PATCH v5 1/7] Revert "sched: Introduce temporary FAIR_GROUP_SCHED
 [PATCH v5 2/7] sched: remove SMP cover for runnable variables in
 [PATCH v5 3/7] sched: set initial value of runnable avg for new
 [PATCH v5 4/7] sched: update cpu load after task_tick.
 [PATCH v5 5/7] sched: compute runnable load avg in cpu_load and
 [PATCH v5 6/7] sched: consider runnable load average in move_tasks
 [PATCH v5 7/7] sched: consider runnable load average in
 