Subject: [PATCH -v5 0/5] x86,smp: make ticket spinlock proportional backoff w/ auto tuning
Date: Wednesday 6th February 2013 20:03:11 UTC
Many spinlocks are embedded in data structures; having many CPUs pounce on the cache line the lock is in will slow down the lock holder, and can cause system performance to fall off a cliff.

The paper "Non-scalable locks are dangerous" is a good reference:

    http://pdos.csail.mit.edu/papers/linux:lock.pdf

In the Linux kernel, spinlocks are optimized for the case of there not being contention. After all, if there is contention, the data structure can be improved to reduce or eliminate lock contention.

Likewise, the spinlock API should remain simple, and the common case of the lock not being contended should remain as fast as ever.

However, since spinlock contention should be fairly uncommon, we can add functionality into the spinlock slow path that keeps system performance from falling off a cliff when there is lock contention.

Proportional delay in ticket locks means scaling the time a waiter spins between checks of the ticket by a delay factor and by the number of CPUs ahead of it in the queue for this lock (a sketch of this slow path is in the P.S. below). Checking the lock less often allows the lock holder to continue running, resulting in better throughput and preventing performance from dropping off a cliff.

The test case has a number of threads locking and unlocking a semaphore. With just one thread, everything sits in the CPU cache and throughput is around 2.6 million operations per second, with a 5-10% variation.

Once a second thread gets involved, data structures bounce from CPU to CPU, and performance deteriorates to about 1.25 million operations per second, with a 5-10% variation.

However, as more and more threads get added to the mix, performance with the vanilla kernel continues to deteriorate. Once I hit 24 threads, on a 24-CPU, 4-node test system, performance is down to about 290k operations/second.

With a proportional backoff delay added to the spinlock code, performance with 24 threads goes up to about 400k operations/second with a 50x delay, and about 900k operations/second with a 250x delay. However, with a 250x delay, performance with 2-5 threads is worse than with a 50x delay.

Making the code auto-tune the delay factor results in a system that performs well with both light and heavy lock contention, and should also protect against the (likely) case of the fixed delay factor being wrong for other hardware.

The attached graph shows the performance of the multi-threaded semaphore lock/unlock test case, with 1-24 threads, on the vanilla kernel, with 10x, 50x, and 250x proportional delay, as well as the v1 patch series with autotuning for 2x and 2.7x spinning before the lock is obtained, and with the v2 series.

The v2 series integrates several ideas from Michel Lespinasse and Eric Dumazet, which should result in better throughput and nicer behaviour in situations with contention on multiple locks.

For the v3 series, I tried out all the ideas suggested by Michel. They made perfect sense, but in the end it turned out they did not work as well as the simple, aggressive "try to make the delay longer" policy I have now (also sketched below). Several small bug fixes and cleanups have been integrated.

For the v4 series, I added code to keep the maximum spinlock delay at a small value when running in a virtual machine (sketched below). That should solve the performance regression seen in virtual machines.

The v5 series has cleanups suggested by Ingo Molnar and Borislav Petkov. Performance is within the margin of error of v2, so the graph has not been updated.

Please let me know if you manage to break this code in any way, so I can fix it...

--
All rights reversed.
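P.S. To make the backoff idea concrete, here is a minimal sketch of the contended slow path. This is not the code from the patches: the helper name ticket_lock_wait() is made up for illustration, and only the field names (lock->tickets.head/tail) follow the actual x86 ticket lock layout. The per-CPU delay factor is maintained by the auto-tuning sketch that follows.

#include <linux/compiler.h>		/* ACCESS_ONCE() */
#include <linux/percpu.h>
#include <asm/processor.h>		/* cpu_relax() */
#include <asm/spinlock_types.h>		/* arch_spinlock_t, __ticket_t */

/*
 * Illustrative sketch of a ticket spinlock slow path with
 * proportional backoff; not the code from these patches.
 * 'spinlock_delay' is the per-CPU delay factor tuned below.
 */
static DEFINE_PER_CPU(unsigned int, spinlock_delay);

static void ticket_lock_wait(arch_spinlock_t *lock, __ticket_t my_ticket)
{
	unsigned int delay = __this_cpu_read(spinlock_delay);

	for (;;) {
		__ticket_t head = ACCESS_ONCE(lock->tickets.head);
		unsigned int waiters_ahead = my_ticket - head;
		unsigned int loops;

		if (!waiters_ahead)
			break;	/* our ticket came up; the lock is ours */

		/*
		 * Spin without touching the lock's cache line for a
		 * time proportional to our position in the queue, so
		 * each waiter re-checks roughly once per lock hand-off.
		 */
		loops = delay * waiters_ahead;
		while (loops--)
			cpu_relax();
	}
}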
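The auto-tuning policy is deliberately simple: keep trying to make the delay longer while the lock stays contended, and shrink it quickly when a waiter finds it has overslept (the lock had already become available when it re-checked). The constants and the 1/32 decay step below are illustrative, not the values used in the patches.

/*
 * Illustrative auto-tuning of the per-CPU delay factor; the
 * constants are made up, see the patches for the real policy.
 */
#define MIN_SPINLOCK_DELAY	1
#define MAX_SPINLOCK_DELAY	16000

static void tune_spinlock_delay(bool overslept)
{
	unsigned int delay = __this_cpu_read(spinlock_delay);

	if (overslept) {
		/* We waited past the hand-off; decay the delay quickly. */
		delay -= max(delay / 32, 1u);
		delay = max(delay, (unsigned int)MIN_SPINLOCK_DELAY);
	} else if (delay < MAX_SPINLOCK_DELAY) {
		/* Still contended; try to make the delay longer. */
		delay++;
	}

	__this_cpu_write(spinlock_delay, delay);
}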
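Finally, a sketch of the v4 virtual machine handling. X86_FEATURE_HYPERVISOR is the real CPUID bit for "running under a hypervisor", but the initcall shape and the cap value here are my illustration, not the patch itself.

/*
 * Sketch of the v4 idea: a preempted lock holder in a guest makes
 * long busy-wait delays counter-productive, so cap the maximum
 * delay to a small value when running under a hypervisor.
 */
static unsigned int max_spinlock_delay = MAX_SPINLOCK_DELAY;

static int __init init_spinlock_delay(void)
{
	if (boot_cpu_has(X86_FEATURE_HYPERVISOR))
		max_spinlock_delay = 16;	/* illustrative cap */
	return 0;
}
early_initcall(init_spinlock_delay);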