From: Ingo Molnar <mingo <at> kernel.org>
Subject: [PATCH 0/3, v2] mprotect() and working set sampling optimizations
Newsgroups: gmane.linux.kernel
Date: Wednesday 14th November 2012 09:18:48 UTC
Ok, people suggested splitting out the change_protection() modification
into a third patch.

This series implements an mprotect() optimization that also
helps improve the quality of working set scanning:

  - working set scanning gets faster

  - we can scan at a touched-page rate, instead of at a rate
    proportional to virtual memory size (within limits).
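To make the mprotect() side concrete, here is a minimal userspace
sketch of the idea (toy page table and hypothetical names, not the
actual kernel code): change_protection() returns the number of pages
whose PTEs it actually modified, and the caller skips the TLB flush
entirely when that count is zero.

```c
#include <assert.h>

/* Toy model: a "PTE" is just its protection bits; 0 means not present. */
#define NPAGES   8
#define PTE_NONE 0u

static unsigned int ptes[NPAGES] = { 3, 3, PTE_NONE, 5, 3, PTE_NONE, 3, 5 };
static int tlb_flushes;

/*
 * Sketch of the idea in the series: walk the range, update present
 * PTEs, and return how many pages actually changed protection.
 */
static unsigned long change_protection(unsigned long start,
                                       unsigned long end,
                                       unsigned int newprot)
{
    unsigned long pages = 0;

    for (unsigned long i = start; i < end; i++) {
        if (ptes[i] == PTE_NONE)        /* not present: nothing to do */
            continue;
        if (ptes[i] != newprot) {
            ptes[i] = newprot;
            pages++;                    /* count only real changes */
        }
    }
    return pages;
}

/* Caller skips the (expensive) TLB flush when no PTE was touched. */
static void mprotect_range(unsigned long start, unsigned long end,
                           unsigned int newprot)
{
    if (change_protection(start, end, newprot))
        tlb_flushes++;                  /* stands in for flush_tlb_range() */
}
```

Calling mprotect_range() a second time with the same protection then
costs no flush at all, which is the common no-op mprotect() case the
optimization targets.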

This is already part of numa/core, but I wanted to send it out
separately as well, to get specific feedback on the mprotect()
bits.
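For the scanning side, the change from a virtual-memory-proportional
rate to a touched-page rate can be sketched like this (again a toy
page table with illustrative names, not the scheduler code): the
per-tick scan budget is charged only for present PTEs, so holes in a
sparse address space are skipped for free instead of consuming budget.

```c
#include <assert.h>

/* Toy page table: nonzero = present PTE, 0 = hole in the address space. */
#define NPAGES   6
#define PTE_NONE 0u

static unsigned int ptes[NPAGES] = { 3, PTE_NONE, 3, 3, PTE_NONE, 3 };

static unsigned long scan_offset;       /* where the scanner left off */

/*
 * Sketch of the rate change: the budget is spent on *present* pages
 * only, so the effective rate tracks resident pages, not virtual span.
 */
static void ws_scan_tick(unsigned long pages_budget)
{
    unsigned long scanned = 0;

    while (scanned < pages_budget && scan_offset < NPAGES) {
        if (ptes[scan_offset] != PTE_NONE)
            scanned++;                  /* only present pages cost budget */
        scan_offset++;
    }
    if (scan_offset >= NPAGES)
        scan_offset = 0;                /* wrap: start the next full pass */
}
```

With a budget of two pages, a tick over the table above advances the
scan window past three slots, because the hole in between is crossed
without being charged.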

Thanks,

	Ingo

---
Ingo Molnar (1):
  mm: Optimize the TLB flush of sys_mprotect() and change_protection()
    users

Peter Zijlstra (2):
  mm: Count the number of pages affected in change_protection()
  sched, numa, mm: Count WS scanning against present PTEs, not virtual
    memory ranges

 include/linux/hugetlb.h |  8 ++++++--
 include/linux/mm.h      |  6 +++---
 kernel/sched/fair.c     | 37 +++++++++++++++++++++----------------
 mm/hugetlb.c            | 10 ++++++++--
 mm/mprotect.c           | 46 ++++++++++++++++++++++++++++++++++------------
 5 files changed, 72 insertions(+), 35 deletions(-)

-- 
1.7.11.7

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to [email protected]  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email:  email@kvack.org 