From: Nitin Gupta <nitingupta910@gmail.com>
Subject: [PATCH 0/3] compressed in-memory swapping take4
Newsgroups: gmane.linux.kernel
Date: Sunday 29th March 2009 03:43:22 UTC
Hi,

Project home: http://compcache.googlecode.com

It allows creating a RAM-based block device which acts as a swap disk.
Pages swapped to this device are compressed and stored in memory itself.
This is a big win over swapping to the slow hard disks that are typically
used as swap. Flash suffers from wear-leveling issues when used as a swap
disk, so compressed in-memory swapping helps there too. For swapless
systems, it allows more applications to run.

* Changelog: take4 vs take3
xvmalloc changes:
  - Fixed a regression in take3 that caused ramzswap write failures.
    This happened due to an error in find_block() where we did not do
    an explicit cast to 'unsigned long' when checking for bits set in
    the bitmap. Now changed to use the kernel's built-in test_bit().
  - Fixed a divide-by-zero error in the proc read function.
ramzswap changes:
  - Forward write requests to backing swap device if allocation for
    compressed page fails.
  - Code cleanups.

(Please also see testing notes below).

* Changelog: take3 vs take2
xvmalloc changes:
  - Use kernel defined macros and constants in xvmalloc and remove
    equivalent defines for ALIGN, roundup etc.
  - Use kernel bitops (set_bit, clear_bit)
  - Moved it to drivers/block since it's not clear if it has any other
    user.
ramzswap changes:
  - All instances of compcache renamed to ramzswap.
    Also renamed module to ramzswap
  - Renamed "backing_dev" parameter to "backing_swap"
  - Documentation changes to reflect above changes.
  - Removed "table index" from object header (4 bytes). It will only be
    needed when memory defragmentation is implemented, so avoid this
    (small) overhead for now.

* Changelog: take2 vs initial revision:
xvmalloc changes:
  - Use Linux kernel coding style for xvmalloc
  - Collapse all individual flag test/set/get to generic
    {test_set_get}_flag
  - Added BLOCK_NEXT() macro to reach next contiguous block
  - Other minor cleanups - no functional changes
compcache block device code:
  - compcache core changes due to change in xvmalloc interface names

* Testing notes:
  - Ran multiple cycles of the 'scan' benchmark available at:
    http://linux-mm.org/PageReplacementTesting
    It scans anonymously mapped memory, both cyclic and use-once.

Config:
Arch: x86 and x64
CPUs: 1/2, RAM: 512MB
backing swap: 768MB, ramzswap memlimit: 76MB (15% of RAM).

Continuously ran 'scan' until it triggered 200K R/W operations on ramzswap.
Any incompressible pages were correctly forwarded to the backing swap device.
cmd: ./scan 450 20 # scan over 450MB, 20 times.

  - Links to more performance numbers and use cases can be found at:
http://lkml.org/lkml/2009/3/17/116

Thanks to Ed Tomlinson for reporting a bug in the 'take3' patches
and to the reviewers of previous versions.

Thanks,
Nitin
 