From: Chris Mason <chris.mason <at> oracle.com>
Subject: Re: [PATCH 0/4] (RESEND) ext3[34] barrier changes
Newsgroups: gmane.comp.file-systems.ext4
Date: Monday 19th May 2008 17:16:26 UTC
On Monday 19 May 2008, Andrew Morton wrote:
> On Sun, 18 May 2008 21:29:30 -0500 Eric Sandeen wrote:
> > Theodore Tso wrote:
> > ...
> >
> > > Given how rarely people have reported problems, I think it's a really
> > > good idea to understand what exactly our exposure is for
> > > $COMMON_HARDWARE.
> >
> > I'll propose that very close to 0% of users will ever report "having
> > barriers off seems to have corrupted my disk on power loss!" even if
> > that's exactly what happened.  And it'd be very tricky to identify in a
> > post-mortem.  Instead we'd probably see other weird things caught down
> > the road during some later fsck or during filesystem use, and then
> > suggest that they go check their cables, run memtest86 or something...
> >
> > Perhaps it's not the intent of this reply, Ted, but various other bits
> > of this thread have struck me as trying to rationalize away the problem.
>
> Not really.  It's a matter of understanding how big the problem is.  We
> know what the cost of the solution is, and it's really large.
>
> It's a tradeoff, and it is unobvious where the ideal answer lies,
> especially when not all the information is available.

I think one mistake we (myself included) have made all along with the barrier
code is intermixing discussions about the cost of the solution with
discussions about needing barriers at all.  Everyone thinks the barriers are
slow because we also think running without barriers is mostly safe.

Barriers are actually really fast, at least when you compare them to running
with the writecache off.  Making them faster in general may be possible, but
they are somewhat pushed off to the side right now because so few people are
running them.

Here's a test workload that corrupts ext3 50% of the time in power-fail
testing for me.  The machine in this test is my poor Dell desktop (3GHz,
dual core, 2GB of RAM), and the power controller is me walking over and
ripping the plug out of the back.

In other words, this is not a big automated setup doing randomized power
fails on 64 nodes over 16 hours and many TB of data.  The data working set
for this script is 32MB, and it takes about 10 minutes per run.

The workload has 4 parts:

1) A directory tree full of empty files with very long names (160 chars)
2) A process hogging a significant percentage of system RAM.  This must be
   enough to force constant metadata writeback due to memory pressure, and
   is controlled with -p size_in_mb
3) A process constantly writing, fsyncing, and truncating to zero a single
   64k file
4) A process constantly renaming the files with very long names from (1)
   between long-named-file.0 and long-named-file.1

The idea was to simulate a loaded mailserver, and to find the corruptions by
reading through the directory tree and finding files long-named-file.0 and
long-named-file.1 existing at the same time.  In practice, it is faster to
just run fsck -f on the FS after a crash.
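That directory-scan check could be sketched like this (a hypothetical
helper, not part of the real tool; as noted, fsck is the faster route):

```python
import os

def find_rename_corruption(directory):
    """Return base names for which both the .0 and .1 variants exist.

    rename() is atomic on a healthy filesystem, so after a crash each
    long-named file should survive under exactly one of the two suffixes;
    seeing both at once means the directory state is inconsistent.
    """
    names = set(os.listdir(directory))
    return sorted(n[:-2] for n in names
                  if n.endswith(".0") and n[:-2] + ".1" in names)
```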

In order to consistently cause corruptions, the size of the directory from
(1) needs to be at least as large as the ext3 log.  This is controlled with
the -s command line option.  Smaller sizes may work for the impatient, but
corruption is more likely with larger ones.

The program first creates the files in a directory called barrier-test,
then it starts procs to pin RAM and run the constant fsyncs.  After each
phase has run long enough, it prints out a statement about being ready,
along with some other debugging output:

Memory pin ready
fsyncs ready
Renames ready

Example run:

# make 500,000 inodes on a 2GB partition.  This results in a 32MB log
mkfs.ext3 -N 500000 /dev/sda2
mount /dev/sda2 /mnt
cd /mnt

# my machine has 2GB of ram, -p 1500 will pin ~1.5GB
barrier-test -s 32 -p 1500

Run init, don't cut the power yet
10000 files 1 MB total
 ... these lines repeat for a bit
200000 files 30 MB total
Starting metadata operations now
r:1000
Memory pin ready
f:100 r:2000 f:200 r:3000 f:300
fsyncs ready
r:4000 f:400 r:5000 f:500 r:6000 f:600 r:7000 f:700 r:8000 f:800 r:9000 f:900
r:10000
Renames ready

# I pulled the plug here
# After boot:

root@opti:~# fsck -f /dev/sda2
fsck 1.40.8 (13-Mar-2008)
e2fsck 1.40.8 (13-Mar-2008)
/dev/sda2: recovering journal
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Problem in HTREE directory inode 281377 (/barrier-test): bad block number 13543.
Clear HTree index?

< 246 other errors are here >

-chris