From: Szabolcs Szakacsits <szaka-IyvsvuGDJ8VAfugRpC6u6w <at> public.gmane.org>
Subject: ntfs-3g-1.910-RC released
Newsgroups: gmane.comp.file-systems.ntfs-3g.devel
Date: Monday 10th September 2007 00:25:45 UTC
Hi,

There was some feedback that copying many files is slow. We took a look 
and improved the speed typically by 50-1000%. That is not a typo: the 
highest value we measured was indeed one thousand percent.

Here is what was going on.

When an application wrote a file whose size was not a multiple of the 
block size (over 99.995% of all cases), then whenever ntfs-3g made a 
write request to the Linux kernel, the kernel seeked to the relevant disk 
sector and synchronously read the remaining bytes to fill the end of the 
buffer, instead of doing the job asynchronously. This caused disk head 
seek storms and very inefficient write performance (it can be reproduced 
with other file systems as well). We don't know yet whether this is a bug 
or a feature in the Linux kernel.
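
To illustrate the mechanism, here is a minimal user-space sketch (not 
the actual ntfs-3g or kernel code): when the tail of a write does not 
fill a whole block, the old contents of the last block must be read back 
before that block can be rewritten. The 4096-byte block size and the 
write_with_tail() helper are assumptions made up for this example.

    /* Sketch: why an unaligned tail forces a read-modify-write. */
    #include <string.h>
    #include <sys/types.h>
    #include <unistd.h>

    #define BLOCK_SIZE 4096   /* assumed block size for the example */

    /* Write 'count' bytes at a block-aligned 'offset' through 'fd'. */
    static ssize_t write_with_tail(int fd, const char *data,
                                   size_t count, off_t offset)
    {
        char block[BLOCK_SIZE];
        size_t tail = count % BLOCK_SIZE;

        if (tail == 0)          /* whole blocks only: no extra read */
            return pwrite(fd, data, count, offset);

        /* The last block is partial, so its old contents must be
         * read back first. Done synchronously, this read is the
         * extra disk seek that made copying many small files slow. */
        off_t last = offset + (off_t)(count - tail);
        if (pread(fd, block, BLOCK_SIZE, last) < 0)
            return -1;
        memcpy(block, data + (count - tail), tail);

        if (count > tail && pwrite(fd, data, count - tail, offset) < 0)
            return -1;
        if (pwrite(fd, block, BLOCK_SIZE, last) < 0)
            return -1;
        return (ssize_t)count;
    }

Since almost every file ends in a partial block, such a read happens for 
almost every file copied, and each one costs a disk head seek.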

This release candidate aims to solve the above problem, thereby greatly
improving the related performance.

For example unpacking the Linux kernel source tree (21,000+ files) is
usually 3-6 times faster now, depending on the hardware.

Please note that this fix can't decrease the CPU usage. In fact, just the 
opposite: beforehand the time was spent waiting for the slow disk I/O. By 
eliminating most of the disk seeks, the CPU can now do more useful work, 
which results in higher CPU usage. Of course that will be improved too at 
some point in the future.

The benchmarks were done on Linux; the performance impact on other OSes
is not known, but it should not be worse.

This speed enhancement wouldn't have happened without David Fox's 
continuous help, week after week. Thank you, David!

Concurrent write performance is improved as well, and so is the 
performance of writing multi-GB files, especially after the creation of 
thousands of other files. The latter only helps if the free disk space is 
defragmented (file-level defragmentation is not enough). As far as we 
know, there is no free utility which could do this, so we plan to release 
one in the near future.

The release candidate is available at

	http://ntfs-3g.org/

If no problems are reported, then the next stable release will be made on 
late Wednesday, UTC, at the earliest. Please test intensively. Here is 
some help on how one can do it without Windows and without using existing 
NTFS partitions:

	http://ntfs-3g.org/quality.html#howtotest

Thank you for your attention and support,

	Szabolcs
