From: Linus Torvalds <torvalds@linux-foundation.org>
Subject: Re: 2.6.32-rc3: low mem - only 378MB on x86_32 with 64GB. Why?
Newsgroups: gmane.linux.kernel
Date: Saturday 10th October 2009 18:37:55 UTC
On Sat, 10 Oct 2009, [email protected] wrote:
>
> When the x86 went 64-bit, the register pressure relief from the
> additional registers usually more than outweighed the additional memory
> bandwidth (basically, if you're spending twice as much time on each
> load/store, but only doing it 40% as often, you come out ahead...)

That's mainly stack traffic, and x86 has always been good at it. More
registers make for simpler (and fewer) instructions thanks to fewer
reloads, but for kernel loads that's not the biggest advantage.

If you have 8GB of RAM or more, the biggest advantage _by_far_ for the 
kernel is that you don't spend 25% of your system time playing with 
k[un]map() and the TLB flushing that goes along with it. You also have 
much more freedom to allocate (and thus cache) inodes, dentries and 
various other fundamental kernel data structures.
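
Just to illustrate the pattern (a made-up helper, not from any real
driver, but this is what it looks like all over the VFS and drivers):

#include <linux/highmem.h>
#include <linux/string.h>

/* Touch one page of data. On x86-64 every page is permanently mapped
 * and kmap() is basically page_address(). On a 32-bit HIGHMEM kernel
 * kmap() may have to install a temporary virtual mapping, and
 * recycling those mappings costs global TLB flushes. */
static void zero_page_contents(struct page *page)
{
	void *vaddr = kmap(page);	/* possibly sets up a mapping */

	memset(vaddr, 0, PAGE_SIZE);
	kunmap(page);			/* and releases it again */
}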

Also, the slowdown MIPS and Sparc saw for 64-bit code was only partially
due to the bigger cache footprint (and that depends a lot on the app
anyway: many applications aren't that pointer-intensive. The kernel is
_very_ pointer-intensive, but even there, most data structures tend to
blow up by 50%, not 100%).
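
A concrete (made-up) example of that 50%: the pointers double in size,
the integer fields don't:

struct item {
	struct item	*next;	/* 4 -> 8 bytes */
	struct item	*prev;	/* 4 -> 8 bytes */
	unsigned int	flags;	/* 4 bytes either way */
	unsigned int	count;	/* 4 bytes either way */
};				/* 16 -> 24 bytes: +50%, not +100% */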

The other reason for the slowdown is that generating those pointers (for
function calls in particular) is more complex, and x86-64 is better at
that than MIPS and Sparc. The complex encoding with variable-size
instructions means that constants can simply go in the instruction
stream: you don't have to squeeze them into a fixed-size instruction, or
reach them by indirect data access to memory through a GP register.

So x86-64 not only had the register expansion advantage, it had less of a
code generation downside to 64-bit mode to begin with. Want to have large
constants in the code? No problem. Sure, it makes your code bigger, but
you can still have them predecoded in the instruction stream rather than
having to load them from memory. Much nicer for everybody.
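
For example (the function is made up, but the asm in the comment is
what gcc typically generates on x86-64):

/* Return a full 64-bit constant. */
unsigned long big_constant(void)
{
	return 0x123456789abcdef0UL;
}

/*
 * x86-64 predecodes the constant right in the instruction stream:
 *
 *	movabs $0x123456789abcdef0,%rax
 *	ret
 *
 * A fixed-size RISC encoding has to synthesize it from a handful of
 * instructions, or load it from memory through a GP register.
 */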

And for the kernel, the bigger virtual address space really is a _huge_
deal. HIGHMEM accesses really are very slow. You don't see that in user
space, but I really have seen 25% performance differences between
non-highmem builds and builds with CONFIG_HIGHMEM4G enabled for things
that try to put a lot of data in highmem (and CONFIG_HIGHMEM64G is even
more expensive). And that was just with 2GB of RAM.

And when it makes the difference between doing IO or not doing IO (ie
caching or not caching - when the dentry cache can't grow any more because
it _needs_ to be in lowmem), you can literally see an order-of-magnitude
difference.

With 8GB+ of RAM, I guarantee you that the kernel spent tons of time on
just mapping high pages, _and_ it couldn't grow the inode and dentry
caches nearly as big as it would have wanted to. Going to x86-64 makes all
those issues just go away entirely.
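
(That, btw, is where the "only 378MB" in the subject comes from.
Back-of-the-envelope, assuming the usual ~896MB direct map and a 32-byte
struct page - the exact numbers depend on the config:

#include <stdio.h>

int main(void)
{
	unsigned long long ram     = 64ULL << 30;  /* 64GB of RAM */
	unsigned long long pages   = ram >> 12;    /* 4kB pages: 16M of them */
	unsigned long long mem_map = pages * 32;   /* struct page array: 512MB */
	unsigned long long lowmem  = 896ULL << 20; /* ~896MB direct-mapped */

	printf("lowmem left: %lluMB\n", (lowmem - mem_map) >> 20);
	return 0;
}

That prints 384MB, and the kernel image plus other boot-time allocations
whittle it down to roughly the 378MB the original poster saw.)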

So it's not "you can save a few instructions by not spilling to stack as 
much". It's a much bigger deal than that. There's a reason I personally 
refuse to even care about >2GB 32-bit machines. There's just no excuse 
these days to do that.

			Linus