From: Zachary Amsden <zamsden@redhat.com>
Subject: Re: [RFC] Unify KVM kernel-space and user-space code into a single project
Newsgroups: gmane.linux.kernel
Date: Thursday 18th March 2010 21:02:12 UTC
On 03/18/2010 12:50 AM, Ingo Molnar wrote:
> * Avi Kivity wrote:
>
>>> The moment any change (be it as trivial as fixing a GUI detail or as
>>> complex as a new feature) involves two or more packages, development
>>> speed slows down to a crawl - while the complexity of the change might
>>> be very low!
>>
>> Why is that?
>
> It's very simple: because the contribution latencies and overhead
> compound, almost inevitably.
>
> If you ever tried to implement a combo GCC+glibc+kernel feature you'll
> know ...
>
> Even with the best-run projects in existence it takes forever and is
> very painful - and here i talk about first hand experience over many
> years.

Ingo, what you're missing is that this is not a bad thing.  Fact of the 
matter is, it's not just painful, it downright sucks.

This is actually a Good Thing (tm).  It means you have to get your 
feature and its interfaces well defined, and able to version forwards 
and backwards independently of each other.  That adds some complexity, 
time and testing, but in the end it's what you want.  You don't 
introduce a hard requirement on the feature; you take advantage of 
it if it is there.
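
Concretely, that's the difference between requiring a capability and 
probing for it at runtime.  A rough sketch of the probe-and-fall-back 
pattern, using KVM's KVM_CHECK_EXTENSION and the in-kernel irqchip as 
the example:

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

int main(void)
{
	int kvm = open("/dev/kvm", O_RDWR);

	if (kvm < 0) {
		perror("open /dev/kvm");
		return 1;
	}

	/*
	 * KVM_CHECK_EXTENSION returns 0 when the capability is absent
	 * and a positive value when it is present, so old userspace
	 * keeps working on new kernels and new userspace degrades
	 * gracefully on old ones.
	 */
	if (ioctl(kvm, KVM_CHECK_EXTENSION, KVM_CAP_IRQCHIP) > 0)
		printf("in-kernel irqchip available, using it\n");
	else
		printf("no in-kernel irqchip, emulating in userspace\n");

	return 0;
}

Neither side has to move in lockstep; the interface is the contract.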

It may take everyone else a couple of years to upgrade their compilers, 
tools, libraries and kernel, and by that time any bugs introduced by 
interacting with the feature will have been ironed out and their 
patterns will be well known.

If you haven't defined the feature well and thought it out carefully 
ahead of time, you end up creating a giant mess, possibly with a need 
for nasty backwards compatibility (case in point: COMPAT_VDSO).  But in 
the end, you would have made those same mistakes in your internal tree 
anyway, and then you (or, more likely, some hapless maintainer of the 
project you forked) would have to add the features, fixes and 
workarounds back to the original project(s).  However, since you 
developed in an insulated, sheltered environment, those fixes and 
workarounds would not be robust and independently versionable from each 
other.
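
Upstream, the robust version of such a workaround tends to look like a 
runtime fallback rather than a hard dependency: try the new interface, 
and degrade gracefully when the kernel predates it.  A rough sketch, 
using pipe2() as a stand-in example:

#define _GNU_SOURCE
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

/*
 * Prefer the newer pipe2() syscall, but stay correct on kernels that
 * predate it: if it fails with ENOSYS, fall back to plain pipe() plus
 * fcntl().  (The fallback loses pipe2()'s atomicity with respect to a
 * concurrent fork()+exec(), which is exactly the kind of subtlety the
 * well-defined interface exists to fix.)
 */
static int pipe_cloexec(int fds[2])
{
	if (pipe2(fds, O_CLOEXEC) == 0)
		return 0;
	if (errno != ENOSYS)
		return -1;

	/* Old kernel: emulate O_CLOEXEC after the fact. */
	if (pipe(fds) < 0)
		return -1;
	fcntl(fds[0], F_SETFD, FD_CLOEXEC);
	fcntl(fds[1], F_SETFD, FD_CLOEXEC);
	return 0;
}

The same binary then runs unmodified on both old and new kernels, which 
is what lets the two sides version independently.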

The result is that you've kept your codebase version-neutral, forked in 
outside code, enhanced it, and left the hard work of backporting those 
changes and keeping them version-safe to the maintainers of the 
original packages you forked from.  What you've created is no longer a 
single project; it is called a distro, and you're being short-sighted 
and anti-social to think you can garner more support than all of those 
individual packages you forked.  This is why most developers work 
upstream and let the goodness propagate down from the top like molten 
sugar of each granular package on a flan, where it is collected in the 
rich custard channel sitting on a distribution plate below before the 
big hungry mouth of the consumer devours it and incorporates it into 
their infrastructure.

Or at least, something like that, until the last sentence.  In short: 
if project A has Y active developers, you had better have Z >> Y active 
developers to throw at project B when you fork project A into project B.

Zach
 