* Zachary Amsden <[email protected]> wrote:
> On 03/18/2010 11:15 AM, Ingo Molnar wrote:
> >* Zachary Amsden<[email protected]> wrote:
> >>On 03/18/2010 12:50 AM, Ingo Molnar wrote:
> >>>* Avi Kivity wrote:
> >>>>>The moment any change (be it as trivial as fixing a GUI detail or as
> >>>>>complex as a new feature) involves two or more packages, development
> >>>>>slows down to a crawl - while the complexity of the change might be low.
> >>>>Why is that?
> >>>It's very simple: because the contribution latencies and overhead
> >>>increase almost inevitably.
> >>>If you ever tried to implement a combo GCC+glibc+kernel feature you'll
> >>>know what i mean.
> >>>Even with the best-run projects in existence it takes forever and is
> >>>painful - and here i talk about first hand experience over many years.
> >>Ingo, what you miss is that this is not a bad thing. Fact of the
> >>matter is, it's not just painful, it downright sucks.
> >Our experience is the opposite, and we tried both variants and report
> >our experience with both models honestly.
> >You only have experience with one variant - the one you advocate.
> >See the asymmetry?
> >>This is actually a Good Thing (tm). It means you have to get your
> >>feature and its interfaces well defined and able to version forwards
> >>and backwards independently from each other. And that introduces
> >>some complexity and time and testing, but in the end it's what you
> >>want. You don't introduce a requirement to have the feature, but
> >>take advantage of it if it is there.
> >>It may take everyone else a couple years to upgrade the compilers,
> >>tools, libraries and kernel, and by that time any bugs introduced by
> >>interacting with this feature will have been ironed out and their
> >>patterns well known.
> >Sorry, but this is plain not true. The 2.4->2.6 kernel cycle debacle has
> >taught us that waiting too long to 'iron out' the details has the following
> >effects:
> > - developer pain
> > - user pain
> > - distro pain
> > - disconnect
> > - loss of developers, testers and users
> > - grave bugs discovered months (years ...) down the line
> > - untested features
> > - developer exhaustion
> >It didn't work, trust me - and i've been around long enough to have lived
> >through the whole 2.5.x misery. Some of our worst ABIs come from that era.
> You're talking about a single project and comparing it to my argument about
> multiple independent projects. In that case, I see no point in the
> discussion. If you want to win the argument by strawman, you are welcome to
> do so.
The kernel is a very complex project with many ABI issues, so all those
arguments apply to it as well. The description you gave:
| This is actually a Good Thing (tm). It means you have to get your
| feature and its interfaces well defined and able to version forwards
| and backwards independently from each other. And that introduces
| some complexity and time and testing, but in the end it's what you
| want. You don't introduce a requirement to have the feature, but
| take advantage of it if it is there.
matches the kernel too. We have many such situations. (Furthermore, the
tools/perf/ situation, which relates to ABIs and user-space/kernel-space
interactions, is similar as well.)
Do you still think i'm making a straw-man argument?
> > Sorry, but i really think you are really trying to rationalize a
> > disadvantage here ...
> This could very well be true, but until someone comes forward with
> compelling numbers (as in, developers committed to working on the project,
> number of patches and total amount of code contribution), there is no point
> in having an argument - there really isn't anything to discuss other than
> opinion. My opinion is you need a really strong justification to have a
> successful fork and I don't see that justification.
I can give you rough numbers for tools/perf - if that counts for you.
For the first four months of its existence, when it was a separate project,
it had a single external contributor, IIRC.
The moment it went into the kernel repo the number of contributors and
contributions skyrocketed, and basically all contributions were top-notch. We
are at 60+ separate contributors now (after about 8 months upstream) - which
is still small compared to the kernel or to Qemu, but huge for a relatively
isolated project like instrumentation.
So in my estimation tools/kvm/ would certainly be popular. Whether it would be
more popular than current Qemu is hard to tell - it would be pure speculation.
Any reliable numbers for the other aspect - whether a split project creates a
more fragile and less developed ABI - would be extremely hard to get. I believe
it to be true, but that's my opinion based on my experience with other
projects, extrapolated to KVM/Qemu.
Anyway, the issue is moot as there's clear opposition to the unification idea.
Too bad - there was heavy initial opposition to the arch/x86 unification as
well [and heavy opposition to tools/perf/ as well], still both worked out
extremely well :-)