On Friday, March 21, 2014 09:18:07 PM ron minnich wrote:
On Fri, Mar 21, 2014 at 6:52 PM, Alex G. <mr.nuke.me@gmail.com> wrote:
The bad-ness or good-ness of motives is relative. Note that I'm not
using "bad" in the sense of "evil". Let's look at the six gatekeeper idea:
- Easier for commercial entities to upstream code, therefore faster
progress for coreboot (good motive). (a)
- Easier for commercial entities to upstream code, therefore we can get
lazy even if code quality drops (bad motive). (b)
That's not the intent. The way you are stating this has a lot of built-in assumptions, and you're mixing some things up. That's our fault; the old rule is that if someone did not understand what you said, it's your fault, not theirs. So, speaking as one of the guys who reviewed the email, I'm sorry it was not clearer.
Let's be honest. Upstreaming contributions from Google has been a long and tedious O(n^2) process. Even though we've improved a bit in terms of Jenkins build time (optimizations to abuild, et cetera), so that we don't have to wait a week for a patch bomb to verify, the workflow of one review per patch is still sub-optimal.
So, first, the intent of the six gatekeeper idea is, in part, to be sure we're being very careful about what goes in. [...]
Six seems like a very arbitrary, even number.
[...]
So it's not about "Easier for commercial entities to upstream code" -- it's about not having to do the full review process *twice*, given that the code has been picked to pieces by experienced long-time coreboot coders, most of whom (no offense intended) are better qualified to review the code than almost anyone else. That's why I claim that what you are saying is not quite right.
I think we need to restart this discussion without the implicit assumption that gerrit is a part of the workflow. We can do what Linux does. Different individuals maintain different sub-somethings of the coreboot tree. I'm not trying to establish any criteria for maintainer status.
Short answer:
1. Google team (*) assigns a coreboot representative
2. Rep decides it's time to merge some good branch
3. Rep takes the internal branch and works on top of it
4. Rep makes the branch mergeable
5. Rep submits a pull/merge request to the maintainer (**) of the touched code
6. Maintainer reviews the branch as one unit and requests changes
7. Rep applies changes on top of the branch
8. Repeat 6 and 7 as needed
9. Maintainer likes what he's seeing and merges the branch into his tree
(*) They're still welcome to hang out on IRC and the ML, and/or submit individual patches.
(**) The maintainer and the rep must be different persons, preferably not from the same team and/or not from teams working on the same thing.
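Purely as an illustration (the repo names, branch name, and commit messages below are made up, and local directories stand in for the rep's and maintainer's trees), the branch-based flow above could look roughly like this with plain git:

```shell
#!/bin/sh
# Hypothetical sketch of the rep/maintainer flow; all names are invented.
set -e
tmp=$(mktemp -d)
cd "$tmp"

# Maintainer's tree (stands in for a coreboot subsystem tree).
git init -q maintainer
cd maintainer
git config user.email maintainer@example.com
git config user.name Maintainer
git commit -q --allow-empty -m "upstream base"
cd ..

# Steps 3-4: rep works on top of the internal branch and makes it mergeable.
git clone -q maintainer rep
cd rep
git config user.email rep@example.com
git config user.name Rep
git checkout -q -b internal-branch
git commit -q --allow-empty -m "internal change 1"
git commit -q --allow-empty -m "internal change 2"
cd ..

# Steps 5-9: maintainer fetches the branch, reviews it as one unit,
# and merges it into his tree (the review/fixup rounds are elided here).
cd maintainer
git fetch -q ../rep internal-branch
git merge -q --no-ff -m "Merge internal-branch from rep" FETCH_HEAD

# base + two branch commits + one merge commit:
count=$(git rev-list --count HEAD)
echo "$count"    # -> 4
```

The `--no-ff` merge is what makes the branch visible as one logical unit in the maintainer's history, matching the "review the branch as one unit" step.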
This process eliminates the *twice* review. Individual patches are no longer reviewed upstream; however, a review is still done, and corrections are still applied. Since a logical change is a branch rather than a patch, the overhead is reduced significantly without eliminating the "community scrutiny" phase. You keep all the goodies and reduce review complexity from O(n^2) to O(1).
Sound good so far?
But wait! There's more! As a bonus, since individual patches are no longer altered on their way upstream, the commit hashes of shipping firmware are preserved in the tree.
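To make that concrete, here is a small self-contained sketch (repo and branch names invented) showing that a plain merge, without rebasing or amending, keeps the original commit hash intact:

```shell
#!/bin/sh
# Hypothetical demo: a branch merged without rebase/amend keeps its
# commit hashes, so a hash baked into shipping firmware stays findable.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q repo
cd repo
git config user.email rep@example.com
git config user.name Rep
git commit -q --allow-empty -m "base"
git checkout -q -b firmware-branch
git commit -q --allow-empty -m "commit that shipped in firmware"
shipped=$(git rev-parse HEAD)

# Merge the branch back, as the maintainer would, without rewriting it.
git checkout -q -              # back to the original branch
git merge -q --no-ff -m "Merge firmware-branch" firmware-branch

# The shipped commit's hash is unchanged and reachable from the tree:
if git merge-base --is-ancestor "$shipped" HEAD; then
    result="hash preserved"
else
    result="hash lost"
fi
echo "$result"
```

A rebase or cherry-pick, by contrast, would create new commit objects with new hashes, and the shipped hash would no longer appear upstream.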
So, why is this better than the gatekeepers idea?
* We don't bottleneck patches by limiting the number of submitters/maintainers
* Maintainers can focus on the part of the tree matching their expertise
* Reduces patch bikeshedding, as maintainers have the final word
Take this a step further: maintainer trees are not the master tree. Maintainers still have to periodically get their branches merged with master.
Finally, a simple question here: how many gatekeepers are there for the final full released version of each version of the Linux kernel?
Do you want me to answer that in binary, ternary, octal, or hexadecimal? Those gatekeepers don't deal with individual patches, do they?
My $.02 anyway.
Keep on sending them. A few hundred more of them, and I'll have enough for a cigar.
Alex