On 26.01.20 19:46, David Hendricks wrote:
Of course, there'll always be a gap when a new platform is added. We could make it a rule, though, that no commit is merged to the master branch before the one that hooks up the build test is reviewed.
This means that such code isn't build-tested for even longer, effectively the whole development cycle, until all the ducks are lined up.
You mean it's less tested while it's sitting in the review queue, waiting for the hook-up? I don't see any difference in that regard. As long as it's not hooked up for Jenkins, only the developers working on the new platform will run build tests. And they can only spot conflicts when or after rebasing the patch train, no matter whether parts of it are already merged. However, in case of conflicts, one could simply update patches that aren't merged yet. For patches that are already merged, one would have to write fixups (more patches) and would have room to repeat the error (I feel like I've seen fixups for fixups before).
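To illustrate the difference (the commit message is made up): as long as a patch only lives on Gerrit, a rebase conflict can be folded into it and pushed as a new patch set; once it's merged, the only remedy is a separate commit that has to survive review on its own:

    # patch still in review: fold the fix in, e.g. during an
    # interactive rebase, then push updated patch sets to Gerrit
    git rebase -i origin/master
    git push origin HEAD:refs/for/master

    # patch already merged: a follow-up fixup commit is the only way
    git commit -m "soc/example: Fix fallout from rebase"
    git push origin HEAD:refs/for/master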
For an individual developer this might make sense, but for large projects I think that will make automation and coordination across multiple companies unfeasible.
We are still talking about adding new platform support to coreboot and build testing the code, right? I try to always keep the whole process in mind. And all I've said so far was to make it possible to work better together on such an endeavor even across multiple companies and the coreboot community. I have no idea how you got the impression that my view is too narrow.
I just don't think the model you propose is realistic. IMO the best case is that it results in huge patch chains that churn a lot and are frustrating to work with. And in the worst (and most likely) case it will result in new silicon and platforms being developed outside of upstream entirely.
Is it like with the Underpants Gnomes?
Speaking of weird arguments...
So what happens when somebody in one company rebases and has to fix a line somewhere; will they commit and push the patch immediately so everyone else working on the new platform can fetch it? I highly doubt that.
That's the current expectation - pull from master, rebase, and push fixes when needed.
However, if the queue stays on Gerrit, it would only cause build breakage when that queue is rebased and it would be much easier to share the resulting fixes.
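A minimal sketch of that, assuming coreboot's usual Gerrit flow (the abuild invocation is just one way to build-test locally):

    git fetch origin
    git rebase origin/master               # rebase the whole train once
    util/abuild/abuild -t <vendor/board>   # build-test before pushing
    git push origin HEAD:refs/for/master

Gerrit matches the rebased commits to their existing changes via the Change-Id, so conflict fixes show up as new patch sets that everyone can fetch, instead of extra fixup commits on master.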
I think this model will result in huge patch chains on gerrit, and in my experience it's never easy to shuffle patches and fixes that way.
I simply disagree with you here, and based on the reaction of others in the community it appears I'm not alone in thinking that your proposed workflow is unfeasible.
But what am I talking about? David, please share your experience, explain your workflow, and argue why build tests would break it.
Then the community moans about not having a voice earlier in the process and wonders why companies aren't doing development upstream. In other words, we go back to 2012, when Chromebooks started appearing upstream (hundreds of patches at a time being pushed from an internal repo to upstream), or 2020, when massive coreboot patches based on an old commit get posted elsewhere (like https://github.com/teslamotors/coreboot).
Please explain your argumentation before making wild claims about consequences. This reads like propaganda.
Here you go: https://mail.coreboot.org/hyperkitty/list/coreboot@coreboot.org/message/6YV7...
To put it another way, coreboot needs upstream development under the current model more than it needs your idealistic (and unrealistic) development model. Not that things in the current model are perfect, but I'd rather see the AMD code get merged in an imperfect state, developed toward maturity, and eventually hooked into the build system rather than have it developed out of tree.