Hi David,
I'm a bit confused by some of your arguments; they just seem weird to me. So my apologies in advance if I've misinterpreted you.
On 26.01.20 19:46, David Hendricks wrote:
>> Of course, there'll always be a gap when a new platform is added. We could make it a rule, though, that no commit should be merged to the master branch before the one that hooks the build test up is reviewed.
> This means that such code isn't build tested for even longer, effectively the whole development cycle until all the ducks are lined up.
You mean it's less tested when it's sitting in the review queue, waiting for the hook-up? I don't see any difference in that aspect. As long as it's not hooked up for Jenkins, only the developers that work on the new platform will do build tests. And they can only spot conflicts when/after rebasing the patch train, no matter if parts of it are already merged. However, in case of conflicts, one could update patches that aren't merged yet. But for patches that are already merged, one would have to write fixups (more patches) and would have room to repeat the error (I feel like I've seen fixups for fixups before).
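To make the difference concrete, here is a small sketch in a throwaway repository (the commit messages and board name are made up). As long as a change is unmerged, a fix after rebasing simply replaces the pending commit; once it's merged, every fix has to become an additional commit:

```shell
# Throwaway repo to illustrate the difference (hypothetical commits).
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.name=dev -c user.email=dev@example.com \
    commit -q --allow-empty -m "mb/vendor/board: Add new board"

# Change not merged yet: a fix after rebasing just replaces the
# pending commit; history still contains a single, clean commit.
git -c user.name=dev -c user.email=dev@example.com \
    commit -q --allow-empty --amend -m "mb/vendor/board: Add new board"
git rev-list --count HEAD    # prints 1

# Change already merged: the fix has to be a follow-up commit,
# which is where "fixups for fixups" start to accumulate.
git -c user.name=dev -c user.email=dev@example.com \
    commit -q --allow-empty -m "mb/vendor/board: Fix build after rebase"
git rev-list --count HEAD    # prints 2
```

On Gerrit, the unmerged case corresponds to pushing a new patch set to the same change; the merged case means yet another change on master.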
> For an individual developer this might make sense, but for large projects I think that will make automation and coordination across multiple companies unfeasible.
We are still talking about adding new platform support to coreboot and build testing the code, right? I try to keep the whole process in mind, and everything I've said so far was meant to make it easier to work together on such an endeavor, even across multiple companies and the coreboot community. I have no idea how you got the impression that my view is too narrow.
> The likely outcome is that large projects are developed internally and eventually we see a huge code drop after the product has been released (when the "ducks are lined up" as Patrick says), and by then nobody involved with development really cares about fixing up the code since their paycheck depends on launching the next thing.
How? Why? I'm really missing a step in your argument. Is it like with the Underpants Gnomes?
1. Build tests
2. ...
3. Downstream development
What I proposed was to merge patches later. Not to review them later. Technically, the only difference would be when a commit is copied to another branch. It's still the same commit, just on a different branch. So what's the difference for working together?
So I really don't see how that would push people away from Gerrit. I would even expect the exact opposite. If you merge patches that are not hooked up for build testing early, they are more likely to break. So what happens when somebody in one company rebases and has to fix a line somewhere; will they commit and push the patch immediately so everyone else working on the new platform can fetch it? I highly doubt that. However, if the queue stays on Gerrit, it would only cause build breakage when that queue is rebased, and it would be much easier to share the resulting fixes.
But what am I talking about? David, please share your experience: explain your workflow and why build tests would break it.
> Then the community moans about not having a voice earlier in the process and wonders why companies aren't doing development upstream. In other words, we go back to 2012, when Chromebooks started appearing upstream (hundreds of patches at a time being pushed from an internal repo to upstream), or 2020, when massive coreboot patches based on an old commit get posted elsewhere (like https://github.com/teslamotors/coreboot).
Please explain your reasoning before making wild claims about consequences. This reads like propaganda.
> Maybe even in a designated area within src/mainboard? "Staging" perhaps?
Probably overkill; how about simply having a warning instead of the board name in the Kconfig prompt?
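A minimal sketch of what that could look like (the option name and prompt text are made up, not an actual coreboot Kconfig entry):

```
config BOARD_VENDOR_NEWBOARD
	bool "WARNING: work in progress, not yet build tested"
	help
	  Support for Vendor Newboard. This port is still being brought
	  up and is not covered by the Jenkins build tests yet.
```

The warning would disappear again (and the real board name return) with the commit that hooks the board up for build testing.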
> Agreed, and moving code around also makes history more difficult to follow, so it's best to get the code structure in place early on if possible.
It's code, so it's always possible.
Nico