Hi Patrick,
thank you for your response. I really appreciate that somebody is helping to clear things up.
On 07.11.19 15:58, Patrick Georgi wrote:
> On Thu, Nov 07, 2019 at 12:05:44PM +0100, Nico Huber wrote:
>> - Few people seem to take the might of Git into account. We have a
> Git is rather limited in some respects: For example it has no notion of a copy (except by using interesting merge hacks: create a branch where you rename, merge the two branches, keeping both copies in conflict resolution and you'll get a copy operation that git understands).
Yeah, Git isn't perfect. But that doesn't mean we can't use what it provides, does it?
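For what it's worth, here's a quick sketch of what Git already provides even without a recorded copy operation (paths invented, flags from memory, so take it as an illustration rather than a recipe):

  # Heuristic copy/rename detection when inspecting history:
  git log -C --find-copies-harder --summary -- src/soc/intel/
  # Trace lines back to the files they were copied from
  # (each extra -C widens the search, up to any file in any commit):
  git blame -C -C -C src/soc/intel/newchip/lpc.c
  # Follow a single file across renames:
  git log --follow src/soc/intel/newchip/lpc.c

The detection is heuristic, sure, but it works best when the copy and the subsequent adaptations land as separate commits.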
>> tool that can significantly increase the efficiency of development and can help to preempt bugs.
> Interesting claim, but no: in the end it's up to developers. Git serves the function of a (rather stupid) librarian who keeps track of things so they don't get lost.
What are we bikeshedding here? That I said "[it] can" and should have said "[it] can be used to"?
>> But many people would accept a process that breaks Git benefits. Especially with the growth of the coreboot community, I believe this gains importance.
> As Stefan explained yesterday, the copy & adapt flow was established in part in response to issues with high-velocity development overwhelming the project's review capacity.
> There are two approaches, each with their upsides and downsides, but from my point of view the review capacity issue is a real issue, while "git can't track copies" is not: it's easier to improve git (I'd expect they accept patches) than to ramp up on reviewers (that aren't in the next step accused of rubber stamping commits).
Is review capacity still an issue? From what you write, it seems that it was back in 2012/2013, before I got really involved with the community. But during the last 3~4 years I witnessed the opposite. There were several occasions where I begged people to let me do a review and I was pushed away. In at least one case, I didn't even ask to break a big commit up; it was just decided that a review would be too much to bear for the submitter. So it would seem the capacity issue is on the other side:
Do we lack the capacity to write reviewable patches?
If that is already the case, I fear it will turn into a race to the bottom: fewer reviews, or less thorough reviews => more bugs => less time to write decent commits.
> One approach is to bring up a new chipset feature by feature. This might happen by bringing over code from another chipset that is sufficiently similar, editing it and only then putting it up for review.
> This means that every instance of the code will look slightly different (because devs actively have to take apart .c files and reassemble them as they reassemble the various features in the chipset), and every line of code will have to be reviewed. That's where things broke down.
> On top of that, since every instance of the code will look oh-so-slightly different, it becomes harder to port fixes around along the lineage of a chipset.
I agree, that is an issue I've seen, too. Though, without any guideline to keep older copies up to speed, we have that problem in any case (currently, everything grows uncontrolled).
> The other approach is to replicate what the chip designers (probably) did: Take the previous generation, make a copy and modify, under the assumption that the old stuff isn't the worst place to start from.
> This means that the first code drop for chip(X+1) in coreboot likely is more suitable for initializing chip(X) instead of chip(X+1), as that's what the later follow-up commits provide.
> The main risk here seems to be that old stuff that isn't needed anymore isn't cut out of the code.
That is one issue, but I see a much bigger one...
> On the upside, when applying transitive trust (that is: the assumption that the original chip(X+1) code isn't worse than the chip(X) code it came from),
... transitive trust is a fallacy. While the copied code isn't worse in terms of coding style and whatever poison the programming language has to offer, it can be worse for the new chip. If we don't keep a record and review what has to be changed for the new chip, more differences between old and new will be overlooked.
I can't prove it. But I assume that the effort to find overlooked details afterwards exceeds what we saved during the initial review.
> the main concern is looking what changed between the generations, reducing the load on the project during review.
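To make that concrete (directory names invented), the reduced review load only materializes if the series keeps the mechanical copy and the adaptations apart, roughly like this:

  # 1) Mechanical, unmodified copy -- trivially reviewable:
  cp -r src/soc/intel/chipx src/soc/intel/chipx1
  git add src/soc/intel/chipx1
  git commit -s -m "soc/intel/chipx1: Copy from chipx"
  # 2) Adapt for the new chip in follow-up commits
  #    (edit files under src/soc/intel/chipx1/, then:)
  git commit -s -a -m "soc/intel/chipx1: Adjust <feature> for chipx1"
  # Reviewers (and later debuggers) can then compare the lineage:
  git diff --no-index src/soc/intel/chipx src/soc/intel/chipx1

If the adaptations get squashed into the copy, the record of what actually had to change for the new chip is lost, and that is exactly the part I'd want reviewed.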
> You assert that the first approach I listed is superior to the other, but you ignore the issues we had with it.
I'm not ignoring the past, sorry if I made it look like that. It just came as a surprise to me because I didn't witness any discussion of the review-capacity trouble. I have no idea, though, how to measure whether we still have that problem. If reviewing turns out too slow, it's hard to tell which side, author or reviewer, has to invest more time (if the authors know what they are doing, it's easier for them to speed things up, IMHO).
> And on the project management side it's a hard sell (to put it mildly) when you expect everybody to follow your preferred scheme (without acknowledging its downsides) while claiming that it's project policy (it's not: we don't have a policy on that).
Well, I was told that we review the code before it's merged. IIRC, by you :) I wasn't told that we make exceptions due to a lack of reviewing capacity. Later, our Gerrit guidelines were introduced and they show no sign of exceptions. I guess we should be more careful about what we write down, and even more so about what we tell people in private without writing it down.
> We can decide to put up a policy, but we should be aware of the costs of whatever approach we mandate.
Given that the code is often not readable anymore and lacks both reason and reasoning, I believe the costs of the copy-first approach are too high. Let's try something else?
Nico