On 5/6/21 1:35 AM, Julius Werner wrote:
As an open source project, coreboot doesn't have anywhere near the resources to do enough QA to guarantee that the tip of the master branch (or any branch or tag, for that matter) was stable enough to be shipped in a product at any point in time... even Linux cannot do that, and they have way more resources than we do. It's always best effort, and if you want to put this on something you want to sell for money, you'll have to pick a point where you take control of it (i.e. cut a branch) and then do your own QA on that.
This is what we are doing right now.
I wonder if the community and leadership agree with the "set up your own QA" approach.
We will advocate for improved and extended QA for coreboot and any other OSF, since without it, working and doing business is a nightmare that simply blocks growth.
I mean, I don't think anyone here is going to argue against better QA, it's just hard to do in practice. This is definitely not a burden you can just push on developers -- coreboot supports hundreds of mainboards and most of us only own a handful of those. It's just not practically possible to make everyone who checks in a patch guarantee that they don't break anyone else on any board with it. We all do our best but accidents do and always will happen. The only way to get consistently more confidence in the code base is through automated systems, and those are expensive... we already have good build testing, at least, and our recent unit test efforts should also help a lot. But we don't have any real hardware testing other than those few machines 9elements sponsored which only run after the patch is merged (and which many committers don't pay attention to, I think). If you want a big lab where every patch gets tested on every mainboard, someone needs to set all that up and someone needs to pay for it. I'm actually involved in a similar thing with the trustedfirmware.org project right now who are in the process of setting up such a lab, and I'm not sure if I'm allowed to share exact numbers, but you're quickly in the range of thousands of dollars per mainboard per year just for maintaining it (to say nothing of the upfront development cost).
We are not in favor of centralization. We would advocate a decentralized approach with a well-known interface to which every company's lab can connect. I do not recall you discussing costs and testing systems during OSFC, but it seems you may have a lot of important insights.
For the boards that 3mdeb maintains, we are already testing them and would be glad to hook into the patch testing system in a secure way, but I would like to know where the interface documentation is, so I can evaluate the cost of integration and convince customers to go that path. This has been expressed many times in various communication channels (conferences, Slack).
There are many reasons for rebasing or updating firmware; to name a few: security and maintainability. The second case is interesting since maintaining 5 projects that are all 4.12-based is very different from maintaining 4.0, 4.6, 4.9, etc.
3mdeb does Linux maintenance for industrial embedded systems, so we can easily compare the effort involved in coreboot and Linux maintenance. IMO Linux is doing quite well: setting up stable, LTS and SLTS (10 years of support) branches is a huge win and shows a clear understanding that the project is expanding into realms where stability is a key factor. Linux can be maintained far more easily and comes with more and more QA guarantees.
So, I actually get the feeling that what you really want is well-maintained stable/LTS branches for coreboot releases (like Linux has)?
I think it can be expensive to go all-in on that, but if we could move in that direction it would be great.
Because for security and bug fixes in a real product, always rebasing onto master is just a bad idea in general. coreboot changes all the time, features get added, changed, serialized data formats differ... you really don't want to keep pushing all those changes onto your finished product and figure out how they affect it every time. You really just want to stick with what you have and only pull in security and bug fixes as they come up, I think.
As I said, this is the old-time UEFI firmware development approach with forks. Personally I disagree, because it seems to be an approach for quickly dying products (<5 years), not for something that should last 10+ years or more (the AMD CPU used by PC Engines was introduced in 2014 and its EOL is 2024, but the product will last longer). That's why we test things and detect regressions in user-visible scenarios. Please note that we have released PC Engines firmware every month for almost 4 years, so it is definitely possible.
The question is: why does coreboot change so dramatically in all the mentioned areas? Do projects with a similar lifetime also change in such a significant way?
To that I would say: yeah, stable branches are great! It would be really cool if we had them! The problem is just... someone has to step up and do it. This is a volunteer project so generally things don't get done unless someone who wants them to happen takes the time and does it. Linux has stable branch maintainers who do a lot of work pulling in all security/bugfix patches and backporting them as necessary. If we want the same for coreboot, we'll need someone to step up and do that job. Maybe patch contributors can help a bit -- e.g. in Linux, submitters add a `cc: stable` or `should be backported up to 3.4` to their commit message, which then tells the stable branch maintainers to pick that up. We could probably do something similar. But we still need someone setting up and maintaining the branch first.
I think the Linux model is not bad and seems to work for many projects. My point is not another sudden, dramatic switch to a new way; my point is to consider the problem and discuss what small steps we can take to improve things. Of course, only if we agree this is the correct direction and an important problem to solve.
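For reference, the mechanism Julius describes is just a trailer in the commit message. A hypothetical coreboot commit adopting the same convention could look something like this (the subject line, tag spelling and version below are made up for illustration, not an existing coreboot policy):

    mb/example/board: Fix SPI flash lockdown on S3 resume

    Without this change the flash stays writable after resume from S3.

    Cc: stable  # backport to 4.12 and later
    Signed-off-by: Jane Doe <jane@example.com>

In Linux the trailer is typically written as "Cc: stable@vger.kernel.org # 4.4.x" or similar, and the stable branch maintainers pick those commits up when assembling the stable queue.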
P.S. We are doing a vPub tomorrow: https://twitter.com/3mdeb_com/status/1387017457118875651 Feel free to join and discuss this topic live.
Best Regards,