Hi Denis, Just a quick question about your last reply:
On 1/4/23 09:33, Denis 'GNUtoo' Carikli wrote:
On Wed, 4 Jan 2023 04:37:01 +0100 (CET) Martin Roth gaumless@tutanota.com wrote:
Hey Denis, My reply got out of control here. Apologies for the length, feel free to ignore the top and just skip to my replies. :)
I think that getting the background as well is important here.
Unfortunately, there are simply not enough people to maintain all of the platforms and chips in the tree the way things currently work. We could do something like add a requirement that each platform added needs to donate two (or choose an arbitrary number) boards to the coreboot project that we install somewhere and test on a (daily/weekly/monthly) basis to make sure that they're working.
Currently we don't have the funds or a place to do that. It also puts a burden on an individual who wants to add a board to the coreboot tree. It creates problems for companies that use coreboot for an embedded system that isn't easy to test.
Here I'm not advocating for magically getting things supported without putting effort or resources behind it; rather, I'm trying to understand, given the current constraints, what can be done.
I think you're right that we need some way to guarantee that boards added to coreboot aren't going to be dropped again in a short period of time, but that requires investment from companies using coreboot to make sure that it happens.
The investment could also come from the people and organizations that want the boards to stay in Coreboot. My questions were mainly aimed at understanding how many resources that would require, to see whether this is doable or not.
There are people and organizations who want the Asus KGPE-D16 to continue to be supported in some form, so we are currently evaluating our options with that, with the constraint that the code needs to keep working over time. And upstream Coreboot is one of the options being evaluated.
A French association (Libre en Communs) already applied for funding through NLnet to bring the KGPE-D16 back to upstream Coreboot.
So if this is accepted, we will also need a plan to make sure that the board isn't removed again. And for that we'd need not only to review patches, but also to actually get the work done to keep it in the tree.
So for instance we could get donations through an association to fund someone to keep the board supported, but to do that we do need to have at least a vague idea of the budget required for that.
And given that there is no way to guarantee that a mainboard doesn't get removed, we'd also need to watch what is happening with the GM45 Thinkpads, as this problem also affects these computers.
Could you elaborate on this? I'm not aware of any planned changes that would result in GM45 being dropped. Is it just about finding people to provide support for those boards/chipset? And if so, does the same not apply to other older chipsets like i945 or x4x? Or even Sandy/Ivy Bridge?
And here too I guess that we could get some funding if needed, but we'd probably need to make that work for the D16 first and extend it to more mainboards.
When a chip is added to coreboot, how long is a company expected to maintain that chip in the coreboot repository? If we make it too short, nobody wants to develop mainboards for it. If we make that time too long, chip vendors aren't going to want to add their chips to the tree. I'm not sure how to balance that.
To keep supporting a given mainboard, I only know these options:
- Do it like Linux: put the burden on people sending patches not to break the existing mainboards. This reduces the amount of work needed to maintain a given mainboard, but it still requires maintainers.
- Do it like it's done now: put most of the burden on people who want to continue maintaining existing mainboards, but make it easier to clean up code and add new mainboards.
U-Boot is probably somewhere between the two.
Another option would be to have different rules for different code quality: for instance, require very good code quality (like Linux) for newer contributions, but also allow reworking very old code (like it's done now). This way, at some point we'd end up with good code quality and we'd be able to follow the Linux model if people want to go that route.
In all these cases we do need maintainers to review patches, but the burden is put at different places, so it has different effects.
[...]
Since there are different policies in Coreboot, I'm trying to understand if it's possible to somehow find ways to get the same kind of assurances as with Linux that specific mainboards will not be removed, and if so, how to do it, how many resources it requires, etc.
We don't currently have any way to make this assurance. It depends on a number of things. Imagine that there's a mainboard maintainer, but no chipset maintainer. If the chipset isn't being maintained and is being removed from ToT because it's not supported and has become problematic, it's not like the mainboard can stay independently.
As far as the resources needed, that's difficult for me to say. It depends on the chipset and the changes that are needed.
With the KGPE-D16 I mostly had the chipset drivers in mind, because, as I understand it, this is probably where most of the work will happen.
Here the people making sure that the board isn't removed would also have to maintain all of its chipsets. And if more boards (like the KCMA-D8) are added back, the work of maintaining the chipsets would also benefit them.
That's why I was wondering if there was a way to understand how invasive the future changes could be.
Some of the most problematic issues, like requiring a special compiler for boards that don't have enough cache to use cache as RAM, were already fixed by removing the affected boards. And the boards we want to continue supporting have cache as RAM.
So if there are policies that require changes, but the changes can be made without too much work, then that might work too.
The issue here is to somehow get assurances that an overly drastic change will not make it impossible to continue supporting that mainboard.
Remember that the boards that are removed from ToT are not gone. We've created branches, and the boards that have been removed from ToT can be maintained on those branches. We have builders set up to test patches pushed to those branches for that reason. Those boards may not get all of the latest features automatically, but do they really need all of the latest features? Does the KGPE-D16 need an eSPI driver? No - it doesn't support eSPI.
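Roughly, and without having double-checked the exact branch name or toolchain target (treat both as assumptions on my part), working from such a branch looks something like this:

  git clone https://review.coreboot.org/coreboot.git
  cd coreboot
  git checkout 4.11_branch          # assumed name of the 4.11 release branch
  make crossgcc-i386 CPUS=$(nproc)  # build the reference toolchain for x86 boards
  make menuconfig                   # select the mainboard, e.g. the ASUS KGPE-D16
  make                              # produces build/coreboot.rom

Patches against the branch then go through the same Gerrit review flow as master, just with the branch set accordingly.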
That's also what I'd like to evaluate, but I've not started looking into it yet.
Here the concern is to be able to maintain that mainboard for an extremely long time, so we also need to take care of the following (a rough sketch of such a check follows this list):
- Keeping crossgcc updated (that's probably easy)
- Making sure that the interfaces between coreboot and payloads still work
- Making sure that packaged coreboot utilities (nvramtool, cbmem) still work.
- Fixing security issues if they arise
- etc.
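To make that concrete, the periodic check I have in mind could be as simple as the sketch below; the script is hypothetical and the exact list of utilities would have to be adapted to what distributions actually package:

  #!/bin/sh
  # branch-health.sh (hypothetical): rebuild the toolchain, the utilities and
  # the image on the maintenance branch to catch bitrot early.
  # Assumes a .config for the board is already present in the tree.
  set -e
  make crossgcc-i386 CPUS=$(nproc)  # the reference toolchain still builds
  make -C util/cbmem                # cbmem console/table reader still builds
  make -C util/nvramtool            # nvramtool still builds
  make                              # the board image itself still builds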
I assume that branches other than 4.11 also have the same issues.
Here I'm also concerned about getting access to community support. For instance, if we need to ask questions about how to best fix a security issue in 4.11, would we get help on the mailing lists? Or would nobody be interested because 4.11 is not the latest version and so much has changed between the two?
Because coreboot is a firmware project, it's very different than something that's pure software and can be tested easily. If people want to keep the chipsets and boards on ToT, we need to have people actually working to keep them there. There are a very few people at coreboot doing a LOT of work to keep the project running. We can't do everything for every board on our own. We need other volunteers who own the boards to test them, submit bugs and patches, and help with the project. Without that, coreboot has no way of even knowing if a board is actually working.
Indeed, here the question is: if we provide the people, can we somehow manage to keep supporting some boards? And can it work with the current policies? And at what cost?
If I look at Linux, in the worst-case scenario I can look at the number of maintainers for a given driver and expect that the maintainers will, at worst, work full time on reviewing patches, so that can be roughly calculated.
There is also information on how to set up automatic test infrastructure (for instance with LAVA), so the resources needed for that can also be predicted to some extent.
This interests me because I'm trying to evaluate whether it makes sense for me and other interested people to contribute big changes to Coreboot, for instance by adding back a mainboard like the Asus KGPE-D16. If we do a lot of work and that work is removed right after, it's not worth it, so it's better to check beforehand than to have drama if the huge amount of work is removed.
The issue with Coreboot is that I'm unsure if I can do the same kind of calculation as with Linux: the APU1 was removed despite the fact that the mainboard was "supported".
As mentioned in the scenario above, the mainboard was marked as being supported, but the chipset was not. Unless someone is willing to step up as the supporter of the chipset, I don't know what to tell you. We can't really keep the mainboard without the chipset. I don't know if there's something similar in the Linux hierarchy to look at, but if there is, I'd be interested in knowing how it's handled there.
Linux wouldn't drop chipsets unless there are no users remaining. For instance, if all the D16s were destroyed and there were no users left at all, then Linux could drop the support.
People sending new patches would have to not break existing chipset drivers.
The changes would at least be tested for compilation. There are also automatic boot tests for some single board computers with Linux.
And if it breaks and no one notices, then it could be up to the people who find the breakage to bisect it and report a bug.
And in the end the chipset drivers would need maintainers to review the patches, but not necessarily to adapt them to newer APIs, as it would be up to the people who introduce these APIs not to break the existing chipsets. So ideally they would have to keep the old API around until the chipsets are converted to the newer one.
This also makes bisecting issues easier, though Linux's history is not linear (running gitk on a Linux tree shows that), so bisecting isn't always easy anyway.
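As an illustration of what compile testing plus bisection could look like on the Coreboot side, one could drive git bisect with abuild; the board target and the "good" revision below are assumptions on my part, and I'm assuming abuild exits non-zero when the build fails:

  # find the commit that broke the build of a given board
  git bisect start
  git bisect bad HEAD                # the current tree no longer builds the board
  git bisect good 4.11               # assumed tag of the last release known to build it
  git bisect run util/abuild/abuild -t asus/kgpe-d16
  git bisect reset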
So this is why I'm trying to understand how much work it is to convert boards; for that, understanding how deprecations are decided could give me hints about the amount of work to expect in the worst case.
The usual reason is progress: Infrastructure module X has been replaced by infrastructure module X+1. Removing X helps keep the sources manageable and likely opens opportunities to improve the codebase even more.
Here I am wondering precisely how the deprecation of module X is decided. For instance, can anyone convert some code to Rust and expect everybody else to learn Rust and rewrite everything in Rust?
Is it up to maintainers of each module to decide when to deprecate module X in favor of module X+1?
This is typically suggested on the mailing list, beaten to a pulp, then discussed in the coreboot leadership meeting for a decision.
OK. I guess that mainboard and chipset maintainers can also give some input then.
If so, what factors are usually taken into account? I assume that if there is too much change to do (like completely rewriting Coreboot in Rust), they won't be able to force a change like that on the rest of the Coreboot contributors. If that's the case, it gives some indication of the amount of work required to convert to newer modules.
It also brings the question of what is taken into account when deciding to deprecate module X. For instance, is the maintainability of each "module" the only concern, or are the number of supported mainboards and end users a concern too?
Is the amount of work needed to convert from module X to module X+1 taken into account? And is there usually some documentation that explains how to do the conversion? Or do people need to find out by looking at the release notes and at other boards being converted?
We don't currently have a written policy about it.
In that case, is the rationale for the decisions recorded somewhere? If so, we could just read it to understand what was taken into account.
And that work could probably be reused later on if one day people want to make a policy.
ifneq ($(CONFIG_LEGACY_SMP_INIT),)
	@echo "+----------------------- /!\ WARNING /!\ ---------------------+"
	@echo "| CONFIG_LEGACY_SMP_INIT is deprecated.                       |"
	@echo "| It will be removed in Coreboot release 4.18 (November 2022).|"
	@echo "| Please migrate your mainboard to CONFIG_PARALLEL_MP,        |"
	@echo "| or it will be removed right after the Coreboot 4.18 release.|"
	@echo "| See the Coreboot release notes[1] for more details.         |"
	@echo "| [1]Documentation/releases/coreboot-4.16-relnotes.md         |"
	@echo "+-------------------------------------------------------------+"
endif
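As a side note, checking whether a given board is still affected by such a deprecation is mostly a matter of searching the tree for the Kconfig symbol; I'm assuming here that the option is wired up with "select" somewhere under src/:

  # list the files that still select the deprecated option
  git grep -l "select LEGACY_SMP_INIT" -- src/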
[...]
I'm all in favor of that. I'll add it to the discussion of the coreboot leadership meeting on the 11th.
Thanks.
Denis.
Cheers, Nicholas