Juan is not available now, and Anthony asked for the agenda to be sent early. So here it is:
Agenda for the meeting Tue, May 28:
- Generating acpi tables
- Switching the call to a bi-weekly schedule
Please send any topics you are interested in covering.
Thanks, MST
On Thu, 2013-05-23 at 15:41 +0300, Michael S. Tsirkin wrote:
Please send any topics you are interested in covering.
"add ACPI Embedded Controller"
An Embedded Controller can work as an event carrier, so ACPI events can be passed between the platform (such as QEMU-emulated hardware) and the OS, and the OS can execute _Qxx methods that you define yourself. This mechanism is very flexible, and it avoids having to create special devices in QEMU and matching drivers in the Linux kernel. To give one example, CPU hotplug: we could pass the hotplug event via a _Qxx method, the Linux kernel's Embedded Controller driver would receive the event and trigger the cpu_up process, and you would no longer need "echo 1 > /sys/devices/system/cpu/cpu1/online".
What do you think?
Thanks!
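(To illustrate the flow proposed above, here is an editor's sketch in C. It follows the standard ACPI EC interface - data port 0x62, status/command port 0x66, query command QR_EC 0x84 - but the event number and its CPU-hotplug meaning are hypothetical, not an existing QEMU interface.)

/*
 * Editor's sketch of the standard ACPI EC query handshake referenced
 * above (ACPI spec: data port 0x62, status/command port 0x66, QR_EC
 * command 0x84).  The event number and its mapping to CPU hotplug
 * are hypothetical, not an existing QEMU interface.
 */
#include <stdint.h>

#define EC_STATUS_SCI_EVT  (1 << 5)  /* "an event is pending" bit       */
#define EC_CMD_QUERY       0x84      /* QR_EC: ask which event fired    */

static uint8_t ec_status;
static uint8_t ec_pending_event;     /* 0x01 would make the OS run _Q01 */

/* Platform side (e.g. a QEMU-emulated EC): queue an event, raise SCI. */
static void ec_raise_event(uint8_t query_num)
{
    ec_pending_event = query_num;
    ec_status |= EC_STATUS_SCI_EVT;
    /* ...assert the SCI so the guest's EC driver wakes up... */
}

/*
 * Guest side, done by the kernel's EC driver (shown for the flow only):
 *   1. sees EC_STATUS_SCI_EVT set when reading port 0x66
 *   2. writes EC_CMD_QUERY to port 0x66
 *   3. reads the event number from port 0x62
 *   4. runs the matching _Qxx AML method, e.g. _Q01 -> cpu_up()
 */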
On Thu, May 23, 2013 at 03:41:32PM +0300, Michael S. Tsirkin wrote:
Agenda for the meeting Tue, May 28:
- Generating acpi tables
I didn't see any meeting notes, but I thought it would be worthwhile to summarize the call. This is from memory so correct me if I got anything wrong.
Anthony believes that the generation of ACPI tables is the task of the firmware. Reasons cited include security implications of running more code in qemu vs the guest context, complexities in running iasl on big-endian machines, possible complexity of having to regenerate tables on a vm reboot, and overall sloppiness of doing it in QEMU. He also raised that the QOM interface should be sufficient.
Kevin believes that the bios table code should be moved up into QEMU. Reasons cited include the churn rate in SeaBIOS for this QEMU feature (15-20% of all SeaBIOS commits since integrating with QEMU have been for bios tables; 20% of SeaBIOS commits in the last year), complexity of trying to pass all the content needed to generate the tables (e.g., device details, power tree, irq routing), complexity of scheduling changes across different repos and synchronizing their rollout, and complexity of implementing the code in both OVMF and SeaBIOS. Kevin wasn't aware of a requirement to regenerate acpi tables on a vm reboot.
There were discussions on potentially introducing a middle component to generate the tables. Coreboot was raised as a possibility, and David thought it would be okay to use coreboot for both OVMF and SeaBIOS. The possibility was also raised of a "rom" that lives in the qemu repo, is run in the guest, and generates the tables (which is similar to the hvmloader approach that Xen uses).
Anthony requested that patches be made that generate the ACPI tables in QEMU for the upcoming hotplug work, so that they could be evaluated to see if they truly do need to live in QEMU or if the code could live in the firmware. There were no objections.
-Kevin
On Tue, May 28, 2013 at 07:53:09PM -0400, Kevin O'Connor wrote:
Kevin wasn't aware of a requirement to regenerate acpi tables on a vm reboot.
I think this last one is based on a misunderstanding: it assumes that when we change hardware by hotplug we should regenerate the tables to match. But there's no management that can take advantage of this. There are two reasonable things we can tell management:
- hotplug for device XXX is not supported: restart qemu to make the guest use the device
- hotplug for device XXX is supported
What is proposed here instead is a third option:
- hotplug is supported but the device is not functional: reboot the guest to make it fully functional
This will naturally lead to a requirement to regenerate tables on reset.
And this is what would happen with guest-generated tables, and I consider this a bug, not a feature.
If you really wanted to update tables dynamically, without restarting qemu, you shouldn't stop there: add an interface for the guest to update them without a reset. I think that's over-engineering and a requirement that's best avoided.
Anthony requested that patches be made that generate the ACPI tables in QEMU for the upcoming hotplug work. There were no objections.
I volunteered to implement this.
It was also mentioned that this patch does not yet have to fix the cross-version migration issue with fw_cfg. If we agree on a direction, we will fix it then.
Lastly, a proposal was made by Michael to make the call bi-weekly instead of weekly, as we were cancelling it too much. There were no objections.
Thus, the next call is planned for June 11, 2013.
"Michael S. Tsirkin" mst@redhat.com writes:
Two possible reasonable things we can tell management:
- hotplug for device XXX is not supported: restart qemu to make the guest use the device
- hotplug for device XXX is supported
This introduces an assumption: that the device model never radically changes across resets.
Why should this be true? Shouldn't we be allowed to increase the amount of memory the guest has across reboots? That's equivalent to adding another DIMM after power off.
Not generating tables on reset does limit what we can do in a pretty fundamental way. Even if you can argue it in the short term, I don't think it's viable in the long term.
Regards,
Anthony Liguori
On Wed, May 29, 2013 at 11:12:06AM -0500, Anthony Liguori wrote:
"Michael S. Tsirkin" mst@redhat.com writes:
On Tue, May 28, 2013 at 07:53:09PM -0400, Kevin O'Connor wrote:
On Thu, May 23, 2013 at 03:41:32PM +0300, Michael S. Tsirkin wrote:
Juan is not available now, and Anthony asked for agenda to be sent early. So here comes:
Agenda for the meeting Tue, May 28:
- Generating acpi tables
I didn't see any meeting notes, but I thought it would be worthwhile to summarize the call. This is from memory so correct me if I got anything wrong.
Anthony believes that the generation of ACPI tables is the task of the firmware. Reasons cited include security implications of running more code in qemu vs the guest context, complexities in running iasl on big-endian machines, possible complexity of having to regenerate tables on a vm reboot, overall sloppiness of doing it in QEMU. Raised that QOM interface should be sufficient.
Kevin believes that the bios table code should be moved up into QEMU. Reasons cited include the churn rate in SeaBIOS for this QEMU feature (15-20% of all SeaBIOS commits since integrating with QEMU have been for bios tables; 20% of SeaBIOS commits in last year), complexity of trying to pass all the content needed to generate the tables (eg, device details, power tree, irq routing), complexity of scheduling changes across different repos and synchronizing their rollout, complexity of implemeting the code in both OVMF and SeaBIOS. Kevin wasn't aware of a requirement to regenerate acpi tables on a vm reboot.
I think this last one is based on a misunderstanding: it's based on assumption that we we change hardware by hotplug we should regenerate the tables to match. But there's no management that can take advantage of this. Two possible reasonable things we can tell management:
- hotplug for device XXX is not supported: restart qemu to make guest use the device
- hotplug for device XXX is supported
This introduces an assumption: that the device model never radically changes across resets.
Why should this be true? Shouldn't we be allowed to increase the amount of memory the guest has across reboots? That's equivalent to adding another DIMM after power off.
You can argue the same thing about non-hotpluggable devices: you might be able to replace them when the guest is powered off.
It's not supported ATM and if/when it is, there's a bunch of code to be written.
Not generating tables on reset does limit what we can do in a pretty fundamental way. Even if you can argue it in the short term, I don't think it's viable in the long term.
No, because that's not "at reset": we need a separate state for power off.
You power off the machine, add the DIMM, restart it.
It's not something you can do from inside the guest, unlike reset.
At the moment, the only way to implement this is by exiting QEMU, so we are not introducing any regressions here. When qemu gains a power-off state, we can add a handler and regenerate the tables.
Hi,
Why should this be true? Shouldn't we be allowed to increase the amount of memory the guest has across reboots? That's equivalent to adding another DIMM after power off.
poweroff is equivalent to exiting qemu, not to guest reset.
Not generating tables on reset does limit what we can do in a pretty fundamental way. Even if you can argue it in the short term, I don't think it's viable in the long term.
I don't think so. The procedure for adding/removing non-hotpluggable hardware is: power off, plug the hardware in/out (change the config in qemu), boot. Hotpluggable hardware doesn't need acpi table updates.
cheers, Gerd
On Wed, May 29, 2013 at 11:45:44AM +0300, Michael S. Tsirkin wrote:
What is proposed here instead is a third option: hotplug is supported but the device is not functional; reboot the guest to make it fully functional. This will naturally lead to a requirement to regenerate tables on reset.
And this is what would happen with guest-generated tables, and I consider this a bug, not a feature.
+1. This will probably break guest resume too.
I volunteered to implement this.
Why should hotplug generate ACPI code? It does not do so on real HW.
-- Gleb.
On Sun, Jun 02, 2013 at 06:05:42PM +0300, Gleb Natapov wrote:
Why should hotplug generate ACPI code? It does not do so on real HW.
Hotplug should not generate ACPI code. What is meant here is adding ACPI code to support hotplug of devices behind a PCI-to-PCI bridge.
On Sun, Jun 02, 2013 at 06:09:50PM +0300, Michael S. Tsirkin wrote:
Hotplug should not generate ACPI code. What is meant here is adding ACPI code to support hotplug of devices behind a PCI-to-PCI bridge.
Ah, OK. This one does not change on reset.
-- Gleb.
On Sun, Jun 02, 2013 at 06:40:43PM +0300, Gleb Natapov wrote:
Ah, OK. This one does not change on reset.
It wouldn't if QEMU generates it. With the BIOS generating the tables it might, depending on how it's implemented. To make it not change across resets, we'd need an interface in QEMU to tell the guest whether a device was added since QEMU started.
On 02/06/2013 17:05, Gleb Natapov wrote:
Why should hotplug generate ACPI code? It does not do so on real HW.
Hotplug can do a LoadTable and merge it into the existing ones. But then you do not need QEMU-time generation of tables to do the same thing for cold-plug.
Paolo
On 05/29/13 01:53, Kevin O'Connor wrote:
Anthony believes that the generation of ACPI tables is the task of the firmware. Reasons cited include security implications of running more code in qemu vs the guest context,
I fail to see the security issues here. It's not like the acpi table generation code operates on untrusted input from the guest ...
complexities in running iasl on big-endian machines,
We already have a bunch of prebuilt blobs in the qemu repo for similar reasons; we can do that with iasl output too.
possible complexity of having to regenerate tables on a vm reboot,
Why should tables be regenerated at reboot? I remember hotplug being mentioned in the call. Hmm? Which hotplugged component needs acpi table updates to work properly? And what is the point of hotplugging if you must reboot the guest anyway to get the acpi updates it needs? Details please.
Also mentioned in the call: "architectural reasons", which I understand as "real hardware works that way". Correct. But qemu's virtual hardware is configurable in more ways than real hardware, so we have different needs. For example: pci slots can or can't be hotpluggable. On real hardware this is fixed. IIRC this is one of the reasons why we have to patch acpi tables.
overall sloppiness of doing it in QEMU.
/me gets the feeling that this is the *main* reason, given that the other ones don't look very convincing to me.
Raised that QOM interface should be sufficient.
Agree on this one. Ideally the acpi table generation code should be able to gather all information it needs from the qom tree, so it can be a standalone C file instead of being scattered over all qemu.
There were discussions on potentially introducing a middle component to generate the tables. Coreboot was raised as a possibility, and David thought it would be okay to use coreboot for both OVMF and SeaBIOS.
Certainly an option, but that is a long-term project.
The possibility was also raised of a "rom" that lives in the qemu repo, is run in the guest, and generates the tables (which is similar to the hvmloader approach that Xen uses).
Also similar to the coreboot idea.
Also in the call: The idea of having some library for acpi table generation provided by qemu which the firmware can use. Has license compatibility issues. Also difficult due to the fact that there is no libc in firmware, so such a library would need firmware-specific abstraction layers even for simple stuff such as memory allocation.
Anthony requested that patches be made that generate the ACPI tables in QEMU for the upcoming hotplug work, so that they could be evaluated to see if they truly do need to live in QEMU or if the code could live in the firmware.
Good. I think having qemu generate the tables is also quite useful for evaluating the move to coreboot:
(1) make qemu generate the acpi tables.
(2a) make seabios use the qemu-generated tables.
(2b) make ovmf use the qemu-generated tables.
(2c) make coreboot use the qemu-generated tables.
Now we can look where we stand when using coreboot+seabios or coreboot+tianocore compared to bare seabios / bare ovmf. I expect there are quite a few things to fix until the coreboot+seabios combo runs without regressions compared to bare seabios. But maybe not when qemu provides the acpi tables to coreboot.
In case the coreboot testdrive works out well we can continue with:
(3) use coreboot+seabios by default.
(4) move acpi table generation from qemu to coreboot.
cheers, Gerd
On Wed, May 29, 2013 at 10:49:27AM +0200, Gerd Hoffmann wrote:
possible complexity of having to regenerate tables on a vm reboot,
Why should tables be regenerated at reboot?
I think it's a mistake. I sent a mail explaining this part.
Also mentioned in the call: "architectural reasons", which I understand as "real hardware works that way". Correct.
Not exactly. Real hardware is very likely to have most of the tables pre-generated in ROM, load them, and tweak them in minor ways.
That's exactly what the patches I sent do.
Agree on this one. Ideally the acpi table generation code should be able to gather all information it needs from the qom tree, so it can be a standalone C file instead of being scattered over all qemu.
Did you look at the patchset I posted? Generation is in a standalone C file there.
However, if you mean we should do things like

if (Device_id == foobar) { }

in one central place, I disagree. I think that's nasty; adding devices would mean touching this central registry.
Hi,
I think it's a mistake. I sent a mail explaining this part.
Saw it meanwhile.
Also mentioned in the call: "architectural reasons", which I understand as "real hardware works that way". Correct.
Not exactly. Real hardware is very likely to have most of the tables pre-generated in ROM, load them, and tweak them in minor ways.
From a quick look it seems coreboot has a static (iasl-compiled) dsdt and generates everything else.
http://review.coreboot.org/gitweb?p=coreboot.git;a=blob;f=src/mainboard/emul...
Did you look at the patchset I posted?
Very briefly only.
Generation is in a standalone C file there.
Good.
However, if you mean we should do things like "if (Device_id == foobar) { }" in one central place, I disagree. I think that's nasty; adding devices would mean touching this central registry.
No, I mean more "lookup PIIX4_PM object + check disable_s3 property" instead of having code for it in hw/acpi/piix4.c or using global variables.
cheers, Gerd
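(Editor's sketch of the lookup style Gerd describes above, assuming QOM helpers along the lines of QEMU's object_resolve_path_type() and object_property_get_bool(); the "disable_s3" property name is taken from the discussion, and treating it as a bool is a simplification.)

/* Sketch only: assumes QEMU-internal QOM helpers; error handling
 * omitted.  The point is that the table generator queries the QOM
 * tree instead of reaching into hw/acpi/piix4.c internals or using
 * global variables. */
#include "qom/object.h"   /* QEMU-internal header */

static bool acpi_s3_disabled(void)
{
    /* Find the (single) PIIX4 power-management device in the tree. */
    Object *pm = object_resolve_path_type("", "PIIX4_PM", NULL);

    return pm && object_property_get_bool(pm, "disable_s3", NULL);
}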
On Wed, May 29, 2013 at 11:42:34AM +0200, Gerd Hoffmann wrote:
No, I mean more "lookup PIIX4_PM object + check disable_s3 property" instead of having code for it in hw/acpi/piix4.c or using global variables.
So that would make the code PIIX-specific. Instead, I'm passing a guest_info structure in to each object, and each object describes itself there, e.g. sets disable_s3.
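(Editor's sketch of the guest_info approach Michael describes: each device fills in the facts the table generator needs, so no central device-ID registry is required. All names here are illustrative, not the posted patchset's API.)

#include <stdbool.h>
#include <stdint.h>

typedef struct GuestInfo {
    bool s3_disabled;          /* set by the PM device                 */
    /* ...other facts the ACPI generator consumes...                   */
} GuestInfo;

/* Called on the PM device: it describes itself. */
static void piix4_pm_describe(GuestInfo *info, bool disable_s3)
{
    info->s3_disabled = disable_s3;
}

/* The standalone table generator only ever sees GuestInfo. */
static void acpi_build_tables(const GuestInfo *info)
{
    if (info->s3_disabled) {
        /* leave the S3 sleep package out of the generated tables */
    }
}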
Gerd Hoffmann kraxel@redhat.com writes:
I fail to see the security issues here. It's not like the acpi table generation code operates on untrusted input from the guest ...
But possibly untrusted input from a malicious user. You can imagine something like an IaaS provider that lets a user input arbitrary values for memory, number of nics, etc.
It's a stretch of an example, I agree, but the general principle I think is sound: we should push as much work as possible to the least privileged part of the stack. In this case, firmware has much less privileges than QEMU.
Why should tables be regenerated at reboot?
See my response to Michael.
Also mentioned in the call: "architectural reasons", which I understand as "real hardware works that way". Correct. But qemu's virtual hardware is configurable in more ways than real hardware, so we have different needs. For example: pci slots can or can't be hotpluggable. On real hardware this is fixed. IIRC this is one of the reasons why we have to patch acpi tables.
It's not really fixed. Hardware supports PCI expansion chassis. Multi-node NUMA systems also affect the ACPI tables.
Agree on this one. Ideally the acpi table generation code should be able to gather all information it needs from the qom tree, so it can be a standalone C file instead of being scattered over all qemu.
Ack. So my basic argument is why not expose the QOM interfaces to firmware and move the generation code there? Seems like it would be more or less a copy/paste once we had a proper implementation in QEMU.
Certainly an option, but that is a long-term project.
Out of curiosity, are there other benefits to using coreboot as a core firmware in QEMU?
Is there a payload we would ever plausibly use besides OVMF and SeaBIOS?
Regards,
Anthony Liguori
On Wed, May 29, 2013 at 11:18:03AM -0500, Anthony Liguori wrote:
It's a stretch of an example, I agree, but the general principle I think is sound: we should push as much work as possible to the least privileged part of the stack. In this case, firmware has much less privileges than QEMU.
It's a big stretch. We have to draw the line somewhere, and I think when *all* the firmware people tell us that QEMU is a pain to work with and should just supply the ACPI tables to the BIOS, that line has been crossed.
It's not really fixed. Hardware supports PCI expansion chassis.
These normally aren't reported in ACPI, so no hotplug, or only native hotplug.
Multi-node NUMA systems also affect the ACPI tables.
In a very minor way.
Ack. So my basic argument is why not expose the QOM interfaces to firmware and move the generation code there? Seems like it would be more or less a copy/paste once we had a proper implementation in QEMU.
Because that's just an insanely rich interface that we have no chance of keeping stable across versions. Because it's a ton of QEMU-specific firmware. Because firmware devs don't want to maintain the ACPI code that *is* there either.
Out of curiosity, are there other benefits to using coreboot as a core firmware in QEMU?
Is there a payload we would ever plausibly use besides OVMF and SeaBIOS?
The easier it is to switch firmware the better.
Gives us choice, we switched firmware several times, we will do it again.
If firmware only has a simple loader for QEMU-specific stuff and is mostly generic, then it's easy. If there's a lot of code for walking QOM, etc., it's very painful.
Anthony Liguori anthony@codemonkey.ws writes:
It's a stretch of an example, I agree, but the general principle I think is sound: we should push as much work as possible to the least privileged part of the stack. In this case, firmware has much less privileges than QEMU.
Firmware runs in a primitive, unforgiving environment, and should do as little as humanly possible. For an instructive example of deviating from that rule, check out UEFI.
[...]
On Wed, May 29, 2013 at 11:18:03AM -0500, Anthony Liguori wrote:
Ack. So my basic argument is why not expose the QOM interfaces to firmware and move the generation code there?
I remain doubtful that QOM has all the info needed to generate the BIOS tables. Does QOM describe how the 5th pci device uses global interrupt 11 when using global interrupts, legacy interrupt 5 when not using global interrupts, and that the legacy interrupt can be changed by writing to the 0x60 address of the 1st pci device's config space? Does QOM state that the machine supports S3 sleep mode? Does QOM indicate that an IPMI device supports the 3rd version of the IPMI device specification?
I don't see how exporting QOM to the firmware will help. I predict we would continue to see most of the BIOS tables hardcoded in the firmware and that all but the most minor changes to those tables would require synchronizing code patches to both QEMU and the firmware. I also suspect we would end up adding fields to QOM that only the BIOS tables cared about, and that ever increasing code would be needed in both QEMU and the firmware to juggle to/from QOM so that the BIOS tables could be created.
-Kevin
On Wed, 2013-05-29 at 21:12 -0400, Kevin O'Connor wrote:
Does it indicate whether this particular version of qemu has correctly implemented the hard reset at 0xcf9? If so, we need to put that in as the ACPI RESET_REG.
It seems that there's a *lot* which isn't fully described in the QOM tree. Do we really want to add it all, just so that ACPI tables can be reliably generated from it?
As we add new types of hardware and even fix/adjust features like the examples above, we'll also have to implement the translation from QOM to ACPI tables. And we'll have to do so in more than one place, in projects with a completely different release cycle. This would be *so* much easier if the code which actually generates the ACPI tables was *in* the qemu tree along with the "hardware" that those tables describe.
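(Editor's sketch of the coupling David describes: the FADT's reset register entry has to match what the emulated chipset actually implements. The struct follows the ACPI spec's Generic Address Structure; the reset value is illustrative and chipset-dependent.)

#include <stdint.h>

struct acpi_gas {                   /* ACPI Generic Address Structure */
    uint8_t  space_id;              /* 1 = System I/O                 */
    uint8_t  bit_width;
    uint8_t  bit_offset;
    uint8_t  access_size;
    uint64_t address;
} __attribute__((packed));

static void fadt_set_reset_reg(struct acpi_gas *reset_reg,
                               uint8_t *reset_value)
{
    reset_reg->space_id    = 1;     /* System I/O                     */
    reset_reg->bit_width   = 8;
    reset_reg->bit_offset  = 0;
    reset_reg->access_size = 1;     /* byte access                    */
    reset_reg->address     = 0xcf9; /* only valid if the emulated
                                       chipset really resets on 0xcf9 */
    *reset_value           = 0x06;  /* illustrative hard-reset value  */
}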
Hi,
Ack. So my basic argument is why not expose the QOM interfaces to firmware and move the generation code there? Seems like it would be more or less a copy/paste once we had a proper implementation in QEMU.
Well, no. Firmware is quite a simple environment without a standard libc etc., so moving code from qemu to firmware certainly isn't as easy as copying over a file.
Out of curiosity, are there other benefits to using coreboot as a core firmware in QEMU?
Short-term it's a lot of work, as we have to bring coreboot's qemu support to feature parity with seabios. I suspect most of this is acpi-related though, so when qemu provides the tables and coreboot uses them we could be pretty close already.
Long-term it should simplify firmware maintenance, as we have only *one* place which handles the hardware bringup, and seabios/ovmf have less work to do.
Is there a payload we would ever plausibly use besides OVMF and SeaBIOS?
I wouldn't be surprised if people start using other coreboot payloads and/or features such as direct linux kernel boot once it works well on qemu.
We might even run qemu test suites as coreboot payload.
cheers, Gerd
On Wed, 2013-05-29 at 11:18 -0500, Anthony Liguori wrote:
Out of curiosity, are there other benefits to using coreboot as a core firmware in QEMU?
Is there a payload we would ever plausibly use besides OVMF and SeaBIOS?
I like the idea of using Coreboot on the UEFI side — if the most actively used TianoCore platform is CorebootPkg instead of OvmfPkg, that makes it a lot easier for people using *real* hardware with Coreboot to use TianoCore.
And it helps to dispel the stupid misconception in some quarters that Coreboot *competes* with UEFI and thus cannot possibly be supported because helping something that competes with UEFI would be bad.
Is there a payload we would ever plausibly use besides OVMF and SeaBIOS?
For my part I want to get to the point where the default firmware shipped with qemu can be OVMF. We have SeaBIOS-as-CSM working, which was one of the biggest barriers. There are a few more things (like NV variable storage, in particular) that I need to fix before I can actually make that suggestion with a straight face though...
On 05/30/13 11:23, David Woodhouse wrote:
I like the idea of using Coreboot on the UEFI side — if the most actively used TianoCore platform is CorebootPkg instead of OvmfPkg, that makes it a lot easier for people using *real* hardware with Coreboot to use TianoCore.
Where is CorebootPkg available from?
And it helps to dispel the stupid misconception in some quarters that Coreboot *competes* with UEFI and thus cannot possibly be supported because helping something that competes with UEFI would be bad.
I'm not sure who you mean by "some quarters", but for some distributions Coreboot would be yet another component (package) to support, for no obvious benefit.
(Gerd said it better than I possibly could: http://thread.gmane.org/gmane.comp.bios.coreboot.seabios/5685/focus=5705.)
For my part I want to get to the point where the default firmware shipped with qemu can be OVMF.
For some distributions this is a licensing question as well. See "FatBinPkg/License.txt". (The same applies if you rebuild it from source (FatPkgDev), based on "FatBinPkg/ReadMe.txt".) For example Fedora can't ship OVMF because of it.
If you implement a UEFI FAT driver based on Microsoft's official specification, you're bound by the same restrictions on use and redistribution.
If you implement a private UEFI FAT driver from scratch, or port a free software FAT implementation (eg. the r/o one in grub or the r/w one in mtools), you could still run into legal problems, I've been told.
If you rip out the FAT driver, then OVMF won't be UEFI compliant and no installer media will boot on it.
Interestingly, Ubuntu has OVMF in "Universe" (http://packages.ubuntu.com/raring/ovmf). I think they missed the FatBinPkg license (I would have missed it too; after all, you have to track down the licenses of every module included in the FDF file -- it was Paolo who told me about it), and I believe they should actually ship OVMF in Multiverse or Restricted (https://help.ubuntu.com/community/Repositories/Ubuntu).
We have SeaBIOS-as-CSM working, which was one of the biggest barriers.
Agreed, and I could have never done that. Thanks for implementing it with Kevin.
We need at least one out-of-tree edk2 patch for now (from you) but apparently that's no problem.
There are a few more things (like NV variable storage, in particular) that I need to fix before I can actually make that suggestion with a straight face though...
As far as I understand:
- Jordan is nearing completion of flash support on KVM,
- he also has WIP NvVar storage on top of qemu flash.
http://thread.gmane.org/gmane.comp.emulators.qemu/213690 http://thread.gmane.org/gmane.comp.bios.tianocore.devel/2781/focus=2798
("Please coordinate" I guess :))
Laszlo
On Thu, 2013-05-30 at 13:13 +0200, Laszlo Ersek wrote:
Where is CorebootPkg available from?
https://github.com/pgeorgi/edk2/tree/coreboot-pkg
And it helps to dispel the stupid misconception in some quarters that Coreboot *competes* with UEFI and thus cannot possibly be supported because helping something that competes with UEFI would be bad.
I'm not sure who you mean by "some quarters", but for some distributions Coreboot would be yet another component (package) to support, for no obvious benefit.
(Gerd said it better than I possibly could: http://thread.gmane.org/gmane.comp.bios.coreboot.seabios/5685/focus=5705.)
Yeah, but if we're shoving a lot of hardware-specific ACPI table generation into the guest's firmware, instead of just doing it on the qemu side where a number of us seem to think it belongs, then there *is* a benefit to using Coreboot. When stuff changes on the qemu side and we have to update the table generation to match, you end up having to update just the Coreboot package, and *not* having to patch both SeaBIOS and OVMF.
The extra package in the distro really isn't painful to handle, and I suspect it would end up *reducing* the amount of work that we have to do to update. You update *just* Coreboot, not *both* of SeaBIOS and OVMF.
If you implement a private UEFI FAT driver from scratch, or port a free software FAT implementation (e.g. the r/o one in grub or the r/w one in mtools), you could still run into legal problems, I've been told.
That has been said, and it's been said for the FAT implementation in the kernel too. If a distribution is happy to ship the kernel without ripping out its FAT support, then I see no reason why that distribution wouldn't also be happy to ship a version of OVMF with a clean implementation of FAT support. It doesn't make sense to be happy with one but not the other.
We need at least one out-of-tree edk2 patch for now (from you) but apparently that's no problem.
That'll get merged soon. We are working on the corresponding spec update...
As far as I understand:
- Jordan is nearing completion of flash support on KVM,
- he also has WIP NvVar storage on top of qemu flash.
http://thread.gmane.org/gmane.comp.emulators.qemu/213690 http://thread.gmane.org/gmane.comp.bios.tianocore.devel/2781/focus=2798
("Please coordinate" I guess :))
Ooh, shiny. Yeah, when I get back to actually working on it rather than just heckling, I'll make sure I do :)
On Thu, May 30, 2013 at 01:19:18PM +0100, David Woodhouse wrote:
Yeah, but if we're shoving a lot of hardware-specific ACPI table generation into the guest's firmware, instead of just doing it on the qemu side where a number of us seem to think it belongs,
Hopefully this is not yet set in stone.
then there *is* a benefit to using Coreboot. When stuff changes on the qemu side and we have to update the table generation to match, you end up having to update just the Coreboot package, and *not* having to patch both SeaBIOS and OVMF.
We have all kinds of logic in qemu. Some of it could conceivably be moved to a separate VM - it doesn't even need to run in the same VM as the guest - we could do it e.g. like kvm unit-test does, with less pain than running it in firmware. It's not clear why generating ACPI tables - which merely fills up an array of bytes from internal QEMU data structures - should be the part where we start this modularization.
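To make "filling up an array of bytes" concrete, here is a minimal sketch; the struct layout follows the ACPI spec, but acpi_table_finish and its details are invented for illustration, not code from any posted patchset:

#include <stdint.h>
#include <string.h>

/* Standard ACPI System Description Table header, per the ACPI spec. */
struct acpi_table_header {
    char     signature[4];      /* e.g. "APIC" for the MADT */
    uint32_t length;            /* header + body, in bytes */
    uint8_t  revision;
    uint8_t  checksum;          /* chosen so the whole table sums to 0 */
    char     oem_id[6];
    char     oem_table_id[8];
    uint32_t oem_revision;
    uint32_t asl_compiler_id;
    uint32_t asl_compiler_revision;
} __attribute__((packed));

/* Stamp the header and checksum the table, after the caller has
 * appended a body derived from QEMU's internal data structures.
 * Assumes the buffer was zero-initialized. */
static void acpi_table_finish(void *table, const char *sig, uint32_t len)
{
    struct acpi_table_header *h = table;
    const uint8_t *p = table;
    uint8_t sum = 0;
    uint32_t i;

    memcpy(h->signature, sig, 4);
    h->length   = len;
    h->revision = 1;
    memcpy(h->oem_id, "QEMU  ", 6);
    h->checksum = 0;
    for (i = 0; i < len; i++)
        sum += p[i];
    h->checksum = -sum;         /* byte sum of the table is now zero */
}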
On 05/30/13 14:19, David Woodhouse wrote:
Yeah, but if we're shoving a lot of hardware-specific ACPI table generation into the guest's firmware, instead of just doing it on the qemu side where a number of us seem to think it belongs, then there *is* a benefit to using Coreboot. When stuff changes on the qemu side and we have to update the table generation to match, you end up having to update just the Coreboot package, and *not* having to patch both SeaBIOS and OVMF.
The extra package in the distro really isn't painful to handle, and I suspect it would end up *reducing* the amount of work that we have to do to update. You update *just* Coreboot, not *both* of SeaBIOS and OVMF.
I can't deny there's logic in that, but it still feels like tying ourselves up in some intricate bondage choreography. "We'd like to move ACPI tables out of firmware, but we can't move them to qemu due to project direction disagreement, so let's adopt a middleman." (I'm not trying to denigrate coreboot -- I don't know it at all -- but introducing it in a (granted, distro-specific) stack just for this purpose seems quite arbitrary.)
If you implement a private UEFI FAT driver from scratch, or port a free software FAT implementation (e.g. the r/o one in grub or the r/w one in mtools), you could still run into legal problems, I've been told.
That has been said, and it's been said for the FAT implementation in the kernel too. If a distribution is happy to ship the kernel without ripping out its FAT support, then I see no reason why that distribution wouldn't also be happy to ship a version of OVMF with a clean implementation of FAT support. It doesn't make sense to be happy with one but not the other.
My *personal* impression is that logic and Common Law don't really mix, especially not in the US. Still under my *personal* impression, someone might not feel comfortable suing you for redistributing code that already exists in the upstream Linux kernel, but might happily drag you to court for an original clean implementation -- and then you can explain how illogical it is for them to do so.
The best I can do with your suggestion is to take it to our legal dept. I would be happy to work on a new FAT driver. (I used to feel differently earlier but I've changed my mind.)
We need at least one out-of-tree edk2 patch for now (from you) but apparently that's no problem.
That'll get merged soon. We are working on the corresponding spec update...
Great news!
Thanks, Laszlo
On Thu, May 30, 2013 at 5:19 AM, David Woodhouse <dwmw2@infradead.org> wrote:
On Thu, 2013-05-30 at 13:13 +0200, Laszlo Ersek wrote:
Where is CorebootPkg available from?
Is the license on this actually BSD as the License.txt indicates?
Is this planned to be upstreamed?
Does this support UEFI variables?
Does this support UEFI IA32 / X64?
And it helps to dispel the stupid misconception in some quarters that Coreboot *competes* with UEFI and thus cannot possibly be supported because helping something that competes with UEFI would be bad.
Coreboot and EDK II both provide a good infrastructure for initializing hardware. So, they compete on that point.
Coreboot then focuses on booting coreboot payloads, while EDK II focuses on UEFI support. On that point they don't compete, but the focus is different.
Of course, you can build a layer of EDK II => Coreboot payload support, or Coreboot => EDK II (CorebootPkg, I guess?), but the match will not be perfect. (That is not to say it can't work.)
I'm not sure who you mean by "some quarters", but for some distributions Coreboot would be yet another component (package) to support, for no obvious benefit.
(Gerd said it better than I possibly could: http://thread.gmane.org/gmane.comp.bios.coreboot.seabios/5685/focus=5705.)
Yeah, but if we're shoving a lot of hardware-specific ACPI table generation into the guest's firmware, instead of just doing it on the qemu side where a number of us seem to think it belongs, then there *is* a benefit to using Coreboot. When stuff changes on the qemu side and we have to update the table generation to match, you end up having to update just the Coreboot package, and *not* having to patch both SeaBIOS and OVMF.
I think ACPI table generation lives in firmware on real products, because on real products the firmware is the point that best understands the actual hardware layout for the machine. In qemu, I would say that qemu best knows the hardware layout, given that the firmware is generally a slightly separate project from qemu.
I don't think adding a coreboot layer into the picture helps, if it brings along the coreboot payload boot interface as a requirement.
Then again, I don't really understand how firmware could be swapped out in this case. What would -bios do? How would the coreboot ACPI shim layer be specified to qemu?
-Jordan
On Tue, May 28, 2013 at 07:53:09PM -0400, Kevin O'Connor wrote:
On Thu, May 23, 2013 at 03:41:32PM +0300, Michael S. Tsirkin wrote:
Juan is not available now, and Anthony asked for agenda to be sent early. So here comes:
Agenda for the meeting Tue, May 28:
- Generating acpi tables
I didn't see any meeting notes, but I thought it would be worthwhile to summarize the call. This is from memory so correct me if I got anything wrong.
Anthony believes that the generation of ACPI tables is the task of the firmware. Reasons cited include security implications of running more code in qemu vs the guest context, complexities in running iasl on big-endian machines,
Forgot to mention: my patchset actually solves this by keeping pre-generated ACPI tables in QEMU. This means you need iasl to do ACPI development, but that's nothing new.
However, generating the tables in QEMU actually opens up the possibility of linking in a library for generating ACPI tables, if one surfaces, and dropping the iasl dependency.
While my patchset does not do this, it's not unheard of.
This would not be practical for the BIOS.
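As an illustration of that build-time flow: "iasl -tc" compiles the .dsl into a C byte array (AmlCode is iasl's default name for it; install_dsdt and the little-endian length read are assumptions of this sketch), and QEMU just links the bytes in:

#include <stdint.h>
#include <string.h>

/* Emitted at build time by "iasl -tc dsdt.dsl" into a .hex file. */
extern const unsigned char AmlCode[];

/* Hypothetical helper: install the precompiled DSDT into guest memory.
 * The table's total size sits in bytes 4..7 of its ACPI header, so no
 * iasl is needed at run time -- only when the .dsl source changes. */
static void install_dsdt(uint8_t *guest_dest)
{
    uint32_t len;

    memcpy(&len, AmlCode + 4, sizeof(len));  /* little-endian host assumed */
    memcpy(guest_dest, AmlCode, len);
}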
possible complexity of having to regenerate tables on a vm reboot, overall sloppiness of doing it in QEMU. Raised that QOM interface should be sufficient.
Kevin believes that the bios table code should be moved up into QEMU. Reasons cited include the churn rate in SeaBIOS for this QEMU feature (15-20% of all SeaBIOS commits since integrating with QEMU have been for bios tables; 20% of SeaBIOS commits in the last year), complexity of trying to pass all the content needed to generate the tables (eg, device details, power tree, irq routing), complexity of scheduling changes across different repos and synchronizing their rollout, complexity of implementing the code in both OVMF and SeaBIOS. Kevin wasn't aware of a requirement to regenerate acpi tables on a vm reboot.
There were discussions on potentially introducing a middle component to generate the tables. Coreboot was raised as a possibility, and David thought it would be okay to use coreboot for both OVMF and SeaBIOS. The possibility was also raised of a "rom" that lives in the qemu repo, is run in the guest, and generates the tables (which is similar to the hvmloader approach that Xen uses).
Anthony requested that patches be made that generate the ACPI tables in QEMU for the upcoming hotplug work, so that they could be evaluated to see if they truly do need to live in QEMU or if the code could live in the firmware. There were no objections.
-Kevin
On Tue, May 28, 2013 at 07:53:09PM -0400, Kevin O'Connor wrote:
There were discussions on potentially introducing a middle component to generate the tables. Coreboot was raised as a possibility, and David thought it would be okay to use coreboot for both OVMF and SeaBIOS. The possibility was also raised of a "rom" that lives in the qemu repo, is run in the guest, and generates the tables (which is similar to the hvmloader approach that Xen uses).
Given the objections to implementing ACPI directly in QEMU, one possible way forward would be to split the current SeaBIOS rom into two roms: "qvmloader" and "seabios". The "qvmloader" would do the qemu specific platform init (pci init, smm init, mtrr init, bios tables) and then load and run the regular seabios rom. With this split, qvmloader could be committed into the QEMU repo and maintained there. This would be analogous to Xen's hvmloader with the seabios code used as a starting point to implement it.
With both the hardware implementation and acpi descriptions for that hardware in the same source code repository, it would be possible to implement changes to both in a single patch series. The fwcfg entries used to pass data between qemu and qvmloader could also be changed in a single patch and thus those fwcfg entries would not need to be considered a stable interface. The qvmloader code also wouldn't need the 16bit handlers that seabios requires and thus wouldn't need the full complexity of the seabios build. Finally, it's possible that both ovmf and seabios could use a single qvmloader implementation.
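For reference, the guest half of the fw_cfg channel is tiny. A sketch using the well-known x86 ioports 0x510/0x511; any key beyond the signature entry shown here is exactly the kind of thing that would remain unstable:

#include <stdint.h>

#define FW_CFG_CTL_PORT  0x510   /* selector (16-bit write) */
#define FW_CFG_DATA_PORT 0x511   /* data (8-bit reads) */
#define FW_CFG_SIGNATURE 0x0000  /* well-known key; reads back "QEMU" */

static inline void outw(uint16_t port, uint16_t val)
{
    asm volatile("outw %0, %1" : : "a"(val), "Nd"(port));
}

static inline uint8_t inb(uint16_t port)
{
    uint8_t val;
    asm volatile("inb %1, %0" : "=a"(val) : "Nd"(port));
    return val;
}

/* Select an fw_cfg entry, then stream its contents byte by byte. */
static void fw_cfg_read(uint16_t key, void *buf, int len)
{
    uint8_t *p = buf;

    outw(FW_CFG_CTL_PORT, key);
    while (len--)
        *p++ = inb(FW_CFG_DATA_PORT);
}

Reading four bytes with key FW_CFG_SIGNATURE should yield "QEMU"; qvmloader would use other keys to pull table content the same way.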
On the down side, reboots can be a bit goofy today in kvm, and that would need to be settled before something like qvmloader could be implemented. Also, it may be problematic to support passing of bios tables from qvmloader to seabios for guests with only 1 meg of ram.
Thoughts? -Kevin
On Thu, May 30, 2013 at 7:34 PM, Kevin O'Connor <kevin@koconnor.net> wrote:
On Tue, May 28, 2013 at 07:53:09PM -0400, Kevin O'Connor wrote:
There were discussions on potentially introducing a middle component to generate the tables. Coreboot was raised as a possibility, and David thought it would be okay to use coreboot for both OVMF and SeaBIOS. The possibility was also raised of a "rom" that lives in the qemu repo, is run in the guest, and generates the tables (which is similar to the hvmloader approach that Xen uses).
Given the objections to implementing ACPI directly in QEMU, one possible way forward would be to split the current SeaBIOS rom into two roms: "qvmloader" and "seabios". The "qvmloader" would do the qemu specific platform init (pci init, smm init, mtrr init, bios tables) and then load and run the regular seabios rom. With this split, qvmloader could be committed into the QEMU repo and maintained there. This would be analogous to Xen's hvmloader with the seabios code used as a starting point to implement it.
I think hvmloader is more closely tied to Xen than to the Xen firmware. I could be wrong, but I thought it could do things like add memory to the guest machine. I don't think this model is analogous to Xen's; I view hvmloader as just a part of Xen, not part of the "firmware" stack.
In adding this pre-firmware firmware, wouldn't Anthony's concern about iasl still be an issue?
Why is updating the ACPI tables in seabios viewed as such a burden? Either qemu does it, or seabios... (And, OVMF too, but I don't think you guys are concerned with that. :)
On the flip side, why is moving the ACPI tables to QEMU such an issue? It seems like Xen and virtualbox both already do this. Why is running iasl not an issue for them?
I think overall I prefer the tables being built in the firmware, despite the extra thrash. Some things, such as the addresses at which devices are configured, are re-programmable in QEMU, so a firmware can decide to use a different address and thus invalidate the address qvmloader had set in the tables.
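A sketch of that hazard; the device, BAR and address below are made up, and the config access is the classic 0xCF8/0xCFC mechanism:

#include <stdint.h>

static inline void outl(uint16_t port, uint32_t val)
{
    asm volatile("outl %0, %1" : : "a"(val), "Nd"(port));
}

/* Classic port 0xCF8/0xCFC PCI config-space dword write. */
static void pci_config_writel(uint8_t bus, uint8_t dev, uint8_t fn,
                              uint8_t reg, uint32_t val)
{
    uint32_t addr = 0x80000000u | (bus << 16) | (dev << 11)
                  | (fn << 8) | (reg & 0xFC);
    outl(0xCF8, addr);
    outl(0xCFC, val);
}

/* Firmware moves BAR0 of a (made-up) device 00:03.0 to a new MMIO
 * address; any table in which qvmloader hard-coded the old address
 * is now stale. */
static void remap_device(void)
{
    pci_config_writel(0, 3, 0, 0x10, 0xFEB00000u);
}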
Maybe we are doing lots of things horribly wrong in our OVMF ACPI tables :), but I haven't seen it as much of a burden. (Of course, Laszlo has helped out with many of the ACPI changes in OVMF, so his opinion should be taken into consideration too. :)
-Jordan
Kevin O'Connor <kevin@koconnor.net> writes:
On Tue, May 28, 2013 at 07:53:09PM -0400, Kevin O'Connor wrote:
There were discussions on potentially introducing a middle component to generate the tables. Coreboot was raised as a possibility, and David thought it would be okay to use coreboot for both OVMF and SeaBIOS. The possibility was also raised of a "rom" that lives in the qemu repo, is run in the guest, and generates the tables (which is similar to the hvmloader approach that Xen uses).
Given the objections to implementing ACPI directly in QEMU, one possible way forward would be to split the current SeaBIOS rom into two roms: "qvmloader" and "seabios". The "qvmloader" would do the qemu specific platform init (pci init, smm init, mtrr init, bios tables) and then load and run the regular seabios rom. With this split, qvmloader could be committed into the QEMU repo and maintained there. This would be analogous to Xen's hvmloader with the seabios code used as a starting point to implement it.
What about a small change to the SeaBIOS build system to allow ACPI table generation to be done via a "plugin"?
This could be as simple as moving acpi.c and *.dsl into the QEMU build tree and then having a way to point the SeaBIOS makefiles to our copy of it.
Then the logic stays maintained in firmware, but the churn happens in the QEMU tree instead of the SeaBIOS tree.
Regards,
Anthony Liguori
With both the hardware implementation and acpi descriptions for that hardware in the same source code repository, it would be possible to implement changes to both in a single patch series. The fwcfg entries used to pass data between qemu and qvmloader could also be changed in a single patch and thus those fwcfg entries would not need to be considered a stable interface. The qvmloader code also wouldn't need the 16bit handlers that seabios requires and thus wouldn't need the full complexity of the seabios build. Finally, it's possible that both ovmf and seabios could use a single qvmloader implementation.
On the down side, reboots can be a bit goofy today in kvm, and that would need to be settled before something like qvmloader could be implemented. Also, it may be problematic to support passing of bios tables from qvmloader to seabios for guests with only 1 meg of ram.
Thoughts? -Kevin
On Fri, 2013-05-31 at 07:58 -0500, Anthony Liguori wrote:
What about a small change to the SeaBIOS build system to allow ACPI table generation to be done via a "plugin"?
SeaBIOS already accepts ACPI tables from Coreboot or UEFI, and queries them to find things that it needs.
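For the generic half of that lookup, a simplified sketch of the RSDP scan the ACPI spec prescribes (SeaBIOS's real code also handles the coreboot table path and checks the first KiB of the EBDA):

#include <stdint.h>
#include <string.h>

static uint8_t byte_sum(const uint8_t *p, int len)
{
    uint8_t sum = 0;

    while (len--)
        sum += *p++;
    return sum;
}

/* Scan the BIOS ROM window on 16-byte boundaries for a valid RSDP
 * left behind by an earlier firmware stage. */
static void *find_rsdp(void)
{
    uintptr_t addr;

    for (addr = 0xE0000; addr < 0x100000; addr += 16) {
        uint8_t *p = (uint8_t *)addr;
        if (!memcmp(p, "RSD PTR ", 8) && !byte_sum(p, 20))
            return p;   /* 20 bytes == ACPI 1.0 RSDP length */
    }
    return NULL;
}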
This could be as simple as moving acpi.c and *.dsl into the QEMU build tree and then having a way to point the SeaBIOS makefiles to our copy of it.
Then the logic stays maintained in firmware, but the churn happens in the QEMU tree instead of the SeaBIOS tree.
Even if you get this working such that SeaBIOS and OVMF can both be built with ACPI tables that match the last qemu you built, that doesn't solve the issue of running a firmware that *wasn't* built to precisely match the version of qemu you're running today.
On Fri, May 31, 2013 at 07:58:36AM -0500, Anthony Liguori wrote:
Kevin O'Connor <kevin@koconnor.net> writes:
Given the objections to implementing ACPI directly in QEMU, one possible way forward would be to split the current SeaBIOS rom into two roms: "qvmloader" and "seabios". The "qvmloader" would do the qemu specific platform init (pci init, smm init, mtrr init, bios tables) and then load and run the regular seabios rom.
What about a small change to the SeaBIOS build system to allow ACPI table generation to be done via a "plugin"?
Using a runtime plugin (e.g., "qplugin") would require a more complex handoff than qvmloader. With qplugin, seabios would need to know what memory qplugin is compiled to run in and make sure it didn't allocate anything there. Similarly, qplugin would need to not stomp on seabios while it runs, and it would need to coordinate with seabios where to place the final tables. With qvmloader, there is no need to coordinate memory addresses, so it can run anywhere, deploy the tables in their final location, and then launch seabios.
This could be as simple as moving acpi.c and *.dsl into the QEMU build tree and then having a way to point the SeaBIOS makefiles to our copy of it.
I don't see how that would work. It would complicate the seabios build (as it would require a copy of qemu source to compile), and the resulting seabios binary would be strongly tied to the qemu version it was compiled with and vice-versa. This would break distro seabios rpms. It would also cause great pain when bisecting and would be confusing even during regular compile/debug cycles. Internal seabios calls (eg, memory allocations, pci config accesses) would need to be static interfaces, etc.
-Kevin
On Thu, May 30, 2013 at 10:34:26PM -0400, Kevin O'Connor wrote:
On Tue, May 28, 2013 at 07:53:09PM -0400, Kevin O'Connor wrote:
There were discussions on potentially introducing a middle component to generate the tables. Coreboot was raised as a possibility, and David thought it would be okay to use coreboot for both OVMF and SeaBIOS. The possibility was also raised of a "rom" that lives in the qemu repo, is run in the guest, and generates the tables (which is similar to the hvmloader approach that Xen uses).
Given the objections to implementing ACPI directly in QEMU,
I don't think that's a given, just yet.
So far Anthony asked to be shown the kind of project that ACPI generation in QEMU would enable. Since the qemu community wasn't directly exposed to the ACPI-related patches, it's easy to see how qemu maintainers wouldn't be aware of the churn and maintenance overhead caused by generating them on the guest side.
That seems reasonable, so please hang on just a little bit longer until I post acpi hotplug support for pci bridges based on this code.
Then we can discuss.