Hi Ivan,
On 03.04.2018 20:03, Ivan Ivanov wrote:
> I have noticed that both coreboot and seabios are using the very old
> versions of LZMA SDK.
True. I introduced the lzma code in coreboot (back when it was called
LinuxBIOS) when we were working on OLPC XO-1 support.
> If we upgrade our LZMA libraries from the 12-years-outdated 4.42 to the
> current version 18.04, speed and compression ratio should improve and
> maybe a few bugs will be fixed.
Do you have any numbers for this? An improved compression ratio and
improved speed would be nice indeed, but how does the size of the
decompression code change? If the decompression code grows more than the
size reduction from better compression, it would be a net loss. A
significantly reduced decompression speed would also be a problem.
Decompression speed would have to be measured both for stream
decompression (i.e. the decompressor gets the compressed data in
single-byte or multibyte chunks) as well as full-size decompression
(i.e. the decompressor can access all compressed data at once). We also
have to make sure that stream decompression still works after the change.
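To put the trade-off in concrete terms, the decision could be reduced to
roughly this check (the quantities are placeholders to be filled in by
actual measurements, not numbers anyone has gathered yet):

    net_change = (new decompressor code size - old decompressor code size)
               - (old compressed payload size - new compressed payload size)

The switch is a net win for image size only if net_change is negative, and
decompression speed still has to be measured separately for the stream and
full-size cases.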
> Do you think it should be done, or are you OK with using such an
> outdated version?
A size benefit for the resulting image is a good reason to switch.
Regards,
Carl-Daniel
This patch series is about finding the QEMU resource reserve capability
in firmware.
Firstly, it fixes a logic bug: when the capability is truncated, return
zero instead of the truncated offset. Secondly, it modifies the debug
messages printed when the capability is not found and when the vendor
ID or device ID doesn't match the Red Hat special ones.
Lastly, it enables the firmware to recognize the Red Hat PCI bridge device ID,
and changes the debug level to 3 when it is a non-QEMU bridge.
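For illustration, the truncation check could look roughly like the fragment
below (a hypothetical sketch, not the actual patch; RES_RESERVE_CAP_SIZE is
a made-up constant, while pci_config_readb() and dprintf() are the existing
SeaBIOS helpers):

    /* Hypothetical sketch of the fix described above: if the vendor
     * capability is shorter than the expected resource reserve layout,
     * report it as absent (return 0) instead of returning the
     * truncated capability offset. */
    u8 cap_len = pci_config_readb(bdf, cap + 2);  /* length byte of the vendor cap */
    if (cap_len < RES_RESERVE_CAP_SIZE) {
        dprintf(1, "PCI: QEMU resource reserve cap length %d is invalid\n",
                cap_len);
        return 0;
    }
    return cap;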
Change Log:
v1 -> v2:
* Add two patches for fixing small bugs
v2 -> v3:
* Change the debug level
Jing Liu (3):
pci: fix the return value for truncated capability
pci: clean up the debug message for pci capability found
pci: recognize RH PCI legacy bridge resource reservation capability
src/fw/pciinit.c | 45 ++++++++++++++++++++++++++++-----------------
src/hw/pci_ids.h | 1 +
2 files changed, 29 insertions(+), 17 deletions(-)
--
1.8.3.1
Hello Friends,
I compiled coreboot 4.5 and SeaBIOS for the MinnowBoard MAX and see this
message in the console.
"Memory Configure Data Hob is not present.
Not updating MRC data in flash."
What is a Data Hob?
What is MRC data?
Could you help me figure out how to solve it?
Best wishes,
Zhara
Hello
I notice 1.11.2 was only released a few weeks ago, so it is probably too
early for another release, but could the maintainers at least consider
backporting 8c3f57ea12 to the 1.11-stable branch?
Thanks,
Wei.
Dear Zhara,
On 08/29/18 19:19, zahra rahimkhani wrote:
> I compiled coreboot 4.5 and SeaBIOS for the MinnowBoard MAX and see this
> message in the console.
>
> "Memory Configure Data Hob is not present.
> Not updating MRC data in flash."
>
> What is a Data Hob?
> What is MRC data?
> Could you help me figure out how to solve it?
Unfortunately, you wrote to the wrong list. SeaBIOS is the payload, and
the messages you quoted are from coreboot.
Please contact the coreboot mailing list. I advise testing the latest
version, 4.8, before contacting the coreboot project, though.
Kind regards,
Paul
PS: In the future, always attach the whole log.
Hi Gerd
On 08/28/2018 07:12 AM, Zihan Yang wrote:
> On Mon, Aug 27, 2018 at 7:04 AM, Gerd Hoffmann <kraxel(a)redhat.com> wrote:
>> Hi,
>>
>>>> However, QEMU only binds port 0xcf8 and 0xcfc to
>>>> bus pcie.0. To avoid bus confliction, we should use other port pairs for
>>>> busses under new domains.
>>> I would skip support for IO based configuration and use only MMCONFIG
>>> for extra root buses.
>>>
>>> The question remains: how do we assign MMCONFIG space for
>>> each PCI domain.
Thanks for your comments!
>> Allocation-wise it would be easiest to place them above 4G. Right after
>> memory, or after etc/reserved-memory-end (if that fw_cfg file is
>> present), where the 64bit pci bars would have been placed. Move the pci
>> bars up in address space to make room.
>>
>> Only problem is that seabios wouldn't be able to access mmconfig then.
>>
>> Placing them below 4G would work at least for a few pci domains. q35
>> mmconfig bar is placed at 0xb0000000 -> 0xbfffffff, basically for
>> historical reasons. Old qemu versions had 2.75G low memory on q35 (up
>> to 0xafffffff), and I think old machine types still have that for live
>> migration compatibility reasons. Modern qemu uses 2G only, to make
>> gigabyte alignment work.
>>
>> 32bit pci bars are placed above 0xc0000000. The address space from 2G
>> to 2.75G (0x80000000 -> 0xafffffff) is unused on new machine types.
>> Enough room for three additional mmconfig bars (full size), so four
>> pci domains total if you add the q35 one.
> Maybe we can support 4 domains first before we come up
> with a better solution. But I'm not sure if four domains are
> enough for those who want a very large number of devices.
(Adding Michael)
Since we will not use all 256 buses of an extra PCI domain,
I think this space will allow us to support more PCI domains.
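Just to put rough numbers on that, assuming the usual ECAM layout of 1 MB
of config space per bus (so 256 MB for a full 256-bus domain):

    0xb0000000 - 0x80000000 = 0x30000000 = 768 MB free below the q35 window
    768 MB / 256 MB = 3 full-size MMCONFIG windows (the four domains Gerd
                      mentioned, counting q35 itself)
    768 MB /  32 MB = 24 extra domains if each one is limited to 32 buses

(The 32-buses-per-domain figure is only an example, not something QEMU
defines today.)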
What will the flow look like?
1. QEMU passes to SeaBIOS information about how many extra
PCI domains are needed, and how many buses per domain.
How will it pass this info? A vendor-specific capability,
some PCI registers, or modifying the extra-pci-roots fw_cfg file?
2. SeaBIOS assigns the address for each PCI domain and
returns the information to QEMU.
How will it do that? Some pxb-pcie registers? Or do we model
the MMCFG like a PCI BAR?
3. Once QEMU gets the MMCFG addresses, it can answer
MMIO configuration cycles.
4. SeaBIOS queries the devices in all PCI domains, then computes
and assigns IO/MEM resources (for PCI domains > 0 it will
use MMCFG to configure the PCI devices; see the sketch below)
5. QEMU uses the IO/MEM information to create the CRS for each
extra PCI host bridge.
6. SeaBIOS gets the ACPI tables from QEMU and passes them to the
guest OS.
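As a rough illustration of what the MMCFG access in step 4 would look like
(a minimal sketch using the standard ECAM address layout; mmconfig_base
would be whatever SeaBIOS assigned to that domain in step 2, and the
function name is made up):

    #include <stdint.h>

    /* Standard ECAM/MMCONFIG layout: each function gets 4 KB of config
     * space at  base + (bus << 20) + (dev << 15) + (fn << 12).  */
    static inline uint32_t mmcfg_readl(uint64_t mmconfig_base, uint8_t bus,
                                       uint8_t dev, uint8_t fn, uint16_t offset)
    {
        volatile uint32_t *addr = (volatile uint32_t *)(uintptr_t)
            (mmconfig_base + ((uint64_t)bus << 20) + ((uint32_t)dev << 15)
             + ((uint32_t)fn << 12) + (offset & 0xffc));
        return *addr;
    }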
Thanks,
Marcel
>> cheers,
>> Gerd
>>
CCed to the wrong mailing list... resend here
---------- Forwarded message ---------
From: Zihan Yang <whois.zihan.yang(a)gmail.com>
Date: Tue, Aug 28, 2018 at 4:12 AM
Subject: Re: [SeaBIOS] [RFC v2 0/3] Support multiple pci domains in pci_device
To: Gerd Hoffmann <kraxel(a)redhat.com>
Cc: <qemu-devel(a)nongnu.org>, Marcel Apfelbaum <marcel.apfelbaum(a)gmail.com>
On Mon, Aug 27, 2018 at 7:04 AM, Gerd Hoffmann <kraxel(a)redhat.com> wrote:
>
> Hi,
>
> > > However, QEMU only binds port 0xcf8 and 0xcfc to
> > > bus pcie.0. To avoid bus confliction, we should use other port pairs for
> > > busses under new domains.
> >
> > I would skip support for IO based configuration and use only MMCONFIG
> > for extra root buses.
> >
> > The question remains: how do we assign MMCONFIG space for
> > each PCI domain.
>
> Allocation-wise it would be easiest to place them above 4G. Right after
> memory, or after etc/reserved-memory-end (if that fw_cfg file is
> present), where the 64bit pci bars would have been placed. Move the pci
> bars up in address space to make room.
>
> Only problem is that seabios wouldn't be able to access mmconfig then.
>
> Placing them below 4G would work at least for a few pci domains. q35
> mmconfig bar is placed at 0xb0000000 -> 0xbfffffff, basically for
> historical reasons. Old qemu versions had 2.75G low memory on q35 (up
> to 0xafffffff), and I think old machine types still have that for live
> migration compatibility reasons. Modern qemu uses 2G only, to make
> gigabyte alignment work.
>
> 32bit pci bars are placed above 0xc0000000. The address space from 2G
> to 2.75G (0x80000000 -> 0xafffffff) is unused on new machine types.
> Enough room for three additional mmconfig bars (full size), so four
> pci domains total if you add the q35 one.
Maybe we can support 4 domains first before we come up
with a better solution. But I'm not sure if four domains are
enough for those who want a very large number of devices.
> cheers,
> Gerd
>
Excuse me, are there any cases of successful attempts on Windows 10? Could you provide me with some technical docs? Thanks!
> -----Original Message-----
> From: "Marc-André Lureau" <marcandre.lureau(a)gmail.com>
> Sent: 2018-08-23 16:37:36 (Thursday)
> To: tangfu(a)gohighsec.com
> Cc: "Kevin O'Connor" <kevin(a)koconnor.net>, seabios(a)seabios.org
> Subject: Re: [SeaBIOS] vTPM 2.0 is recognized as vTPM 1.2 on the Win 10 virtual machine with seabios
>
> Hi
>
> On Thu, Aug 23, 2018 at 9:29 AM 汤福 <tangfu(a)gohighsec.com> wrote:
> >
> > Hi,
> > I am sorry to bother you. It is still the vTPM 2.0 for Win 10 problem. I downloaded the latest qemu source from git; the version is v3.0.50. I think this is the latest code of qemu upstream. I also downloaded seabios upstream and built it with TPM 2 support. Unfortunately, I tried both passthrough and emulator, and I didn't get the expected results.
> >
> > For emulator, I did it like this:
> > #mkdir /tmp/mytpm2/
> > #chown tss:root /tmp/mytpm2
> > #swtpm_setup --tpmstate /tmp/mytpm2 --create-ek-cert --create-platform-cert --allow-signing --tpm2
> > #swtpm socket --tpmstate dir=/tmp/mytpm2 --ctrl type=unixio,path=/tmp/mytpm2/swtpm-sock --log level=20 --tpm2
> >
> > No errors occurred, suggesting that the certificate was also generated successfully. Then I created a blank img file named win10.img and installed a Win 10 virtual machine as follows:
> > #qemu-system-x86_64 -display sdl -enable-kvm -cdrom win10.iso -serial stdio -m 2048 -boot d -bios bios.bin -boot menu=on -chardev socket,id=chrtpm,path=/tmp/mytpm2/swtpm-sock -tpmdev emulator,id=tpm0,chardev=chrtpm -device tpm-crb,tpmdev=tpm0 win10-ovmf.img
> > After the system was successfully installed and I entered it, I found that the TPM 2.0 device was not present in the system's Device Manager. If I replace -device tpm-crb with -device tpm-tis and reboot the system, the TPM device can be found in the Device Manager, but the vTPM 2.0 is recognized as vTPM 1.2.
> >
> > I also tried passthrough mode; the result is the same as with the emulator. So, what could be the problem?
> >
>
> Try with OVMF. According to some technical docs, it seems Windows
> requires UEFI & CRB for TPM 2. That's also what testing suggests.
> We are able to pass most WLK TPM tests with this setup.
>
> >
> >
> > > -----Original Message-----
> > > From: "Kevin O'Connor" <kevin(a)koconnor.net>
> > > Sent: 2018-08-21 12:08:59 (Tuesday)
> > > To: "汤福" <tangfu(a)gohighsec.com>
> > > Cc: seabios(a)seabios.org
> > > Subject: Re: [SeaBIOS] vTPM 2.0 is recognized as vTPM 1.2 on the Win 10 virtual machine with seabios
> > >
> > > On Mon, Aug 13, 2018 at 04:45:43PM +0800, 汤福 wrote:
> > > > Hi,
> > > >
> > > > I want to use the vTPM in a qemu Windows image. Unfortunately, it didn't work.
> > > > First, the equipment:
> > > > TPM 2.0 hardware
> > > > CentOS 7.2
> > > > Qemu v2.10.2
> > > > SeaBIOS 1.11.0
> > > > libtpm and so on
> > >
> > > If you retry with the latest SeaBIOS code from the master branch, does
> > > the problem still exist?
> > >
> > > See:
> > > https://mail.coreboot.org/pipermail/seabios/2018-August/012384.html
> > >
> > > -Kevin
> > _______________________________________________
> > SeaBIOS mailing list
> > SeaBIOS(a)seabios.org
> > https://mail.coreboot.org/mailman/listinfo/seabios
>
>
>
> --
> Marc-André Lureau
Currently seabios assumes there is only one pci domain (0), and almost
everything operates on pci domain 0 by default. This patch aims to add
multiple pci domain support for pci_device, while preserving the original
API for compatibility.
The reason to get seabios involved is that the pxb-pcie host bus created
in QEMU is now in a different PCI domain, and its bus number would start
from 0 instead of bus_nr. Actually, bus_nr should not be used in another,
non-zero domain. However, QEMU only binds ports 0xcf8 and 0xcfc to
bus pcie.0. To avoid bus conflicts, we should use other port pairs for
busses under new domains.
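For context, the legacy port-based mechanism mentioned above is roughly the
following (a generic sketch of PCI "type 1" config access, not the actual
SeaBIOS code); since there is only the single 0xcf8/0xcfc pair, it can only
ever address one domain:

    #include <stdint.h>

    #define PORT_PCI_CMD  0xcf8   /* config address port */
    #define PORT_PCI_DATA 0xcfc   /* config data port */

    static inline void outl(uint32_t val, uint16_t port)
    {
        __asm__ volatile("outl %0, %1" : : "a"(val), "Nd"(port));
    }

    static inline uint32_t inl(uint16_t port)
    {
        uint32_t val;
        __asm__ volatile("inl %1, %0" : "=a"(val) : "Nd"(port));
        return val;
    }

    /* Type 1 config read: enable bit | bus | dev/fn | register.
     * This mechanism has no notion of a PCI domain, which is why
     * extra domains need either other port pairs or MMCONFIG. */
    static uint32_t pci_cfg_readl_domain0(uint8_t bus, uint8_t devfn, uint16_t reg)
    {
        outl(0x80000000 | (bus << 16) | (devfn << 8) | (reg & 0xfc), PORT_PCI_CMD);
        return inl(PORT_PCI_DATA);
    }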
Current issues:
* when trying to read config space of pcie_pci_bridge, it actually reads
out the result of mch. I'm working on this weird behavior.
Changelog:
v2 <- v1:
- Fix bugs in filtering domains when traversing pci devices
- Reformat some hardcoded code, such as probing the pci device in pci_setup
Zihan Yang (3):
fw/pciinit: Recognize pxb-pcie-dev device
pci_device: Add pci domain support
pci: filter undesired domain when traversing pci
src/fw/coreboot.c | 2 +-
src/fw/csm.c | 2 +-
src/fw/mptable.c | 1 +
src/fw/paravirt.c | 3 +-
src/fw/pciinit.c | 276 ++++++++++++++++++++++++++++++---------------------
src/hw/ahci.c | 1 +
src/hw/ata.c | 1 +
src/hw/esp-scsi.c | 1 +
src/hw/lsi-scsi.c | 1 +
src/hw/megasas.c | 1 +
src/hw/mpt-scsi.c | 1 +
src/hw/nvme.c | 1 +
src/hw/pci.c | 69 +++++++------
src/hw/pci.h | 42 +++++---
src/hw/pci_ids.h | 6 +-
src/hw/pcidevice.c | 11 +-
src/hw/pcidevice.h | 8 +-
src/hw/pvscsi.c | 1 +
src/hw/sdcard.c | 1 +
src/hw/usb-ehci.c | 1 +
src/hw/usb-ohci.c | 1 +
src/hw/usb-uhci.c | 1 +
src/hw/usb-xhci.c | 1 +
src/hw/virtio-blk.c | 1 +
src/hw/virtio-scsi.c | 1 +
src/optionroms.c | 3 +
26 files changed, 268 insertions(+), 170 deletions(-)
--
2.7.4
This patch series is about finding the QEMU resource reserve capability
in firmware.
Firstly, it fixes a logic bug: when the capability is truncated, return
zero instead of the truncated offset. Secondly, it modifies the debug
messages printed when the capability is not found and when the vendor
ID or device ID doesn't match the Red Hat special ones.
Lastly, it enables the firmware to recognize the Red Hat PCI bridge device ID,
so that QEMU can also reserve resources for the PCI bridge.
Change Log:
v1 -> v2:
* Add two patches for fixing small bugs
Jing Liu (3):
pci: fix the return value for truncated capability
pci: clean up the debug message for pci capability found
pci: recognize RH PCI legacy bridge resource reservation capability
src/fw/pciinit.c | 45 ++++++++++++++++++++++++++++-----------------
src/hw/pci_ids.h | 1 +
2 files changed, 29 insertions(+), 17 deletions(-)
--
1.8.3.1