Hi Ivan,
On 03.04.2018 20:03, Ivan Ivanov wrote:
> I have noticed that both coreboot and seabios are using the very old
> versions of LZMA SDK.
True. I introduced the lzma code in coreboot (back when it was called
LinuxBIOS) when we were working on OLPC XO-1 support.
> If we will upgrade our LZMA libraries from the
> outdated-by-12-years 4.42 to the current version 18.04 , speed and
> compression ratio should improve and maybe a few bugs will be fixed.
Do you have any numbers for this? An improved compression ratio and
improved speed would be nice indeed, but how does the size of the
decompression code change? If the decompression code grows more than the
size reduction from better compression, it would be a net loss. A
significantly reduced decompression speed would also be a problem.
Decompression speed would have to be measured both for stream
decompression (i.e. the decompressor gets the compressed data in
single-byte or multibyte chunks) as well as full-size decompression
(i.e. the decompressor can access all compressed data at once). We also
have to make sure that stream decompression still works after the change.
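To make the comparison concrete, a measurement harness could look roughly like
the sketch below. The "decompressor" here is only a placeholder so the harness
compiles on its own; a real test would call coreboot's actual LZMA routine in
its place, once with a small chunk size and once with chunk == srclen.

#include <stdint.h>
#include <stddef.h>
#include <string.h>
#include <x86intrin.h>   /* __rdtsc() with GCC/Clang on x86 */

/* Placeholder standing in for the real LZMA decoder so this sketch compiles
 * on its own; a real measurement would call the firmware decompressor here. */
static size_t fake_decompress(const uint8_t *in, size_t inlen,
                              uint8_t *out, size_t outcap)
{
    size_t n = inlen < outcap ? inlen : outcap;
    memcpy(out, in, n);
    return n;
}

/* Feed the compressed data in fixed-size chunks (the stream case) and count
 * cycles; the full-size case is the same call with chunk == srclen.
 * chunk must be non-zero. */
static uint64_t time_decompress(const uint8_t *src, size_t srclen,
                                uint8_t *dst, size_t dstcap, size_t chunk)
{
    uint64_t start = __rdtsc();
    size_t written = 0;
    for (size_t off = 0; off < srclen && written < dstcap; off += chunk) {
        size_t n = srclen - off < chunk ? srclen - off : chunk;
        written += fake_decompress(src + off, n, dst + written,
                                   dstcap - written);
    }
    return __rdtsc() - start;
}

Running the same inputs through the old and new decoders in both modes would
give exactly the numbers I'm asking about.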
> Do you think it should be done, or you are OK with using such an
> outdated version?
A size benefit for the resulting image is a good reason to switch.
Regards,
Carl-Daniel
On Thu, Sep 27, 2018 at 11:05:13PM +0800, Zihan Yang wrote:
> Laszlo Ersek <lersek(a)redhat.com> 于2018年9月26日周三 上午1:17写道:
> > First, I fail to see the use case where ~256 PCI bus numbers aren't
> > enough. If I strain myself, perhaps I can imagine using ~200 PCIe root
> > ports on Q35 (each of which requires a separate bus number), so that we
> > can independently hot-plug 200 devices then. And that's supposedly not
> > enough, because we want... 300? 400? A thousand? Doesn't sound realistic
> > to me. (This is not meant to be a strawman argument, I really have no
> > idea what the feature would be useful for.)
>
> It might not be very intuitive, but it does exist. The discussion at the
> very beginning, about 4 months ago, mentioned possible use cases, and I
> paste them here
[...]
> Things might change in the future if we can figure out a better solution, and I
> hope we can have an easier and more elegant solution in OVMF. But for now
> we are just trying to offer a possible solution as a proof of concept.
Thanks. I wasn't aware this was a proof of concept. (Nor have I been
following the discussions on the qemu list.) I don't think it makes
sense to merge this into the main SeaBIOS repository. The
QEMU/firmware interface is already complex and I don't think we should
complicate it further without a more concrete use case. In
particular, it seems unclear if 256 buses is enough or if 1024 buses
is too little.
-Kevin
* Zihan Yang (whois.zihan.yang(a)gmail.com) wrote:
> Hi Laszlo
> Laszlo Ersek <lersek(a)redhat.com> 于2018年9月26日周三 上午1:17写道:
> >
> > On 09/25/18 17:38, Kevin O'Connor wrote:
> > > On Mon, Sep 17, 2018 at 11:02:59PM +0800, Zihan Yang wrote:
> > >> To support multiple PCI domains for the pxb-pcie device in QEMU, we need to
> > >> set up an MCFG range in SeaBIOS. For now we use [0x80000000, 0xb0000000) to
> > >> hold the MCFG tables of the new domains, and we retrieve the desired MCFG
> > >> size of each pxb-pcie from a hidden BAR, because it may not need the whole
> > >> 256 buses; this also lets us support more domains within a limited range (768MB)
> > >
> > > At a high level, this looks okay to me. I'd like to see additional
> > > reviews from others more familiar with the QEMU PCI code, though.
> > >
> > > Is the plan to do the same thing for OVMF?
> >
> > I remain entirely unconvinced that this feature is useful. (I've stated
> > so before.)
> >
> > I believe the latest QEMU RFC posting (v5) is here:
> >
> > [Qemu-devel] [RFC v5 0/6] pci_expander_brdige: support separate pci
> > domain for pxb-pcie
> >
> > http://mid.mail-archive.com/1537196258-12581-1-git-send-email-whois.zihan.y…
> >
> > First, I fail to see the use case where ~256 PCI bus numbers aren't
> > enough. If I strain myself, perhaps I can imagine using ~200 PCIe root
> > ports on Q35 (each of which requires a separate bus number), so that we
> > can independently hot-plug 200 devices then. And that's supposedly not
> > enough, because we want... 300? 400? A thousand? Doesn't sound realistic
> > to me. (This is not meant to be a strawman argument, I really have no
> > idea what the feature would be useful for.)
>
> It might not be very intuitive, but it does exist. The discussion at the
> very beginning, about 4 months ago, mentioned possible use cases, and I
> paste them here
>
> - We have Ray from Intel trying to use 1000 virtio-net devices
why that many?
> - We may have a VM managing some backups (tapes), we may have a lot of these.
I'm curious; what does tape backup have to do with the number of PCI
slots/busses?
Dave
> - We may indeed want to create a nested solution, as Michael mentioned.
>
> The thread can be found in
> https://lists.gnu.org/archive/html/qemu-devel/2018-05/msg04667.html
>
> Also, a later post on the list from someone at Dell stated that he would need
> this feature for Intel VMD in Dell EMC(?). I have no idea about the details,
> but since they came here for help, I guess they can indeed benefit from it
> somehow.
>
> > Second, the v5 RFC doesn't actually address the alleged bus number
> > shortage. IIUC, it supports a low number of ECAM ranges under 4GB, but
> > those are (individually) limited in the bus number ranges they can
> > accommodate (due to 32-bit address space shortage). So more or less the
> > current approach just fragments the bus number space we already have, to
> > multiple domains.
> >
> > Third, should a subsequent iteration of the QEMU series put those extra
> > ECAMs above 4GB, with the intent to leave the enumeration of those
> > hierarchies to the "guest OS", it would present an incredible
> > implementation mess for OVMF. If people gained the ability to attach
> > storage or network to those domains, on the QEMU command line, they
> > would expect to boot off of them, using UEFI. Then OVMF would have to
> > make sure the controllers could be bound by their respective UEFI
> > drivers. That in turn would require functional config space access
> > (ECAM) at semi-random 64-bit addresses.
>
> I'm not familiar with OVMF, so I'm afraid I don't know how to make it easier
> for OVMF; the division of the 64-bit space in OVMF is outside my scope. There
> is no plan to implement it in OVMF for now; we just want to make the
> SeaBIOS/QEMU patch a proof of concept.
>
> As for SeaBIOS, it accesses devices through ports 0xcf8/0xcfc, which are bound
> to the q35 host bridge in QEMU. If we want to change the mmconfig size of a
> pxb-pcie (instead of using the whole 256MB), we must know its desired size,
> which is passed as a hidden BAR. Unfortunately the configuration space of a
> pxb-pcie device cannot be accessed through 0xcf8/0xcfc, because it sits behind
> a different host bridge. At this point the ECAM is not configured yet, so MMIO
> cannot be used either. In a previous version I tried to bind the pxb host to
> other ports in QEMU, so that we could use port I/O to access the config space
> of pxb-pcie, but it seemed a little dirty.
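(Side note for anyone following along: the legacy mechanism being described
looks roughly like the sketch below. This is illustrative only; SeaBIOS has its
own config-access helpers. The point is that the 0xcf8 address encoding has
bus/device/function/register fields but no domain field, so only the host
bridge that decodes those ports, i.e. domain 0, is reachable this way.)

#include <stdint.h>

#define PCI_CFG_ADDR 0xcf8
#define PCI_CFG_DATA 0xcfc

static inline void port_outl(uint16_t port, uint32_t val)
{
    __asm__ volatile("outl %0, %1" : : "a"(val), "Nd"(port));
}

static inline uint32_t port_inl(uint16_t port)
{
    uint32_t val;
    __asm__ volatile("inl %1, %0" : "=a"(val) : "Nd"(port));
    return val;
}

/* Config mechanism #1: enable bit | bus | device | function | register.
 * There is no domain number anywhere in this encoding. */
static uint32_t pci_cfg_readl(uint8_t bus, uint8_t dev, uint8_t fn, uint8_t reg)
{
    uint32_t addr = 0x80000000u | ((uint32_t)bus << 16)
                    | ((uint32_t)dev << 11) | ((uint32_t)fn << 8)
                    | (reg & 0xfc);
    port_outl(PCI_CFG_ADDR, addr);
    return port_inl(PCI_CFG_DATA);
}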
>
> Another issue is how SeaBIOS initializes things. It runs pci_setup first, when
> things like the RSDP are not yet loaded. It is inconvenient to retrieve the
> MCFG table and other information at that point, so we cannot infer the
> mmconfig address and size from the MCFG table in SeaBIOS.
>
> Therefore we fall back to an alternative in which we support 4x as many
> devices as a first step, and let the guest OS do the initialization. The
> inability to boot from devices in another domain is indeed an issue, and we
> don't have a very good solution for it yet.
>
> Things might change in the future if we can figure out a better solution, and I
> hope we can have an easier and more elegant solution in OVMF. But for now
> we are just trying to offer a possible solution as a proof of concept.
>
> Thanks
> Zihan
>
--
Dr. David Alan Gilbert / dgilbert(a)redhat.com / Manchester, UK
The qemu part of this patch can be found in
https://lists.gnu.org/archive/html/qemu-devel/2018-09/msg01988.html
The pxb-pcie device in QEMU uses only one PCI domain (0) so far, which means
there are at most 256 buses. However, PCIe topology requires one bus per
device, which will use up the buses quickly if there are many PCIe devices.
To solve the problem, QEMU can put each pxb-pcie in a separate PCI domain so
we can support more devices. As QEMU relies on SeaBIOS to allocate space for
its MCFG table, we must modify SeaBIOS to configure mcfg_base and
mcfg_size for QEMU.
Since a pxb-pcie may use only a subset of the 256 buses in a domain,
we let QEMU pass its desired mcfg_size as a hidden BAR; SeaBIOS just
decides the mcfg_base for it.
Compared with the previous design, mmconfig is no longer put above 4G
but inside [0x80000000, 0xb0000000), and the major part of the
configuration is left to the guest OS, which makes the code much
simpler.
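As an illustration of the allocation policy (a sketch only, with made-up names
such as mcfg_size[]/mcfg_base[], not the patch itself), SeaBIOS essentially
just hands out non-overlapping mcfg_base values inside the window:

#include <stdint.h>
#include <stddef.h>

#define MCFG_WINDOW_START 0x80000000u   /* window described above */
#define MCFG_WINDOW_END   0xb0000000u   /* 768MB in total         */

/* Sketch: assign mcfg_base values bottom-up inside the window.
 * mcfg_size[i] is what each pxb-pcie asked for via its hidden BAR,
 * a multiple of 1MB (one bus worth of ECAM space per 1MB).
 * Returns the number of domains that fit. */
static size_t assign_mcfg_bases(const uint32_t *mcfg_size,
                                uint32_t *mcfg_base, size_t count)
{
    uint32_t next = MCFG_WINDOW_START;
    for (size_t i = 0; i < count; i++) {
        if (mcfg_size[i] > MCFG_WINDOW_END - next)
            return i;               /* out of window space */
        mcfg_base[i] = next;
        next += mcfg_size[i];
    }
    return count;
}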
Changelog:
v3 <- v2
- Refactor the design so that SeaBIOS only does minimal MCFG configuration and
  leaves everything else to the guest OS. This makes the code much simpler.
- Do not put MCFG above 4G anymore, but inside [0x80000000, 0xb0000000).
  Above-4G placement can be expected in a future version.
v2 <- v1
- Fix bugs in filtering domains when traversing PCI devices
- Reformat some hardcoded code, such as probing the PCI device in pci_setup
Zihan Yang (1):
pciinit: setup mcfg for pxb-pcie to support multiple pci domains
src/fw/dev-q35.h | 7 +++++++
src/fw/pciinit.c | 32 ++++++++++++++++++++++++++++++++
src/hw/pci_ids.h | 1 +
3 files changed, 40 insertions(+)
--
2.7.4
On Tue, Sep 11, 2018 at 09:29:39PM -0500, Matt DeVillier wrote:
> Commit 4b42cc4 [SeaVGABios/cbvga: Advertise correct pixel format] neglected
> to wrap the cbfb mask size components in GET_FARVAR(), which resulted in a
> bogus value for bpp, breaking output on most/all devices. Fix this by
> adding GET_FARVAR() as appropriate.
>
> Additionally, some newer ChromeOS devices still fail even with this fix,
> so fall back to using the coreboot-reported bit depth if the calculated
> value is invalid.
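For readers who don't know the macro: in the 16-bit VGA BIOS code, data that
lives outside the current segment has to be read through GET_FARVAR(seg, ...).
A rough sketch of the kind of fix described (field names follow coreboot's
struct cb_framebuffer; the actual SeaVGABIOS code may differ in detail):

/* Sketch only, not the literal commit. Assumes SeaBIOS's GET_FARVAR()
 * macro and coreboot's struct cb_framebuffer field names. */
static u8 cbfb_calc_bpp(u16 seg, struct cb_framebuffer *cbfb)
{
    /* Every field access must go through GET_FARVAR(); reading the
     * struct members directly is what produced the bogus bpp. */
    u8 bpp = GET_FARVAR(seg, cbfb->red_mask_size)
           + GET_FARVAR(seg, cbfb->green_mask_size)
           + GET_FARVAR(seg, cbfb->blue_mask_size)
           + GET_FARVAR(seg, cbfb->reserved_mask_size);
    /* Fallback: if the computed depth looks bogus, trust the bit depth
     * coreboot reported directly. */
    if (!bpp || bpp > 32)
        bpp = GET_FARVAR(seg, cbfb->bits_per_pixel);
    return bpp;
}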
Thanks. I committed this change.
-Kevin
Hi folks,
I want to put forth this patch for your review.
In my opinion, it is very useful to know just after your machine has
POSTed how much system RAM the machine has access to.
This patch prints out to the screen just under the UUID the following line:
System RAM: X MB
where X is the number of whole megabytes available to the system.
Since the memory map is already available to SeaBIOS, I think it makes
sense to total up the sizes of regions marked as E820_RAM and print the
grand total.
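A rough sketch of the calculation (assuming the e820_list/e820_count globals
and the struct e820entry layout from SeaBIOS's memory-map code; the actual
patch may differ):

/* Sketch only, not the submitted patch: sum all E820_RAM regions and
 * print whole megabytes, using SeaBIOS's e820 map globals. */
static void print_system_ram(void)
{
    u64 total = 0;
    int i;
    for (i = 0; i < e820_count; i++)
        if (e820_list[i].type == E820_RAM)
            total += e820_list[i].size;
    printf("System RAM: %d MB\n", (int)(total >> 20));
}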
I have tested this with a coreboot image in QEMU containing a custom-built SeaBIOS:
SeaBIOS (version rel-1.11.0-46-g18e193d)
System RAM: 1022 MB
When I tried flashing a compatible image to my Lenovo X220 laptop, it
bricked the machine. I'm not sure why; maybe I compiled it incorrectly or
inserted it into my image incorrectly with cbfstool.
Cheers,
Damien