<div dir="ltr"><div dir="ltr"><div dir="ltr"><div>Hi,</div><div><br></div><div>Here are some additional details, but before I answer I share some preconceptions I had along the way:</div><div> - after some initial tries I thought, that the stock BIOS sets up the configuration in a special way for the iGPU and gDPU (PCI adress BARs, IO addresses etc) to get those working therefore, I tried to achieve a very similar setup using CB<br></div><div> - the problem is that the stock BIOS patches the raw VBIOS/Oprom files during execution - which Coreboot doesn't do. (ref: <a href="https://www.coreboot.org/Board:lenovo/x60/Installation">https://www.coreboot.org/Board:lenovo/x60/Installation</a>)</div><div> - when I started "hacking" months ago I got similar linux kernel panics like you got when pci bus 02.0 was enabled. I assumed there was a pci allocation problem behind (one additional pci device with big pci resource requirement), also I assumed (maybe wrongly), that pci BAR addresses for the dGPU - and also UMA memory - need to be below the first 4 GB of memory - for the dGPU to work. (details below)</div><div> - some of these preconceptions might not need to be considered any more, but so far I haven't made attempts to exclude any of these. By sharing this, I hope more people can help us cleaning the unnecessary stuff...<br></div><div><br><div class="gmail_quote"><div dir="ltr">On Sat, Nov 3, 2018 at 8:13 PM Mike Banon <<a href="mailto:mikebdp2@gmail.com">mikebdp2@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Dear HJK,<br>
<br>
My sincere huge congratulations to you for your incredible<br>
breakthrough! :-) As you could see from<br>
<a href="http://mail.coreboot.org/pipermail/coreboot/2018-November/087679.html" rel="noreferrer" target="_blank">http://mail.coreboot.org/pipermail/coreboot/2018-November/087679.html</a><br>
- I got stuck at a kernel panic after loading both iGPU and dGPU OpROMs<br>
and thought it was only because of a broken PCI config. But after<br>
reading your good experience, it seems there were some other problems<br>
as well, which you've successfully solved. Could you please clarify<br>
some details:<br>
1)<br>
> I modified the IO address & the PCI device ID in the APU GPU VBIOS file (even though the IO address change to 0x3000 might not be required).<br>
> I modified the IO address in the dGPU VBIOS files to the actual CB/linux-kernel-provided (0x1000) address<br>
1a) Please confirm that you've modified the IO addresses as follows:<br>
APU iGPU (integrated GPU) = 0x3000<br>
dGPU (discrete GPU) = 0x1000<br>
1b) Why these exact "IO address" values? How did you come up with them,<br>
and why not different addresses?<br>
1c) At what address offsets (relative to the beginnings of the VBIOS<br>
files) are these "IO address" values situated?<br>
1d) To what value did you change the PCI device ID in the APU iGPU VBIOS,<br>
and why is this change necessary?<br></blockquote><div><br></div><div>When CB boots, the iGPU gets IO address 0x3000 and the dGPU gets address 0x1000 from the resource allocator. Since CB doesn't patch the VBIOS files, I had to do this manually. Thinking about it now, modding the VBIOS might not be required any more, because 1. this was not the reason the dGPU didn't init, and 2. if I'm not mistaken, OpROM execution should normally do this anyway, right?</div><div>The offset of the IO address is @ 0x1A4 (2 bytes) for the iGPU and @ 0x218 for the dGPU. You can use radeon-bios-decode to check the result, and use atomcrc.exe to show the new checksum (@ offset 0x21) - to be corrected with hexedit.</div><div>I have multiple variants of APUs; the iGPU PCI device ID needs to match within the VBIOS (offset 0x1b8 for the PCI ID (4 bytes)). Additional remark: you can also set the subsystem ID @ 0x1a6 (4 bytes). A sketch of scripting these edits follows below.</div>
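<div><br></div><div>For anyone who prefers to script these VBIOS edits instead of doing them by hand in hexedit, here is a minimal C sketch (my own illustration, not a tool from my tree). It assumes the usual option-ROM rule that all bytes of the image sum to zero mod 256, with the checksum byte at the 0x21 offset mentioned above and the image length in 512-byte units at offset 0x02 of the ROM header; the file name and command-line values are placeholders, so verify the result with radeon-bios-decode afterwards.</div><pre>
/* vbios-patch.c - hypothetical helper, not part of coreboot.
 * Writes a 16-bit little-endian value at a given offset of a VBIOS file
 * and refreshes the checksum byte at 0x21 (cf. atomcrc.exe), assuming
 * the option-ROM rule that all image bytes sum to zero mod 256.
 * Usage: ./vbios-patch vbios.rom 0x1A4 0x3000
 */
#include <stdio.h>
#include <stdlib.h>

#define CHECKSUM_OFFSET 0x21

int main(int argc, char **argv)
{
	if (argc != 4) {
		fprintf(stderr, "usage: %s file offset value16\n", argv[0]);
		return 1;
	}
	FILE *f = fopen(argv[1], "r+b");
	if (!f) {
		perror("fopen");
		return 1;
	}
	long off = strtol(argv[2], NULL, 0);
	unsigned val = (unsigned)strtoul(argv[3], NULL, 0);

	/* patch the 16-bit value, little-endian */
	unsigned char le[2] = { val & 0xff, (val >> 8) & 0xff };
	fseek(f, off, SEEK_SET);
	fwrite(le, 1, 2, f);

	/* image length in 512-byte units sits at offset 0x02 */
	unsigned char len_units;
	fseek(f, 0x02, SEEK_SET);
	fread(&len_units, 1, 1, f);
	long size = (long)len_units * 512;

	/* zero the old checksum, then sum every byte of the image */
	unsigned char zero = 0, sum = 0, b;
	fseek(f, CHECKSUM_OFFSET, SEEK_SET);
	fwrite(&zero, 1, 1, f);
	fseek(f, 0, SEEK_SET);
	for (long i = 0; i < size && fread(&b, 1, 1, f) == 1; i++)
		sum += b;

	/* store the byte that brings the total back to zero mod 256 */
	unsigned char csum = (unsigned char)(0x100 - sum);
	fseek(f, CHECKSUM_OFFSET, SEEK_SET);
	fwrite(&csum, 1, 1, f);
	fclose(f);
	printf("patched 0x%04x at 0x%lx, new checksum 0x%02x\n", val, off, csum);
	return 0;
}
</pre><div>For the 4-byte PCI ID at 0x1b8 you would run it twice: once for the 16-bit vendor ID there, and once for the 16-bit device ID two bytes later.</div>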
<div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">2)<br>
> I had to add my SPI chip driver for EN25QH32=0x7016 to CB<br>
Why was it needed? My G505S also has this EN25QH32 chip and I never<br>
had to add any SPI chip drivers - this chip always "just worked" with<br>
coreboot. If this is indeed required (e.g. for successful graphics<br>
initialization), where can I get this SPI chip driver and how do I add it?<br>
</blockquote><div><br></div><div>This is not required for graphics, but hibernation support requires that CB can write data to the SPI chip. You can see whether SPI writes succeed or not in the CB console log. I have multiple G505S boards - each with a different SPI chip - and my EON chip ID was not listed in CB's SPI eon.c, so I added an entry along the lines of the sketch below.<br></div>
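<div><br></div><div>The change amounts to one more entry in the flash table in src/drivers/spi/eon.c. The sketch below is an illustration rather than the literal diff: the struct fields are reproduced from memory of the 4.8-era driver, and the geometry values are my assumption for a 4 MB part with 4 KB sectors, so check both against your tree and the EN25QH32 datasheet.</div><pre>
/* Sketch of a flash table entry for src/drivers/spi/eon.c (coreboot 4.8
 * era). Field names from memory - verify against your tree; the geometry
 * is my assumption for a 4 MB chip with 4 KB sectors and 64 KB blocks.
 */
#define EON_ID_EN25QH32	0x7016

static const struct eon_spi_flash_params eon_spi_flash_table[] = {
	/* ... existing entries ... */
	{
		.id = EON_ID_EN25QH32,
		.page_size = 256,		/* 256-byte program pages */
		.pages_per_sector = 16,		/* 16 * 256 B = 4 KB sector */
		.sectors_per_block = 16,	/* 16 * 4 KB = 64 KB block */
		.nr_sectors = 1024,		/* 1024 * 4 KB = 4 MB total */
		.name = "EN25QH32",
	},
};
</pre><div>The 0x7016 value is the chip ID reported by the JEDEC probe, as in the quoted line above.</div>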
<div> <br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">3)<br>> I enabled pci 00:02.0 for the dGPU in devicetree.cb → according to the CB logs this causes pci resource allocation problems<br>
Do you still have any part of this log, for reference? I don't<br>
remember any resource allocation problems in level-8 CB logs</blockquote><div><br></div><div>Well, this is an interesting topic and I admit that I don't fully understand all the details, so my phrasing below might not be too "academic"... Moreover, please check my preconceptions, and note that pci 02.0 has to be enabled in CB to be able to fill the VFCT for the dGPU. <br></div><div>In the stock BIOS the iGPU gets its BAR address @ 0xd0000000. But the iGPU UMA size is 768M, so I assume the iGPU also occupies a further 2x256M of UMA memory starting from 0xb0000000. If you check the stock BIOS, it is located in the 0x9xxxxxxx range, whereas in CB - depending on the ranges provided by the pci resource allocator, the AGESA stipulations and the UMA size - it normally starts near the end of the 0xb.... range. The problem is that in AGESA the top of low memory (TOM) seems to always be 0xe0000000 (set by the BSP through an MSR register), and CB uses this to calculate the relative offsets. In a G505S without a dGPU, the PCI address BAR for the iGPU is set @ 0xe0000000, and since the CB UMA size is 512MB, there is room for the 512 MB UMA (below 4GB) starting from 0xc... with the iGPU BAR from 0xe...</div><div>When I enabled the dGPU in CB (on another board), the pci resource allocator assigned pci address 0xd... to the iGPU, which overlaps the UMA memory (512 MB from 0xc...). I got a black screen (and a kernel panic). I "managed" to solve this by decreasing the UMA size to 256M and also modifying UMA_base to start 256M "lower". This way the iGPU UMA memory started from 0xc.. (256MB), the iGPU pci address was @ 0xd.. and the dGPU pci address was @ 0xe.. Enabling the dGPU, with its own 2GB of RAM, should potentially offset the performance cost of the smaller iGPU UMA size.<br></div><div>In a recent modification I changed this method: instead of setting the UMA_base address to start lower, I modified CB so that, when the BSP TOM address is "announced" to the other CPUs, I write 0xd0000000 instead of AGESA's 0xe0000000 to the MSR register (a sketch of the idea follows below). <br></div><div><br></div><div>There could certainly be better ways to modify the CB code to consider UMA_size and additional PCI resources and still allocate these memory resources below 4GB.</div>
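<div><br></div><div>To illustrate the idea (this is not my literal patch): in setup_bsp_ramtop() in src/cpu/amd/mtrr/amd_mtrr.c, the announced top of low memory can be pulled down by 256M via the AMD TOP_MEM MSR. The names below match my 4.8-era tree as far as I remember, so verify them against yours.</div><pre>
/* Illustration only - the spirit of my amd_mtrr.c change, not the
 * literal diff. Lower the AMD TOP_MEM MSR (0xC001001A) by 256 MiB so
 * the 0xd0000000 window is freed for the iGPU BAR and the dGPU BAR
 * can live at 0xe0000000.
 */
#include <cpu/x86/msr.h>
#include <cpu/amd/mtrr.h>	/* TOP_MEM */

static void lower_tom_by_256m(void)
{
	msr_t topmem = rdmsr(TOP_MEM);

	/* AGESA leaves TOM at 0xe0000000 on the G505S */
	if (topmem.lo == 0xe0000000)
		topmem.lo -= 256 << 20;	/* -> 0xd0000000 */

	wrmsr(TOP_MEM, topmem);
}
</pre><div>Note that this only moves the announced top of low memory; the UMA base and the resource allocator still have to agree with the new value, as described above.</div>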
<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">4)<br>
> I modified pci_device.c, function pci_dev_init, to only enable pci_rom_probe and pci_rom_load for the dGPU pci device (no OpROM exec needed for PCI_CLASS_DISPLAY_OTHER)<br>
Why is OpROM exec not needed for the dGPU device? And what would happen if<br>
I executed it there? (would it break things, or make no<br>
difference)<br></blockquote><div><br></div><div>My experience shows that dGPU OpROM execution by itself didn't properly initialize the dGPU; the kernel still complained. I had interim attempts where CB or SeaBIOS executed the dGPU OpROM, without any noticeable init results at those times. The important thing, though, is to prevent SB from creating a boot entry for the executable OpROM, because this always freezes the laptop at the boot list (without any means to debug what has happened). I solved that by modifying SB to run the dGPU OpROM via the vgarom_setup function instead of the optionrom_setup function. The CB-side class check is sketched below.</div><div><br></div><div>Remark: you can disable OpROM execution the easy way by changing the 4th byte of the OpROM from 0xe9 to 0xcb --> a far return instead of the initial jump to the init code.</div>
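<div><br></div><div>On the CB side, the pci_device.c change from point 4 boils down to a class check. This is a simplified sketch from memory of the 4.8-era pci_dev_init(), not the literal diff; the real function has additional config-option guards around the OpROM handling.</div><pre>
/* Sketch of my pci_dev_init() change in src/device/pci_device.c:
 * for the dGPU (PCI_CLASS_DISPLAY_OTHER) the OpROM is only probed and
 * loaded into RAM - where the VFCT code later finds it - while the
 * real-mode OpROM execution is skipped.
 */
#include <device/device.h>
#include <device/pci.h>
#include <device/pci_ids.h>
#include <device/pci_rom.h>

void pci_dev_init(struct device *dev)
{
	struct rom_header *rom, *ram;

	rom = pci_rom_probe(dev);
	if (rom == NULL)
		return;

	ram = pci_rom_load(dev, rom);
	if (ram == NULL)
		return;

	/* dGPU: having the ROM in RAM is enough, no OpROM exec needed */
	if ((dev->class >> 8) == PCI_CLASS_DISPLAY_OTHER)
		return;

	run_bios(dev, (unsigned long)ram);
}
</pre>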
<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">5)<br>
> I modified pci_rom.c, so the VBIOS of the dGPU is copied to 0xd0000 from CBFS (..check_initialized..)<br>
Why was this specific 0xd0000 address selected? Or was it<br>
automatically chosen by coreboot when it wanted to load this OpROM for the<br>
dGPU?<br></blockquote><div><br></div><div>We have two pci vga cards, and by design the shadow ROM for legacy VGA starts at 0xc0000 and is 64K long; 0xd0000 is the non-VGA pci option ROM shadow area. Interestingly, the linux kernel allocates 132K starting from 0xc0000 for the video shadow, and this could be one reason why "normal" OpROM initialization for vga card1 + vga card2 doesn't work: vga card 1 claims all 132K for itself in the linux kernel, so the second vga card cannot use this overlapping range for its shadowed ROM. At least this is how I understood what I found when searching for this; it was mentioned in connection with a linux 3.16(?) kernel issue.</div><div>Also note that SB aligns OpROMs to 2048-byte (0x800) boundaries, so when I "played" with OpROM init both in CB and in SB, I transparently used 4096-byte (0x1000) alignment in SB. As I remember, OpROMs can only execute if their addresses are aligned to at least 0x800... The pci_rom.c side of my change is sketched below.<br></div>
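<div><br></div><div>The pci_rom.c side from point 5 then looks roughly like this - again an illustration from memory of the 4.8-era code, not my literal diff. check_initialized (mentioned in my original mail) simply guarded against doing the copy twice, and load_dgpu_rom is a hypothetical name for the extracted step.</div><pre>
/* Sketch of the pci_rom.c idea: the dGPU VBIOS taken from CBFS is
 * copied to the non-VGA OpROM shadow area at 0xd0000
 * (PCI_RAM_IMAGE_START), where pci_rom_write_acpi_tables later reads
 * it to fill the VFCT.
 */
#include <string.h>
#include <device/pci_rom.h>	/* PCI_RAM_IMAGE_START = 0xd0000 */

static void *pci_ram_image_start = (void *)PCI_RAM_IMAGE_START;

static struct rom_header *load_dgpu_rom(struct rom_header *rom_header)
{
	/* the ROM header's size byte at offset 2 counts 512-byte units */
	unsigned int rom_size = rom_header->size * 512;
	void *dest = pci_ram_image_start;

	memcpy(dest, rom_header, rom_size);
	pci_ram_image_start += rom_size;	/* next non-VGA ROM follows */

	return dest;
}
</pre>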
<div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">6)<br>
> and pci_rom_write_acpi_tables only runs for PCI_CLASS_DEVICE_OTHER → dGPU (the ACPI VFCT fill is done from the previously prepared 0xd0000=PCI_RAM_IMAGE_START address)<br>
Why should this be done for the dGPU only? (not for the APU iGPU or any other devices)<br>
<br></blockquote><div>So far I haven't succeeded in modifying the CB ACPI VFCT creator function to provide VFCT tables that contain both VBIOSes (OpROMs) in a way that the kernel radeon or amdgpu driver can use. The problem is that, I believe, one main ACPI VFCT table needs to be created, with the two VBIOS images at different offsets (ref: <a href="https://www.phoronix.com/forums/forum/linux-graphics-x-org-drivers/open-source-amd-linux/926087-more-radeon-amdgpu-fixes-line-up-for-linux-4-10/page2">https://www.phoronix.com/forums/forum/linux-graphics-x-org-drivers/open-source-amd-linux/926087-more-radeon-amdgpu-fixes-line-up-for-linux-4-10/page2</a>). It is also worth checking the linux kernel sources for the ACPI VFCT header definitions (see the sketch below).</div><div>CB's ACPI VFCT code was not designed to provide that: it normally runs once, when a pci_class = PCI_CLASS_DEVICE_VGA device is found. For simplicity, I modified this to run only for PCI_CLASS_DEVICE_OTHER, and it worked, so I haven't gone further. CB initializes the iGPU OpROM anyway..</div></div><div class="gmail_quote">The dGPU is only initialized (KMS) when the linux kernel loads the radeon/amdgpu driver; until then the dGPU is not initialized.<br></div>
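<div class="gmail_quote"><br><div>For reference, this is the VFCT layout as I understand it from the kernel's radeon_bios.c (the field names are reproduced from memory, so double-check them against your kernel tree):</div><pre>
/* VFCT layout as used by the radeon/amdgpu drivers - from memory of
 * drivers/gpu/drm/radeon/radeon_bios.c; verify before relying on it.
 * A table carrying both VBIOSes would chain two GOP_VBIOS_CONTENT
 * records: the first at VBIOSImageOffset, the next right after the
 * previous record's VbiosContent (header size + ImageLength).
 */
#include <stdint.h>

struct UEFI_ACPI_VFCT {
	uint8_t  SbHeader[36];		/* standard ACPI table header */
	uint8_t  TableUUID[16];
	uint32_t VBIOSImageOffset;	/* offset of the first image */
	uint32_t Lib1ImageOffset;
	uint32_t Reserved[4];
};

struct VFCT_IMAGE_HEADER {
	uint32_t PCIBus;
	uint32_t PCIDevice;
	uint32_t PCIFunction;
	uint16_t VendorID;
	uint16_t DeviceID;
	uint16_t SSVID;
	uint16_t SSID;
	uint32_t Revision;
	uint32_t ImageLength;		/* size of the VbiosContent below */
};

struct GOP_VBIOS_CONTENT {
	struct VFCT_IMAGE_HEADER VbiosHeader;
	uint8_t  VbiosContent[1];	/* the raw VBIOS image */
};
</pre><div>If I read the linked Phoronix thread right, kernels from around 4.10 iterate over these image records and match PCIBus/PCIDevice/PCIFunction against the probing GPU, which is what should let a single VFCT serve both VBIOSes.</div></div>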
<div class="gmail_quote"><br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">7) I would like to refine your "hacks" and commit them to coreboot,<br>
while giving a full credit to you of course. But, if I would start<br>
doing it from scratch according to your instructions, there is always<br>
a chance that I'd misunderstand or forget something (or you forgot to<br>
mention some small but important technicality) and as a result my<br>
attempt to reproduce your success could be problematic. It would be<br>
ideal if you could share:<br>
A*) your successful coreboot.rom (I'd like to check its PCI state out<br>
of curiosity)<br>
B*) your coreboot's .config file for the reference purposes<br>
C*) upload your whole coreboot directory to some convenient place,<br>
either to GitHub or just archive it as .tar.gz (maybe as a multipart<br>
archive if it ends up too big) and upload it to some hosting website;<br>
or maybe even create a .torrent out of it and host it temporarily -<br>
whatever is most convenient to you.</blockquote><div><br></div><div>OK, I'll see what I can do to submit logs, config files and maybe "cleaned up" patches of my current working setup.</div><div><br></div><div>BR,</div><div>HJK<br></div><div><br></div><div> <br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Hopefully you could share this stuff and I will be able to reproduce<br>
your results, initially "AS-IS" - but then I will try my best to<br>
refine it to acceptable code quality (so that it could be<br>
officially accepted to coreboot) while checking from time to time if<br>
it is still working after my gradual changes. Hope to hear from you<br>
soon.<br>
<br>
Best regards,<br>
Mike Banon<br>
On Sat, Nov 3, 2018 at 8:42 PM Hans Jürgen Kitter<br>
<<a href="mailto:hansjurgenj789@gmail.com" target="_blank">hansjurgenj789@gmail.com</a>> wrote:<br>
><br>
> Hi All,<br>
><br>
> I thought I'd share with the G505S+Coreboot (+QubesOS) users that I finally managed to enable and use the dGPUs on the G505S boards. When mentioning boards, I mean that both variants, with their respective VBIOSes - dGPU=1002,6663 (ATI HD 8570M, Sun-Pro) and dGPU=1002,6665 (ATI R5 M230, Jet-Pro) - are working under Ubuntu Linux or Qubes 4.0 (no Windows testing was done or planned).<br>
> DRI_PRIME=1 seems to work; I use the radeon driver for the APU GPU (A10-5750m) and amdgpu for the dGPU. I'm currently investigating how to get the most out of this setup in Qubes (HW-accelerated 2D in AppVMs).<br>
><br>
> Problem statement: the dGPU was not initialized correctly and was not enabled by the radeon or amdgpu kernel module; the VFCT header was reported corrupt/truncated (effectively it was not corrupt, but only the VFCT for the APU/iGPU was present). All my attempts to init the dGPU device through OpROM initialization failed (radeon 0000:04:00.0: Invalid PCI ROM header signature: expecting 0xaa55, got 0x0000). This was true whether using Coreboot+GRUB or Coreboot+Seabios, regardless of kernel version 4.x.<br>
><br>
> Solution in short: modify Coreboot so that the VFCT table is created only for the dGPU, so it can be used by either the radeon or amdgpu KMS driver.<br>
><br>
> Solution in more detail:<br>
><br>
> CB 4.8 latest, with SB latest 1.11.2 as payload<br>
> I modified the IO address & the PCI device ID in the APU GPU VBIOS file (even though the IO address change to 0x3000 might not be required).<br>
> I modified the IO address in the dGPU VBIOS files to the actual CB/linux-kernel-provided (0x1000) address<br>
> both prepared VBIOSes were put into CBFS<br>
> Coreboot config: std. G505S build, but both "run oprom" & "load oprom" were enabled (native mode); also, display was set to FB, 1024x768 16.8M color 8:8:8, legacy VGA → this provides console output even if e.g. GRUB is the payload.<br>
> main payload Seabios (compiled as elf separately)<br>
> I had to add my SPI chip driver for EN25QH32=0x7016 to CB<br>
> I used version 0600111F of the AMD Fam 15h microcode (even though this was later recalled by AMD to the n-1 version 06001119). Nevertheless, the Spectre V2 RSB and IBPB mitigations are thus enabled.<br>
> I enabled pci 00:02.0 for the dGPU in devicetree.cb → according to the CB logs this causes pci resource allocation problems, because the 256M address window of the dGPU conflicts with the APU GPU VRAM UMA memory window (originally 512M). Solution (not really professional, but I gave up on understanding the whole AGESA and CB PCI allocation mechanism): I decreased the UMA size in buildopts.c to 256M (0x1000) and patched amd_mtrr.c, function setup_bsp_ramtop, to bring TOM down by 256M. So now the APU GPU has a 256M PCI address window @ 0xd0000000 and the dGPU also has 256M from 0xe0000000, without an eventual resource conflict (there's an initial allocation warning for pci:00:01.0 in the linux log though).<br>
> I modified pci_device.c, function pci_dev_init, to only enable pci_rom_probe and pci_rom_load for the dGPU pci device (no OpROM exec is needed for PCI_CLASS_DISPLAY_OTHER).<br>
> I modified pci_rom.c,<br>
><br>
> so the VBIOS of the dGPU is copied to 0xd0000 from CBFS (..check_initialized..)<br>
> and pci_rom_write_acpi_tables only runs for PCI_CLASS_DEVICE_OTHER → dGPU (the ACPI VFCT fill is done from the previously prepared 0xd0000=PCI_RAM_IMAGE_START address)<br>
><br>
> Remark 1: there could be one ACPI VFCT table containing both VBIOSes, but the APU GPU VBIOS is fully working via OpROM exec alone.<br>
> Remark 2: in the stock v3 BIOS, additional ACPI tables are present (SSDT, DSDT) which contain VGA-related methods and descriptions; therefore the VFCT table itself (in EFI mode) is a lot smaller, ca. 20K (vs. 65K). These additional ACPI extensions allow e.g. vga_switcheroo to work under Ubuntu on the stock BIOS. With CB, currently only offloading (DRI...) seems to work. And as I checked, I will need to understand the whole ACPI concept before I can migrate the relevant ACPI tables from the stock BIOS to CB for the switching to work. Also, it seems that TurboCore (PowerNow?) is not currently enabled entirely in CB --> it is also implemented via ACPI tables in the stock BIOS.<br>
> Well, I’m not a coding expert (just a G505S enthusiast); that is the reason I didn’t include patches. My goal was to describe a working solution, so that someone with proper coding skills and the ability to submit official patches for CB can get this committed.<br>
><br>
> BR,<br>
> HJK<br>
> --<br>
> coreboot mailing list: <a href="mailto:coreboot@coreboot.org" target="_blank">coreboot@coreboot.org</a><br>
> <a href="https://mail.coreboot.org/mailman/listinfo/coreboot" rel="noreferrer" target="_blank">https://mail.coreboot.org/mailman/listinfo/coreboot</a><br>
</blockquote></div><div class="gmail_quote"><br></div></div></div></div></div>