Hello,
I have Win7 working well with mahogany_fam10 if I use a PCI video card. However, Device manager complains that it is unable to find enough free resources for the UMA graphics controller. Running an AMI legacy BIOS shows the problem can be solved by enabling the PCI command register memory and I/O enable bits for only the graphics card that you are using. Is there a way to make coreboot do this, and if not, any suggestions on where best to implement such a feature?
Thanks, Scott
On Thu, Oct 21, 2010 at 4:18 PM, Scott Duplichan scott@notabs.org wrote:
Scott,
I think that you can define the device you want disabled in devicetree.cb and set it to off.
Marc
-----Original Message-----
From: coreboot-bounces+scott=notabs.org@coreboot.org [mailto:coreboot-bounces+scott=notabs.org@coreboot.org] On Behalf Of Marc Jones
Sent: Thursday, October 21, 2010 07:47 PM
To: Scott Duplichan
Cc: coreboot@coreboot.org
Subject: Re: [coreboot] how to prevent legacy resource conflict with multiple VGA cards
Hello Marc,
Thanks for the suggestion. What I am really looking for is the special handling that a commercial BIOS does for legacy video devices. Say you have a typical desktop UMA board. If you add a PCI video card, Windows resource manager will not report any resource conflicts. The same is not true for coreboot+seabios. When you add the PCI video card, Windows device manager reports a resource conflict.
One detail I forgot to mention in the original email: what is the exact resource conflict? Coreboot does properly assign unique BAR values for both video devices. However, the PCI class code tells the OS that in addition to the BARs, both video devices decode memory range A0000-BFFFF, I/O 3B0-3BB, and I/O 3C0-3DF. So there really is a conflict, and using the PCI command register to turn off memory and I/O decoding for all but one video card solves the problem.
Thanks, Scott
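The fix Scott describes can be sketched in plain C. This is a toy model with a fake device array, not coreboot's actual code (real code would walk struct device and use pci_write_config16()); it only illustrates clearing the command-register memory and I/O enable bits on every VGA-class device except the one left as primary:

```c
#include <assert.h>
#include <stdint.h>

/* PCI command register bits, per the PCI spec */
#define PCI_COMMAND_IO   0x1
#define PCI_COMMAND_MEM  0x2

#define PCI_CLASS_DISPLAY_VGA 0x0300

/* Simplified stand-in for a PCI device; real coreboot code would use
 * struct device and pci_read_config16()/pci_write_config16(). */
struct fake_dev {
	uint16_t class_code;   /* upper 16 bits of the class code dword */
	uint16_t command;      /* PCI command register */
	int is_primary_vga;    /* assumption: chosen by some other policy */
};

/* Leave legacy VGA decode (A0000-BFFFF, 3B0-3BB, 3C0-3DF) enabled on
 * only the primary VGA device by clearing mem/IO enables elsewhere. */
static void fixup_vga_decode(struct fake_dev *devs, int n)
{
	for (int i = 0; i < n; i++) {
		if (devs[i].class_code != PCI_CLASS_DISPLAY_VGA)
			continue;
		if (!devs[i].is_primary_vga)
			devs[i].command &= ~(PCI_COMMAND_IO | PCI_COMMAND_MEM);
	}
}
```

Non-VGA devices are untouched; only secondary VGA devices stop claiming the fixed legacy ranges, which is what makes the OS conflict report go away.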
On Thu, Oct 21, 2010 at 7:08 PM, Scott Duplichan scott@notabs.org wrote:
The way to disable the command register bits should be to set the device to off.
I think what you really want is the legacy video bits in the device and bridge: VGA palette snoop and VGA enable in the command register. As far as I know, this isn't something we have had to handle previously. I think the right way is to add a resource (like IORESOURCE), but I don't have any implementation ideas beyond that (like how to mark a specific one as the VGA resource). Maybe Myles or Stefan can comment?
Marc
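For reference, the bits Marc mentions have spec-defined positions: VGA Palette Snoop is bit 5 of the PCI command register, and VGA Enable is bit 3 of a PCI-to-PCI bridge's Bridge Control register (offset 0x3E), which controls whether the bridge forwards the legacy VGA ranges. A minimal sketch of the routing decision (a pure helper, not coreboot's implementation):

```c
#include <assert.h>
#include <stdint.h>

/* PCI command register, bit 5: VGA palette snoop */
#define PCI_COMMAND_VGA_PALETTE 0x20
/* PCI-to-PCI bridge control register (header offset 0x3E), bit 3:
 * VGA enable. When set, the bridge forwards legacy VGA memory and
 * I/O ranges to its secondary bus. */
#define PCI_BRIDGE_CTL_VGA 0x08

/* Route legacy VGA down exactly one bridge; a stand-in for what a
 * pci_write_config16() against real hardware would accomplish. */
static uint16_t route_vga(uint16_t bridge_ctl, int forward_vga)
{
	if (forward_vga)
		return bridge_ctl | PCI_BRIDGE_CTL_VGA;
	return bridge_ctl & ~(uint16_t)PCI_BRIDGE_CTL_VGA;
}
```

Note that, as Scott points out later in the thread, this bit operates per bus: it cannot distinguish between two VGA cards behind the same bridge.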
The way to disable the command register bits should be to set the device to off.
I thought there was a flag that was set that took care of that. I don't think you need to disable the device.
I think what you really want is the legacy video bits in the device and bridge: VGA palette snoop and VGA enable in the command register.
I think that's right.
I'm not at my machine right now, but there used to be some options that controlled which VGA device was set up. CONFIG_CONSOLE_VGA_MULTI ? Some other config option that had to do with first VGA device found? Maybe it's in src/device/device.c.
Anyway, you had to enable multiple cards and tell it to use the first one (or the last one sometimes.)
Hopefully that helps.
Thanks, Myles
The VGA enable bit in the bridge control register disables VGA for the entire bus. What if the bus has two VGA cards, and one is the one you want to use?
I found and tried CONFIG_CONSOLE_VGA_MULTI. I see a couple of problems with it. First, it seems to apply only to AMD K8 and family 10h processors (in my case this is OK). A bigger problem is that it seems to control VGA on an HT link by HT link basis. One problem there is that an HT link could have multiple graphics cards. The CONFIG_CONSOLE_VGA_MULTI doesn't work for me for a different reason. The graphics that needs to be disabled is not on an HT link at all, it is the internal UMA graphics.
Thanks, Scott
Internal to where? Is it internal to the processor?
I think in the past it has been enough to set the VGA bits on the bridges. I guess there could have been a problem where there were two VGA devices on the same bus.
So you want to implement the logic to disable UMA if there's an external VGA device added, right? It seems like the place to add that is in the code for the device that implements it. Do you want to disable allocation of the UMA area if there's an external card too?
Thanks, Myles
Hello Myles,
My mistake with the word 'internal'; I really meant UMA.
I should explain the sequence of events that led to my question. I have Win7 running well on this AMD RS780/SB700 board with the exception of a couple of problems. One problem is that the in-box ATI driver for the UMA graphics is unstable. So to get Win7 installed, I must add a PCI video card. When Win7 is installed with the PCI video card present, device manager reports that the UMA graphics device cannot find enough free resources.
When a reference BIOS is used, the device manager warnings are not present because the BIOS disables the UMA. One reference BIOS disables UMA only by clearing the PCI command register bits for memory and I/O decode. A different reference BIOS skips UMA initialization altogether. I think for now, the command register method might be easiest for coreboot.
The problem with the existing coreboot CONFIG_CONSOLE_VGA_MULTI feature is that it can only disable graphics devices that are on an HT link. In the case of UMA, non-AMD or non-HT systems, it doesn't work.
Thanks, Scott
One problem is that the in-box ATI driver for the UMA graphics is unstable.
Even with the factory BIOS?
When a reference BIOS is used, the device manager warnings are not present because the BIOS disables the UMA. One reference BIOS disables UMA only by clearing the PCI command register bits for memory and I/O decode. A different reference BIOS skips UMA initialization altogether. I think for now, the command register method might be easiest for coreboot.
OK
I think the easiest thing to do would be to use the same mechanism to know whether or not to disable UMA. If there is another VGA card added, then the VGA bits will be set on bridges that aren't associated with UMA. The UMA device could check the tree for these bits and disable itself, or some more code could be added to generically do the same thing.
Thanks, Myles
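Myles's suggestion — have the UMA device scan the tree and disable itself when some bridge already routes legacy VGA to a plug-in card — might look roughly like the toy below. The list-based tree and names here are hypothetical; coreboot's real walk would iterate struct device and read bridge control via pci_read_config16():

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* PCI-to-PCI bridge control register, bit 3: VGA enable */
#define PCI_BRIDGE_CTL_VGA 0x08

/* Toy device tree node; a stand-in for coreboot's struct device. */
struct node {
	int is_bridge;
	uint16_t bridge_ctl;   /* as read from config offset 0x3E */
	struct node *next;
};

/* Return nonzero if any bridge in the tree forwards legacy VGA,
 * i.e. an external VGA card has been selected as primary. The UMA
 * device's enable hook could call this and disable itself. */
static int external_vga_present(struct node *head)
{
	for (struct node *n = head; n; n = n->next)
		if (n->is_bridge && (n->bridge_ctl & PCI_BRIDGE_CTL_VGA))
			return 1;
	return 0;
}
```

The appeal of this approach is that it is generic: it needs no HT-link-specific logic, so it would work for UMA and for non-AMD or non-HT systems alike.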
The reference BIOS works fine, so this is a coreboot problem. I have added missing family 10h code to the coreboot RS780 GFX initialization, and enabled HT3 for the link. I have added missing NP attributes to the frame buffer mapping. But the driver still fails after a few seconds. There is more debugging to do here. This is the most serious remaining Win7 problem I know of for RS780/SB700 boards.
OK, thanks. I will eventually prototype something for review.
On Thu, Oct 28, 2010 at 11:48 AM, Scott Duplichan scott@notabs.org wrote:
This sounds like a much higher priority problem than the two video cards problem. As a way to work-around it, maybe you should just use Marc's suggestion and disable the UMA device in the devicetree.
I wonder what the driver expects. Is it looking for the memory allocation at fixed locations? Do you have a register dump for the ATI device with coreboot and the factory BIOS before an OS takes over?
Thanks, Myles
Hello Myles,
Yes, this is certainly a Win7 show-stopper for AMD RS780 (or RS880) systems. The Win7 generic VGA driver works. But there is a catch: I know of no way to use this driver for Win7 setup. Once setup is complete, safe mode can be used to enable the generic VGA driver. But as it stands now, adding a video card is the only way I am able to get through Win7 setup.
]As a way to work-around it, maybe you should just use ]Marc's suggestion and disable the UMA device in the devicetree.
I don't really need a work-around at all. Even though there is a conflict, the PCI graphics card works OK.
]I wonder what the driver expects. Is it looking for the memory ]allocation at fixed locations? Do you have a register dump for the ]ATI device with coreboot and the factory BIOS before an OS takes over?
I was thinking of comparing register dumps. I need to write some dump code; ATI has several indirect spaces. A different approach I have used is to figure out which GFX initialization steps are non-essential for basic operation by stripping down the reference BIOS. That way I can limit the amount of coreboot code to check. I don't think a signed Win7 driver could be looking at anything at a fixed location. The frame buffer can be above or below 4GB. I noticed the current cimx code leaves the debug BAR enabled, so I did the same. I fixed a problem where coreboot leaves a temporary PCIe BAR enabled. There must be a pretty basic problem remaining because the driver fails after a few seconds. Win7 reports that a kernel mode thread has spent too long in the ATI driver code.
Myles Watson wrote:
The UMA device could check the tree for these bits and disable itself, or some more code could be added to generically do the same thing.
I think it's a very good idea to have this code be generic, since it can apply to all chipsets with UMA.
Scott Duplichan wrote:
There must be a pretty basic problem remaining because the driver fails after a few seconds. Win7 reports that a kernel mode thread has spent too long in the ATI driver code.
I guess that the watchdog is those few seconds. Does graphics work during that time?
Could you use windbg to get more details, in particular find out what the ATI driver was trying to do?
//Peter
-----Original Message-----
From: coreboot-bounces@coreboot.org [mailto:coreboot-bounces@coreboot.org] On Behalf Of Peter Stuge
Sent: Thursday, October 28, 2010 11:44 PM
To: coreboot@coreboot.org
Subject: Re: [coreboot] how to prevent legacy resource conflict with multiple VGA cards
The high resolution display from the ATI driver is often visible for a few seconds. But then the display appears to reset. Sometimes the BSOD is "the device driver got stuck in an infinite loop". Other times it is Stop 116 BSOD.
I just now tried several different video option ROMs. They all behave the same.
Earlier this evening I tried different memory configurations. With 4GB (2+2), I have an ACPI related BSOD to fix. With 3GB (2+1), a memory test fails immediately when it gets near the end of the first GB. With a single 1GB DIMM or a single 2GB DIMM, the problem is the same.
]Could you use windbg to get more details, in particular find out what ]the ATI driver was trying to do?
For the infinite loop case, I captured a stack trace:
nt!DbgBreakPointWithStatus
nt!KiBugCheckDebugBreak+0x14
nt!KeBugCheck2+0x7c8
nt!KeBugCheckEx+0x104
dxgkrnl!TdrTimedOperationBugcheckOnTimeout+0x3e
dxgkrnl!TdrTimedOperationDelay+0xbb
atikmdag!OsServices::Wait+0x92
atikmdag!MCIL_WaitFor+0x50
atikmdag!Cail_MCILWaitFor+0x72
atikmdag!Cail_R600_WaitForIdle+0x52
atikmdag!Cail_R600_WaitForIdle+0x172
atikmdag!Cail_ExecuteAsicSetupTable+0x98
atikmdag!CAIL_ASICSetup+0x7b
atikmdag!CAIL_VPURecoveryBegin+0x20e
atikmdag!CAILVPURecoveryBegin+0x19
atikmdag!AsicInit::ResetTheAsic+0x4a
atikmdag!DISPATCHER::HeavyWeightReset+0x65
atikmdag!DISPATCHER::ResetFromTimeoutWorker+0x16b
atikmdag!Dispatch_ResetFromTimeout+0x67
dxgkrnl!DXGADAPTER::DdiResetFromTimeout+0x4c
dxgkrnl!DXGADAPTER::PrepareToReset+0xf6
dxgkrnl!TdrIsRecoveryRequired+0x279
dxgmms1!VidSchiReportHwHang+0x4ce
dxgmms1!VidSchiCheckHwProgress+0xee
dxgmms1!VidSchiWaitForSchedulerEvents+0x319
dxgmms1!VidSchiScheduleCommandToRun+0x3d9
dxgmms1!VidSchiWorkerThread+0x196
nt!PspSystemThreadStartup+0x1a9
nt!KxStartSystemThread+0x16
Looks like it had a hang early and some recovery attempts were unsuccessful.
To break into the initial failure might take some experimentation. I did do the sanity check of reading a few I/O ports, and they do not come back FF. The frame buffer looks OK. I don't even know what all the BARs are for.
Thanks, Scott
Scott Duplichan wrote:
Nice!
Looks like it had a hang early and some recovery attempts were unsuccessful.
I think it might be useful to talk to the radeon dri (Linux kernel graphics) people about this, they might be able to provide some hints about why the asic would get stuck and not even be resettable.
//Peter
On 29.10.2010 06:44, Peter Stuge wrote:
A few shots in the dark; maybe they help. Is it possible that the failure reason is totally unrelated to graphics? Maybe the ATI driver uses a timer which is set up differently under coreboot? Or it is waiting for a hardcoded interrupt or some ACPI event which only exists in the reference BIOS? Side note: RS690 ACPI PCI config space accesses were broken in coreboot because they assumed MMCONF, and MMCONF was disabled before starting the payload, last time I checked. Maybe your issue is related?
Regards, Carl-Daniel
-----Original Message-----
From: coreboot-bounces@coreboot.org [mailto:coreboot-bounces@coreboot.org] On Behalf Of Carl-Daniel Hailfinger
Sent: Friday, October 29, 2010 12:17 AM
To: coreboot@coreboot.org
Subject: Re: [coreboot] how to prevent legacy resource conflict with multiple VGA cards
Hello Carl-Daniel,
Those are good ideas. I already wondered if maybe the timeout was for no other reason than a timer running too fast. I can boot the PCI card again and look for suspicious timer related behavior.
The reference BIOS does have some ASL I have not studied closely. One module mentions "code for WMI Overclock function". I assume that works with some overclocking utility.
Thanks, Scott
Scott Duplichan wrote:
"code for WMI Overclock function". I assume that works with some overclocking utility.
WMI might be Windows Management Interface?
//Peter
-----Original Message-----
From: coreboot-bounces@coreboot.org [mailto:coreboot-bounces@coreboot.org] On Behalf Of Peter Stuge
Sent: Friday, October 29, 2010 12:41 AM
To: coreboot@coreboot.org
Subject: Re: [coreboot] how to prevent legacy resource conflict with multiple VGA cards
Yes it is.
Whew, finally solved this problem. 1600 x 1200 32-bit color is now working with the Win7 in-box ATI driver. The problem is that some important information is passed from coreboot to the ATI video driver, and some of it was incorrect. The information is passed by placing it in the last 512 bytes of the frame buffer, so it is easy to look at. The format of the information is in struct ATOM_INTEGRATED_SYSTEM_INFO_V2.
I should have thought of this when the slow HT link problem resulted in screen tearing. The driver tries to never let this happen. One of the items passed to the driver is HT link frequency and width. Hard-coded values were used for link frequency and width, and frequency was invalid. Apparently the driver skipped the bandwidth checks because of this.
I believe with this change, Win7 will be able to get through setup smoothly and most everything will be working. There are several more patches to submit before mahogany_fam10 will have all of the Win7 changes.
Thanks, Scott
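For illustration, the hand-off Scott describes might look like the sketch below. The struct here is a reduced, hypothetical stand-in for the real ATOM_INTEGRATED_SYSTEM_INFO_V2 (defined in AMD's atombios headers); the field names are assumptions. The part the thread establishes is that the table lives in the last 512 bytes of the UMA frame buffer, and that an invalid HT link frequency there made the driver skip its bandwidth checks:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Illustrative stand-in for ATOM_INTEGRATED_SYSTEM_INFO_V2; the real
 * struct has many more fields, and these names are assumptions. */
struct vga_info {
	uint32_t boot_up_engine_clock;  /* GPU engine clock */
	uint32_t ht_link_freq;          /* must be valid, or the driver
	                                 * skips its bandwidth checks */
	uint16_t ht_link_width;         /* lanes, e.g. 8 or 16 */
} __attribute__((packed));

/* Coreboot places the table in the last 512 bytes of the UMA frame
 * buffer, where the Windows ATI driver reads it back. */
static void place_vga_info(uint8_t *fb, size_t fb_size,
                           const struct vga_info *info)
{
	memcpy(fb + fb_size - 512, info, sizeof(*info));
}
```

Because the table sits in ordinary frame-buffer memory, it is easy to inspect from a debugger, which is how Scott compared it against the reference BIOS.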
On 30.10.2010 07:07, Scott Duplichan wrote:
Awesome detective work, Scott!
If you ever have the time, please write down all the stuff you had to go through to find and fix the bugs you encountered, or at least talk about it and record it with your MP3 player so someone else can transcribe it. Transcribing may be a task suitable for Google Code-in 2010 because it adds documentation about the project.
Regards, Carl-Daniel
-----Original Message-----
From: Carl-Daniel Hailfinger [mailto:c-d.hailfinger.devel.2006@gmx.net]
Sent: Saturday, October 30, 2010 07:40 PM
To: Scott Duplichan
Cc: coreboot@coreboot.org
Subject: Re: [coreboot] how to prevent legacy resource conflict with multiple VGA cards
Well darn, still not there yet. Last night, I booted in a low resolution mode and then successfully switched into 32-bit 1600x1200. Now I find that this mode from the start is not reliable. Also, Microsoft's dxdiag (DirectX diagnostic) causes a crash. The ATI Atom table is now identical to that of the reference BIOS. Debug continues...
Thanks, Scott
A minor correction to the AMD family 10h name string function
Signed-off-by: Scott Duplichan scott@notabs.org
Index: src/cpu/amd/model_10xxx/processor_name.c
===================================================================
--- src/cpu/amd/model_10xxx/processor_name.c	(revision 6000)
+++ src/cpu/amd/model_10xxx/processor_name.c	(working copy)
@@ -204,7 +204,7 @@
 	memset(program_string, 0, sizeof(program_string));
 
 	if (!Model) {
-		processor_name_string = Pg ? sample : thermal;
+		processor_name_string = Pg ? thermal : sample;
 		goto done;
 	}
Scott Duplichan wrote:
A minor correction to the AMD family 10h name string function
Signed-off-by: Scott Duplichan scott@notabs.org
Acked-by: Peter Stuge peter@stuge.se