Aaron Durbin (adurbin@google.com) just uploaded a new patch set to gerrit, which you can find at http://review.coreboot.org/2892
-gerrit
commit 7e0e23ff785400cff7783ce87a4e806eacd674f1
Author: Aaron Durbin <adurbin@chromium.org>
Date:   Tue Mar 26 14:10:34 2013 -0500
mtrr: honor IORESOURCE_WRCOMB
All resources that set the IORESOURCE_WRCOMB attribute and are also marked IORESOURCE_PREFETCH will have an MTRR of the write-combining cacheable type set up for them. The only resources on x86 that can be set to write-combining are prefetchable ones.
Change-Id: Iba7452cff3677e07d7e263b79982a49c93be9c54
Signed-off-by: Aaron Durbin <adurbin@chromium.org>
---
 src/cpu/x86/mtrr/mtrr.c | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)
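The mask/match pair passed to memranges_add_resources() selects resources whose flags satisfy (flags & mask) == match, which is why the change ORs the match bits into the mask. Below is a minimal stand-alone sketch of that selection rule; the flag bit values are placeholders for illustration, not coreboot's real IORESOURCE_* encodings.

#include <stdio.h>

/* Illustrative flag bits only; the real IORESOURCE_* values live in
 * coreboot's include/device/resource.h. */
#define IORESOURCE_CACHEABLE (1 << 0)
#define IORESOURCE_PREFETCH  (1 << 1)
#define IORESOURCE_WRCOMB    (1 << 2)

/* Mirror of the selection rule: a resource is collected for a given
 * MTRR type when (flags & mask) == match. */
static int matches(unsigned long flags, unsigned long mask, unsigned long match)
{
	return (flags & mask) == match;
}

int main(void)
{
	unsigned long mask = IORESOURCE_CACHEABLE;
	unsigned long match = IORESOURCE_PREFETCH | IORESOURCE_WRCOMB;

	mask |= match;

	/* A prefetchable, write-combining resource qualifies... */
	printf("%d\n", matches(IORESOURCE_PREFETCH | IORESOURCE_WRCOMB,
			       mask, match));			/* prints 1 */
	/* ...but a merely prefetchable one does not. */
	printf("%d\n", matches(IORESOURCE_PREFETCH, mask, match));	/* prints 0 */
	return 0;
}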
diff --git a/src/cpu/x86/mtrr/mtrr.c b/src/cpu/x86/mtrr/mtrr.c
index 8bcdc6e..8bf69c5 100644
--- a/src/cpu/x86/mtrr/mtrr.c
+++ b/src/cpu/x86/mtrr/mtrr.c
@@ -142,10 +142,12 @@ static struct memranges *get_physical_address_space(void)
 	 * time remove unacheable regions from the cacheable ones. */
 	if (addr_space == NULL) {
 		struct range_entry *r;
-		const unsigned long mask = IORESOURCE_CACHEABLE;
+		unsigned long mask;
+		unsigned long match;
 
 		addr_space = &addr_space_storage;
 
+		mask = IORESOURCE_CACHEABLE;
 		/* Collect cacheable and uncacheable address ranges. The
 		 * uncacheable regions take precedence over the cacheable
 		 * regions. */
@@ -153,6 +155,14 @@ static struct memranges *get_physical_address_space(void)
 		memranges_add_resources(addr_space, mask, 0,
 					MTRR_TYPE_UNCACHEABLE);
 
+		/* Handle any write combining resources. Only prefetchable
+		 * resources with the IORESOURCE_WRCOMB flag are appropriate
+		 * for this MTRR type. */
+		match = IORESOURCE_PREFETCH | IORESOURCE_WRCOMB;
+		mask |= match;
+		memranges_add_resources(addr_space, mask, match,
+					MTRR_TYPE_WRCOMB);
+
 		/* The address space below 4GiB is special. It needs to be
 		 * covered entirly by range entries so that MTRR calculations
 		 * can be properly done for the full 32-bit address space.
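As a hypothetical illustration of how a driver would opt into the new behavior (not part of this patch; the resource index, base, and size are made up), a chipset's read_resources() hook could mark its fixed, prefetchable graphics aperture as write-combining so that get_physical_address_space() tags it MTRR_TYPE_WRCOMB:

#include <device/device.h>
#include <device/resource.h>

/* Hypothetical example only: reserve a 256 MiB aperture and flag it as
 * prefetchable and write-combining. Index and address are invented. */
static void example_read_resources(struct device *dev)
{
	struct resource *res;

	res = new_resource(dev, 0x10);
	res->base = 0xd0000000;
	res->size = 256UL << 20;
	res->flags = IORESOURCE_MEM | IORESOURCE_FIXED |
		     IORESOURCE_PREFETCH | IORESOURCE_WRCOMB |
		     IORESOURCE_STORED | IORESOURCE_ASSIGNED;
}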