Aaron Durbin (adurbin@google.com) just uploaded a new patch set to gerrit, which you can find at http://review.coreboot.org/2892
-gerrit
commit 563698e6a32661407df6d085780cfdaa4a0eff2c
Author: Aaron Durbin <adurbin@chromium.org>
Date:   Fri Mar 22 22:20:01 2013 -0500
mtrr: honor IORESOURCE_WRCOMB
All resources that set the IORESOURCE_WRCOMB attribute and are also marked IORESOURCE_PREFETCH will have an MTRR set up with the write-combining cacheable type. The only resources on x86 that can be set to write-combining are prefetchable ones.
Change-Id: Iba7452cff3677e07d7e263b79982a49c93be9c54
Signed-off-by: Aaron Durbin <adurbin@chromium.org>
---
 src/cpu/x86/mtrr/mtrr.c | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)
diff --git a/src/cpu/x86/mtrr/mtrr.c b/src/cpu/x86/mtrr/mtrr.c
index 6df659b..1f124a2 100644
--- a/src/cpu/x86/mtrr/mtrr.c
+++ b/src/cpu/x86/mtrr/mtrr.c
@@ -152,10 +152,12 @@ static struct memory_ranges *get_physical_address_space(void)
 	if (addr_space == NULL) {
 		struct memory_range *r;
 		uint32_t below4gbase;
-		const unsigned long mask = IORESOURCE_CACHEABLE;
+		unsigned long mask;
+		unsigned long match;
 
 		addr_space = &addr_space_storage;
 
+		mask = IORESOURCE_CACHEABLE;
 		/* Collect cacheable and uncacheable address ranges. The
 		 * uncacheable regions take precedence over the cacheable
 		 * regions. */
@@ -168,6 +170,14 @@ static struct memory_ranges *get_physical_address_space(void)
 		insert_memory_range(addr_space, RANGE_TO_PHYS_ADDR(below4gbase),
 				    4096, MTRR_TYPE_UNCACHEABLE);
 
+		/* Handle any write combining resources. Only prefetchable
+		 * resources with the IORESOURCE_WRCOMB flag are appropriate
+		 * for this MTRR type. */
+		match = IORESOURCE_PREFETCH | IORESOURCE_WRCOMB;
+		mask |= match;
+		memory_ranges_add_resources(addr_space, mask, match,
+					    MTRR_TYPE_WRCOMB);
+
 		/* Fill all holes in address space with an uncachable entry. */
 		memory_ranges_fill_holes(addr_space, MTRR_TYPE_UNCACHEABLE);