cedarhouse1@comcast.net has uploaded this change for review.
cpu/x86: Put guard around align for smm_save_state_size
The STM support aligns smm_save_state_size. However, this creates
issues for some platforms because this value is hard-coded to
0x400.
Signed-off-by: Eugene D. Myers <edmyers@tycho.nsa.gov>
Change-Id: Ia584f7e9b86405a12eb6cbedc3a2615a8727f69e
---
M src/cpu/x86/mp_init.c
1 file changed, 3 insertions(+), 2 deletions(-)
git pull ssh://review.coreboot.org:29418/coreboot refs/changes/34/38734/1
diff --git a/src/cpu/x86/mp_init.c b/src/cpu/x86/mp_init.c
index 331f3b5..aba50bf 100644
--- a/src/cpu/x86/mp_init.c
+++ b/src/cpu/x86/mp_init.c
@@ -1053,8 +1053,9 @@
* note: In the future, this will need to handle newer x86 processors
* that require alignment of the save state on 32K boundaries.
*/
- state->smm_save_state_size =
- ALIGN_UP(state->smm_save_state_size, 0x1000);
+ if (CONFIG(STM))
+ state->smm_save_state_size =
+ ALIGN_UP(state->smm_save_state_size, 0x1000);
/*
* Default to smm_initiate_relocation() if trigger callback isn't