cpu/x86: Put guard around align for smm_save_state_size

The STM support aligns the smm_save_state_size. However, this creates an
issue for some platforms because this value is hard coded to 0x400.

Signed-off-by: Eugene D. Myers <edmyers@tycho.nsa.gov>
Change-Id: Ia584f7e9b86405a12eb6cbedc3a2615a8727f69e
Reviewed-on: https://review.coreboot.org/c/coreboot/+/38734
Reviewed-by: Patrick Rudolph <siro@das-labor.org>
Reviewed-by: Patrick Georgi <pgeorgi@google.com>
Reviewed-by: ron minnich <rminnich@gmail.com>
Tested-by: build bot (Jenkins) <no-reply@coreboot.org>
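
For context on the numbers involved, here is a minimal, stand-alone C sketch of the
size arithmetic this change touches. The 0x400 base value and the 4K alignment are
taken from the commit itself; the descriptor size used below is only a placeholder
(sizeof(TXT_PROCESSOR_SMM_DESCRIPTOR) is not given on this page), and the sketch does
not claim to reproduce the exact platform issue, only how the quantities combine.

#include <stdio.h>
#include <stddef.h>

/* Round x up to the next multiple of a (power-of-two alignments only). */
#define ALIGN_UP(x, a)	(((x) + (a) - 1) & ~((a) - 1))

int main(void)
{
	size_t save_state = 0x400;	/* hard-coded per-CPU save-state size from the commit message */
	size_t descriptor = 0x100;	/* placeholder, not the real sizeof(TXT_PROCESSOR_SMM_DESCRIPTOR) */

	save_state += descriptor;			/* room for the SMM descriptor */
	save_state = ALIGN_UP(save_state, 0x1000);	/* align on 4K, as in the diff below */

	printf("0x%zx\n", save_state);	/* prints 0x1000: the 1 KiB save state grows to 4 KiB per CPU */
	return 0;
}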
Eugene Myers authored 2020-02-06 10:37:01 -05:00, committed by Patrick Georgi
parent 7354605f86
commit faa1118fc7
1 changed file with 13 additions and 8 deletions


@@ -1044,7 +1044,7 @@ static void fill_mp_state(struct mp_state *state, const struct mp_ops *ops)
 	/*
 	 * Make sure there is enough room for the SMM descriptor
 	 */
-	if (CONFIG(STM))
+	if (CONFIG(STM)) {
 		state->smm_save_state_size +=
 			sizeof(TXT_PROCESSOR_SMM_DESCRIPTOR);
@@ -1052,9 +1052,14 @@ static void fill_mp_state(struct mp_state *state, const struct mp_ops *ops)
 	 * algorithm. (align on 4K)
 	 * note: In the future, this will need to handle newer x86 processors
 	 * that require alignment of the save state on 32K boundaries.
+	 * The alignment is done here because coreboot has a hard coded
+	 * value of 0x400 for this value.
+	 * Also, this alignment only works on CPUs less than 5 threads
 	 */
-	if (CONFIG(STM))
 		state->smm_save_state_size =
 			ALIGN_UP(state->smm_save_state_size, 0x1000);
+	}
 	/*
 	 * Default to smm_initiate_relocation() if trigger callback isn't
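
Read together, the two hunks leave a single braced guard around both statements. The
fragment below is only a reconstruction assembled from the context shown above; it is
not a compile-ready unit, the comment lines that fall between the two hunks are elided,
and the indentation is approximate.

	/*
	 * Make sure there is enough room for the SMM descriptor
	 */
	if (CONFIG(STM)) {
		state->smm_save_state_size +=
			sizeof(TXT_PROCESSOR_SMM_DESCRIPTOR);

		/* ... [comment lines not shown in the hunks above] ...
		 * algorithm. (align on 4K)
		 * note: In the future, this will need to handle newer x86 processors
		 * that require alignment of the save state on 32K boundaries.
		 * The alignment is done here because coreboot has a hard coded
		 * value of 0x400 for this value.
		 * Also, this alignment only works on CPUs less than 5 threads
		 */
		state->smm_save_state_size =
			ALIGN_UP(state->smm_save_state_size, 0x1000);
	}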