cbmem: Always maintain backing store struct in a global on non-x86

The current CBMEM code contains an optimization that maintains the
structure with information about the CBMEM backing store in a global
variable, so that we don't have to recover it from cbmem_top() again
every single time we access CBMEM. However, due to the problems with
using globals in x86 romstage, this optimization has only been enabled
in ramstage.

All non-x86 platforms, however, are SRAM-based (at least for now) and can
use globals perfectly fine in earlier stages, so this patch extends the
optimization to all stages on those platforms. This also allows us to drop,
for those boards, the requirement that cbmem_top() return NULL before its
backing store has been initialized, since the CBMEM code can now keep track
of its own initialization state.
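
For illustration only (not part of this change): on an SRAM-based, non-x86
board, cbmem_top() can now simply report where CBMEM lives unconditionally,
instead of tracking an "initialized" flag and returning NULL until then. A
minimal sketch, assuming a hypothetical board with a fixed top-of-DRAM
address:

#include <stdint.h>
#include <cbmem.h>

void *cbmem_top(void)
{
	/* Just report the fixed top of the CBMEM region. Returning NULL
	 * before initialization is no longer required here, because the
	 * CBMEM core now tracks its own init state in a global on these
	 * platforms. */
	return (void *)(uintptr_t)0x80000000;	/* placeholder address */
}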

Change-Id: Ia6c1db00ae01dee485d5e96e4315cb399dc63696
Signed-off-by: Julius Werner <jwerner@chromium.org>
Reviewed-on: https://review.coreboot.org/16273
Tested-by: build bot (Jenkins)
Reviewed-by: Aaron Durbin <adurbin@chromium.org>
Julius Werner 2016-08-19 16:20:40 -07:00
parent f975e55dcd
commit 3c814b2e2b
2 changed files with 20 additions and 9 deletions

src/include/cbmem.h

@@ -67,9 +67,8 @@ void cbmem_initialize_empty_id_size(u32 id, u64 size);
 /* Return the top address for dynamic cbmem. The address returned needs to
  * be consistent across romstage and ramstage, and it is required to be
  * below 4GiB.
- * Board or chipset should return NULL if any interface that might rely on cbmem
- * (e.g. cbfs, vboot) is used before the cbmem backing store has been
- * initialized. */
+ * x86 boards or chipsets must return NULL before the cbmem backing store has
+ * been initialized. */
 void *cbmem_top(void);
 
 /* Add a cbmem entry of a given size and id. These return NULL on failure. The
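
For contrast, the contract the hunk above keeps for x86: until the backing
store exists (e.g. before raminit has placed CBMEM), cbmem_top() must still
return NULL so that early CBMEM accesses fail cleanly. A hedged sketch of
that shape; raminit_done() and top_of_low_dram() are made-up helpers, not
real coreboot functions:

#include <cbmem.h>

/* Hypothetical helpers, declared only to keep the sketch self-contained. */
int raminit_done(void);
void *top_of_low_dram(void);

void *cbmem_top(void)
{
	if (!raminit_done())		/* backing store not set up yet */
		return NULL;		/* required on x86 before that point */
	/* Must be below 4GiB and consistent across romstage and ramstage. */
	return top_of_low_dram();
}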

src/lib/imd_cbmem.c

@@ -26,10 +26,23 @@
 #include <arch/acpi.h>
 #endif
 
+/*
+ * We need special handling on x86 before ramstage because we cannot use global
+ * variables (we're executing in-place from flash so we don't have a writable
+ * data segment, and we cannot use CAR_GLOBAL here since that mechanism itself
+ * is dependent on CBMEM). Therefore, we have to always try to partially recover
+ * CBMEM from cbmem_top() whenever we try to access it. In other environments
+ * we're not so constrained and just keep the backing imd struct in a global.
+ * This also means that we can easily tell whether CBMEM has explicitly been
+ * initialized or recovered yet on those platforms, and don't need to put the
+ * burden on board or chipset code to tell us by returning NULL from cbmem_top()
+ * before that point.
+ */
+#define CAN_USE_GLOBALS (!IS_ENABLED(CONFIG_ARCH_X86) || ENV_RAMSTAGE)
+
 static inline struct imd *cbmem_get_imd(void)
 {
-	/* Only supply a backing store for imd in ramstage. */
-	if (ENV_RAMSTAGE) {
+	if (CAN_USE_GLOBALS) {
 		static struct imd imd_cbmem;
 		return &imd_cbmem;
 	}
@@ -77,11 +90,10 @@ static struct imd *imd_init_backing_with_recover(struct imd *backing)
 	struct imd *imd;
 
 	imd = imd_init_backing(backing);
-	if (!ENV_RAMSTAGE) {
+	if (!CAN_USE_GLOBALS) {
+		/* Always partially recover if we can't keep track of whether
+		 * we have already initialized CBMEM in this stage. */
 		imd_handle_init(imd, cbmem_top());
-
-		/* Need to partially recover all the time outside of ramstage
-		 * because there's object storage outside of the stack. */
 		imd_handle_init_partial_recovery(imd);
 	}
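
To tie this together, a rough sketch of the flow this enables on SRAM-based
platforms, assuming coreboot's standard cbmem_initialize_empty()/cbmem_add()
API from <cbmem.h>; the ID and size are arbitrary example values and error
handling is elided:

#include <cbmem.h>

static void romstage_cbmem_example(void)
{
	/* After raminit: create an empty CBMEM area at cbmem_top(). */
	cbmem_initialize_empty();

	/* Add an entry; cbmem_add() returns NULL on failure. */
	void *buf = cbmem_add(0x12345678, 4096);	/* example id/size */
	(void)buf;

	/* On non-x86 the backing imd struct now lives in a global in every
	 * stage, so it is set up once per stage instead of being partially
	 * recovered from cbmem_top() on every access. */
}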