David Hendricks (dhendrix(a)chromium.org) just uploaded a new patch set to gerrit, which you can find at http://review.coreboot.org/2698
-gerrit
commit 8f85c75edaabfa057a73f60cd0d44b3b3a938576
Author: David Hendricks <dhendrix(a)chromium.org>
Date: Tue Mar 12 20:16:44 2013 -0700
exynos5250: add RAM resource beginning at physical address
The original code attempted to reserve space in RAM for coreboot to
remain resident. This turns out not to be needed, and it breaks things
for the kernel since the exynos5250-smdk5250 kernel device tree is
set to start RAM at 0x40000000.
(This patch was originally by Gabe, I'm just uploading it)
Change-Id: I4536edaf8785d81a3ea008216a2d57549ce5edfb
Signed-off-by: Gabe Black <gabeblack(a)chromium.org>
Signed-off-by: David Hendricks <dhendrix(a)chromium.org>
---
src/cpu/samsung/exynos5250/cpu.c | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/src/cpu/samsung/exynos5250/cpu.c b/src/cpu/samsung/exynos5250/cpu.c
index 0a49e1e..7a565d2 100644
--- a/src/cpu/samsung/exynos5250/cpu.c
+++ b/src/cpu/samsung/exynos5250/cpu.c
@@ -1,9 +1,8 @@
#include <console/console.h>
#include <device/device.h>
-#define RAM_BASE ((CONFIG_SYS_SDRAM_BASE >> 10) + (CONFIG_COREBOOT_ROMSIZE_KB))
-#define RAM_SIZE (((CONFIG_DRAM_SIZE_MB << 10UL) * CONFIG_NR_DRAM_BANKS) \
- - CONFIG_COREBOOT_ROMSIZE_KB)
+#define RAM_BASE (CONFIG_SYS_SDRAM_BASE >> 10)
+#define RAM_SIZE ((CONFIG_DRAM_SIZE_MB << 10UL) * CONFIG_NR_DRAM_BANKS)
static void domain_read_resources(device_t dev)
{
Stefan Reinauer (stefan.reinauer(a)coreboot.org) just uploaded a new patch set to gerrit, which you can find at http://review.coreboot.org/2693
-gerrit
commit 613ecddbbf1d9b21929bf1b11883778ca02cb26d
Author: Aaron Durbin <adurbin(a)chromium.org>
Date: Thu Jan 3 17:38:47 2013 -0600
x86: SMM Module Support
Add support for SMM modules by leveraging the RMODULE lib. This allows
for easier dynamic SMM handler placement. The SMM module support
consists of a common stub which puts the executing CPU into protected
mode and calls into a pre-defined handler. This stub can then be used
for SMM relocation as well as the real SMM handler. For relocation,
one can call back into coreboot ramstage code to perform the
relocation in C.
The handler is essentially a copy of smihandler.c, but it drops the TSEG
differences. It also doesn't rely on the SMM revision, as the CPU code
should know what processor it is running on.
Ideally the CONFIG_SMM_TSEG option could be removed once the existing
users of that option have transitioned away from tseg_relocate() and
smi_get_tseg_base().
The generic SMI callbacks are no longer marked as weak in the
declaration so that there aren't unlinked references. The handler
has default implementations of the generic SMI callbacks which are
marked as weak. If an external compilation module has a strong symbol,
the linker will use that instead of the weak one.
Additionally, the parameters to the generic callbacks are dropped as
they don't seem to be used directly. The SMM runtime can provide the
necessary support if needed.
Change-Id: I1e2fed71a40b2eb03197697d29e9c4b246e3b25e
Signed-off-by: Aaron Durbin <adurbin(a)chromium.org>
---
Makefile.inc | 6 +-
src/cpu/x86/Kconfig | 18 ++
src/cpu/x86/smm/Makefile.inc | 46 +++++
src/cpu/x86/smm/smm_module_handler.c | 171 ++++++++++++++++
src/cpu/x86/smm/smm_module_header.c | 24 +++
src/cpu/x86/smm/smm_module_loader.c | 371 +++++++++++++++++++++++++++++++++++
src/cpu/x86/smm/smm_stub.S | 145 ++++++++++++++
src/include/cpu/x86/smm.h | 80 ++++++++
8 files changed, 860 insertions(+), 1 deletion(-)
diff --git a/Makefile.inc b/Makefile.inc
index ad97363..729da4d 100644
--- a/Makefile.inc
+++ b/Makefile.inc
@@ -58,7 +58,7 @@ subdirs-y += site-local
#######################################################################
# Add source classes and their build options
-classes-y := ramstage romstage bootblock smm cpu_microcode
+classes-y := ramstage romstage bootblock smm smmstub cpu_microcode
#######################################################################
# Helper functions for ramstage postprocess
@@ -110,13 +110,17 @@ endif
bootblock-c-ccopts:=-D__BOOT_BLOCK__ -D__PRE_RAM__
bootblock-S-ccopts:=-D__BOOT_BLOCK__ -D__PRE_RAM__
+smmstub-c-ccopts:=-D__SMM__
+smmstub-S-ccopts:=-D__SMM__
smm-c-ccopts:=-D__SMM__
smm-S-ccopts:=-D__SMM__
# SMM TSEG base is dynamic
+ifneq ($(CONFIG_SMM_MODULES),y)
ifeq ($(CONFIG_SMM_TSEG),y)
smm-c-ccopts += -fpic
endif
+endif
ramstage-c-deps:=$$(OPTION_TABLE_H)
romstage-c-deps:=$$(OPTION_TABLE_H)
diff --git a/src/cpu/x86/Kconfig b/src/cpu/x86/Kconfig
index ae3241e..62d78b5 100644
--- a/src/cpu/x86/Kconfig
+++ b/src/cpu/x86/Kconfig
@@ -66,3 +66,21 @@ config SMM_TSEG
config SMM_TSEG_SIZE
hex
default 0
+
+config SMM_MODULES
+ bool
+ default n
+ depends on HAVE_SMI_HANDLER
+ select RELOCATABLE_MODULES
+ help
+ If SMM_MODULES is selected, SMM handlers are built as modules: an
+ SMM stub along with an SMM loader/relocator. All of the handlers are
+ written in C, with the stub being the only assembly.
+
+config SMM_MODULE_HEAP_SIZE
+ hex
+ default 0x4000
+ depends on SMM_MODULES
+ help
+ This option determines the size of the heap within the SMM handler
+ modules.
diff --git a/src/cpu/x86/smm/Makefile.inc b/src/cpu/x86/smm/Makefile.inc
index 405cf89..ee4dbea 100644
--- a/src/cpu/x86/smm/Makefile.inc
+++ b/src/cpu/x86/smm/Makefile.inc
@@ -17,6 +17,51 @@
## Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
##
+ifeq ($(CONFIG_SMM_MODULES),y)
+
+smmstub-y += smm_stub.S
+smmstub-y += smm_module_header.c
+
+smm-y += smiutil.c
+smm-y += smm_module_header.c
+smm-y += smm_module_handler.c
+
+ramstage-y += smm_module_loader.c
+
+ramstage-srcs += $(obj)/cpu/x86/smm/smm
+ramstage-srcs += $(obj)/cpu/x86/smm/smmstub
+
+# SMM Stub Module. The stub is used as a trampoline for relocation and normal
+# SMM handling.
+$(obj)/cpu/x86/smm/smmstub.o: $$(smmstub-objs)
+ $(CC) $(LDFLAGS) -nostdlib -r -o $@ $^
+
+# Link the SMM stub module with a 0-byte heap.
+$(eval $(call rmodule_link,$(obj)/cpu/x86/smm/smmstub.elf, $(obj)/cpu/x86/smm/smmstub.o, 0))
+
+$(obj)/cpu/x86/smm/smmstub: $(obj)/cpu/x86/smm/smmstub.elf
+ $(OBJCOPY) -O binary $< $@
+
+$(obj)/cpu/x86/smm/smmstub.ramstage.o: $(obj)/cpu/x86/smm/smmstub
+ @printf " OBJCOPY $(subst $(obj)/,,$(@))\n"
+ cd $(dir $@); $(OBJCOPY) -I binary $(notdir $<) -O elf32-i386 -B i386 $(notdir $@)
+
+# C-based SMM handler.
+
+$(obj)/cpu/x86/smm/smm.o: $$(smm-objs)
+ $(CC) $(LDFLAGS) -nostdlib -r -o $@ $^
+
+$(eval $(call rmodule_link,$(obj)/cpu/x86/smm/smm.elf, $(obj)/cpu/x86/smm/smm.o, $(CONFIG_SMM_MODULE_HEAP_SIZE)))
+
+$(obj)/cpu/x86/smm/smm: $(obj)/cpu/x86/smm/smm.elf
+ $(OBJCOPY) -O binary $< $@
+
+$(obj)/cpu/x86/smm/smm.ramstage.o: $(obj)/cpu/x86/smm/smm
+ @printf " OBJCOPY $(subst $(obj)/,,$(@))\n"
+ cd $(dir $@); $(OBJCOPY) -I binary $(notdir $<) -O elf32-i386 -B i386 $(notdir $@)
+
+else # CONFIG_SMM_MODULES
+
ramstage-$(CONFIG_HAVE_SMI_HANDLER) += smmrelocate.S
ifeq ($(CONFIG_HAVE_SMI_HANDLER),y)
ramstage-srcs += $(obj)/cpu/x86/smm/smm_wrap
@@ -50,3 +95,4 @@ $(obj)/cpu/x86/smm/smm_wrap.ramstage.o: $(obj)/cpu/x86/smm/smm_wrap
@printf " OBJCOPY $(subst $(obj)/,,$(@))\n"
cd $(obj)/cpu/x86/smm; $(OBJCOPY) -I binary smm -O elf32-i386 -B i386 smm_wrap.ramstage.o
+endif # CONFIG_SMM_MODULES
diff --git a/src/cpu/x86/smm/smm_module_handler.c b/src/cpu/x86/smm/smm_module_handler.c
new file mode 100644
index 0000000..67802d6
--- /dev/null
+++ b/src/cpu/x86/smm/smm_module_handler.c
@@ -0,0 +1,171 @@
+/*
+ * This file is part of the coreboot project.
+ *
+ * Copyright (C) 2013 ChromeOS Authors
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; version 2 of the License.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#include <arch/io.h>
+#include <arch/romcc_io.h>
+#include <console/console.h>
+#include <cpu/x86/smm.h>
+
+typedef enum { SMI_LOCKED, SMI_UNLOCKED } smi_semaphore;
+
+/* SMI multiprocessing semaphore */
+static volatile
+smi_semaphore smi_handler_status __attribute__ ((aligned (4))) = SMI_UNLOCKED;
+
+static int smi_obtain_lock(void)
+{
+ u8 ret = SMI_LOCKED;
+
+ asm volatile (
+ "movb %2, %%al\n"
+ "xchgb %%al, %1\n"
+ "movb %%al, %0\n"
+ : "=g" (ret), "=m" (smi_handler_status)
+ : "g" (SMI_LOCKED)
+ : "eax"
+ );
+
+ return (ret == SMI_UNLOCKED);
+}
+
+static void smi_release_lock(void)
+{
+ asm volatile (
+ "movb %1, %%al\n"
+ "xchgb %%al, %0\n"
+ : "=m" (smi_handler_status)
+ : "g" (SMI_UNLOCKED)
+ : "eax"
+ );
+}
+
+void io_trap_handler(int smif)
+{
+ /* If a handler function handled a given IO trap, it
+ * shall return a non-zero value
+ */
+ printk(BIOS_DEBUG, "SMI function trap 0x%x: ", smif);
+
+ if (southbridge_io_trap_handler(smif))
+ return;
+
+ if (mainboard_io_trap_handler(smif))
+ return;
+
+ printk(BIOS_DEBUG, "Unknown function\n");
+}
+
+/**
+ * @brief Set the EOS bit
+ */
+static void smi_set_eos(void)
+{
+ southbridge_smi_set_eos();
+}
+
+
+static u32 pci_orig;
+
+/**
+ * @brief Backup PCI address to make sure we do not mess up the OS
+ */
+static void smi_backup_pci_address(void)
+{
+ pci_orig = inl(0xcf8);
+}
+
+/**
+ * @brief Restore PCI address previously backed up
+ */
+static void smi_restore_pci_address(void)
+{
+ outl(pci_orig, 0xcf8);
+}
+
+
+static const struct smm_runtime *smm_runtime;
+
+void *smm_get_save_state(int cpu)
+{
+ char *base;
+
+ /* This function assumes all save states start at the top of the default
+ * SMRAM region and are staggered down by the save state size. */
+ base = (void *)smm_runtime->smbase;
+ base += SMM_DEFAULT_SIZE;
+ base -= (cpu + 1) * smm_runtime->save_state_size;
+
+ return base;
+}
+
+void smm_handler_start(void *arg, int cpu, const struct smm_runtime *runtime)
+{
+ /* Make sure to set the global runtime. It's OK to race as the value
+ * will be the same across CPUs as well as multiple SMIs. */
+ if (smm_runtime == NULL)
+ smm_runtime = runtime;
+
+ if (cpu >= CONFIG_MAX_CPUS) {
+ console_init();
+ printk(BIOS_CRIT,
+ "Invalid CPU number assigned in SMM stub: %d\n", cpu);
+ return;
+ }
+
+ /* Are we ok to execute the handler? */
+ if (!smi_obtain_lock()) {
+ /* For security reasons we don't release the other CPUs
+ * until the CPU with the lock is actually done */
+ while (smi_handler_status == SMI_LOCKED) {
+ asm volatile (
+ ".byte 0xf3, 0x90\n" /* PAUSE */
+ );
+ }
+ return;
+ }
+
+ smi_backup_pci_address();
+
+ console_init();
+
+ printk(BIOS_SPEW, "\nSMI# #%d\n", cpu);
+
+ cpu_smi_handler();
+ northbridge_smi_handler();
+ southbridge_smi_handler();
+
+ smi_restore_pci_address();
+
+ smi_release_lock();
+
+ /* De-assert SMI# signal to allow another SMI */
+ smi_set_eos();
+}
+
+/* Provide a default implementation for all weak handlers so that relocation
+ * entries in the modules make sense. Without default implementations the
+ * weak relocations w/o a symbol have a 0 address which is where the modules
+ * are linked at. */
+int __attribute__((weak)) mainboard_io_trap_handler(int smif) { return 0; }
+void __attribute__((weak)) cpu_smi_handler(void) {}
+void __attribute__((weak)) northbridge_smi_handler() {}
+void __attribute__((weak)) southbridge_smi_handler() {}
+void __attribute__((weak)) mainboard_smi_gpi(u16 gpi_sts) {}
+int __attribute__((weak)) mainboard_smi_apmc(u8 data) { return 0; }
+void __attribute__((weak)) mainboard_smi_sleep(u8 slp_typ) {}
diff --git a/src/cpu/x86/smm/smm_module_header.c b/src/cpu/x86/smm/smm_module_header.c
new file mode 100644
index 0000000..3ee654f
--- /dev/null
+++ b/src/cpu/x86/smm/smm_module_header.c
@@ -0,0 +1,24 @@
+/*
+ * This file is part of the coreboot project.
+ *
+ * Copyright (C) 2013 ChromeOS Authors
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; version 2 of the License.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#include <rmodule.h>
+
+extern char smm_handler_start[];
+
+DEFINE_RMODULE_HEADER(smm_module, smm_handler_start, RMODULE_TYPE_SMM);
diff --git a/src/cpu/x86/smm/smm_module_loader.c b/src/cpu/x86/smm/smm_module_loader.c
new file mode 100644
index 0000000..5eb4c5a
--- /dev/null
+++ b/src/cpu/x86/smm/smm_module_loader.c
@@ -0,0 +1,371 @@
+/*
+ * This file is part of the coreboot project.
+ *
+ * Copyright (C) 2012 ChromeOS Authors
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; version 2 of the License.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#include <string.h>
+#include <rmodule.h>
+#include <cpu/x86/smm.h>
+#include <cpu/x86/cache.h>
+#include <console/console.h>
+
+/*
+ * Components that make up the SMRAM:
+ * 1. Save state - the total save state memory used
+ * 2. Stack - stacks for the CPUs in the SMM handler
+ * 3. Stub - SMM stub code for calling into handler
+ * 4. Handler - C-based SMM handler.
+ *
+ * The components are assumed to form one consecutive region.
+ */
+
+/* These parameters are used by the SMM stub code. A pointer to the params
+ * is also passed to the C-based handler. */
+struct smm_stub_params {
+ u32 stack_size;
+ u32 stack_top;
+ u32 c_handler;
+ u32 c_handler_arg;
+ struct smm_runtime runtime;
+} __attribute__ ((packed));
+
+/*
+ * The stub is the entry point that sets up protected mode and stacks for each
+ * cpu. It then calls into the SMM handler module. It is encoded as an rmodule.
+ */
+extern unsigned char _binary_smmstub_start[];
+
+/* This is the SMM handler that the stub calls. It is encoded as an rmodule. */
+extern unsigned char _binary_smm_start[];
+
+/* Per cpu minimum stack size. */
+#define SMM_MINIMUM_STACK_SIZE 32
+
+/*
+ * The smm_entry_ins consists of 3 bytes. It is used when staggering SMRAM entry
+ * addresses across CPUs.
+ *
+ * 0xe9 <16-bit relative target> ; jmp <relative-offset>
+ */
+struct smm_entry_ins {
+ char jmp_rel;
+ uint16_t rel16;
+} __attribute__ ((packed));
+
+/*
+ * Place the entry instructions for num entries beginning at entry_start with
+ * a given stride. The entry_start is the highest entry point's address. All
+ * other entry points are stride size below the previous.
+ */
+static void smm_place_jmp_instructions(void *entry_start, int stride, int num,
+ void *jmp_target)
+{
+ int i;
+ char *cur;
+ struct smm_entry_ins entry = { .jmp_rel = 0xe9 };
+
+ /* Each entry point has an IP value of 0x8000. The SMBASE for each
+ * cpu is different so the effective address of the entry instruction
+ * is different. Therefore, the relative displacement for each entry
+ * instruction needs to be updated to reflect the current effective
+ * IP. Additionally, the IP result from the jmp instruction is
+ * calculated using the next instruction's address so the size of
+ * the jmp instruction needs to be taken into account. */
+ cur = entry_start;
+ for (i = 0; i < num; i++) {
+ uint32_t disp = (uint32_t)jmp_target;
+
+ disp -= sizeof(entry) + (uint32_t)cur;
+ printk(BIOS_DEBUG,
+ "SMM Module: placing jmp sequence at %p rel16 0x%04x\n",
+ cur, disp);
+ entry.rel16 = disp;
+ memcpy(cur, &entry, sizeof(entry));
+ cur -= stride;
+ }
+}
+
+/* Place stacks in base -> base + size region, but ensure the stacks don't
+ * overlap the staggered entry points. */
+static void *smm_stub_place_stacks(char *base, int size,
+ struct smm_loader_params *params)
+{
+ int total_stack_size;
+ char *stacks_top;
+
+ if (params->stack_top != NULL)
+ return params->stack_top;
+
+ /* If stack space is requested assume the space lives in the lower
+ * half of SMRAM. */
+ total_stack_size = params->per_cpu_stack_size *
+ params->num_concurrent_stacks;
+
+ /* There has to be at least one stack user. */
+ if (params->num_concurrent_stacks < 1)
+ return NULL;
+
+ /* Total stack size cannot fit. */
+ if (total_stack_size > size)
+ return NULL;
+
+ /* Stacks extend down to SMBASE */
+ stacks_top = &base[total_stack_size];
+
+ return stacks_top;
+}
+
+/* Place the staggered entry points for each CPU. The entry points are
+ * staggered by the per cpu SMM save state size extending down from
+ * SMM_ENTRY_OFFSET. */
+static void smm_stub_place_staggered_entry_points(char *base,
+ const struct smm_loader_params *params, const struct rmodule *smm_stub)
+{
+ int stub_entry_offset;
+
+ stub_entry_offset = rmodule_entry_offset(smm_stub);
+
+ /* If there are staggered entry points or the stub is not located
+ * at the SMM entry point, then jmp instructions need to be placed. */
+ if (params->num_concurrent_save_states > 1 || stub_entry_offset != 0) {
+ int num_entries;
+
+ base += SMM_ENTRY_OFFSET;
+ num_entries = params->num_concurrent_save_states;
+ /* Adjust beginning entry and number of entries down since
+ * the initial entry point doesn't need a jump sequence. */
+ if (stub_entry_offset == 0) {
+ base -= params->per_cpu_save_state_size;
+ num_entries--;
+ }
+ smm_place_jmp_instructions(base,
+ params->per_cpu_save_state_size,
+ num_entries,
+ rmodule_entry(smm_stub));
+ }
+}
+
+/*
+ * The stub setup code assumes it is completely contained within the
+ * default SMRAM size (0x10000). There are potentially 3 regions to place
+ * within the default SMRAM size:
+ * 1. Save state areas
+ * 2. Stub code
+ * 3. Stack areas
+ *
+ * The save state and stack areas are treated as contiguous for the number of
+ * concurrent areas requested. The save state always lives at the top of SMRAM
+ * space, and the entry point is at offset 0x8000.
+ */
+static int smm_module_setup_stub(void *smbase, struct smm_loader_params *params)
+{
+ int total_save_state_size;
+ int smm_stub_size;
+ int stub_entry_offset;
+ char *smm_stub_loc;
+ void *stacks_top;
+ int size;
+ char *base;
+ int i;
+ struct smm_stub_params *stub_params;
+ struct rmodule smm_stub;
+
+ base = smbase;
+ size = SMM_DEFAULT_SIZE;
+
+ /* The number of concurrent stacks cannot exceed CONFIG_MAX_CPUS. */
+ if (params->num_concurrent_stacks > CONFIG_MAX_CPUS)
+ return -1;
+
+ /* Fail if can't parse the smm stub rmodule. */
+ if (rmodule_parse(&_binary_smmstub_start, &smm_stub))
+ return -1;
+
+ /* Adjust remaining size to account for save state. */
+ total_save_state_size = params->per_cpu_save_state_size *
+ params->num_concurrent_save_states;
+ size -= total_save_state_size;
+
+ /* The save state size encroached over the first SMM entry point. */
+ if (size <= SMM_ENTRY_OFFSET)
+ return -1;
+
+ /* Need a minimum stack size and alignment. */
+ if (params->per_cpu_stack_size <= SMM_MINIMUM_STACK_SIZE ||
+ (params->per_cpu_stack_size & 3) != 0)
+ return -1;
+
+ smm_stub_loc = NULL;
+ smm_stub_size = rmodule_memory_size(&smm_stub);
+ stub_entry_offset = rmodule_entry_offset(&smm_stub);
+
+ /* Assume the stub is always small enough to live within the upper half
+ * of the SMRAM region after the save state space has been allocated. */
+ smm_stub_loc = &base[SMM_ENTRY_OFFSET];
+
+ /* Adjust for jmp instruction sequence. */
+ if (stub_entry_offset != 0) {
+ int entry_sequence_size = sizeof(struct smm_entry_ins);
+ /* Align up to 16 bytes. */
+ entry_sequence_size += 15;
+ entry_sequence_size &= ~15;
+ smm_stub_loc += entry_sequence_size;
+ smm_stub_size += entry_sequence_size;
+ }
+
+ /* Stub is too big to fit. */
+ if (smm_stub_size > (size - SMM_ENTRY_OFFSET))
+ return -1;
+
+ /* The stacks, if requested, live in the lower half of SMRAM space. */
+ size = SMM_ENTRY_OFFSET;
+
+ /* Ensure stacks don't encroach onto staggered SMM
+ * entry points. The staggered entry points extend
+ * below SMM_ENTRY_OFFSET by the number of concurrent
+ * save states - 1 and save state size. */
+ if (params->num_concurrent_save_states > 1) {
+ size -= total_save_state_size;
+ size += params->per_cpu_save_state_size;
+ }
+
+ /* Place the stacks in the lower half of SMRAM. */
+ stacks_top = smm_stub_place_stacks(base, size, params);
+ if (stacks_top == NULL)
+ return -1;
+
+ /* Load the stub. */
+ if (rmodule_load(smm_stub_loc, &smm_stub))
+ return -1;
+
+ /* Place staggered entry points. */
+ smm_stub_place_staggered_entry_points(base, params, &smm_stub);
+
+ /* Setup the parameters for the stub code. */
+ stub_params = rmodule_parameters(&smm_stub);
+ stub_params->stack_top = (u32)stacks_top;
+ stub_params->stack_size = params->per_cpu_stack_size;
+ stub_params->c_handler = (u32)params->handler;
+ stub_params->c_handler_arg = (u32)params->handler_arg;
+ stub_params->runtime.smbase = (u32)smbase;
+ stub_params->runtime.save_state_size = params->per_cpu_save_state_size;
+
+ /* Initialize the APIC id to cpu number table to be 1:1 */
+ for (i = 0; i < params->num_concurrent_stacks; i++)
+ stub_params->runtime.apic_id_to_cpu[i] = i;
+
+ /* Allow the initiator to manipulate SMM stub parameters. */
+ params->runtime = &stub_params->runtime;
+
+ printk(BIOS_DEBUG, "SMM Module: stub loaded at %p. Will call %p(%p)\n",
+ smm_stub_loc, params->handler, params->handler_arg);
+
+ return 0;
+}
+
+/*
+ * smm_setup_relocation_handler assumes the callback is already loaded in
+ * memory. i.e. Another SMM module isn't chained to the stub. The other
+ * assumption is that the stub will be entered from the default SMRAM
+ * location: 0x30000 -> 0x40000.
+ */
+int smm_setup_relocation_handler(struct smm_loader_params *params)
+{
+ void *smram = (void *)SMM_DEFAULT_BASE;
+
+ /* There can't be more than 1 concurrent save state for the relocation
+ * handler because all CPUs default to 0x30000 as SMBASE. */
+ if (params->num_concurrent_save_states > 1)
+ return -1;
+
+ /* A handler has to be defined to call for relocation. */
+ if (params->handler == NULL)
+ return -1;
+
+ /* Since the relocation handler always uses a stack, adjust the number
+ * of concurrent stack users to be CONFIG_MAX_CPUS. */
+ if (params->num_concurrent_stacks == 0)
+ params->num_concurrent_stacks = CONFIG_MAX_CPUS;
+
+ return smm_module_setup_stub(smram, params);
+}
+
+/* The SMM module is placed within the provided region in the following
+ * manner:
+ * +-----------------+ <- smram + size
+ * | stacks |
+ * +-----------------+ <- smram + size - total_stack_size
+ * | ... |
+ * +-----------------+ <- smram + handler_size + SMM_DEFAULT_SIZE
+ * | handler |
+ * +-----------------+ <- smram + SMM_DEFAULT_SIZE
+ * | stub code |
+ * +-----------------+ <- smram
+ *
+ * It should be noted that this algorithm will not work for
+ * SMM_DEFAULT_SIZE SMRAM regions such as the A segment. This algorithm
+ * expects a region large enough to encompass the handler and stacks
+ * as well as the SMM_DEFAULT_SIZE.
+ */
+int smm_load_module(void *smram, int size, struct smm_loader_params *params)
+{
+ struct rmodule smm_mod;
+ int total_stack_size;
+ int handler_size;
+ int module_alignment;
+ int alignment_size;
+ char *base;
+
+ if (size <= SMM_DEFAULT_SIZE)
+ return -1;
+
+ /* Fail if can't parse the smm rmodule. */
+ if (rmodule_parse(&_binary_smm_start, &smm_mod))
+ return -1;
+
+ total_stack_size = params->per_cpu_stack_size *
+ params->num_concurrent_stacks;
+
+ /* Stacks start at the top of the region. */
+ base = smram;
+ base += size;
+ params->stack_top = base;
+
+ /* SMM module starts at offset SMM_DEFAULT_SIZE with the load alignment
+ * taken into account. */
+ base = smram;
+ base += SMM_DEFAULT_SIZE;
+ handler_size = rmodule_memory_size(&smm_mod);
+ module_alignment = rmodule_load_alignment(&smm_mod);
+ alignment_size = module_alignment - ((u32)base % module_alignment);
+ if (alignment_size != module_alignment) {
+ handler_size += alignment_size;
+ base += alignment_size;
+ }
+
+ /* Does the required amount of memory exceed the SMRAM region size? */
+ if ((total_stack_size + handler_size + SMM_DEFAULT_SIZE) > size)
+ return -1;
+
+ if (rmodule_load(base, &smm_mod))
+ return -1;
+
+ params->handler = rmodule_entry(&smm_mod);
+ params->handler_arg = rmodule_parameters(&smm_mod);
+
+ return smm_module_setup_stub(smram, params);
+}
diff --git a/src/cpu/x86/smm/smm_stub.S b/src/cpu/x86/smm/smm_stub.S
new file mode 100644
index 0000000..07eb5dc
--- /dev/null
+++ b/src/cpu/x86/smm/smm_stub.S
@@ -0,0 +1,145 @@
+/*
+ * This file is part of the coreboot project.
+ *
+ * Copyright (C) 2012 ChromeOS Authors
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation; version 2 of
+ * the License.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston,
+ * MA 02110-1301 USA
+ */
+
+/*
+ * The stub is a generic wrapper for bootstrapping a C-based SMM handler. Its
+ * primary purpose is to put the CPU into protected mode with a stack and call
+ * into the C handler.
+ *
+ * The stub_entry_params structure needs to correspond to the C structure
+ * found in smm.h.
+ */
+
+.code32
+.section ".module_parameters", "aw", @progbits
+stub_entry_params:
+stack_size:
+.long 0
+stack_top:
+.long 0
+c_handler:
+.long 0
+c_handler_arg:
+.long 0
+/* struct smm_runtime begins here. */
+smm_runtime:
+smbase:
+.long 0
+save_state_size:
+.long 0
+/* apic_to_cpu_num is a table mapping the default APIC id to cpu num. If the
+ * APIC id is found at the given index, the contiguous cpu number is the
+ * index into the table. */
+apic_to_cpu_num:
+.fill CONFIG_MAX_CPUS,1,0xff
+/* end struct smm_runtime */
+
+.data
+/* Provide fallback stack to use when a valid cpu number cannot be found. */
+fallback_stack_bottom:
+.skip 128
+fallback_stack_top:
+
+.text
+.code16
+.global smm_handler_start
+smm_handler_start:
+ movl $(smm_relocate_gdt), %ebx
+ data32 lgdt (%ebx)
+
+ movl %cr0, %eax
+ andl $0x1FFAFFD1, %eax /* CD,NW,PG,AM,WP,NE,TS,EM,MP = 0 */
+ orl $0x1, %eax /* PE = 1 */
+ movl %eax, %cr0
+
+ /* Enable protected mode */
+ data32 ljmp $0x8, $smm_trampoline32
+
+.align 4
+smm_relocate_gdt:
+ /* The first GDT entry is used for the lgdt instruction. */
+ .word smm_relocate_gdt_end - smm_relocate_gdt - 1
+ .long smm_relocate_gdt
+ .word 0x0000
+
+ /* gdt selector 0x08, flat code segment */
+ .word 0xffff, 0x0000
+ .byte 0x00, 0x9b, 0xcf, 0x00 /* G=1 and 0x0f, 4GB limit */
+
+ /* gdt selector 0x10, flat data segment */
+ .word 0xffff, 0x0000
+ .byte 0x00, 0x93, 0xcf, 0x00
+smm_relocate_gdt_end:
+
+.align 4
+.code32
+.global smm_trampoline32
+smm_trampoline32:
+ /* Use flat data segment */
+ movw $0x10, %ax
+ movw %ax, %ds
+ movw %ax, %es
+ movw %ax, %ss
+ movw %ax, %fs
+ movw %ax, %gs
+
+ /* The CPU number is calculated by reading the initial APIC id. Since
+ * the OS can manipulate the APIC id, use the non-changing cpuid result
+ * for APIC id (ebx[31:24]). A table is used to handle a discontiguous
+ * APIC id space. */
+ mov $1, %eax
+ cpuid
+ bswap %ebx /* Default APIC id in bl. */
+ mov $(apic_to_cpu_num), %eax
+ xor %ecx, %ecx
+
+1:
+ cmp (%eax, %ecx, 1), %bl
+ je 1f
+ inc %ecx
+ cmp $CONFIG_MAX_CPUS, %ecx
+ jne 1b
+ /* This is bad. One cannot find a stack entry because a cpu num could
+ * not be assigned. Use the fallback stack and check this condition in
+ * C handler. */
+ movl $(fallback_stack_top), %esp
+ jmp 2f
+1:
+ movl stack_size, %eax
+ mul %ecx
+ movl stack_top, %edx
+ subl %eax, %edx
+ mov %edx, %esp
+
+2:
+ /* Call into the c-based SMM relocation function with the platform
+ * parameters. Equivalent to:
+ * c_handler(c_handler_params, cpu_num, smm_runtime);
+ */
+ push $(smm_runtime)
+ push %ecx
+ push c_handler_arg
+ mov c_handler, %eax
+ call *%eax
+
+ /* Exit from SM mode. */
+ rsm
+
diff --git a/src/include/cpu/x86/smm.h b/src/include/cpu/x86/smm.h
index 698ddaf..b6a6c4e 100644
--- a/src/include/cpu/x86/smm.h
+++ b/src/include/cpu/x86/smm.h
@@ -382,6 +382,14 @@ int __attribute__((weak)) mainboard_io_trap_handler(int smif);
void southbridge_smi_set_eos(void);
+#if CONFIG_SMM_MODULES
+void cpu_smi_handler(void);
+void northbridge_smi_handler(void);
+void southbridge_smi_handler(void);
+void mainboard_smi_gpi(u16 gpi_sts);
+int mainboard_smi_apmc(u8 data);
+void mainboard_smi_sleep(u8 slp_typ);
+#else
void __attribute__((weak)) cpu_smi_handler(unsigned int node, smm_state_save_area_t *state_save);
void __attribute__((weak)) northbridge_smi_handler(unsigned int node, smm_state_save_area_t *state_save);
void __attribute__((weak)) southbridge_smi_handler(unsigned int node, smm_state_save_area_t *state_save);
@@ -389,10 +397,14 @@ void __attribute__((weak)) southbridge_smi_handler(unsigned int node, smm_state_
void __attribute__((weak)) mainboard_smi_gpi(u16 gpi_sts);
int __attribute__((weak)) mainboard_smi_apmc(u8 data);
void __attribute__((weak)) mainboard_smi_sleep(u8 slp_typ);
+#endif /* CONFIG_SMM_MODULES */
#if !CONFIG_SMM_TSEG
void smi_release_lock(void);
#define tseg_relocate(ptr)
+#elif CONFIG_SMM_MODULES
+#define tseg_relocate(ptr)
+#define smi_get_tseg_base() 0
#else
/* Return address of TSEG base */
u32 smi_get_tseg_base(void);
@@ -403,4 +415,72 @@ void tseg_relocate(void **ptr);
/* Get PMBASE address */
u16 smm_get_pmbase(void);
+#if CONFIG_SMM_MODULES
+
+struct smm_runtime {
+ u32 smbase;
+ u32 save_state_size;
+ /* The apic_id_to_cpu provides a mapping from APIC id to cpu number.
+ * The cpu number is indicated by the index into the array by matching
+ * the default APIC id and value at the index. The stub loader
+ * initializes this array with a 1:1 mapping. If the APIC ids are not
+ * contiguous like the 1:1 mapping, it is up to the caller of the stub
+ * loader to adjust this mapping. */
+ u8 apic_id_to_cpu[CONFIG_MAX_CPUS];
+} __attribute__ ((packed));
+
+typedef void (*smm_handler_t)(void *arg, int cpu,
+ const struct smm_runtime *runtime);
+
+#ifdef __SMM__
+/* SMM Runtime helpers. */
+
+/* Entry point for SMM modules. */
+void smm_handler_start(void *arg, int cpu, const struct smm_runtime *runtime);
+
+/* Retrieve SMM save state for a given CPU. WARNING: This does not take into
+ * account CPUs which are configured to not save their state to RAM. */
+void *smm_get_save_state(int cpu);
+
+#else
+/* SMM Module Loading API */
+
+/* The smm_loader_params structure provides direction to the SMM loader:
+ * - stack_top - optional external stack provided to loader. It must be at
+ * least per_cpu_stack_size * num_concurrent_stacks in size.
+ * - per_cpu_stack_size - stack size per cpu for smm modules.
+ * - num_concurrent_stacks - number of concurrent cpus in the handler needing
+ * a stack; optional for setting up the relocation handler.
+ * - per_cpu_save_state_size - the smm save state size per cpu
+ * - num_concurrent_save_states - number of concurrent cpus needing save state
+ * space
+ * - handler - optional handler to call. Only used during SMM relocation setup.
+ * - handler_arg - optional argument to handler for SMM relocation setup. For
+ * loading the SMM module, the handler_arg is filled in with
+ * the address of the module's parameters (if present).
+ * - runtime - this field is a result only. The SMM runtime location is filled
+ * into this field so the code doing the loading can manipulate the
+ * runtime's assumptions. e.g. updating the apic id to cpu map to
+ * handle sparse apic id space.
+ */
+struct smm_loader_params {
+ void *stack_top;
+ int per_cpu_stack_size;
+ int num_concurrent_stacks;
+
+ int per_cpu_save_state_size;
+ int num_concurrent_save_states;
+
+ smm_handler_t handler;
+ void *handler_arg;
+
+ struct smm_runtime *runtime;
+};
+
+/* Both of these return 0 on success, < 0 on failure. */
+int smm_setup_relocation_handler(struct smm_loader_params *params);
+int smm_load_module(void *smram, int size, struct smm_loader_params *params);
+#endif /* __SMM__ */
+#endif /* CONFIG_SMM_MODULES */
+
#endif
Stefan Reinauer (stefan.reinauer(a)coreboot.org) just uploaded a new patch set to gerrit, which you can find at http://review.coreboot.org/2692
-gerrit
commit 125c9b37204edce8ec77398334bebf0a190421ec
Author: Aaron Durbin <adurbin(a)chromium.org>
Date: Mon Dec 24 14:28:37 2012 -0600
lib: add rmodule support
An rmodule is short for relocation module. Relocation modules are
standalone programs. These programs are linked at address 0 as a shared
object with a special linker script that maintains the relocation
entries for the object. These modules can then be embedded as a raw
binary (objcopy -O binary) to be loaded at any location desired.
Initially, the only arch support is for x86. All comments below apply to
x86 specific properties.
The initial user of this support would be SMM handlers, since those
handlers sometimes need to be located at a dynamic address (e.g. TSEG
region).
The relocation entries are currently Elf32_Rel. They are 8 bytes large,
and the entries are not necessarily in sorted order. A future
optimization would be to have a tool convert the unsorted relocations
into just sorted offsets. This would reduce the size of the blob
produced after processing. Essentially, 8 bytes per relocation meta
entry would shrink to 4 bytes.
Change-Id: I2236dcb66e9d2b494ce2d1ae40777c62429057ef
Signed-off-by: Aaron Durbin <adurbin(a)chromium.org>
---
src/Kconfig | 8 ++
src/include/rmodule.h | 126 ++++++++++++++++++++++++++
src/lib/Makefile.inc | 18 ++++
src/lib/rmodule.c | 245 ++++++++++++++++++++++++++++++++++++++++++++++++++
src/lib/rmodule.ld | 115 ++++++++++++++++++++++++
5 files changed, 512 insertions(+)
diff --git a/src/Kconfig b/src/Kconfig
index 3e24967..7206878 100644
--- a/src/Kconfig
+++ b/src/Kconfig
@@ -357,6 +357,14 @@ config GFXUMA
help
Enable Unified Memory Architecture for graphics.
+config RELOCATABLE_MODULES
+ bool "Relocatable Modules"
+ default n
+ help
+ If RELOCATABLE_MODULES is selected then support is enabled for
+ building relocatable modules in the ram stage. Those modules can be
+ loaded anywhere and all the relocations are handled automatically.
+
config HAVE_ACPI_TABLES
bool
help
diff --git a/src/include/rmodule.h b/src/include/rmodule.h
new file mode 100644
index 0000000..b51700c
--- /dev/null
+++ b/src/include/rmodule.h
@@ -0,0 +1,126 @@
+/*
+ * This file is part of the coreboot project.
+ *
+ * Copyright (C) 2012 ChromeOS Authors
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; version 2 of the License.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+#ifndef RMODULE_H
+#define RMODULE_H
+
+#include <stdint.h>
+
+#define RMODULE_MAGIC 0xf8fe
+#define RMODULE_VERSION_1 1
+
+enum {
+ RMODULE_TYPE_SMM,
+};
+
+struct rmodule;
+
+/* Public API for loading rmodules. */
+int rmodule_parse(void *ptr, struct rmodule *m);
+void *rmodule_parameters(const struct rmodule *m);
+void *rmodule_entry(const struct rmodule *m);
+int rmodule_entry_offset(const struct rmodule *m);
+int rmodule_memory_size(const struct rmodule *m);
+int rmodule_load(void *loc, struct rmodule *m);
+int rmodule_load_alignment(const struct rmodule *m);
+
+#define FIELD_ENTRY(x_) ((u32)&x_)
+#define RMODULE_HEADER(entry_, type_) \
+{ \
+ .magic = RMODULE_MAGIC, \
+ .version = RMODULE_VERSION_1, \
+ .type = type_, \
+ .payload_begin_offset = FIELD_ENTRY(_payload_begin_offset), \
+ .payload_end_offset = FIELD_ENTRY(_payload_end_offset), \
+ .relocations_begin_offset = \
+ FIELD_ENTRY(_relocations_begin_offset), \
+ .relocations_end_offset = \
+ FIELD_ENTRY(_relocations_end_offset), \
+ .module_link_start_address = \
+ FIELD_ENTRY(_module_link_start_addr), \
+ .module_program_size = FIELD_ENTRY(_module_program_size), \
+ .module_entry_point = FIELD_ENTRY(entry_), \
+ .parameters_begin = FIELD_ENTRY(_module_params_begin), \
+ .parameters_end = FIELD_ENTRY(_module_params_end), \
+ .bss_begin = FIELD_ENTRY(_bss_begin), \
+ .bss_end = FIELD_ENTRY(_bss_end), \
+}
+
+#define DEFINE_RMODULE_HEADER(name_, entry_, type_) \
+ struct rmodule_header name_ \
+ __attribute__ ((section (".module_header"))) = \
+ RMODULE_HEADER(entry_, type_)
+
+
+/* Private data structures below should not be used directly. */
+
+/* All fields with '_offset' in the name are byte offsets into the flat blob.
+ * The linker and the linker script take care of assigning the values. */
+struct rmodule_header {
+ u16 magic;
+ u8 version;
+ u8 type;
+ /* The payload represents the program's loadable code and data. */
+ u32 payload_begin_offset;
+ u32 payload_end_offset;
+ /* Begin and end of the relocation information for the program module. */
+ u32 relocations_begin_offset;
+ u32 relocations_end_offset;
+ /* The starting address of the linked program. This address is vital
+ * for determining relocation offsets, as the relocation info and other
+ * symbols (bss, entry point) need this value as a basis to calculate
+ * the offsets.
+ */
+ u32 module_link_start_address;
+ /* The module_program_size is the size of memory used while running
+ * the program. The program is assumed to consume a contiguous amount
+ * of memory. */
+ u32 module_program_size;
+ /* This is program's execution entry point. */
+ u32 module_entry_point;
+ /* Optional parameter structure that can be used to pass data into
+ * the module. */
+ u32 parameters_begin;
+ u32 parameters_end;
+ /* BSS section information so the loader can clear the bss. */
+ u32 bss_begin;
+ u32 bss_end;
+} __attribute__ ((packed));
+
+struct rmodule {
+ void *location;
+ struct rmodule_header *header;
+ const void *payload;
+ int payload_size;
+ void *relocations;
+};
+
+/* These are the symbols that every module is assumed to contain. The
+ * linker script provides these symbols. */
+extern char _relocations_begin_offset[];
+extern char _relocations_end_offset[];
+extern char _payload_end_offset[];
+extern char _payload_begin_offset[];
+extern char _bss_begin[];
+extern char _bss_end[];
+extern char _module_program_size[];
+extern char _module_link_start_addr[];
+extern char _module_params_begin[];
+extern char _module_params_end[];
+
+#endif /* RMODULE_H */
diff --git a/src/lib/Makefile.inc b/src/lib/Makefile.inc
index be57f29..fc9509a 100644
--- a/src/lib/Makefile.inc
+++ b/src/lib/Makefile.inc
@@ -107,3 +107,21 @@ endif
$(obj)/lib/uart8250mem.smm.o : $(OPTION_TABLE_H)
$(obj)/lib/uart8250.smm.o : $(OPTION_TABLE_H)
+ifeq ($(CONFIG_RELOCATABLE_MODULES),y)
+ramstage-y += rmodule.c
+
+RMODULE_LDSCRIPT := $(src)/lib/rmodule.ld
+RMODULE_LDFLAGS := -nostartfiles -shared -z defs -nostdlib -Bsymbolic -T $(RMODULE_LDSCRIPT)
+
+# rmodule_link is a function that should be called with:
+# (1) the object name to link
+# (2) the dependencies
+# (3) heap size of the relocatable module
+# It will create the necessary Make rules.
+define rmodule_link =
+$(strip $(1)): $(strip $(2)) $$(RMODULE_LDSCRIPT) $$(obj)/ldoptions
+ $$(LD) $$(RMODULE_LDFLAGS) --defsym=__heap_size=$(strip $(3)) -o $$@ $(strip $(2))
+	$$(NM) -n $$@ > $$(basename $$@).map
+endef
+
+endif
diff --git a/src/lib/rmodule.c b/src/lib/rmodule.c
new file mode 100644
index 0000000..56d7c6d
--- /dev/null
+++ b/src/lib/rmodule.c
@@ -0,0 +1,245 @@
+/*
+ * This file is part of the coreboot project.
+ *
+ * Copyright (C) 2012 ChromeOS Authors
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; version 2 of the License.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+#include <stdint.h>
+#include <string.h>
+#include <console/console.h>
+#include <rmodule.h>
+
+/* Change this define to get more verbose debugging for module loading. */
+#define PK_ADJ_LEVEL BIOS_NEVER
+
+#if CONFIG_ARCH_X86
+/*
+ * On X86, the only relocations currently allowed are R_386_RELATIVE which
+ * have '0' for the symbol info in the relocation metadata (in r_info).
+ * The reason is that the module is fully linked and just has the relocations'
+ * locations.
+ */
+typedef struct {
+ u32 r_offset;
+ u32 r_info;
+} Elf32_Rel;
+
+#define R_386_RELATIVE 8
+
+#define RELOCTION_ENTRY_SIZE sizeof(Elf32_Rel)
+static inline int rmodule_reloc_offset(const void *reloc)
+{
+ const Elf32_Rel *rel = reloc;
+ return rel->r_offset;
+}
+
+static inline int rmodule_reloc_valid(const void *reloc)
+{
+ const Elf32_Rel *rel = reloc;
+ return (rel->r_info == R_386_RELATIVE);
+}
+
+static inline void *remodule_next_reloc(const void *reloc)
+{
+ const Elf32_Rel *rel = reloc;
+ rel++;
+ return (void *)rel;
+}
+
+#else
+#error Arch needs to add relocation information support for RMODULE
+#endif
+
+static inline int rmodule_is_loaded(const struct rmodule *module)
+{
+ return module->location != NULL;
+}
+
+/* Calculate a loaded program address based on the blob address. */
+static inline void *rmodule_load_addr(const struct rmodule *module,
+ u32 blob_addr)
+{
+ char *loc = module->location;
+ return &loc[blob_addr - module->header->module_link_start_address];
+}
+
+/* Initialize a rmodule structure based on raw data. */
+int rmodule_parse(void *ptr, struct rmodule *module)
+{
+ char *base;
+ struct rmodule_header *rhdr;
+
+ base = ptr;
+ rhdr = ptr;
+
+ if (rhdr == NULL)
+ return -1;
+
+ /* Sanity check the raw data. */
+ if (rhdr->magic != RMODULE_MAGIC)
+ return -1;
+ if (rhdr->version != RMODULE_VERSION_1)
+ return -1;
+
+ /* Indicate the module hasn't been loaded yet. */
+ module->location = NULL;
+
+ /* The rmodule only needs a reference to the reloc_header. */
+ module->header = rhdr;
+
+ /* The payload lives after the header. */
+ module->payload = &base[rhdr->payload_begin_offset];
+ module->payload_size = rhdr->payload_end_offset -
+ rhdr->payload_begin_offset;
+ module->relocations = &base[rhdr->relocations_begin_offset];
+
+ return 0;
+}
+
+int rmodule_memory_size(const struct rmodule *module)
+{
+ return module->header->module_program_size;
+}
+
+void *rmodule_parameters(const struct rmodule *module)
+{
+ if (!rmodule_is_loaded(module))
+ return NULL;
+
+ /* Indicate if there are no parameters. */
+ if (module->header->parameters_begin == module->header->parameters_end)
+ return NULL;
+
+ return rmodule_load_addr(module, module->header->parameters_begin);
+}
+
+int rmodule_entry_offset(const struct rmodule *module)
+{
+ return module->header->module_entry_point -
+ module->header->module_link_start_address;
+}
+
+void *rmodule_entry(const struct rmodule *module)
+{
+ if (!rmodule_is_loaded(module))
+ return NULL;
+
+ return rmodule_load_addr(module, module->header->module_entry_point);
+}
+
+static void rmodule_clear_bss(struct rmodule *module)
+{
+ char *begin;
+ int size;
+
+ begin = rmodule_load_addr(module, module->header->bss_begin);
+ size = module->header->bss_end - module->header->bss_begin;
+ memset(begin, 0, size);
+}
+
+static inline int rmodule_number_relocations(const struct rmodule *module)
+{
+ int r;
+
+ r = module->header->relocations_end_offset;
+ r -= module->header->relocations_begin_offset;
+ r /= RELOCTION_ENTRY_SIZE;
+ return r;
+}
+
+static void rmodule_copy_payload(const struct rmodule *module)
+{
+ printk(BIOS_DEBUG, "Loading module at %p with entry %p. "
+ "filesize: 0x%x memsize: 0x%x\n",
+ module->location, rmodule_entry(module),
+ module->payload_size, rmodule_memory_size(module));
+ memcpy(module->location, module->payload, module->payload_size);
+}
+
+static inline u32 *rmodule_adjustment_location(const struct rmodule *module,
+ const void *reloc)
+{
+ int reloc_offset;
+
+ /* Don't relocate header field entries -- only program relocations. */
+ reloc_offset = rmodule_reloc_offset(reloc);
+ if (reloc_offset < module->header->module_link_start_address)
+ return NULL;
+
+ return rmodule_load_addr(module, reloc_offset);
+}
+
+static int rmodule_relocate(const struct rmodule *module)
+{
+ int num_relocations;
+ const void *reloc;
+ u32 adjustment;
+
+ /* Each relocation needs to be adjusted relative to the beginning of
+ * the loaded program. */
+ adjustment = (u32)rmodule_load_addr(module, 0);
+
+ reloc = module->relocations;
+ num_relocations = rmodule_number_relocations(module);
+
+ printk(BIOS_DEBUG, "Processing %d relocs with adjust value of 0x%08x\n",
+ num_relocations, adjustment);
+
+ while (num_relocations > 0) {
+ u32 *adjust_loc;
+
+ if (!rmodule_reloc_valid(reloc))
+ return -1;
+
+ /* If the adjustment location is non-NULL adjust it. */
+ adjust_loc = rmodule_adjustment_location(module, reloc);
+ if (adjust_loc != NULL) {
+ printk(PK_ADJ_LEVEL, "Adjusting %p: 0x%08x -> 0x%08x\n",
+ adjust_loc, *adjust_loc,
+ *adjust_loc + adjustment);
+ *adjust_loc += adjustment;
+ }
+
+ reloc = remodule_next_reloc(reloc);
+ num_relocations--;
+ }
+
+ return 0;
+}
+
+int rmodule_load_alignment(const struct rmodule *module)
+{
+ /* The load alignment is the start of the program's linked address.
+ * The base address where the program is loaded needs to be a multiple
+ * of the program's starting link address. That way all data alignment
+ * in the program is preserved. */
+ return module->header->module_link_start_address;
+}
+
+int rmodule_load(void *base, struct rmodule *module)
+{
+ /*
+ * In order to load the module at a given address, the following steps
+ * take place:
+ * 1. Copy payload to base address.
+ * 2. Clear the bss segment.
+ * 3. Adjust relocations within the module to new base address.
+ */
+ module->location = base;
+ rmodule_copy_payload(module);
+ rmodule_clear_bss(module);
+ return rmodule_relocate(module);
+}
+
diff --git a/src/lib/rmodule.ld b/src/lib/rmodule.ld
new file mode 100644
index 0000000..4c13c84
--- /dev/null
+++ b/src/lib/rmodule.ld
@@ -0,0 +1,115 @@
+OUTPUT_FORMAT("elf32-i386", "elf32-i386", "elf32-i386")
+OUTPUT_ARCH(i386)
+
+/*
+ * This linker script is used to link rmodules (relocatable modules). It
+ * links at zero so that relocation fixups are easy when placing the binaries
+ * anywhere in the address space.
+ *
+ * NOTE: The program's loadable sections (text, module_params, and data) are
+ * packed into the flat blob using the AT directive. The rmodule loader assumes
+ * the entire program resides in one contiguous address space. Therefore,
+ * alignment for a given section (if required) needs to be done at the end of
+ * the preceding section. e.g. if the data section should be aligned to an 8
+ * byte address the text section should have ALIGN(8) at the end of its section.
+ * Otherwise there won't be a consistent mapping between the flat blob and the
+ * loaded program.
+ */
+
+BASE_ADDRESS = 0x00000;
+
+SECTIONS
+{
+ . = BASE_ADDRESS;
+
+ .header : AT (0) {
+ *(.module_header);
+ . = ALIGN(8);
+ }
+
+ /* Align the start of the module program to a large enough alignment
+ * so that any alignment requirement of data in the program is met.
+ * Essentially, this alignment is the maximum possible data alignment
+ * property a program can have. */
+ . = ALIGN(4096);
+ _module_link_start_addr = .;
+ _payload_begin_offset = LOADADDR(.header) + SIZEOF(.header);
+
+ .text : AT (_payload_begin_offset) {
+ /* C code of the module. */
+ *(.text);
+ *(.text.*);
+ /* C read-only data. */
+ . = ALIGN(16);
+ *(.rodata);
+ *(.rodata.*);
+ . = ALIGN(4);
+ }
+
+ .module_params : AT (LOADADDR(.text) + SIZEOF(.text)) {
+ /* The parameters section can be used to pass parameters
+ * to a module; however, there has to be a prior agreement
+ * on how to interpret the parameters. */
+ _module_params_begin = .;
+ *(.module_parameters);
+ _module_params_end = .;
+ . = ALIGN(4);
+ }
+
+ .data : AT (LOADADDR(.module_params) + SIZEOF(.module_params)) {
+ _sdata = .;
+ *(.data);
+ . = ALIGN(4);
+ _edata = .;
+ }
+
+ /* _payload_end marks the end of the module's code and data. */
+ _payload_end_offset = LOADADDR(.data) + SIZEOF(.data);
+
+ .bss (NOLOAD) : {
+ /* C uninitialized data of the SMM handler */
+ _bss_begin = .;
+ *(.bss);
+ *(.sbss);
+ *(COMMON);
+ . = ALIGN(8);
+ _bss_end = .;
+ }
+
+ .heap (NOLOAD) : {
+ /*
+ * Place the heap after BSS. The heap size is passed in
+ * by way of ld --defsym=__heap_size=<>
+ */
+ _heap = .;
+ . = . + __heap_size;
+ _eheap = .;
+ }
+
+ /* _module_program_size is the total memory used by the program. */
+ _module_program_size = _eheap - _module_link_start_addr;
+
+ /* The relocation information is linked on top of the BSS section
+ * because the BSS section takes no space on disk. The relocation data
+ * resides directly after the data section in the flat binary. */
+ .relocations ADDR(.bss) : AT (_payload_end_offset) {
+ *(.rel.*);
+ }
+ _relocations_begin_offset = LOADADDR(.relocations);
+ _relocations_end_offset = _relocations_begin_offset +
+ SIZEOF(.relocations);
+
+ /DISCARD/ : {
+ /* Drop unnecessary sections. Since these modules are linked
+ * as shared objects there are dynamic sections. These sections
+ * aren't needed so drop them. */
+ *(.comment);
+ *(.note);
+ *(.note.*);
+ *(.dynamic);
+ *(.dynsym);
+ *(.dynstr);
+ *(.gnu.hash);
+ *(.eh_frame);
+ }
+}
Stefan Reinauer (stefan.reinauer(a)coreboot.org) just uploaded a new patch set to gerrit, which you can find at http://review.coreboot.org/2690
-gerrit
commit a40d0983aaa1e958178024597a372378d2123342
Author: Aaron Durbin <adurbin(a)chromium.org>
Date: Fri Dec 21 21:22:07 2012 -0600
haswell: include TSEG region in cacheable memory
The SMRR takes precedence over the MTRR entries. Therefore, if the TSEG
region is setup as cacheable through the MTTRs, accesses to the TSEG
region before SMM relocation are cached. This allows for the setup of
SMM relocation to be faster by caching accesses to the future TSEG
(SMRAM) memory.
MC MAP: TOM: 0x140000000
MC MAP: TOUUD: 0x18f600000
MC MAP: MESEG_BASE: 0x13f000000
MC MAP: MESEG_LIMIT: 0x7fff0fffff
MC MAP: REMAP_BASE: 0x13f000000
MC MAP: REMAP_LIMIT: 0x18f5fffff
MC MAP: TOLUD: 0xafa00000
MC MAP: BGSM: 0xad800000
MC MAP: BDSM: 0xada00000
MC MAP: TESGMB: 0xad000000
MC MAP: GGC: 0x209
TSEG->BGSM:
PCI: 00:00.0 resource base ad000000 size 800000 align 0 gran 0 limit 0 flags f0004200 index 4
BGSM->TOLUD:
PCI: 00:00.0 resource base ad800000 size 2200000 align 0 gran 0 limit 0 flags f0000200 index 5
Setting variable MTRR 0, base: 0MB, range: 2048MB, type WB
Setting variable MTRR 1, base: 2048MB, range: 512MB, type WB
Setting variable MTRR 2, base: 2560MB, range: 256MB, type WB
Adding hole at 2776MB-2816MB
Setting variable MTRR 3, base: 2776MB, range: 8MB, type UC
Setting variable MTRR 4, base: 2784MB, range: 32MB, type UC
Zero-sized MTRR range @0KB
Allocate an msr - basek = 00400000, sizek = 0023d800,
Setting variable MTRR 5, base: 4096MB, range: 2048MB, type WB
Setting variable MTRR 6, base: 6144MB, range: 256MB, type WB
Adding hole at 6390MB-6400MB
Setting variable MTRR 7, base: 6390MB, range: 2MB, type UC
MTRR translation from MB to addresses:
MTRR 0: 0x00000000 -> 0x80000000 WB
MTRR 1: 0x80000000 -> 0xa0000000 WB
MTRR 2: 0xa0000000 -> 0xb0000000 WB
MTRR 3: 0xad800000 -> 0xae000000 UC
MTRR 4: 0xae000000 -> 0xb0000000 UC
I'm not a fan of marking physical address space that is actually PCI
space as UC with MTRRs, but it is technically correct.
Lastly, drop a comment describing AP startup flow through coreboot.
Change-Id: Ic63c0377b9c20102fcd3f190052fb32bc5f89182
Signed-off-by: Aaron Durbin <adurbin(a)chromium.org>
---
src/northbridge/intel/haswell/northbridge.c | 26 ++++++++++++++++++++++++--
1 file changed, 24 insertions(+), 2 deletions(-)
diff --git a/src/northbridge/intel/haswell/northbridge.c b/src/northbridge/intel/haswell/northbridge.c
index d6869c1..55c9c6b 100644
--- a/src/northbridge/intel/haswell/northbridge.c
+++ b/src/northbridge/intel/haswell/northbridge.c
@@ -340,7 +340,8 @@ static void mc_add_dram_resources(device_t dev)
* cacheable and reserved
* - SMM_DEFAULT_BASE + SMM_DEFAULT_SIZE -> 0xa0000 : cacheable
* - 0xc0000 -> TSEG : cacheable
- * - TESG -> TOLUD: not cacheable with standard MTRRs and reserved
+ * - TSEG -> BGSM: cacheable with standard MTRRs and reserved
+ * - BGSM -> TOLUD: not cacheable with standard MTRRs and reserved
* - 4GiB -> TOUUD: cacheable
*
* The default SMRAM space is reserved so that the range doesn't
@@ -354,6 +355,10 @@ static void mc_add_dram_resources(device_t dev)
* is not omitted the mtrr code will setup the area as cacheable
* causing VGA access to not work.
*
+ * The TSEG region is mapped as cacheable so that one can perform
+ * SMRAM relocation faster. Once the SMRR is enabled the SMRR takes
+ * precedence over the existing MTRRs covering this region.
+ *
* It should be noted that cacheable entry types need to be added in
* order. The reason is that the current MTRR code assumes this and
* falls over itself if it isn't.
@@ -386,9 +391,17 @@ static void mc_add_dram_resources(device_t dev)
size_k = (unsigned long)(mc_values[TSEG_REG] >> 10) - base_k;
ram_resource(dev, index++, base_k, size_k);
- /* TSEG -> TOLUD */
+ /* TSEG -> BGSM */
resource = new_resource(dev, index++);
resource->base = mc_values[TSEG_REG];
+ resource->size = mc_values[BGSM_REG] - resource->base;
+ resource->flags = IORESOURCE_MEM | IORESOURCE_FIXED |
+ IORESOURCE_STORED | IORESOURCE_RESERVE |
+ IORESOURCE_ASSIGNED | IORESOURCE_CACHEABLE;
+
+ /* BGSM -> TOLUD */
+ resource = new_resource(dev, index++);
+ resource->base = mc_values[BGSM_REG];
resource->size = mc_values[TOLUD_REG] - resource->base;
resource->flags = IORESOURCE_MEM | IORESOURCE_FIXED |
IORESOURCE_STORED | IORESOURCE_RESERVE |
@@ -580,6 +593,15 @@ static const struct pci_driver mc_driver_hsw_ult __pci_driver = {
static void cpu_bus_init(device_t dev)
{
+ /*
+ * This calls into the generic initialize_cpus() which attempts to
+ * start APs on the APIC bus in the devicetree. No APs get started
+ * because there is only the BSP and placeholder (disabled) in the
+ * devicetree. initialize_cpus() also does SMM initialization by way
+ * of smm_init(). It will eventually call cpu_initialize(0) which calls
+ * dev_ops->init(). For Haswell the dev_ops->init() starts up the APs
+ * by way of intel_cores_init().
+ */
initialize_cpus(dev->link_list);
}
Stefan Reinauer (stefan.reinauer(a)coreboot.org) just uploaded a new patch set to gerrit, which you can find at http://review.coreboot.org/2689
-gerrit
commit 4920f6b3db5b0182fe7fa324b546249239cc0483
Author: Aaron Durbin <adurbin(a)chromium.org>
Date: Fri Dec 21 22:18:58 2012 -0600
haswell: Fix BDSM and BGSM indices in memory map
This wasn't previously spotted because the printk's were correct.
However, if one needed to get the value of the BDSM or BGSM register,
the returned value would reflect the other register's value.
Change-Id: Ieec7360a74a65292773b61e14da39fc7d8bfad46
Signed-off-by: Aaron Durbin <adurbin(a)chromium.org>
---
src/northbridge/intel/haswell/northbridge.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/src/northbridge/intel/haswell/northbridge.c b/src/northbridge/intel/haswell/northbridge.c
index 7059e2c..d6869c1 100644
--- a/src/northbridge/intel/haswell/northbridge.c
+++ b/src/northbridge/intel/haswell/northbridge.c
@@ -298,8 +298,8 @@ static struct map_entry memory_map[NUM_MAP_ENTRIES] = {
[REMAP_BASE_REG] = MAP_ENTRY_BASE_64(REMAPBASE, "REMAP_BASE"),
[REMAP_LIMIT_REG] = MAP_ENTRY_LIMIT_64(REMAPLIMIT, "REMAP_LIMIT"),
[TOLUD_REG] = MAP_ENTRY_BASE_32(TOLUD, "TOLUD"),
- [BGSM_REG] = MAP_ENTRY_BASE_32(BDSM, "BDSM"),
- [BDSM_REG] = MAP_ENTRY_BASE_32(BGSM, "BGSM"),
+ [BDSM_REG] = MAP_ENTRY_BASE_32(BDSM, "BDSM"),
+ [BGSM_REG] = MAP_ENTRY_BASE_32(BGSM, "BGSM"),
[TSEG_REG] = MAP_ENTRY_BASE_32(TSEG, "TESGMB"),
};