Kyösti Mälkki (kyosti.malkki(a)gmail.com) just uploaded a new patch set to gerrit, which you can find at http://review.coreboot.org/7206
-gerrit
commit 9872888fd6cf0e8a150ffe81dabfca80fa755b44
Author: Kyösti Mälkki <kyosti.malkki(a)gmail.com>
Date: Mon Oct 27 08:01:55 2014 +0200
usbdebug: Fix migration to ramstage
On entry to ramstage, CBMEM is searched for a copy of an already initialized
EHCI debug dongle state. If a copy is found, it contains the state from before
CAR migration, so the USB protocol data toggle can be out of sync. The toggle
is an even/odd kind of parity check, so roughly every other build would
show the problem as an invalid first line: 'ug found in CBMEM.'
After CAR migration, redirect state changes to the correct CBMEM table.
Change-Id: I7c54e76ce29af5c8ee5e9ce6fd3dc6bdf700dcf1
Signed-off-by: Kyösti Mälkki <kyosti.malkki(a)gmail.com>
---
src/drivers/usb/ehci_debug.c | 9 +++++++--
1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/src/drivers/usb/ehci_debug.c b/src/drivers/usb/ehci_debug.c
index c60fbaa..02104fa 100644
--- a/src/drivers/usb/ehci_debug.c
+++ b/src/drivers/usb/ehci_debug.c
@@ -95,11 +95,15 @@ static int dbgp_enabled(void);
#define DBGP_MICROFRAME_RETRIES 10
#define DBGP_MAX_PACKET 8
-static struct ehci_debug_info glob_dbg_info CAR_GLOBAL;
+static struct ehci_debug_info glob_dbg_info_ CAR_GLOBAL;
+static struct ehci_debug_info * glob_dbg_info CAR_GLOBAL;
static inline struct ehci_debug_info *dbgp_ehci_info(void)
{
- return car_get_var_ptr(&glob_dbg_info);
+ if (car_get_var(glob_dbg_info) == NULL)
+ car_set_var(glob_dbg_info, &glob_dbg_info_);
+
+ return car_get_var(glob_dbg_info);
}
static int dbgp_wait_until_complete(struct ehci_dbg_port *ehci_debug)
@@ -905,6 +909,7 @@ static void migrate_ehci_debug(void)
return;
memcpy(dbg_info_cbmem, dbg_info, sizeof(*dbg_info));
+ car_set_var(glob_dbg_info, dbg_info_cbmem);
}
CAR_MIGRATE(migrate_ehci_debug);
#endif
Kyösti Mälkki (kyosti.malkki(a)gmail.com) just uploaded a new patch set to gerrit, which you can find at http://review.coreboot.org/3868
commit 0d64603ff228fecba83f12f3c93e1a21a731ec79
Author: Kyösti Mälkki <kyosti.malkki(a)gmail.com>
Date: Thu Jun 6 10:33:39 2013 +0300
usbdebug: Some fix for dongle compatibility
Not sure what this is about.
Required for BeagleBone (not Black) with a hub in the middle; also, the
old FX2 senses an extra reset if we do this.
Change-Id: I86878f8f570911ed1ed3ec844c232ac91e934072
Signed-off-by: Kyösti Mälkki <kyosti.malkki(a)gmail.com>
---
src/drivers/usb/ehci_debug.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/src/drivers/usb/ehci_debug.c b/src/drivers/usb/ehci_debug.c
index 800dc76..a71f1fe 100644
--- a/src/drivers/usb/ehci_debug.c
+++ b/src/drivers/usb/ehci_debug.c
@@ -558,10 +558,12 @@ try_next_port:
}
dprintk(BIOS_INFO, "EHCI debug port enabled.\n");
+#if 0
/* Completely transfer the debug device to the debug controller */
portsc = read32((unsigned long)&ehci_regs->port_status[debug_port - 1]);
portsc &= ~PORT_PE;
write32((unsigned long)&ehci_regs->port_status[debug_port - 1], portsc);
+#endif
dbgp_mdelay(100);
Marc Jones (marc.jones(a)se-eng.com) just uploaded a new patch set to gerrit, which you can find at http://review.coreboot.org/6958
commit 5452665ce02a79f4b9ec6dce85754b7b3f450380
Author: Gabe Black <gabeblack(a)google.com>
Date: Fri Feb 21 01:01:06 2014 -0800
cbfstool: If compression fails, warn and use the uncompressed data.
The LZMA compression algorithm, currently the only one available, will fail
if you ask it to write more data to the output than you've given it space for.
The code that calls into LZMA allocates an output buffer the same size as the
input, so if compression increases the size of the output the call will fail.
The caller(s) were written to assume that the call succeeded and check the
returned length to see if the size would have increased, but that will never
happen with LZMA.
Rather than rework the LZMA library to dynamically resize the output
buffer, or guess the maximum size the data could expand to, this
change makes the caller simply print a warning and disable compression if the
call fails for any reason.
This may lead to images that are larger than necessary if compression fails
for some other reason and the user doesn't notice, but since compression
errors were ignored entirely until very recently that will hopefully not be
a problem in practice, and we should be guaranteed to at least produce a
correct image.
Original-Change-Id: I5f59529c2d48e9c4c2e011018b40ec336c4fcca8
Original-Signed-off-by: Gabe Black <gabeblack(a)google.com>
Original-Reviewed-on: https://chromium-review.googlesource.com/187365
Original-Reviewed-by: David Hendricks <dhendrix(a)chromium.org>
Original-Tested-by: Gabe Black <gabeblack(a)chromium.org>
Original-Commit-Queue: Gabe Black <gabeblack(a)chromium.org>
(cherry picked from commit b9f622a554d5fb9a9aff839c64e11acb27785f13)
Signed-off-by: Isaac Christensen <isaac.christensen(a)se-eng.com>
Change-Id: I5f59529c2d48e9c4c2e011018b40ec336c4fcca8
---
util/cbfstool/cbfs-mkpayload.c | 54 +++++++++++++++++++-----------------------
util/cbfstool/cbfs-mkstage.c | 11 ++++++---
2 files changed, 32 insertions(+), 33 deletions(-)
diff --git a/util/cbfstool/cbfs-mkpayload.c b/util/cbfstool/cbfs-mkpayload.c
index 38cc482..d5bcca0 100644
--- a/util/cbfstool/cbfs-mkpayload.c
+++ b/util/cbfstool/cbfs-mkpayload.c
@@ -206,26 +206,24 @@ int parse_elf_to_payload(const struct buffer *input,
segs[segments].type = PAYLOAD_SEGMENT_DATA;
segs[segments].load_addr = phdr[i].p_paddr;
segs[segments].mem_len = phdr[i].p_memsz;
- segs[segments].compression = algo;
segs[segments].offset = doffset;
+ /* If the compression failed or made the section larger,
+ use the original stuff */
+
int len;
if (compress((char *)&header[phdr[i].p_offset],
- phdr[i].p_filesz, output->data + doffset, &len)) {
- buffer_delete(output);
- ret = -1;
- goto out;
- }
- segs[segments].len = len;
-
- /* If the compressed section is larger, then use the
- original stuff */
-
- if ((unsigned int)len > phdr[i].p_filesz) {
+ phdr[i].p_filesz, output->data + doffset, &len) ||
+ (unsigned int)len > phdr[i].p_filesz) {
+ WARN("Compression failed or would make the data bigger "
+ "- disabled.\n");
segs[segments].compression = 0;
segs[segments].len = phdr[i].p_filesz;
memcpy(output->data + doffset,
&header[phdr[i].p_offset], phdr[i].p_filesz);
+ } else {
+ segs[segments].compression = algo;
+ segs[segments].len = len;
}
doffset += segs[segments].len;
@@ -275,15 +273,13 @@ int parse_flat_binary_to_payload(const struct buffer *input,
segs[0].mem_len = input->size;
segs[0].offset = doffset;
- if (compress(input->data, input->size, output->data + doffset, &len)) {
- buffer_delete(output);
- return -1;
- }
- segs[0].compression = algo;
- segs[0].len = len;
-
- if ((unsigned int)len >= input->size) {
- WARN("Compressing data would make it bigger - disabled.\n");
+ if (!compress(input->data, input->size, output->data + doffset, &len) &&
+ (unsigned int)len < input->size) {
+ segs[0].compression = algo;
+ segs[0].len = len;
+ } else {
+ WARN("Compression failed or would make the data bigger "
+ "- disabled.\n");
segs[0].compression = 0;
segs[0].len = input->size;
memcpy(output->data + doffset, input->data, input->size);
@@ -404,15 +400,13 @@ int parse_fv_to_payload(const struct buffer *input,
segs[0].mem_len = input->size;
segs[0].offset = doffset;
- if (compress(input->data, input->size, output->data + doffset, &len)) {
- buffer_delete(output);
- return -1;
- }
- segs[0].compression = algo;
- segs[0].len = len;
-
- if ((unsigned int)len >= input->size) {
- WARN("Compressing data would make it bigger - disabled.\n");
+ if (!compress(input->data, input->size, output->data + doffset, &len) &&
+ (unsigned int)len < input->size) {
+ segs[0].compression = algo;
+ segs[0].len = len;
+ } else {
+ WARN("Compression failed or would make the data bigger "
+ "- disabled.\n");
segs[0].compression = 0;
segs[0].len = input->size;
memcpy(output->data + doffset, input->data, input->size);
diff --git a/util/cbfstool/cbfs-mkstage.c b/util/cbfstool/cbfs-mkstage.c
index 8c77ee5..4a2f4d8 100644
--- a/util/cbfstool/cbfs-mkstage.c
+++ b/util/cbfstool/cbfs-mkstage.c
@@ -155,12 +155,17 @@ int parse_elf_to_stage(const struct buffer *input, struct buffer *output,
* to fill out the header. This seems backward but it works because
* - the output header is a known size (not always true in many xdr's)
* - we do need to know the compressed output size first
+ * If compression fails or makes the data bigger, we'll warn about it
+ * and use the original data.
*/
if (compress(buffer, data_end - data_start,
(output->data + sizeof(struct cbfs_stage)),
- &outlen) < 0) {
- free(buffer);
- return -1;
+ &outlen) < 0 || outlen > data_end - data_start) {
+ WARN("Compression failed or would make the data bigger "
+ "- disabled.\n");
+ memcpy(output->data + sizeof(struct cbfs_stage),
+ buffer, data_end - data_start);
+ algo = CBFS_COMPRESS_NONE;
}
free(buffer);
Marc Jones (marc.jones(a)se-eng.com) just uploaded a new patch set to gerrit, which you can find at http://review.coreboot.org/6958
commit bc265063888263bb1b8fcae0e0669e8e3cd59782
Author: Gabe Black <gabeblack(a)google.com>
Date: Fri Feb 21 01:01:06 2014 -0800
cbfstool: If compression fails, warn and use the uncompressed data.
The LZMA compression algorithm, currently the only one available, will fail
if you ask it to write more data to the output than you've given it space for.
The code that calls into LZMA allocates an output buffer the same size as the
input, so if compression increases the size of the output the call will fail.
The caller(s) were written to assume that the call succeeded and check the
returned length to see if the size would have increased, but that will never
happen with LZMA.
Rather than rework the LZMA library to dynamically resize the output
buffer, or guess the maximum size the data could expand to, this
change makes the caller simply print a warning and disable compression if the
call fails for any reason.
This may lead to images that are larger than necessary if compression fails
for some other reason and the user doesn't notice, but since compression
errors were ignored entirely until very recently that will hopefully not be
a problem in practice, and we should be guaranteed to at least produce a
correct image.
Original-Change-Id: I5f59529c2d48e9c4c2e011018b40ec336c4fcca8
Original-Signed-off-by: Gabe Black <gabeblack(a)google.com>
Original-Reviewed-on: https://chromium-review.googlesource.com/187365
Original-Reviewed-by: David Hendricks <dhendrix(a)chromium.org>
Original-Tested-by: Gabe Black <gabeblack(a)chromium.org>
Original-Commit-Queue: Gabe Black <gabeblack(a)chromium.org>
(cherry picked from commit b9f622a554d5fb9a9aff839c64e11acb27785f13)
Signed-off-by: Isaac Christensen <isaac.christensen(a)se-eng.com>
Change-Id: I5f59529c2d48e9c4c2e011018b40ec336c4fcca8
---
util/cbfstool/cbfs-mkpayload.c | 46 ++++++++++++++++++------------------------
util/cbfstool/cbfs-mkstage.c | 11 +++++++---
2 files changed, 28 insertions(+), 29 deletions(-)
diff --git a/util/cbfstool/cbfs-mkpayload.c b/util/cbfstool/cbfs-mkpayload.c
index 38cc482..191e4e7 100644
--- a/util/cbfstool/cbfs-mkpayload.c
+++ b/util/cbfstool/cbfs-mkpayload.c
@@ -206,26 +206,24 @@ int parse_elf_to_payload(const struct buffer *input,
segs[segments].type = PAYLOAD_SEGMENT_DATA;
segs[segments].load_addr = phdr[i].p_paddr;
segs[segments].mem_len = phdr[i].p_memsz;
- segs[segments].compression = algo;
segs[segments].offset = doffset;
+ /* If the compression failed or made the section larger,
+ use the original stuff */
+
int len;
if (compress((char *)&header[phdr[i].p_offset],
- phdr[i].p_filesz, output->data + doffset, &len)) {
- buffer_delete(output);
- ret = -1;
- goto out;
- }
- segs[segments].len = len;
-
- /* If the compressed section is larger, then use the
- original stuff */
-
- if ((unsigned int)len > phdr[i].p_filesz) {
+ phdr[i].p_filesz, output->data + doffset, &len) ||
+ (unsigned int)len > phdr[i].p_filesz) {
+ WARN("Compression failed or would make the data bigger "
+ "- disabled.\n");
segs[segments].compression = 0;
segs[segments].len = phdr[i].p_filesz;
memcpy(output->data + doffset,
&header[phdr[i].p_offset], phdr[i].p_filesz);
+ } else {
+ segs[segments].compression = algo;
+ segs[segments].len = len;
}
doffset += segs[segments].len;
@@ -275,15 +273,13 @@ int parse_flat_binary_to_payload(const struct buffer *input,
segs[0].mem_len = input->size;
segs[0].offset = doffset;
- if (compress(input->data, input->size, output->data + doffset, &len)) {
- buffer_delete(output);
- return -1;
- }
+ if (!compress(input->data, input->size, output->data + doffset, &len) &&
+ (unsigned int)len < input->size) {
segs[0].compression = algo;
segs[0].len = len;
-
- if ((unsigned int)len >= input->size) {
- WARN("Compressing data would make it bigger - disabled.\n");
+ } else {
+ WARN("Compression failed or would make the data bigger "
+ "- disabled.\n");
segs[0].compression = 0;
segs[0].len = input->size;
memcpy(output->data + doffset, input->data, input->size);
@@ -404,15 +400,13 @@ int parse_fv_to_payload(const struct buffer *input,
segs[0].mem_len = input->size;
segs[0].offset = doffset;
- if (compress(input->data, input->size, output->data + doffset, &len)) {
- buffer_delete(output);
- return -1;
- }
+ if (!compress(input->data, input->size, output->data + doffset, &len) &&
+ (unsigned int)len < input->size) {
segs[0].compression = algo;
segs[0].len = len;
-
- if ((unsigned int)len >= input->size) {
- WARN("Compressing data would make it bigger - disabled.\n");
+ } else {
+ WARN("Compression failed or would make the data bigger "
+ "- disabled.\n");
segs[0].compression = 0;
segs[0].len = input->size;
memcpy(output->data + doffset, input->data, input->size);
diff --git a/util/cbfstool/cbfs-mkstage.c b/util/cbfstool/cbfs-mkstage.c
index 8c77ee5..4a2f4d8 100644
--- a/util/cbfstool/cbfs-mkstage.c
+++ b/util/cbfstool/cbfs-mkstage.c
@@ -155,12 +155,17 @@ int parse_elf_to_stage(const struct buffer *input, struct buffer *output,
* to fill out the header. This seems backward but it works because
* - the output header is a known size (not always true in many xdr's)
* - we do need to know the compressed output size first
+ * If compression fails or makes the data bigger, we'll warn about it
+ * and use the original data.
*/
if (compress(buffer, data_end - data_start,
(output->data + sizeof(struct cbfs_stage)),
- &outlen) < 0) {
- free(buffer);
- return -1;
+ &outlen) < 0 || outlen > data_end - data_start) {
+ WARN("Compression failed or would make the data bigger "
+ "- disabled.\n");
+ memcpy(output->data + sizeof(struct cbfs_stage),
+ buffer, data_end - data_start);
+ algo = CBFS_COMPRESS_NONE;
}
free(buffer);
Marc Jones (marc.jones(a)se-eng.com) just uploaded a new patch set to gerrit, which you can find at http://review.coreboot.org/7205
commit e08e0287b1e47ca70ac1a6e3bcbcd2cd202e3e33
Author: Gabe Black <gabeblack(a)google.com>
Date: Fri Feb 21 01:01:06 2014 -0800
cbfstool: If compression fails, warn and use the uncompressed data.
The LZMA compression algorithm, currently the only one available, will fail
if you ask it to write more data to the output than you've given it space for.
The code that calls into LZMA allocates an output buffer the same size as the
input, so if compression increases the size of the output the call will fail.
The caller(s) were written to assume that the call succeeded and check the
returned length to see if the size would have increased, but that will never
happen with LZMA.
Rather than rework the LZMA library to dynamically resize the output
buffer, or guess the maximum size the data could expand to, this
change makes the caller simply print a warning and disable compression if the
call fails for any reason.
This may lead to images that are larger than necessary if compression fails
for some other reason and the user doesn't notice, but since compression
errors were ignored entirely until very recently that will hopefully not be
a problem in practice, and we should be guaranteed to at least produce a
correct image.
Original-Change-Id: I5f59529c2d48e9c4c2e011018b40ec336c4fcca8
Original-Signed-off-by: Gabe Black <gabeblack(a)google.com>
Original-Reviewed-on: https://chromium-review.googlesource.com/187365
Original-Reviewed-by: David Hendricks <dhendrix(a)chromium.org>
Original-Tested-by: Gabe Black <gabeblack(a)chromium.org>
Original-Commit-Queue: Gabe Black <gabeblack(a)chromium.org>
(cherry picked from commit b9f622a554d5fb9a9aff839c64e11acb27785f13)
Signed-off-by: Isaac Christensen <isaac.christensen(a)se-eng.com>
Change-Id: I97897ce49ee970eba095b64b3464236654b2fd42
---
util/cbfstool/cbfs-mkpayload.c | 46 ++++++++++++++++++------------------------
util/cbfstool/cbfs-mkstage.c | 11 +++++++---
2 files changed, 28 insertions(+), 29 deletions(-)
diff --git a/util/cbfstool/cbfs-mkpayload.c b/util/cbfstool/cbfs-mkpayload.c
index 38cc482..191e4e7 100644
--- a/util/cbfstool/cbfs-mkpayload.c
+++ b/util/cbfstool/cbfs-mkpayload.c
@@ -206,26 +206,24 @@ int parse_elf_to_payload(const struct buffer *input,
segs[segments].type = PAYLOAD_SEGMENT_DATA;
segs[segments].load_addr = phdr[i].p_paddr;
segs[segments].mem_len = phdr[i].p_memsz;
- segs[segments].compression = algo;
segs[segments].offset = doffset;
+ /* If the compression failed or made the section larger,
+ use the original stuff */
+
int len;
if (compress((char *)&header[phdr[i].p_offset],
- phdr[i].p_filesz, output->data + doffset, &len)) {
- buffer_delete(output);
- ret = -1;
- goto out;
- }
- segs[segments].len = len;
-
- /* If the compressed section is larger, then use the
- original stuff */
-
- if ((unsigned int)len > phdr[i].p_filesz) {
+ phdr[i].p_filesz, output->data + doffset, &len) ||
+ (unsigned int)len > phdr[i].p_filesz) {
+ WARN("Compression failed or would make the data bigger "
+ "- disabled.\n");
segs[segments].compression = 0;
segs[segments].len = phdr[i].p_filesz;
memcpy(output->data + doffset,
&header[phdr[i].p_offset], phdr[i].p_filesz);
+ } else {
+ segs[segments].compression = algo;
+ segs[segments].len = len;
}
doffset += segs[segments].len;
@@ -275,15 +273,13 @@ int parse_flat_binary_to_payload(const struct buffer *input,
segs[0].mem_len = input->size;
segs[0].offset = doffset;
- if (compress(input->data, input->size, output->data + doffset, &len)) {
- buffer_delete(output);
- return -1;
- }
+ if (!compress(input->data, input->size, output->data + doffset, &len) &&
+ (unsigned int)len < input->size) {
segs[0].compression = algo;
segs[0].len = len;
-
- if ((unsigned int)len >= input->size) {
- WARN("Compressing data would make it bigger - disabled.\n");
+ } else {
+ WARN("Compression failed or would make the data bigger "
+ "- disabled.\n");
segs[0].compression = 0;
segs[0].len = input->size;
memcpy(output->data + doffset, input->data, input->size);
@@ -404,15 +400,13 @@ int parse_fv_to_payload(const struct buffer *input,
segs[0].mem_len = input->size;
segs[0].offset = doffset;
- if (compress(input->data, input->size, output->data + doffset, &len)) {
- buffer_delete(output);
- return -1;
- }
+ if (!compress(input->data, input->size, output->data + doffset, &len) &&
+ (unsigned int)len < input->size) {
segs[0].compression = algo;
segs[0].len = len;
-
- if ((unsigned int)len >= input->size) {
- WARN("Compressing data would make it bigger - disabled.\n");
+ } else {
+ WARN("Compression failed or would make the data bigger "
+ "- disabled.\n");
segs[0].compression = 0;
segs[0].len = input->size;
memcpy(output->data + doffset, input->data, input->size);
diff --git a/util/cbfstool/cbfs-mkstage.c b/util/cbfstool/cbfs-mkstage.c
index 8c77ee5..4a2f4d8 100644
--- a/util/cbfstool/cbfs-mkstage.c
+++ b/util/cbfstool/cbfs-mkstage.c
@@ -155,12 +155,17 @@ int parse_elf_to_stage(const struct buffer *input, struct buffer *output,
* to fill out the header. This seems backward but it works because
* - the output header is a known size (not always true in many xdr's)
* - we do need to know the compressed output size first
+ * If compression fails or makes the data bigger, we'll warn about it
+ * and use the original data.
*/
if (compress(buffer, data_end - data_start,
(output->data + sizeof(struct cbfs_stage)),
- &outlen) < 0) {
- free(buffer);
- return -1;
+ &outlen) < 0 || outlen > data_end - data_start) {
+ WARN("Compression failed or would make the data bigger "
+ "- disabled.\n");
+ memcpy(output->data + sizeof(struct cbfs_stage),
+ buffer, data_end - data_start);
+ algo = CBFS_COMPRESS_NONE;
}
free(buffer);
Marc Jones (marc.jones(a)se-eng.com) just uploaded a new patch set to gerrit, which you can find at http://review.coreboot.org/7204
commit e2f21af5446571fd73ea12b7ac8e6e39fdac8e5c
Author: Aaron Durbin <adurbin(a)chromium.org>
Date: Thu Mar 20 11:08:02 2014 -0500
rmodtool: add support for ARM
Add support for creating ARM rmodules. There are 3 expected
relocations for an ARM rmodule:
- R_ARM_ABS32
- R_ARM_THM_PC22
- R_ARM_THM_JUMP24
R_ARM_ABS32 is the only type that needs to be emitted for relocation,
as the other two are relative relocations.
BUG=chrome-os-partner:27094
BRANCH=None
TEST=Built vbootstub for ARM device.
Original-Change-Id: I0c22d4abca970e82ccd60b33fed700b96e3e52fb
Original-Signed-off-by: Aaron Durbin <adurbin(a)chromuim.org>
Original-Reviewed-on: https://chromium-review.googlesource.com/190922
Original-Reviewed-by: Gabe Black <gabeblack(a)chromium.org>
(cherry picked from commit a642102ba7ace5c1829abe7732199eda6646950a)
Signed-off-by: Marc Jones <marc.jones(a)se-eng.com>
Change-Id: Ib3b3c90ebb672d8d6a537df896b97dc82c6186cc
---
util/cbfstool/elf.h | 1 +
util/cbfstool/rmodule.c | 26 ++++++++++++++++++++++++++
2 files changed, 27 insertions(+)
diff --git a/util/cbfstool/elf.h b/util/cbfstool/elf.h
index d07bb53..8b56a71 100644
--- a/util/cbfstool/elf.h
+++ b/util/cbfstool/elf.h
@@ -2222,6 +2222,7 @@ typedef Elf32_Addr Elf32_Conflict;
#define R_ARM_GOTPC 25 /* 32 bit PC relative offset to GOT */
#define R_ARM_GOT32 26 /* 32 bit GOT entry */
#define R_ARM_PLT32 27 /* 32 bit PLT address */
+#define R_ARM_THM_JUMP24 30
#define R_ARM_ALU_PCREL_7_0 32
#define R_ARM_ALU_PCREL_15_8 33
#define R_ARM_ALU_PCREL_23_15 34
diff --git a/util/cbfstool/rmodule.c b/util/cbfstool/rmodule.c
index 168d71a..966c364 100644
--- a/util/cbfstool/rmodule.c
+++ b/util/cbfstool/rmodule.c
@@ -82,12 +82,38 @@ static int should_emit_386(struct rmod_context *ctx, Elf64_Rela *rel)
return (type == R_386_32);
}
+static int valid_reloc_arm(struct rmod_context *ctx, Elf64_Rela *rel)
+{
+ int type;
+
+ type = ELF64_R_TYPE(rel->r_info);
+
+ /* Only these 3 relocations are expected to be found. */
+ return (type == R_ARM_ABS32 || type == R_ARM_THM_PC22 ||
+ type == R_ARM_THM_JUMP24);
+}
+
+static int should_emit_arm(struct rmod_context *ctx, Elf64_Rela *rel)
+{
+ int type;
+
+ type = ELF64_R_TYPE(rel->r_info);
+
+ /* R_ARM_ABS32 relocations are absolute. Must emit these. */
+ return (type == R_ARM_ABS32);
+}
+
static struct arch_ops reloc_ops[] = {
{
.arch = EM_386,
.valid_type = valid_reloc_386,
.should_emit = should_emit_386,
},
+ {
+ .arch = EM_ARM,
+ .valid_type = valid_reloc_arm,
+ .should_emit = should_emit_arm,
+ },
};
/*