On Mon, Dec 19, 2016 at 9:55 AM, Chauhan, Himanshu
<hschauhan(a)nulltrace.org> wrote:
> On Mon, Dec 19, 2016 at 9:09 PM, Aaron Durbin <adurbin(a)google.com> wrote:
>> On Sun, Dec 18, 2016 at 11:04 PM, Chauhan, Himanshu
>> <hschauhan(a)nulltrace.org> wrote:
>>> On Mon, Dec 19, 2016 at 12:40 AM, Aaron Durbin <adurbin(a)google.com> wrote:
>>>> On Sun, Dec 18, 2016 at 9:37 AM, Chauhan, Himanshu
>>>> <hschauhan(a)nulltrace.org> wrote:
>>>>> Hi Aaron,
>>>>>
>>>>> I figured out the crash. It wasn't because of a wrong load of the ROM image
>>>>> (thanks to the nifty post_code which I could trap on IO). I see that
>>>>> the page fault I am getting is in the following code:
>>>>> (gdb) list *(((0xfff81e41 - 0xfff80000)-200)+0x2000000)
>>>>
>>>> I'm curious about the 200 and 16MiB offset being applied.
>>>
>>> 0x2000000 is the new address where romstage is linked. Earlier
>>> (at least in 2014) the linked address used to be 0xfff80000. This is
>>> the same address (guest physical) where I map the ROM code. In the
>>> above calculation I am taking the offset from 0xfff80000 and adding it
>>> to the link address of romstage (0x2000000). The 0x200 is the difference
>>> I see to map the addresses correctly. This calculation seems fine to
>>> me because with it I am able to pinpoint all the earlier faults and
>>> the post_code trap rIP.
>>>
>>
>> If you provide 'cbfstool print -k' output, I could most likely provide
>> the exact offset mapping. Alternatively you could extract the
>> romstage.elf from the image using 'cbfstool extract -m x86', but it
>> won't have debug info. But it'd provide the information to compare
>> against the pre-relocated image for the correct mapping.
>>
> How exactly do I run it? It says unknown option -k (cbfstool in the build directory).
./coreboot-builds/sharedutils/cbfstool/cbfstool
coreboot-builds/GOOGLE_REEF/coreboot.rom print -k
That's an example after me building reef with abuild. How old is your
coreboot checkout?
>
>>>>
>>>>> 0x2001d79 is in imd_recover (src/lib/imd.c:139).
>>>>> 134
>>>>> 135 static void imdr_init(struct imdr *ir, void *upper_limit)
>>>>> 136 {
>>>>> 137 uintptr_t limit = (uintptr_t)upper_limit;
>>>>> 138 /* Upper limit is aligned down to 4KiB */
>>>>> 139 ir->limit = ALIGN_DOWN(limit, LIMIT_ALIGN);
>>>>> 140 ir->r = NULL;
>>>>> 141 }
>>>>> 142
>>>>> 143 static int imdr_create_empty(struct imdr *imdr, size_t root_size,
>>>>>
>>>>> I see that this function is being called multiple times (I added some
>>>>> more post_code calls and see them being trapped). I get a series of page
>>>>> faults, all of which I am able to honour except the last.
>>>>
>>>> I don't see how imdr_init would be faulting. That's just assigning
>>>> fields of a struct sitting on the stack. What's your stack pointer
>>>> value at the time of the faults?
>>>
>>> "ir" should be on the stack or on top of the RAM. Right now it looks like
>>> it's on top of the RAM. That area is not mapped initially. On a page
>>> fault, I map a 4K page. For reference, the following is the
>>> register dump of coreboot. RSP is 0x9fe54.
>>>
>>
>> The values should not be striding. That object is always on the stack.
>> Where the stack is located could be in low or high memory. I still
>> need to know what platform you are targeting for the image to provide
>> details. However, it would not be striding.
>
> I am building this for QEMU i440fx.
OK. What is your cmos emulation returning at addresses 0x34, 0x35,
0x5d, 0x5c and 0x5b?
I also don't understand why we're adding 16MiB to
qemu_get_memory_size() unconditionally.
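For context, the code in question looks roughly like this (paraphrased from memory, not a verbatim copy of the coreboot source); presumably the 16MiB is added because CMOS 0x34/0x35 only describe the RAM above 16MiB:

```c
/* Paraphrase of the qemu-i440fx memory-size probe (illustrative, not a
 * verbatim copy). cmos_read() is coreboot's <pc80/mc146818rtc.h> helper. */
static unsigned long qemu_get_memory_size(void)
{
	unsigned long tomk;

	/* CMOS 0x34/0x35: amount of RAM above 16MiB, in 64KiB units. */
	tomk = ((unsigned long)cmos_read(0x35) << 8) | cmos_read(0x34);
	tomk *= 64;		/* convert to KiB */
	tomk += 16 * 1024;	/* add back the first 16MiB the register pair cannot describe */
	return tomk;		/* top of memory, in KiB */
}
```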
>
>>
>>> GUEST guest0/vcpu0 dump state:
>>>
>>> RAX: 0x9fe80 RBX: 0xfffff8 RCX: 0x1b RDX: 0x53a11439
>>> R08: 0x0 R09: 0x0 R10: 0x0 R11: 0x0
>>> R12: 0x0 R13: 0x0 R14: 0x0 R15: 0x0
>>> RSP: 0x9fe54 RBP: 0xa0000 RDI: 0xfff801e4 RSI: 0x9fe80
>>> RIP: 0xfff81e41
>>>
>>> CR0: 0xe0000011 CR2: 0x0 CR3: 0xa23000 CR4: 0x0
>>> CS : Sel: 0x00000008 Limit: 0xffffffff Base: 0x00000000 (G: 1 DB:
>>> 1 L: 0 AVL: 0 P: 1 DPL: 0 S: 1 Type: 11)
>>> DS : Sel: 0x00000010 Limit: 0xffffffff Base: 0x00000000 (G: 1 DB:
>>> 1 L: 0 AVL: 0 P: 1 DPL: 0 S: 1 Type: 3)
>>> ES : Sel: 0x00000010 Limit: 0xffffffff Base: 0x00000000 (G: 1 DB:
>>> 1 L: 0 AVL: 0 P: 1 DPL: 0 S: 1 Type: 3)
>>> SS : Sel: 0x00000010 Limit: 0xffffffff Base: 0x00000000 (G: 1 DB:
>>> 1 L: 0 AVL: 0 P: 1 DPL: 0 S: 1 Type: 3)
>>> FS : Sel: 0x00000010 Limit: 0xffffffff Base: 0x00000000 (G: 1 DB:
>>> 1 L: 0 AVL: 0 P: 1 DPL: 0 S: 1 Type: 3)
>>> GS : Sel: 0x00000010 Limit: 0xffffffff Base: 0x00000000 (G: 1 DB:
>>> 1 L: 0 AVL: 0 P: 1 DPL: 0 S: 1 Type: 3)
>>> GDT : Sel: 0x00000000 Limit: 0x0000001f Base: 0xfff80200 (G: 0 DB:
>>> 0 L: 0 AVL: 0 P: 0 DPL: 0 S: 0 Type: 0)
>>> LDT : Sel: 0x00000000 Limit: 0x0000ffff Base: 0x00000000 (G: 0 DB:
>>> 0 L: 0 AVL: 0 P: 0 DPL: 0 S: 0 Type: 0)
>>> IDT : Sel: 0x00000000 Limit: 0x00000000 Base: 0x00000000 (G: 0 DB:
>>> 0 L: 0 AVL: 0 P: 0 DPL: 0 S: 0 Type: 0)
>>> TR : Sel: 0x00000000 Limit: 0x0000ffff Base: 0x00000000 (G: 1 DB:
>>> 0 L: 1 AVL: 1 P: 0 DPL: 0 S: 0 Type: 0)
>>> RFLAGS: 0xa [ ]
>>>
>>>
>>>>>
>>>>> (__handle_vm_exception:543) Guest fault: 0x7f7fffc (rIP: 00000000FFF81E41)
>>>>> (__handle_vm_exception:543) Guest fault: 0x7f7effc (rIP: 00000000FFF81E41)
>>>>> (__handle_vm_exception:543) Guest fault: 0x7f7dffc (rIP: 00000000FFF81E41)
>>>>> (__handle_vm_exception:543) Guest fault: 0x7f7cffc (rIP: 00000000FFF81E41)
>>>>> (__handle_vm_exception:543) Guest fault: 0x7f7bffc (rIP: 00000000FFF81E41)
>>>>> (__handle_vm_exception:543) Guest fault: 0x7f7affc (rIP: 00000000FFF81E41)
>>>>> (__handle_vm_exception:543) Guest fault: 0x7f79ffc (rIP: 00000000FFF81E41)
>>>>> (__handle_vm_exception:543) Guest fault: 0x7f78ffc (rIP: 00000000FFF81E41)
>>>>> (__handle_vm_exception:543) Guest fault: 0x7f77ffc (rIP: 00000000FFF81E41)
>>>>> (__handle_vm_exception:543) Guest fault: 0x7f76ffc (rIP: 00000000FFF81E41)
>>>>> <snip>
>>>>
>>>> Are those non-rIP addresses the page fault address?
>>>
>>> Guest fault: 0x7f7fffc is the address which I think is pointing to
>>> "ir". If you look, all the faulting addresses are 4K apart, which is my
>>> default page size for mapping all the guest pages. It also means that
>>> each time "imdr_init" is called it faults on a different
>>> address, hence the same rIP.
>>
>> I just don't see how we're using that much stack. That doesn't seem
>> right at all.
>>
>
> Yes. Something is terribly wrong. I had this working back in 2014.
> Please take a look at this video that I created at that time.
> https://www.youtube.com/watch?v=jPAzzLQ0NgU
I see you do have a serial port. It'd be interesting to get full logs
when the thing is booting to see where it goes off the rails.
>
> I couldn't work on it for quite some time and in the meantime coreboot
> changed a lot. I have one question. In earlier coreboot images,
> romstage was linked at 0xfff80000 and now it's 0x2000000. Any reason?
It's just linked at CONFIG_ROMSTAGE_ADDR to avoid a double link step.
It's linked once and cbfstool relocates the image when placing it into
CBFS. It previously was linked at a specific address, then the XIP
address was calculated by performing a pseudo CBFS add operation, and
then romstage was re-linked and added to CBFS.
The offset for address translation is the entry point difference
between the two ELF files. You can extract the one in coreboot.rom to
get the entry point of the romstage being run.
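As a sketch (with placeholder numbers, not values taken from your image): extract the relocated romstage with something like 'cbfstool coreboot.rom extract -n fallback/romstage -f romstage-xip.elf -m x86', read the entry points of both ELFs, and apply the constant delta:

```c
/* Sketch of the rIP -> debug-address translation described above. The two
 * entry points below are placeholders; read the real ones from
 * build/romstage.elf and from the romstage extracted out of coreboot.rom. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uintptr_t linked_entry = 0x2000000;  /* entry of romstage.elf (has debug info) */
	uintptr_t xip_entry    = 0xfff80000; /* entry of the relocated (XIP) romstage - placeholder */
	uintptr_t guest_rip    = 0xfff81e41; /* faulting rIP reported by the hypervisor */

	/* cbfstool only shifts the whole stage, so one delta maps every address. */
	printf("look up 0x%lx in romstage.elf\n",
	       (unsigned long)(guest_rip - xip_entry + linked_entry));
	return 0;
}
```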
>
>>>
>>>>
>>>>>
>>>>> handle_guest_realmode_page_fault: offset: 0x3ffc fault: 0x1003ffc reg: 0x1000000
>>>>> handle_guest_realmode_page_fault: offset: 0x2ffc fault: 0x1002ffc reg: 0x1000000
>>>>> handle_guest_realmode_page_fault: offset: 0x1ffc fault: 0x1001ffc reg: 0x1000000
>>>>> handle_guest_realmode_page_fault: offset: 0xffc fault: 0x1000ffc reg: 0x1000000
>>>>
>>>> What is the above detailing? I'm not sure what the 'fault' value means.
>>>
>>> These are the same as the Guest fault lines above. You can disregard them.
>>>
>>>>
>>>>>
>>>>> (__handle_vm_exception:561) ERROR: No region mapped to guest physical: 0xfffffc
>>>>>
>>>>>
>>>>> I want to understand why imd_recover gets called multiple times,
>>>>> starting from the top of memory (128MB is what I have assigned to the
>>>>> guest) down to 16MB at the end (after which I can't honour the fault).
>>>>> There is something amiss in my understanding of the coreboot memory map.
>>>>>
>>>>> Could you please help?
>>>>
>>>> The imd library contains the implementation of cbmem. See
>>>> include/cbmem.h for more details, but how it works is that the
>>>> platform needs to supply the implementation of cbmem_top() which
>>>> defines the exclusive upper boundary to start growing entries downward
>>>> from. There is a large and small object size with large blocks being
>>>> 4KiB in size and small blocks being 32 bytes. I don't understand why
>>>> the faulting addresses are offset from 128MiB by 512KiB with a 4KiB
>>>> stride.
>>>>
>>>> What platform are you targeting for your coreboot build? Are you
>>>> restarting the instruction that faults? I'm really curious about the
>>>> current fault patterns. It looks like things are faulting around
>>>> accessing the imd_root_pointer root_offset field. Are these faults
>>>> reads or writes? However, that's assuming cbmem_top() is returning
>>>> 128MiB-512KiB. However, it doesn't explain the successive strides. Do
>>>> you have serial port emulation to get the console messages out?
>>>>
>>>> So in your platform code ensure 2 things are happening:
>>>>
>>>> 1. cbmem_top() returns the highest address in 'ram' of the guest once
>>>> it's online. 128MiB if that's your expectation. The value cbmem_top()
>>>> returns should never change from successive calls aside from NULL
>>>> being returned when ram is not yet available.
>>>> 2. cbmem_initialize_empty() is called one time once the 'ram' is
>>>> online for use in the non-S3 resume path and cbmem_initialize() in the
>>>> S3 resume path. If S3 isn't supported in your guest then just use
>>>> cbmem_initialize_empty().
>>>>
>>>
>>> I will look into it. I see that the RAM top is being provided by the CMOS
>>> emulator. I will look at cbmem_initialize_empty().
>>
>> If you could provide me the info on the platform you are targeting
>> coreboot builds with it'd be easier to analyze. Where is this 'CMOS
>> emulator' and why is it needed?
>
> Coreboot reads CMOS registers 0x34/0x35 to get the amount of memory. The CMOS
> emulator traps these accesses (just like qemu) and provides that information to
> coreboot.
>
cbmem_recovery(0) is effectively cbmem_initialize_empty(). That's
being called in src/mainboard/emulation/qemu-i440fx/romstage.c.
Your RSP value of 0x9fe54 aligns with
src/mainboard/emulation/qemu-i440fx/cache_as_ram.inc using 0xa0000 as
the initial stack.
So I don't think imd_recover() is your culprit. It feels like something
is changing the value of cbmem_top().
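For reference, cbmem_recovery() is only a thin wrapper; paraphrased from memory (not a verbatim copy of the source) it is roughly:

```c
/* Rough shape of coreboot's cbmem_recovery() (paraphrased, not verbatim):
 * is_wakeup == 0 simply creates a fresh cbmem area below cbmem_top(). */
int cbmem_recovery(int is_wakeup)
{
	int rv = 0;

	if (!is_wakeup)
		cbmem_initialize_empty();	/* non-S3 path: start with an empty area */
	else
		rv = cbmem_initialize();	/* S3 path: recover the existing area */

	return rv;
}
```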
>>
>>>
>>>>>
>>>>> Regards
>>>>> Himanshu
>>>>>
>>>>> On Wed, Dec 14, 2016 at 9:27 PM, Chauhan, Himanshu
>>>>> <hschauhan(a)nulltrace.org> wrote:
>>>>>> Hi Aaron,
>>>>>>
>>>>>> Yes, I am mapping the memory where coreboot.rom is loaded to upper 4GiB. I
>>>>>> create a fixed shadow page table entry for reset vector.
>>>>>>
>>>>>> Coreboot isn't linked at the RIP address that I shared. I think with
>>>>>> the increase in size of coreboot (from the previous tag I was using) the
>>>>>> load address (guest physical) has changed. I used to calculate the load
>>>>>> address manually. I will check this and get back.
>>>>>>
>>>>>> Thanks.
>>>>>>
>>>>>> On Wed, Dec 14, 2016 at 8:17 PM, Aaron Durbin <adurbin(a)google.com> wrote:
>>>>>>>
>>>>>>> On Wed, Dec 14, 2016 at 3:11 AM, Chauhan, Himanshu
>>>>>>> <hschauhan(a)nulltrace.org> wrote:
>>>>>>> > Hi,
>>>>>>> >
>>>>>>> > I am working on a hypervisor and am using coreboot + FILO as guest
>>>>>>> > BIOS.
>>>>>>> > While things were fine a while back, it has stopped working. I see that
>>>>>>> > my
>>>>>>> > hypervisor can't handle address 0xFFFFFC while coreboot's RIP is at
>>>>>>> > 0xfff81e41.
>>>>>>>
>>>>>>>
>>>>>>> How are you loading up coreboot.rom in the VM? Are you just memory
>>>>>>> mapping it at the top of 4GiB address space? If so, what does
>>>>>>> 'cbfstool coreboot.rom print' show?
>>>>>>>
>>>>>>> >
>>>>>>> > The exact register dump of the guest is as follows:
>>>>>>> >
>>>>>>> > [guest0/uart0] (__handle_vm_exception:558) ERROR: No region mapped to
>>>>>>> > guest
>>>>>>> > physical: 0xfffffc
>>>>>>> >
>>>>>>> > GUEST guest0/vcpu0 dump state:
>>>>>>> >
>>>>>>> > RAX: 0x9fe80 RBX: 0xfffff8 RCX: 0x1b RDX: 0x53a11439
>>>>>>> > R08: 0x0 R09: 0x0 R10: 0x0 R11: 0x0
>>>>>>> > R12: 0x0 R13: 0x0 R14: 0x0 R15: 0x0
>>>>>>> > RSP: 0x9fe54 RBP: 0xa0000 RDI: 0xfff801e4 RSI: 0x9fe80
>>>>>>> > RIP: 0xfff81e41
>>>>>>> >
>>>>>>> > CR0: 0xe0000011 CR2: 0x0 CR3: 0xa23000 CR4: 0x0
>>>>>>> > CS : Sel: 0x00000008 Limit: 0xffffffff Base: 0x00000000 (G: 1 DB: 1
>>>>>>> > L:
>>>>>>> > 0 AVL: 0 P: 1 DPL: 0 S: 1 Type: 11)
>>>>>>> > DS : Sel: 0x00000010 Limit: 0xffffffff Base: 0x00000000 (G: 1 DB: 1
>>>>>>> > L:
>>>>>>> > 0 AVL: 0 P: 1 DPL: 0 S: 1 Type: 3)
>>>>>>> > ES : Sel: 0x00000010 Limit: 0xffffffff Base: 0x00000000 (G: 1 DB: 1
>>>>>>> > L:
>>>>>>> > 0 AVL: 0 P: 1 DPL: 0 S: 1 Type: 3)
>>>>>>> > SS : Sel: 0x00000010 Limit: 0xffffffff Base: 0x00000000 (G: 1 DB: 1
>>>>>>> > L:
>>>>>>> > 0 AVL: 0 P: 1 DPL: 0 S: 1 Type: 3)
>>>>>>> > FS : Sel: 0x00000010 Limit: 0xffffffff Base: 0x00000000 (G: 1 DB: 1
>>>>>>> > L:
>>>>>>> > 0 AVL: 0 P: 1 DPL: 0 S: 1 Type: 3)
>>>>>>> > GS : Sel: 0x00000010 Limit: 0xffffffff Base: 0x00000000 (G: 1 DB: 1
>>>>>>> > L:
>>>>>>> > 0 AVL: 0 P: 1 DPL: 0 S: 1 Type: 3)
>>>>>>> > GDT : Sel: 0x00000000 Limit: 0x0000001f Base: 0xfff80200 (G: 0 DB: 0
>>>>>>> > L:
>>>>>>> > 0 AVL: 0 P: 0 DPL: 0 S: 0 Type: 0)
>>>>>>> > LDT : Sel: 0x00000000 Limit: 0x0000ffff Base: 0x00000000 (G: 0 DB: 0
>>>>>>> > L:
>>>>>>> > 0 AVL: 0 P: 0 DPL: 0 S: 0 Type: 0)
>>>>>>> > IDT : Sel: 0x00000000 Limit: 0x00000000 Base: 0x00000000 (G: 0 DB: 0
>>>>>>> > L:
>>>>>>> > 0 AVL: 0 P: 0 DPL: 0 S: 0 Type: 0)
>>>>>>> > TR : Sel: 0x00000000 Limit: 0x0000ffff Base: 0x00000000 (G: 1 DB: 0
>>>>>>> > L:
>>>>>>> > 1 AVL: 1 P: 0 DPL: 0 S: 0 Type: 0)
>>>>>>> > RFLAGS: 0xa [ ]
>>>>>>> >
>>>>>>> > I want to know which binary file (.o) should I disassemble to look at
>>>>>>> > the
>>>>>>> > RIP?
>>>>>>> >
>>>>>>> > I was looking at
>>>>>>> > objdump -D -mi386 -Maddr16,data16 generated/ramstage.o
>>>>>>> >
>>>>>>> > but this is prior to linking and thus only has offsets.
>>>>>>> >
>>>>>>> > --
>>>>>>> >
>>>>>>> > Regards
>>>>>>> > [Himanshu Chauhan]
>>>>>>> >
>>>>>>> >
>>>>>>> > --
>>>>>>> > coreboot mailing list: coreboot(a)coreboot.org
>>>>>>> > https://www.coreboot.org/mailman/listinfo/coreboot
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>>
>>>>>> Regards
>>>>>> [Himanshu Chauhan]
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>>
>>>>> Regards
>>>>> [Himanshu Chauhan]
>>>
>>>
>>>
>>> --
>>>
>>> Regards
>>> [Himanshu Chauhan]
>
>
>
> --
>
> Regards
> [Himanshu Chauhan]
On Mon, Dec 19, 2016 at 9:09 PM, Aaron Durbin <adurbin(a)google.com> wrote:
> On Sun, Dec 18, 2016 at 11:04 PM, Chauhan, Himanshu
> <hschauhan(a)nulltrace.org> wrote:
>> On Mon, Dec 19, 2016 at 12:40 AM, Aaron Durbin <adurbin(a)google.com> wrote:
>>> On Sun, Dec 18, 2016 at 9:37 AM, Chauhan, Himanshu
>>> <hschauhan(a)nulltrace.org> wrote:
>>>> Hi Aaron,
>>>>
>>>> I figured out the crash. It wasn't because of a wrong load of the ROM image
>>>> (thanks to the nifty post_code which I could trap on IO). I see that
>>>> the page fault I am getting is in the following code:
>>>> (gdb) list *(((0xfff81e41 - 0xfff80000)-200)+0x2000000)
>>>
>>> I'm curious about the 200 and 16MiB offset being applied.
>>
>> 0x2000000 is the new address where romstage is linked. Earlier
>> (at least in 2014) the linked address used to be 0xfff80000. This is
>> the same address (guest physical) where I map the ROM code. In the
>> above calculation I am taking the offset from 0xfff80000 and adding it
>> to the link address of romstage (0x2000000). The 0x200 is the difference
>> I see to map the addresses correctly. This calculation seems fine to
>> me because with it I am able to pinpoint all the earlier faults and
>> the post_code trap rIP.
>>
>
> If you provide 'cbfstool print -k' output, I could most likely provide
> the exact offset mapping. Alternatively you could extract the
> romstage.elf from the image using 'cbfstool extract -m x86', but it
> won't have debug info. But it'd provide the information to compare
> against the pre-relocated image for the correct mapping.
>
How exactly do I run it? It says unknown option -k (cbfstool in the build directory).
>>>
>>>> 0x2001d79 is in imd_recover (src/lib/imd.c:139).
>>>> 134
>>>> 135 static void imdr_init(struct imdr *ir, void *upper_limit)
>>>> 136 {
>>>> 137 uintptr_t limit = (uintptr_t)upper_limit;
>>>> 138 /* Upper limit is aligned down to 4KiB */
>>>> 139 ir->limit = ALIGN_DOWN(limit, LIMIT_ALIGN);
>>>> 140 ir->r = NULL;
>>>> 141 }
>>>> 142
>>>> 143 static int imdr_create_empty(struct imdr *imdr, size_t root_size,
>>>>
>>>> I see that this function is being called multiple times (I added some
>>>> more post_code calls and see them being trapped). I get a series of page
>>>> faults, all of which I am able to honour except the last.
>>>
>>> I don't see how imdr_init would be faulting. That's just assigning
>>> fields of a struct sitting on the stack. What's your stack pointer
>>> value at the time of the faults?
>>
>> "ir" should be on the stack or on top of the RAM. Right now it looks like
>> it's on top of the RAM. That area is not mapped initially. On a page
>> fault, I map a 4K page. For reference, the following is the
>> register dump of coreboot. RSP is 0x9fe54.
>>
>
> The values should not be striding. That object is always on the stack.
> Where the stack is located could be in low or high memory. I still
> need to know what platform you are targeting for the image to provide
> details. However, it would not be striding.
I am building this for QEMU i440fx.
>
>> GUEST guest0/vcpu0 dump state:
>>
>> RAX: 0x9fe80 RBX: 0xfffff8 RCX: 0x1b RDX: 0x53a11439
>> R08: 0x0 R09: 0x0 R10: 0x0 R11: 0x0
>> R12: 0x0 R13: 0x0 R14: 0x0 R15: 0x0
>> RSP: 0x9fe54 RBP: 0xa0000 RDI: 0xfff801e4 RSI: 0x9fe80
>> RIP: 0xfff81e41
>>
>> CR0: 0xe0000011 CR2: 0x0 CR3: 0xa23000 CR4: 0x0
>> CS : Sel: 0x00000008 Limit: 0xffffffff Base: 0x00000000 (G: 1 DB:
>> 1 L: 0 AVL: 0 P: 1 DPL: 0 S: 1 Type: 11)
>> DS : Sel: 0x00000010 Limit: 0xffffffff Base: 0x00000000 (G: 1 DB:
>> 1 L: 0 AVL: 0 P: 1 DPL: 0 S: 1 Type: 3)
>> ES : Sel: 0x00000010 Limit: 0xffffffff Base: 0x00000000 (G: 1 DB:
>> 1 L: 0 AVL: 0 P: 1 DPL: 0 S: 1 Type: 3)
>> SS : Sel: 0x00000010 Limit: 0xffffffff Base: 0x00000000 (G: 1 DB:
>> 1 L: 0 AVL: 0 P: 1 DPL: 0 S: 1 Type: 3)
>> FS : Sel: 0x00000010 Limit: 0xffffffff Base: 0x00000000 (G: 1 DB:
>> 1 L: 0 AVL: 0 P: 1 DPL: 0 S: 1 Type: 3)
>> GS : Sel: 0x00000010 Limit: 0xffffffff Base: 0x00000000 (G: 1 DB:
>> 1 L: 0 AVL: 0 P: 1 DPL: 0 S: 1 Type: 3)
>> GDT : Sel: 0x00000000 Limit: 0x0000001f Base: 0xfff80200 (G: 0 DB:
>> 0 L: 0 AVL: 0 P: 0 DPL: 0 S: 0 Type: 0)
>> LDT : Sel: 0x00000000 Limit: 0x0000ffff Base: 0x00000000 (G: 0 DB:
>> 0 L: 0 AVL: 0 P: 0 DPL: 0 S: 0 Type: 0)
>> IDT : Sel: 0x00000000 Limit: 0x00000000 Base: 0x00000000 (G: 0 DB:
>> 0 L: 0 AVL: 0 P: 0 DPL: 0 S: 0 Type: 0)
>> TR : Sel: 0x00000000 Limit: 0x0000ffff Base: 0x00000000 (G: 1 DB:
>> 0 L: 1 AVL: 1 P: 0 DPL: 0 S: 0 Type: 0)
>> RFLAGS: 0xa [ ]
>>
>>
>>>>
>>>> (__handle_vm_exception:543) Guest fault: 0x7f7fffc (rIP: 00000000FFF81E41)
>>>> (__handle_vm_exception:543) Guest fault: 0x7f7effc (rIP: 00000000FFF81E41)
>>>> (__handle_vm_exception:543) Guest fault: 0x7f7dffc (rIP: 00000000FFF81E41)
>>>> (__handle_vm_exception:543) Guest fault: 0x7f7cffc (rIP: 00000000FFF81E41)
>>>> (__handle_vm_exception:543) Guest fault: 0x7f7bffc (rIP: 00000000FFF81E41)
>>>> (__handle_vm_exception:543) Guest fault: 0x7f7affc (rIP: 00000000FFF81E41)
>>>> (__handle_vm_exception:543) Guest fault: 0x7f79ffc (rIP: 00000000FFF81E41)
>>>> (__handle_vm_exception:543) Guest fault: 0x7f78ffc (rIP: 00000000FFF81E41)
>>>> (__handle_vm_exception:543) Guest fault: 0x7f77ffc (rIP: 00000000FFF81E41)
>>>> (__handle_vm_exception:543) Guest fault: 0x7f76ffc (rIP: 00000000FFF81E41)
>>>> <snip>
>>>
>>> Are those non-rIP addresses the page fault address?
>>
>> Guest fault: 0x7f7fffc is the address which I think is pointing to
>> "ir". If you look, all the faulting addresses are 4K apart, which is my
>> default page size for mapping all the guest pages. It also means that
>> each time "imdr_init" is called it faults on a different
>> address, hence the same rIP.
>
> I just don't see how we're using that much stack. That doesn't seem
> right at all.
>
Yes. Something is terribly wrong. I had this working back in 2014.
Please take a look at this video that I created at that time.
https://www.youtube.com/watch?v=jPAzzLQ0NgU
I couldn't work on it for quite some time and in the meantime coreboot
changed a lot. I have one question. In earlier coreboot images,
romstage was linked at 0xfff80000 and now it's 0x2000000. Any reason?
>>
>>>
>>>>
>>>> handle_guest_realmode_page_fault: offset: 0x3ffc fault: 0x1003ffc reg: 0x1000000
>>>> handle_guest_realmode_page_fault: offset: 0x2ffc fault: 0x1002ffc reg: 0x1000000
>>>> handle_guest_realmode_page_fault: offset: 0x1ffc fault: 0x1001ffc reg: 0x1000000
>>>> handle_guest_realmode_page_fault: offset: 0xffc fault: 0x1000ffc reg: 0x1000000
>>>
>>> What is the above detailing? I'm not sure what the 'fault' value means.
>>
>> These are the same as the Guest fault lines above. You can disregard them.
>>
>>>
>>>>
>>>> (__handle_vm_exception:561) ERROR: No region mapped to guest physical: 0xfffffc
>>>>
>>>>
>>>> I want to understand why imd_recover gets called multiple times,
>>>> starting from the top of memory (128MB is what I have assigned to the
>>>> guest) down to 16MB at the end (after which I can't honour the fault).
>>>> There is something amiss in my understanding of the coreboot memory map.
>>>>
>>>> Could you please help?
>>>
>>> The imd library contains the implementation of cbmem. See
>>> include/cbmem.h for more details, but how it works is that the
>>> platform needs to supply the implementation of cbmem_top() which
>>> defines the exclusive upper boundary to start growing entries downward
>>> from. There is a large and small object size with large blocks being
>>> 4KiB in size and small blocks being 32 bytes. I don't understand why
>>> the faulting addresses are offset from 128MiB by 512KiB with a 4KiB
>>> stride.
>>>
>>> What platform are you targeting for your coreboot build? Are you
>>> restarting the instruction that faults? I'm really curious about the
>>> current fault patterns. It looks like things are faulting around
>>> accessing the imd_root_pointer root_offset field. Are these faults
>>> reads or writes? However, that's assuming cbmem_top() is returning
>>> 128MiB-512KiB. However, it doesn't explain the successive strides. Do
>>> you have serial port emulation to get the console messages out?
>>>
>>> So in your platform code ensure 2 things are happening:
>>>
>>> 1. cbmem_top() returns the highest address in 'ram' of the guest once
>>> it's online. 128MiB if that's your expectation. The value cbmem_top()
>>> returns should never change from successive calls aside from NULL
>>> being returned when ram is not yet available.
>>> 2. cbmem_initialize_empty() is called one time once the 'ram' is
>>> online for use in the non-S3 resume path and cbmem_initialize() in the
>>> S3 resume path. If S3 isn't supported in your guest then just use
>>> cbmem_initialize_empty().
>>>
>>
>> I will look into it. I see that the RAM top is being provided by the CMOS
>> emulator. I will look at cbmem_initialize_empty().
>
> If you could provide me the info on the platform you are targeting
> coreboot builds with it'd be easier to analyze. Where is this 'CMOS
> emulator' and why is it needed?
Coreboot reads CMOS registers 0x34/0x35 to get the amount of memory. The CMOS
emulator traps these accesses (just like qemu) and provides that information to
coreboot.
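Roughly, the emulation amounts to something like this (a simplified sketch, not the actual code): the guest writes the register index to port 0x70 and reads the data from port 0x71, and the emulator answers index 0x34/0x35 with the RAM above 16MiB in 64KiB units.

```c
/* Simplified sketch of a hypervisor-side CMOS emulator for the registers
 * the qemu-i440fx coreboot code reads (illustrative only, not the real code).
 * 0x34/0x35 report RAM above 16MiB in 64KiB units; 0x5b-0x5d would report
 * RAM above 4GiB in 64KiB units. */
#include <stdint.h>

#define GUEST_RAM_BYTES   (128u * 1024 * 1024)   /* example: 128MiB guest */

static uint8_t cmos_index;

void cmos_io_write(uint16_t port, uint8_t val)
{
	if (port == 0x70)
		cmos_index = val & 0x7f;   /* bit 7 is the NMI mask, not part of the index */
}

uint8_t cmos_io_read(uint16_t port)
{
	if (port != 0x71)
		return 0xff;

	uint32_t above_16m_64k = (GUEST_RAM_BYTES - 16u * 1024 * 1024) / (64u * 1024);

	switch (cmos_index) {
	case 0x34: return above_16m_64k & 0xff;
	case 0x35: return (above_16m_64k >> 8) & 0xff;
	case 0x5b: case 0x5c: case 0x5d: return 0;   /* no RAM above 4GiB in this example */
	default:   return 0;
	}
}
```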
>
>>
>>>>
>>>> Regards
>>>> Himanshu
>>>>
>>>> On Wed, Dec 14, 2016 at 9:27 PM, Chauhan, Himanshu
>>>> <hschauhan(a)nulltrace.org> wrote:
>>>>> Hi Aaron,
>>>>>
>>>>> Yes, I am mapping the memory where coreboot.rom is loaded to upper 4GiB. I
>>>>> create a fixed shadow page table entry for reset vector.
>>>>>
>>>>> Coreboot isn't linked at the RIP address that I shared. I think with
>>>>> the increase in size of coreboot (from the previous tag I was using) the
>>>>> load address (guest physical) has changed. I used to calculate the load
>>>>> address manually. I will check this and get back.
>>>>>
>>>>> Thanks.
>>>>>
>>>>> On Wed, Dec 14, 2016 at 8:17 PM, Aaron Durbin <adurbin(a)google.com> wrote:
>>>>>>
>>>>>> On Wed, Dec 14, 2016 at 3:11 AM, Chauhan, Himanshu
>>>>>> <hschauhan(a)nulltrace.org> wrote:
>>>>>> > Hi,
>>>>>> >
>>>>>> > I am working on a hypervisor and am using coreboot + FILO as guest
>>>>>> > BIOS.
>>>>>> > While things were fine a while back, it has stopped working. I see that
>>>>>> > my
>>>>>> > hypervisor can't handle address 0xFFFFFC while coreboot's RIP is at
>>>>>> > 0xfff81e41.
>>>>>>
>>>>>>
>>>>>> How are you loading up coreboot.rom in the VM? Are you just memory
>>>>>> mapping it at the top of 4GiB address space? If so, what does
>>>>>> 'cbfstool coreboot.rom print' show?
>>>>>>
>>>>>> >
>>>>>> > The exact register dump of the guest is as follows:
>>>>>> >
>>>>>> > [guest0/uart0] (__handle_vm_exception:558) ERROR: No region mapped to
>>>>>> > guest
>>>>>> > physical: 0xfffffc
>>>>>> >
>>>>>> > GUEST guest0/vcpu0 dump state:
>>>>>> >
>>>>>> > RAX: 0x9fe80 RBX: 0xfffff8 RCX: 0x1b RDX: 0x53a11439
>>>>>> > R08: 0x0 R09: 0x0 R10: 0x0 R11: 0x0
>>>>>> > R12: 0x0 R13: 0x0 R14: 0x0 R15: 0x0
>>>>>> > RSP: 0x9fe54 RBP: 0xa0000 RDI: 0xfff801e4 RSI: 0x9fe80
>>>>>> > RIP: 0xfff81e41
>>>>>> >
>>>>>> > CR0: 0xe0000011 CR2: 0x0 CR3: 0xa23000 CR4: 0x0
>>>>>> > CS : Sel: 0x00000008 Limit: 0xffffffff Base: 0x00000000 (G: 1 DB: 1
>>>>>> > L:
>>>>>> > 0 AVL: 0 P: 1 DPL: 0 S: 1 Type: 11)
>>>>>> > DS : Sel: 0x00000010 Limit: 0xffffffff Base: 0x00000000 (G: 1 DB: 1
>>>>>> > L:
>>>>>> > 0 AVL: 0 P: 1 DPL: 0 S: 1 Type: 3)
>>>>>> > ES : Sel: 0x00000010 Limit: 0xffffffff Base: 0x00000000 (G: 1 DB: 1
>>>>>> > L:
>>>>>> > 0 AVL: 0 P: 1 DPL: 0 S: 1 Type: 3)
>>>>>> > SS : Sel: 0x00000010 Limit: 0xffffffff Base: 0x00000000 (G: 1 DB: 1
>>>>>> > L:
>>>>>> > 0 AVL: 0 P: 1 DPL: 0 S: 1 Type: 3)
>>>>>> > FS : Sel: 0x00000010 Limit: 0xffffffff Base: 0x00000000 (G: 1 DB: 1
>>>>>> > L:
>>>>>> > 0 AVL: 0 P: 1 DPL: 0 S: 1 Type: 3)
>>>>>> > GS : Sel: 0x00000010 Limit: 0xffffffff Base: 0x00000000 (G: 1 DB: 1
>>>>>> > L:
>>>>>> > 0 AVL: 0 P: 1 DPL: 0 S: 1 Type: 3)
>>>>>> > GDT : Sel: 0x00000000 Limit: 0x0000001f Base: 0xfff80200 (G: 0 DB: 0
>>>>>> > L:
>>>>>> > 0 AVL: 0 P: 0 DPL: 0 S: 0 Type: 0)
>>>>>> > LDT : Sel: 0x00000000 Limit: 0x0000ffff Base: 0x00000000 (G: 0 DB: 0
>>>>>> > L:
>>>>>> > 0 AVL: 0 P: 0 DPL: 0 S: 0 Type: 0)
>>>>>> > IDT : Sel: 0x00000000 Limit: 0x00000000 Base: 0x00000000 (G: 0 DB: 0
>>>>>> > L:
>>>>>> > 0 AVL: 0 P: 0 DPL: 0 S: 0 Type: 0)
>>>>>> > TR : Sel: 0x00000000 Limit: 0x0000ffff Base: 0x00000000 (G: 1 DB: 0
>>>>>> > L:
>>>>>> > 1 AVL: 1 P: 0 DPL: 0 S: 0 Type: 0)
>>>>>> > RFLAGS: 0xa [ ]
>>>>>> >
>>>>>> > I want to know which binary file (.o) should I disassemble to look at
>>>>>> > the
>>>>>> > RIP?
>>>>>> >
>>>>>> > I was looking at
>>>>>> > objdump -D -mi386 -Maddr16,data16 generated/ramstage.o
>>>>>> >
>>>>>> > but this is prior to linking and thus only has offsets.
>>>>>> >
>>>>>> > --
>>>>>> >
>>>>>> > Regards
>>>>>> > [Himanshu Chauhan]
>>>>>> >
>>>>>> >
>>>>>> > --
>>>>>> > coreboot mailing list: coreboot(a)coreboot.org
>>>>>> > https://www.coreboot.org/mailman/listinfo/coreboot
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>>
>>>>> Regards
>>>>> [Himanshu Chauhan]
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>>
>>>> Regards
>>>> [Himanshu Chauhan]
>>
>>
>>
>> --
>>
>> Regards
>> [Himanshu Chauhan]
--
Regards
[Himanshu Chauhan]
Hello everybody!
I have a Chromebook (Medion Akoya S2013) which is the German version of
the Haier Chromebook 11 (veyron_jaq).
And I am failing to build coreboot.
Here is what I've done so far (which you can reproduce yourself):
1. As described on https://www.coreboot.org/Build_HOWTO:
$ git clone http://review.coreboot.org/p/coreboot
$ cd coreboot
$ git submodule update --init --checkout
2. $ make menuconfig
There I only changed "Mainboard" > "Mainboard vendor" & "Mainboard
model" to Google / Haier Chromebook 11 (veyron_jaq). The ROM size of 4
MB is also correct. I read the flash chip and got a 4 MB image.
3. Still in menuconfig I selected GRUB2 in "Payload" > "Add a Payload".
4. $ make crosstools-arm
succeeded without errors
5. $ make
Fails with error:
HOSTCC cbfstool/xxhash.o
HOSTCC cbfstool/linux_trampoline.o
HOSTCC cbfstool/cbfs-payload-linux.o
HOSTCC cbfstool/cbfstool (link)
Performing operation on 'COREBOOT' region...
Created CBFS (capacity = 4062936 bytes)
Performing operation on 'BOOTBLOCK' region...
W: Written area will abut bottom of target region: any unused space will
keep its current contents
Performing operation on 'COREBOOT' region...
Performing operation on 'COREBOOT' region...
CBFS fallback/romstage
Performing operation on 'COREBOOT' region...
CBFS fallback/ramstage
Performing operation on 'COREBOOT' region...
CBFS config
Performing operation on 'COREBOOT' region...
CBFS revision
Performing operation on 'COREBOOT' region...
CBFS fallback/payload
Performing operation on 'COREBOOT' region...
Performing operation on 'COREBOOT' region...
ERROR: Ramstage region _ramstage overlapped by: fallback/payload
make: *** [Makefile.inc:992: check-ramstage-overlaps] Error 1
See full log here: https://files.b-root-force.de/coreboot.txt
What I find a bit suspicious is the output:
...
GRUB2 will be compiled with following components:
Platform: i386-coreboot
...
Because I am building for ARM.
So, does anyone have an idea what I can try next?
Kind regards
Martin
On Sun, Dec 18, 2016 at 11:04 PM, Chauhan, Himanshu
<hschauhan(a)nulltrace.org> wrote:
> On Mon, Dec 19, 2016 at 12:40 AM, Aaron Durbin <adurbin(a)google.com> wrote:
>> On Sun, Dec 18, 2016 at 9:37 AM, Chauhan, Himanshu
>> <hschauhan(a)nulltrace.org> wrote:
>>> Hi Aaron,
>>>
>>> I figured out the crash. It wasn't because of a wrong load of the ROM image
>>> (thanks to the nifty post_code which I could trap on IO). I see that
>>> the page fault I am getting is in the following code:
>>> (gdb) list *(((0xfff81e41 - 0xfff80000)-200)+0x2000000)
>>
>> I'm curious about the 200 and 16MiB offset being applied.
>
> 0x2000000 is the new address where romstage is linked. Earlier
> (at least in 2014) the linked address used to be 0xfff80000. This is
> the same address (guest physical) where I map the ROM code. In the
> above calculation I am taking the offset from 0xfff80000 and adding it
> to the link address of romstage (0x2000000). The 0x200 is the difference
> I see to map the addresses correctly. This calculation seems fine to
> me because with it I am able to pinpoint all the earlier faults and
> the post_code trap rIP.
>
If you provide 'cbfstool print -k' output, I could most likely provide
the exact offset mapping. Alternatively you could extract the
romstage.elf from the image using 'cbfstool extract -m x86', but it
won't have debug info. But it'd provide the information to compare
against the pre-relocated image for the correct mapping.
>>
>>> 0x2001d79 is in imd_recover (src/lib/imd.c:139).
>>> 134
>>> 135 static void imdr_init(struct imdr *ir, void *upper_limit)
>>> 136 {
>>> 137 uintptr_t limit = (uintptr_t)upper_limit;
>>> 138 /* Upper limit is aligned down to 4KiB */
>>> 139 ir->limit = ALIGN_DOWN(limit, LIMIT_ALIGN);
>>> 140 ir->r = NULL;
>>> 141 }
>>> 142
>>> 143 static int imdr_create_empty(struct imdr *imdr, size_t root_size,
>>>
>>> I see that this function is being called multiple times (I added some
>>> more post_code calls and see them being trapped). I get a series of page
>>> faults, all of which I am able to honour except the last.
>>
>> I don't see how imdr_init would be faulting. That's just assigning
>> fields of a struct sitting on the stack. What's your stack pointer
> value at the time of the faults?
>
> "ir" should be on the stack or on top of the RAM. Right now it looks like
> it's on top of the RAM. That area is not mapped initially. On a page
> fault, I map a 4K page. For reference, the following is the
> register dump of coreboot. RSP is 0x9fe54.
>
The values should not be striding. That object is always on the stack.
Where the stack is located could be in low or high memory. I still
need to know what platform you are targeting for the image to provide
details. However, it would not be striding.
> GUEST guest0/vcpu0 dump state:
>
> RAX: 0x9fe80 RBX: 0xfffff8 RCX: 0x1b RDX: 0x53a11439
> R08: 0x0 R09: 0x0 R10: 0x0 R11: 0x0
> R12: 0x0 R13: 0x0 R14: 0x0 R15: 0x0
> RSP: 0x9fe54 RBP: 0xa0000 RDI: 0xfff801e4 RSI: 0x9fe80
> RIP: 0xfff81e41
>
> CR0: 0xe0000011 CR2: 0x0 CR3: 0xa23000 CR4: 0x0
> CS : Sel: 0x00000008 Limit: 0xffffffff Base: 0x00000000 (G: 1 DB:
> 1 L: 0 AVL: 0 P: 1 DPL: 0 S: 1 Type: 11)
> DS : Sel: 0x00000010 Limit: 0xffffffff Base: 0x00000000 (G: 1 DB:
> 1 L: 0 AVL: 0 P: 1 DPL: 0 S: 1 Type: 3)
> ES : Sel: 0x00000010 Limit: 0xffffffff Base: 0x00000000 (G: 1 DB:
> 1 L: 0 AVL: 0 P: 1 DPL: 0 S: 1 Type: 3)
> SS : Sel: 0x00000010 Limit: 0xffffffff Base: 0x00000000 (G: 1 DB:
> 1 L: 0 AVL: 0 P: 1 DPL: 0 S: 1 Type: 3)
> FS : Sel: 0x00000010 Limit: 0xffffffff Base: 0x00000000 (G: 1 DB:
> 1 L: 0 AVL: 0 P: 1 DPL: 0 S: 1 Type: 3)
> GS : Sel: 0x00000010 Limit: 0xffffffff Base: 0x00000000 (G: 1 DB:
> 1 L: 0 AVL: 0 P: 1 DPL: 0 S: 1 Type: 3)
> GDT : Sel: 0x00000000 Limit: 0x0000001f Base: 0xfff80200 (G: 0 DB:
> 0 L: 0 AVL: 0 P: 0 DPL: 0 S: 0 Type: 0)
> LDT : Sel: 0x00000000 Limit: 0x0000ffff Base: 0x00000000 (G: 0 DB:
> 0 L: 0 AVL: 0 P: 0 DPL: 0 S: 0 Type: 0)
> IDT : Sel: 0x00000000 Limit: 0x00000000 Base: 0x00000000 (G: 0 DB:
> 0 L: 0 AVL: 0 P: 0 DPL: 0 S: 0 Type: 0)
> TR : Sel: 0x00000000 Limit: 0x0000ffff Base: 0x00000000 (G: 1 DB:
> 0 L: 1 AVL: 1 P: 0 DPL: 0 S: 0 Type: 0)
> RFLAGS: 0xa [ ]
>
>
>>>
>>> (__handle_vm_exception:543) Guest fault: 0x7f7fffc (rIP: 00000000FFF81E41)
>>> (__handle_vm_exception:543) Guest fault: 0x7f7effc (rIP: 00000000FFF81E41)
>>> (__handle_vm_exception:543) Guest fault: 0x7f7dffc (rIP: 00000000FFF81E41)
>>> (__handle_vm_exception:543) Guest fault: 0x7f7cffc (rIP: 00000000FFF81E41)
>>> (__handle_vm_exception:543) Guest fault: 0x7f7bffc (rIP: 00000000FFF81E41)
>>> (__handle_vm_exception:543) Guest fault: 0x7f7affc (rIP: 00000000FFF81E41)
>>> (__handle_vm_exception:543) Guest fault: 0x7f79ffc (rIP: 00000000FFF81E41)
>>> (__handle_vm_exception:543) Guest fault: 0x7f78ffc (rIP: 00000000FFF81E41)
>>> (__handle_vm_exception:543) Guest fault: 0x7f77ffc (rIP: 00000000FFF81E41)
>>> (__handle_vm_exception:543) Guest fault: 0x7f76ffc (rIP: 00000000FFF81E41)
>>> <snip>
>>
>> Are those non-rIP addresses the page fault address?
>
> Guest fault: 0x7f7fffc is the address which I think is pointing to
> "ir". If you look, all the faulting addresses are 4K apart, which is my
> default page size for mapping all the guest pages. It also means that
> each time "imdr_init" is called it faults on a different
> address, hence the same rIP.
I just don't see how we're using that much stack. That doesn't seem
right at all.
>
>>
>>>
>>> handle_guest_realmode_page_fault: offset: 0x3ffc fault: 0x1003ffc reg: 0x1000000
>>> handle_guest_realmode_page_fault: offset: 0x2ffc fault: 0x1002ffc reg: 0x1000000
>>> handle_guest_realmode_page_fault: offset: 0x1ffc fault: 0x1001ffc reg: 0x1000000
>>> handle_guest_realmode_page_fault: offset: 0xffc fault: 0x1000ffc reg: 0x1000000
>>
>> What is the above detailing? I'm not sure what the 'fault' value means.
>
> These are the same as the Guest fault lines above. You can disregard them.
>
>>
>>>
>>> (__handle_vm_exception:561) ERROR: No region mapped to guest physical: 0xfffffc
>>>
>>>
>>> I want to understand why imd_recover gets called multiple times,
>>> starting from the top of memory (128MB is what I have assigned to the
>>> guest) down to 16MB at the end (after which I can't honour the fault).
>>> There is something amiss in my understanding of the coreboot memory map.
>>>
>>> Could you please help?
>>
>> The imd library contains the implementation of cbmem. See
>> include/cbmem.h for more details, but how it works is that the
>> platform needs to supply the implementation of cbmem_top() which
>> defines the exclusive upper boundary to start growing entries downward
>> from. There is a large and small object size with large blocks being
>> 4KiB in size and small blocks being 32 bytes. I don't understand why
>> the faulting addresses are offset from 128MiB by 512KiB with a 4KiB
>> stride.
>>
>> What platform are you targeting for your coreboot build? Are you
>> restarting the instruction that faults? I'm really curious about the
>> current fault patterns. It looks like things are faulting around
>> accessing the imd_root_pointer root_offset field. Are these faults
>> reads or writes? However, that's assuming cbmem_top() is returning
>> 128MiB-512KiB. However, it doesn't explain the successive strides. Do
>> you have serial port emulation to get the console messages out?
>>
>> So in your platform code ensure 2 things are happening:
>>
>> 1. cbmem_top() returns the highest address in 'ram' of the guest once
>> it's online. 128MiB if that's your expectation. The value cbmem_top()
>> returns should never change from successive calls aside from NULL
>> being returned when ram is not yet available.
>> 2. cbmem_initialize_empty() is called one time once the 'ram' is
>> online for use in the non-S3 resume path and cbmem_initialize() in the
>> S3 resume path. If S3 isn't supported in your guest then just use
>> cbmem_initialize_empty().
>>
>
> I will look into it. I see that the RAM top is being provided by the CMOS
> emulator. I will look at cbmem_initialize_empty().
If you could provide me the info on the platform you are targeting
coreboot builds with it'd be easier to analyze. Where is this 'CMOS
emulator' and why is it needed?
>
>>>
>>> Regards
>>> Himanshu
>>>
>>> On Wed, Dec 14, 2016 at 9:27 PM, Chauhan, Himanshu
>>> <hschauhan(a)nulltrace.org> wrote:
>>>> Hi Aaron,
>>>>
>>>> Yes, I am mapping the memory where coreboot.rom is loaded to upper 4GiB. I
>>>> create a fixed shadow page table entry for reset vector.
>>>>
>>>> Coreboot isn't linked at the RIP address that I shared. I think with
>>>> the increase in size of coreboot (from the previous tag I was using) the
>>>> load address (guest physical) has changed. I used to calculate the load
>>>> address manually. I will check this and get back.
>>>>
>>>> Thanks.
>>>>
>>>> On Wed, Dec 14, 2016 at 8:17 PM, Aaron Durbin <adurbin(a)google.com> wrote:
>>>>>
>>>>> On Wed, Dec 14, 2016 at 3:11 AM, Chauhan, Himanshu
>>>>> <hschauhan(a)nulltrace.org> wrote:
>>>>> > Hi,
>>>>> >
>>>>> > I am working on a hypervisor and am using coreboot + FILO as guest
>>>>> > BIOS.
>>>>> > While things were fine a while back, it has stopped working. I see that
>>>>> > my
>>>>> > hypervisor can't handle address 0xFFFFFC while coreboot's RIP is at
>>>>> > 0xfff81e41.
>>>>>
>>>>>
>>>>> How are you loading up coreboot.rom in the VM? Are you just memory
>>>>> mapping it at the top of 4GiB address space? If so, what does
>>>>> 'cbfstool coreboot.rom print' show?
>>>>>
>>>>> >
>>>>> > The exact register dump of the guest is as follows:
>>>>> >
>>>>> > [guest0/uart0] (__handle_vm_exception:558) ERROR: No region mapped to
>>>>> > guest
>>>>> > physical: 0xfffffc
>>>>> >
>>>>> > GUEST guest0/vcpu0 dump state:
>>>>> >
>>>>> > RAX: 0x9fe80 RBX: 0xfffff8 RCX: 0x1b RDX: 0x53a11439
>>>>> > R08: 0x0 R09: 0x0 R10: 0x0 R11: 0x0
>>>>> > R12: 0x0 R13: 0x0 R14: 0x0 R15: 0x0
>>>>> > RSP: 0x9fe54 RBP: 0xa0000 RDI: 0xfff801e4 RSI: 0x9fe80
>>>>> > RIP: 0xfff81e41
>>>>> >
>>>>> > CR0: 0xe0000011 CR2: 0x0 CR3: 0xa23000 CR4: 0x0
>>>>> > CS : Sel: 0x00000008 Limit: 0xffffffff Base: 0x00000000 (G: 1 DB: 1
>>>>> > L:
>>>>> > 0 AVL: 0 P: 1 DPL: 0 S: 1 Type: 11)
>>>>> > DS : Sel: 0x00000010 Limit: 0xffffffff Base: 0x00000000 (G: 1 DB: 1
>>>>> > L:
>>>>> > 0 AVL: 0 P: 1 DPL: 0 S: 1 Type: 3)
>>>>> > ES : Sel: 0x00000010 Limit: 0xffffffff Base: 0x00000000 (G: 1 DB: 1
>>>>> > L:
>>>>> > 0 AVL: 0 P: 1 DPL: 0 S: 1 Type: 3)
>>>>> > SS : Sel: 0x00000010 Limit: 0xffffffff Base: 0x00000000 (G: 1 DB: 1
>>>>> > L:
>>>>> > 0 AVL: 0 P: 1 DPL: 0 S: 1 Type: 3)
>>>>> > FS : Sel: 0x00000010 Limit: 0xffffffff Base: 0x00000000 (G: 1 DB: 1
>>>>> > L:
>>>>> > 0 AVL: 0 P: 1 DPL: 0 S: 1 Type: 3)
>>>>> > GS : Sel: 0x00000010 Limit: 0xffffffff Base: 0x00000000 (G: 1 DB: 1
>>>>> > L:
>>>>> > 0 AVL: 0 P: 1 DPL: 0 S: 1 Type: 3)
>>>>> > GDT : Sel: 0x00000000 Limit: 0x0000001f Base: 0xfff80200 (G: 0 DB: 0
>>>>> > L:
>>>>> > 0 AVL: 0 P: 0 DPL: 0 S: 0 Type: 0)
>>>>> > LDT : Sel: 0x00000000 Limit: 0x0000ffff Base: 0x00000000 (G: 0 DB: 0
>>>>> > L:
>>>>> > 0 AVL: 0 P: 0 DPL: 0 S: 0 Type: 0)
>>>>> > IDT : Sel: 0x00000000 Limit: 0x00000000 Base: 0x00000000 (G: 0 DB: 0
>>>>> > L:
>>>>> > 0 AVL: 0 P: 0 DPL: 0 S: 0 Type: 0)
>>>>> > TR : Sel: 0x00000000 Limit: 0x0000ffff Base: 0x00000000 (G: 1 DB: 0
>>>>> > L:
>>>>> > 1 AVL: 1 P: 0 DPL: 0 S: 0 Type: 0)
>>>>> > RFLAGS: 0xa [ ]
>>>>> >
>>>>> > I want to know which binary file (.o) should I disassemble to look at
>>>>> > the
>>>>> > RIP?
>>>>> >
>>>>> > I was looking at
>>>>> > objdump -D -mi386 -Maddr16,data16 generated/ramstage.o
>>>>> >
>>>>> > but this is prior to linking and thus only has offsets.
>>>>> >
>>>>> > --
>>>>> >
>>>>> > Regards
>>>>> > [Himanshu Chauhan]
>>>>> >
>>>>> >
>>>>> > --
>>>>> > coreboot mailing list: coreboot(a)coreboot.org
>>>>> > https://www.coreboot.org/mailman/listinfo/coreboot
>>>>
>>>>
>>>>
>>>>
>>>> --
>>>>
>>>> Regards
>>>> [Himanshu Chauhan]
>>>>
>>>
>>>
>>>
>>> --
>>>
>>> Regards
>>> [Himanshu Chauhan]
>
>
>
> --
>
> Regards
> [Himanshu Chauhan]
On Mon, Dec 19, 2016 at 12:40 AM, Aaron Durbin <adurbin(a)google.com> wrote:
> On Sun, Dec 18, 2016 at 9:37 AM, Chauhan, Himanshu
> <hschauhan(a)nulltrace.org> wrote:
>> Hi Aaron,
>>
>> I figured out the crash. It wasn't because of a wrong load of the ROM image
>> (thanks to the nifty post_code which I could trap on IO). I see that
>> the page fault I am getting is in the following code:
>> (gdb) list *(((0xfff81e41 - 0xfff80000)-200)+0x2000000)
>
> I'm curious about the 200 and 16MiB offset being applied.
0x2000000 is the new address where romstage is linked. Earlier
(at least in 2014) the linked address used to be 0xfff80000. This is
the same address (guest physical) where I map the ROM code. In the
above calculation I am taking the offset from 0xfff80000 and adding it
to the link address of romstage (0x2000000). The 0x200 is the difference
I see to map the addresses correctly. This calculation seems fine to
me because with it I am able to pinpoint all the earlier faults and
the post_code trap rIP.
>
>> 0x2001d79 is in imd_recover (src/lib/imd.c:139).
>> 134
>> 135 static void imdr_init(struct imdr *ir, void *upper_limit)
>> 136 {
>> 137 uintptr_t limit = (uintptr_t)upper_limit;
>> 138 /* Upper limit is aligned down to 4KiB */
>> 139 ir->limit = ALIGN_DOWN(limit, LIMIT_ALIGN);
>> 140 ir->r = NULL;
>> 141 }
>> 142
>> 143 static int imdr_create_empty(struct imdr *imdr, size_t root_size,
>>
>> I see that this function is being called multiple times (I added some
>> more post_code calls and see them being trapped). I get a series of page
>> faults, all of which I am able to honour except the last.
>
> I don't see how imdr_init would be faulting. That's just assigning
> fields of a struct sitting on the stack. What's your stack pointer
> value at the time of the faults?
"ir" should be on the stack or on top of the RAM. Right now it looks like
it's on top of the RAM. That area is not mapped initially. On a page
fault, I map a 4K page. For reference, the following is the
register dump of coreboot. RSP is 0x9fe54.
GUEST guest0/vcpu0 dump state:
RAX: 0x9fe80 RBX: 0xfffff8 RCX: 0x1b RDX: 0x53a11439
R08: 0x0 R09: 0x0 R10: 0x0 R11: 0x0
R12: 0x0 R13: 0x0 R14: 0x0 R15: 0x0
RSP: 0x9fe54 RBP: 0xa0000 RDI: 0xfff801e4 RSI: 0x9fe80
RIP: 0xfff81e41
CR0: 0xe0000011 CR2: 0x0 CR3: 0xa23000 CR4: 0x0
CS : Sel: 0x00000008 Limit: 0xffffffff Base: 0x00000000 (G: 1 DB:
1 L: 0 AVL: 0 P: 1 DPL: 0 S: 1 Type: 11)
DS : Sel: 0x00000010 Limit: 0xffffffff Base: 0x00000000 (G: 1 DB:
1 L: 0 AVL: 0 P: 1 DPL: 0 S: 1 Type: 3)
ES : Sel: 0x00000010 Limit: 0xffffffff Base: 0x00000000 (G: 1 DB:
1 L: 0 AVL: 0 P: 1 DPL: 0 S: 1 Type: 3)
SS : Sel: 0x00000010 Limit: 0xffffffff Base: 0x00000000 (G: 1 DB:
1 L: 0 AVL: 0 P: 1 DPL: 0 S: 1 Type: 3)
FS : Sel: 0x00000010 Limit: 0xffffffff Base: 0x00000000 (G: 1 DB:
1 L: 0 AVL: 0 P: 1 DPL: 0 S: 1 Type: 3)
GS : Sel: 0x00000010 Limit: 0xffffffff Base: 0x00000000 (G: 1 DB:
1 L: 0 AVL: 0 P: 1 DPL: 0 S: 1 Type: 3)
GDT : Sel: 0x00000000 Limit: 0x0000001f Base: 0xfff80200 (G: 0 DB:
0 L: 0 AVL: 0 P: 0 DPL: 0 S: 0 Type: 0)
LDT : Sel: 0x00000000 Limit: 0x0000ffff Base: 0x00000000 (G: 0 DB:
0 L: 0 AVL: 0 P: 0 DPL: 0 S: 0 Type: 0)
IDT : Sel: 0x00000000 Limit: 0x00000000 Base: 0x00000000 (G: 0 DB:
0 L: 0 AVL: 0 P: 0 DPL: 0 S: 0 Type: 0)
TR : Sel: 0x00000000 Limit: 0x0000ffff Base: 0x00000000 (G: 1 DB:
0 L: 1 AVL: 1 P: 0 DPL: 0 S: 0 Type: 0)
RFLAGS: 0xa [ ]
>>
>> (__handle_vm_exception:543) Guest fault: 0x7f7fffc (rIP: 00000000FFF81E41)
>> (__handle_vm_exception:543) Guest fault: 0x7f7effc (rIP: 00000000FFF81E41)
>> (__handle_vm_exception:543) Guest fault: 0x7f7dffc (rIP: 00000000FFF81E41)
>> (__handle_vm_exception:543) Guest fault: 0x7f7cffc (rIP: 00000000FFF81E41)
>> (__handle_vm_exception:543) Guest fault: 0x7f7bffc (rIP: 00000000FFF81E41)
>> (__handle_vm_exception:543) Guest fault: 0x7f7affc (rIP: 00000000FFF81E41)
>> (__handle_vm_exception:543) Guest fault: 0x7f79ffc (rIP: 00000000FFF81E41)
>> (__handle_vm_exception:543) Guest fault: 0x7f78ffc (rIP: 00000000FFF81E41)
>> (__handle_vm_exception:543) Guest fault: 0x7f77ffc (rIP: 00000000FFF81E41)
>> (__handle_vm_exception:543) Guest fault: 0x7f76ffc (rIP: 00000000FFF81E41)
>> <snip>
>
> Are those non-rIP addresses the page fault address?
Guest fault: 0x7f7fffc is the address which I think is pointing to
"ir". If you look, all the faulting addresses are 4K apart, which is my
default page size for mapping all the guest pages. It also means that
each time "imdr_init" is called it faults on a different
address, hence the same rIP.
>
>>
>> handle_guest_realmode_page_fault: offset: 0x3ffc fault: 0x1003ffc reg: 0x1000000
>> handle_guest_realmode_page_fault: offset: 0x2ffc fault: 0x1002ffc reg: 0x1000000
>> handle_guest_realmode_page_fault: offset: 0x1ffc fault: 0x1001ffc reg: 0x1000000
>> handle_guest_realmode_page_fault: offset: 0xffc fault: 0x1000ffc reg: 0x1000000
>
> What is the above detailing? I'm not sure what the 'fault' value means.
These are the same as the Guest fault lines above. You can disregard them.
>
>>
>> (__handle_vm_exception:561) ERROR: No region mapped to guest physical: 0xfffffc
>>
>>
>> I want to understand why imd_recover gets called multiple times,
>> starting from the top of memory (128MB is what I have assigned to the
>> guest) down to 16MB at the end (after which I can't honour the fault).
>> There is something amiss in my understanding of the coreboot memory map.
>>
>> Could you please help?
>
> The imd library contains the implementation of cbmem. See
> include/cbmem.h for more details, but how it works is that the
> platform needs to supply the implementation of cbmem_top() which
> defines the exclusive upper boundary to start growing entries downward
> from. There is a large and small object size with large blocks being
> 4KiB in size and small blocks being 32 bytes. I don't understand why
> the faulting addresses are offset from 128MiB by 512KiB with a 4KiB
> stride.
>
> What platform are you targeting for your coreboot build? Are you
> restarting the instruction that faults? I'm really curious about the
> current fault patterns. It looks like things are faulting around
> accessing the imd_root_pointer root_offset field. Are these faults
> reads or writes? However, that's assuming cbmem_top() is returning
> 128MiB-512KiB. However, it doesn't explain the successive strides. Do
> you have serial port emulation to get the console messages out?
>
> So in your platform code ensure 2 things are happening:
>
> 1. cbmem_top() returns the highest address in 'ram' of the guest once
> it's online. 128MiB if that's your expectation. The value cbmem_top()
> returns should never change from successive calls aside from NULL
> being returned when ram is not yet available.
> 2. cbmem_initialize_empty() is called one time once the 'ram' is
> online for use in the non-S3 resume path and cbmem_initialize() in the
> S3 resume path. If S3 isn't supported in your guest then just use
> cbmem_initialize_empty().
>
I will look into it. I see that the RAM top is being provided by the CMOS
emulator. I will look at cbmem_initialize_empty().
>>
>> Regards
>> Himanshu
>>
>> On Wed, Dec 14, 2016 at 9:27 PM, Chauhan, Himanshu
>> <hschauhan(a)nulltrace.org> wrote:
>>> Hi Aaron,
>>>
>>> Yes, I am mapping the memory where coreboot.rom is loaded to upper 4GiB. I
>>> create a fixed shadow page table entry for reset vector.
>>>
>>> Coreboot isn't linked at the RIP address that I shared. I think with
>>> the increase in size of coreboot (from the previous tag I was using) the
>>> load address (guest physical) has changed. I used to calculate the load
>>> address manually. I will check this and get back.
>>>
>>> Thanks.
>>>
>>> On Wed, Dec 14, 2016 at 8:17 PM, Aaron Durbin <adurbin(a)google.com> wrote:
>>>>
>>>> On Wed, Dec 14, 2016 at 3:11 AM, Chauhan, Himanshu
>>>> <hschauhan(a)nulltrace.org> wrote:
>>>> > Hi,
>>>> >
>>>> > I am working on a hypervisor and am using coreboot + FILO as guest
>>>> > BIOS.
>>>> > While things were fine a while back, it has stopped working. I see that
>>>> > my
>>>> > hypervisor can't handle address 0xFFFFFC while coreboot's RIP is at
>>>> > 0xfff81e41.
>>>>
>>>>
>>>> How are you loading up coreboot.rom in the VM? Are you just memory
>>>> mapping it at the top of 4GiB address space? If so, what does
>>>> 'cbfstool coreboot.rom print' show?
>>>>
>>>> >
>>>> > The exact register dump of the guest is as follows:
>>>> >
>>>> > [guest0/uart0] (__handle_vm_exception:558) ERROR: No region mapped to
>>>> > guest
>>>> > physical: 0xfffffc
>>>> >
>>>> > GUEST guest0/vcpu0 dump state:
>>>> >
>>>> > RAX: 0x9fe80 RBX: 0xfffff8 RCX: 0x1b RDX: 0x53a11439
>>>> > R08: 0x0 R09: 0x0 R10: 0x0 R11: 0x0
>>>> > R12: 0x0 R13: 0x0 R14: 0x0 R15: 0x0
>>>> > RSP: 0x9fe54 RBP: 0xa0000 RDI: 0xfff801e4 RSI: 0x9fe80
>>>> > RIP: 0xfff81e41
>>>> >
>>>> > CR0: 0xe0000011 CR2: 0x0 CR3: 0xa23000 CR4: 0x0
>>>> > CS : Sel: 0x00000008 Limit: 0xffffffff Base: 0x00000000 (G: 1 DB: 1
>>>> > L:
>>>> > 0 AVL: 0 P: 1 DPL: 0 S: 1 Type: 11)
>>>> > DS : Sel: 0x00000010 Limit: 0xffffffff Base: 0x00000000 (G: 1 DB: 1
>>>> > L:
>>>> > 0 AVL: 0 P: 1 DPL: 0 S: 1 Type: 3)
>>>> > ES : Sel: 0x00000010 Limit: 0xffffffff Base: 0x00000000 (G: 1 DB: 1
>>>> > L:
>>>> > 0 AVL: 0 P: 1 DPL: 0 S: 1 Type: 3)
>>>> > SS : Sel: 0x00000010 Limit: 0xffffffff Base: 0x00000000 (G: 1 DB: 1
>>>> > L:
>>>> > 0 AVL: 0 P: 1 DPL: 0 S: 1 Type: 3)
>>>> > FS : Sel: 0x00000010 Limit: 0xffffffff Base: 0x00000000 (G: 1 DB: 1
>>>> > L:
>>>> > 0 AVL: 0 P: 1 DPL: 0 S: 1 Type: 3)
>>>> > GS : Sel: 0x00000010 Limit: 0xffffffff Base: 0x00000000 (G: 1 DB: 1
>>>> > L:
>>>> > 0 AVL: 0 P: 1 DPL: 0 S: 1 Type: 3)
>>>> > GDT : Sel: 0x00000000 Limit: 0x0000001f Base: 0xfff80200 (G: 0 DB: 0
>>>> > L:
>>>> > 0 AVL: 0 P: 0 DPL: 0 S: 0 Type: 0)
>>>> > LDT : Sel: 0x00000000 Limit: 0x0000ffff Base: 0x00000000 (G: 0 DB: 0
>>>> > L:
>>>> > 0 AVL: 0 P: 0 DPL: 0 S: 0 Type: 0)
>>>> > IDT : Sel: 0x00000000 Limit: 0x00000000 Base: 0x00000000 (G: 0 DB: 0
>>>> > L:
>>>> > 0 AVL: 0 P: 0 DPL: 0 S: 0 Type: 0)
>>>> > TR : Sel: 0x00000000 Limit: 0x0000ffff Base: 0x00000000 (G: 1 DB: 0
>>>> > L:
>>>> > 1 AVL: 1 P: 0 DPL: 0 S: 0 Type: 0)
>>>> > RFLAGS: 0xa [ ]
>>>> >
>>>> > I want to know which binary file (.o) should I disassemble to look at
>>>> > the
>>>> > RIP?
>>>> >
>>>> > I was looking at
>>>> > objdump -D -mi386 -Maddr16,data16 generated/ramstage.o
>>>> >
>>>> > but this is prior to linking and thus only has offsets.
>>>> >
>>>> > --
>>>> >
>>>> > Regards
>>>> > [Himanshu Chauhan]
>>>> >
>>>> >
>>>> > --
>>>> > coreboot mailing list: coreboot(a)coreboot.org
>>>> > https://www.coreboot.org/mailman/listinfo/coreboot
>>>
>>>
>>>
>>>
>>> --
>>>
>>> Regards
>>> [Himanshu Chauhan]
>>>
>>
>>
>>
>> --
>>
>> Regards
>> [Himanshu Chauhan]
--
Regards
[Himanshu Chauhan]
Dear coreboot folks,
Several devices using the Intel 945 chipset copied code for PCI reset,
costing 200 ms of boot time.
```
/* Force PCIRST# */
pci_write_config16(PCI_DEV(0, 0x1e, 0), BCTRL, SBR);
udelay(200 * 1000);
pci_write_config16(PCI_DEV(0, 0x1e, 0), BCTRL, 0);
```
The change-set Ia37d9f0ecf5655531616edb20b53757d5d47b42f [1] removes
that code from the Lenovo X60.
That code was added for some crypto card on a Roda device.
My question is whether removing that code is fine, or whether it should
be left in and made configurable (Kconfig/NVRAM).
Are there often cases where extension cards have problems
that need such a PCI reset?
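If it does stay, one option might be to gate it behind a Kconfig symbol, roughly
like the sketch below (CONFIG_FORCE_SECONDARY_PCI_RESET is a made-up name, and
the read-modify-write is my own variant, not the original code):
```
/* Hypothetical Kconfig-gated variant of the PCIRST# pulse (illustrative). */
if (IS_ENABLED(CONFIG_FORCE_SECONDARY_PCI_RESET)) {
	u16 bctrl = pci_read_config16(PCI_DEV(0, 0x1e, 0), BCTRL);

	pci_write_config16(PCI_DEV(0, 0x1e, 0), BCTRL, bctrl | SBR);
	udelay(200 * 1000);
	pci_write_config16(PCI_DEV(0, 0x1e, 0), BCTRL, bctrl & ~SBR);
}
```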
Thanks,
Paul
[1] https://review.coreboot.org/17703
On Sun, Dec 18, 2016 at 9:37 AM, Chauhan, Himanshu
<hschauhan(a)nulltrace.org> wrote:
> Hi Aaron,
>
> I figured out the crash. It wasn't because of a wrong load of the ROM image
> (thanks to the nifty post_code which I could trap on IO). I see that
> the page fault I am getting is in the following code:
> (gdb) list *(((0xfff81e41 - 0xfff80000)-200)+0x2000000)
I'm curious about the 200 and 16MiB offset being applied.
> 0x2001d79 is in imd_recover (src/lib/imd.c:139).
> 134
> 135 static void imdr_init(struct imdr *ir, void *upper_limit)
> 136 {
> 137 uintptr_t limit = (uintptr_t)upper_limit;
> 138 /* Upper limit is aligned down to 4KiB */
> 139 ir->limit = ALIGN_DOWN(limit, LIMIT_ALIGN);
> 140 ir->r = NULL;
> 141 }
> 142
> 143 static int imdr_create_empty(struct imdr *imdr, size_t root_size,
>
> I see that this function is being called multiple times (I added some
> more post_code calls and see them being trapped). I get a series of page
> faults, all of which I am able to honour except the last.
I don't see how imdr_init would be faulting. That's just assigning
fields of a struct sitting on the stack. What's your stack pointer
value at the time of the faults?
>
> (__handle_vm_exception:543) Guest fault: 0x7f7fffc (rIP: 00000000FFF81E41)
> (__handle_vm_exception:543) Guest fault: 0x7f7effc (rIP: 00000000FFF81E41)
> (__handle_vm_exception:543) Guest fault: 0x7f7dffc (rIP: 00000000FFF81E41)
> (__handle_vm_exception:543) Guest fault: 0x7f7cffc (rIP: 00000000FFF81E41)
> (__handle_vm_exception:543) Guest fault: 0x7f7bffc (rIP: 00000000FFF81E41)
> (__handle_vm_exception:543) Guest fault: 0x7f7affc (rIP: 00000000FFF81E41)
> (__handle_vm_exception:543) Guest fault: 0x7f79ffc (rIP: 00000000FFF81E41)
> (__handle_vm_exception:543) Guest fault: 0x7f78ffc (rIP: 00000000FFF81E41)
> (__handle_vm_exception:543) Guest fault: 0x7f77ffc (rIP: 00000000FFF81E41)
> (__handle_vm_exception:543) Guest fault: 0x7f76ffc (rIP: 00000000FFF81E41)
> <snip>
Are those non-rIP addresses the page fault address?
>
> handle_guest_realmode_page_fault: offset: 0x3ffc fault: 0x1003ffc reg: 0x1000000
> handle_guest_realmode_page_fault: offset: 0x2ffc fault: 0x1002ffc reg: 0x1000000
> handle_guest_realmode_page_fault: offset: 0x1ffc fault: 0x1001ffc reg: 0x1000000
> handle_guest_realmode_page_fault: offset: 0xffc fault: 0x1000ffc reg: 0x1000000
What is the above detailing? I'm not sure what the 'fault' value means.
>
> (__handle_vm_exception:561) ERROR: No region mapped to guest physical: 0xfffffc
>
>
> I want to understand why imd_recover gets called multiple times,
> starting from the top of memory (128MB is what I have assigned to the
> guest) down to 16MB at the end (after which I can't honour the fault).
> There is something amiss in my understanding of the coreboot memory map.
>
> Could you please help?
The imd library contains the implementation of cbmem. See
include/cbmem.h for more details, but how it works is that the
platform needs to supply the implementation of cbmem_top() which
defines the exclusive upper boundary to start growing entries downward
from. There is a large and small object size with large blocks being
4KiB in size and small blocks being 32 bytes. I don't understand why
the faulting addresses are offset from 128MiB by 512KiB with a 4KiB
stride.
What platform are you targeting for your coreboot build? Are you
restarting the instruction that faults? I'm really curious about the
current fault patterns. It looks like things are faulting around
accessing the imd_root_pointer root_offset field. Are these faults
reads or writes? That assumes cbmem_top() is returning 128MiB-512KiB,
though, and it still doesn't explain the successive strides. Do you
have serial port emulation to get the console messages out?
So in your platform code ensure two things are happening (a rough
sketch follows this list):
1. cbmem_top() returns the highest usable address in the guest's 'ram'
once it's online, 128MiB if that's your expectation. The value
cbmem_top() returns should never change across successive calls, aside
from NULL being returned while ram is not yet available.
2. cbmem_initialize_empty() is called exactly once, after the 'ram' is
online for use, in the non-S3 resume path, and cbmem_initialize() in
the S3 resume path. If S3 isn't supported in your guest then just use
cbmem_initialize_empty().
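A minimal sketch of those two pieces, assuming a fixed 128MiB guest
(the GUEST_RAM_TOP constant and the platform_cbmem_bringup() hook are
placeholders, not existing coreboot symbols):

#include <cbmem.h>
#include <stdint.h>

/* Placeholder: in real platform code this would come from however the
 * hypervisor reports memory size to the guest. */
#define GUEST_RAM_TOP ((uintptr_t)(128 * 1024 * 1024))

void *cbmem_top(void)
{
	/* Must be stable across calls; return NULL only while ram is
	 * not yet usable. */
	return (void *)GUEST_RAM_TOP;
}

/* Hypothetical hook name; call this exactly once after ram init. */
static void platform_cbmem_bringup(void)
{
	/* Non-S3 path only; use cbmem_initialize() for S3 resume. */
	cbmem_initialize_empty();
}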
>
> Regards
> Himanshu
>
> On Wed, Dec 14, 2016 at 9:27 PM, Chauhan, Himanshu
> <hschauhan(a)nulltrace.org> wrote:
>> Hi Aaron,
>>
>> Yes, I am mapping the memory where coreboot.rom is loaded to upper 4GiB. I
>> create a fixed shadow page table entry for reset vector.
>>
>> Coreboot isn't linked at the RIP address that I shared. I think with
>> the increase in size of coreboot (from the previous tag I was using) the
>> load address (guest physical) has changed. I used to calculate the load
>> address manually. I will check this and get back.
>>
>> Thanks.
>>
>> On Wed, Dec 14, 2016 at 8:17 PM, Aaron Durbin <adurbin(a)google.com> wrote:
>>>
>>> On Wed, Dec 14, 2016 at 3:11 AM, Chauhan, Himanshu
>>> <hschauhan(a)nulltrace.org> wrote:
>>> > Hi,
>>> >
>>> > I am working on a hypervisor and am using coreboot + FILO as guest
>>> > BIOS.
>>> > While things were fine a while back, it has stopped working. I see that
>>> > my
>>> > hypervisor can't handle address 0xFFFFFC while coreboot's RIP is at
>>> > 0xfff81e41.
>>>
>>>
>>> How are you loading up coreboot.rom in the VM? Are you just memory
>>> mapping it at the top of 4GiB address space? If so, what does
>>> 'cbfstool coreboot.rom print' show?
>>>
>>> >
>>> > The exact register dump of guest is as follow:
>>> >
>>> > [guest0/uart0] (__handle_vm_exception:558) ERROR: No region mapped to
>>> > guest
>>> > physical: 0xfffffc
>>> >
>>> > GUEST guest0/vcpu0 dump state:
>>> >
>>> > RAX: 0x9fe80 RBX: 0xfffff8 RCX: 0x1b RDX: 0x53a11439
>>> > R08: 0x0 R09: 0x0 R10: 0x0 R11: 0x0
>>> > R12: 0x0 R13: 0x0 R14: 0x0 R15: 0x0
>>> > RSP: 0x9fe54 RBP: 0xa0000 RDI: 0xfff801e4 RSI: 0x9fe80
>>> > RIP: 0xfff81e41
>>> >
>>> > CR0: 0xe0000011 CR2: 0x0 CR3: 0xa23000 CR4: 0x0
>>> > CS : Sel: 0x00000008 Limit: 0xffffffff Base: 0x00000000 (G: 1 DB: 1
>>> > L:
>>> > 0 AVL: 0 P: 1 DPL: 0 S: 1 Type: 11)
>>> > DS : Sel: 0x00000010 Limit: 0xffffffff Base: 0x00000000 (G: 1 DB: 1
>>> > L:
>>> > 0 AVL: 0 P: 1 DPL: 0 S: 1 Type: 3)
>>> > ES : Sel: 0x00000010 Limit: 0xffffffff Base: 0x00000000 (G: 1 DB: 1
>>> > L:
>>> > 0 AVL: 0 P: 1 DPL: 0 S: 1 Type: 3)
>>> > SS : Sel: 0x00000010 Limit: 0xffffffff Base: 0x00000000 (G: 1 DB: 1
>>> > L:
>>> > 0 AVL: 0 P: 1 DPL: 0 S: 1 Type: 3)
>>> > FS : Sel: 0x00000010 Limit: 0xffffffff Base: 0x00000000 (G: 1 DB: 1
>>> > L:
>>> > 0 AVL: 0 P: 1 DPL: 0 S: 1 Type: 3)
>>> > GS : Sel: 0x00000010 Limit: 0xffffffff Base: 0x00000000 (G: 1 DB: 1
>>> > L:
>>> > 0 AVL: 0 P: 1 DPL: 0 S: 1 Type: 3)
>>> > GDT : Sel: 0x00000000 Limit: 0x0000001f Base: 0xfff80200 (G: 0 DB: 0
>>> > L:
>>> > 0 AVL: 0 P: 0 DPL: 0 S: 0 Type: 0)
>>> > LDT : Sel: 0x00000000 Limit: 0x0000ffff Base: 0x00000000 (G: 0 DB: 0
>>> > L:
>>> > 0 AVL: 0 P: 0 DPL: 0 S: 0 Type: 0)
>>> > IDT : Sel: 0x00000000 Limit: 0x00000000 Base: 0x00000000 (G: 0 DB: 0
>>> > L:
>>> > 0 AVL: 0 P: 0 DPL: 0 S: 0 Type: 0)
>>> > TR : Sel: 0x00000000 Limit: 0x0000ffff Base: 0x00000000 (G: 1 DB: 0
>>> > L:
>>> > 1 AVL: 1 P: 0 DPL: 0 S: 0 Type: 0)
>>> > RFLAGS: 0xa [ ]
>>> >
>>> > I want to know which binary file (.o) I should disassemble to look
>>> > at the RIP.
>>> >
>>> > I was looking at
>>> > objdump -D -mi386 -Maddr16,data16 generated/ramstage.o
>>> >
>>> > but this is prior to linking and thus only has offsets.
>>> >
>>> > --
>>> >
>>> > Regards
>>> > [Himanshu Chauhan]
>>> >
>>> >
>>> > --
>>> > coreboot mailing list: coreboot(a)coreboot.org
>>> > https://www.coreboot.org/mailman/listinfo/coreboot
>>
>>
>>
>>
>> --
>>
>> Regards
>> [Himanshu Chauhan]
>>
>
>
>
> --
>
> Regards
> [Himanshu Chauhan]
Thank you for the hint. I have 'installed' cbmem by making the appropriate menuconfig changes and running "make LD_FLAGS=-static" in util/cbmem. Though (and I apologize for my naivete) how exactly do I use the "cbmem -c" command? Specifically, where exactly in the boot/startup process should I call it? I can't call it from the OS, and I haven't been able to find anything on Google about how to access coreboot userspace...
HN
________________________________________
From: Idwer Vollering <vidwer(a)gmail.com>
Sent: Wednesday, December 14, 2016 5:46 PM
To: Haleigh Novak
Cc: coreboot(a)coreboot.org
Subject: Re: [coreboot] post_code to text file
cbmem, in util/cbmem/, should be what you are looking for.
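In case it helps to picture the mechanism: post codes stashed in a
cbmem entry survive into the running OS the same way the console buffer
does, because cbmem lives in memory that coreboot reserves. A minimal
sketch of such a logger (the id, struct layout and function name below
are invented for illustration, not existing coreboot code):

#include <cbmem.h>
#include <stdint.h>
#include <string.h>

/* Invented id and layout, purely for illustration. */
#define CBMEM_ID_POST_LOG 0x504f5354
#define POST_LOG_ENTRIES  256

struct post_log {
	uint32_t count;
	uint8_t codes[POST_LOG_ENTRIES];
};

static void log_post_code(uint8_t value)
{
	/* Only possible once ram and cbmem are up; earlier post codes
	 * still need the usual I/O port path. */
	struct post_log *log = cbmem_find(CBMEM_ID_POST_LOG);

	if (log == NULL) {
		log = cbmem_add(CBMEM_ID_POST_LOG, sizeof(*log));
		if (log == NULL)
			return;
		memset(log, 0, sizeof(*log));
	}

	if (log->count < POST_LOG_ENTRIES)
		log->codes[log->count++] = value;
}

Reading it back from Linux would still need a small addition to the
cbmem utility (or any tool that walks the coreboot tables), since
'cbmem -c' itself only prints the console buffer.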
2016-12-15 2:35 GMT+01:00 Haleigh Novak <haleigh(a)edt.com>:
> Hello All,
>
> I was wondering if it would be possible to add a few lines in the post_code
> method so it also writes all the codes to a text file and then keep that
> text file around so it could be read once the system is running - for
> debugging purposes because a post_code reader is currently unavailable to
> me. I know that the BIOS handles the post codes; my chosen BIOS is
> coreboot, and I have added just a couple of lines to the post_code method,
> right after mainboard_post() is called, in order to print the codes to a
> file in / as well. But I don't have any idea how to pass a small text file along from
> coreboot up to yocto jethro during boot - is this even possible?
>
> Any comments, thoughts, ideas, or guidance would be greatly appreciated. Thank
> you.
>
> HN
>
> --
> coreboot mailing list: coreboot(a)coreboot.org
> https://www.coreboot.org/mailman/listinfo/coreboot
Thank you for your answers. In the end I used decode-dimms to check the
values, because I have an SPD binary from another, very similar reference
component that I could compare against the documentation.
Be careful: the input file for decode-dimms has to be exactly the output
of hexdump -C (including the offset column), otherwise decode-dimms
doesn't work.
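For reference, hexdump -C keeps an offset column and a trailing ASCII
gutter, so each line of the input should be shaped like this (the byte
values here are arbitrary, not a real SPD):

00000070  00 00 00 00 00 00 9c 2c  01 08 4d 38 37 31 46 00  |.......,..M871F.|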
Thank you for your contribution.
2016-12-17 13:51 GMT+01:00 Pok Gu <pokgoo002(a)gmail.com>:
> The difficulty with calculating by hand is that there are some checksum
> bits. If your goal is to overclock the RAM to maximize its performance,
> you probably have to repeat the process several times until the RAM
> reaches its best performance without failing memtest (the higher the
> speed, the more likely it is to fail). Without a tool or a script the
> checksum bits have to be recalculated by hand every time... I am calling
> for a tool to make this easier...
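For the DDR3 case the checksum is just a CRC-16 over the first part of
the SPD image, so a small helper can recompute bytes 126/127 after
hand-editing. A rough sketch, based on my reading of JEDEC 21-C Annex K
(please double-check the coverage rule against the spec; DDR4 per
Annex L is laid out differently):

#include <stdint.h>
#include <stdio.h>

/* CRC-16, polynomial 0x1021, initial value 0, as used for DDR3 SPDs. */
static uint16_t spd_crc16(const uint8_t *p, int count)
{
	uint16_t crc = 0;

	while (count-- > 0) {
		crc ^= (uint16_t)(*p++) << 8;
		for (int i = 0; i < 8; i++)
			crc = (crc & 0x8000) ? (crc << 1) ^ 0x1021 : crc << 1;
	}
	return crc;
}

/* Recompute bytes 126/127 of a 256-byte DDR3 SPD image after editing.
 * Bit 7 of byte 0 selects whether the CRC covers bytes 0-116 or 0-125. */
static void spd_fix_checksum(uint8_t spd[256])
{
	int coverage = (spd[0] & 0x80) ? 117 : 126;
	uint16_t crc = spd_crc16(spd, coverage);

	spd[126] = crc & 0xff;	/* LSB */
	spd[127] = crc >> 8;	/* MSB */
}

int main(int argc, char **argv)
{
	uint8_t spd[256];
	FILE *f = fopen(argc > 1 ? argv[1] : "spd.bin", "r+b");

	if (f == NULL || fread(spd, 1, sizeof(spd), f) != sizeof(spd))
		return 1;
	spd_fix_checksum(spd);
	fseek(f, 0, SEEK_SET);
	fwrite(spd, 1, sizeof(spd), f);
	fclose(f);
	return 0;
}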
>
> Possibly you don't need to invest extra money (just some time) to use
> that tool. Thaiphoon Burner has a free version that does not allow you
> to save the spd.bin, but it has a hex editor that shows you the modified
> hex bytes and the calculated checksum, so it can be used as a reference.
> Then manually modify the spd.bin file and write it using some free tools
> (e.g., RWEverything on Windows or an eeprom/i2c tool on Linux).
>
> 2016-12-17 5:21 GMT+08:00 David Hendricks <dhendrix(a)google.com>:
>
>>
>> On Fri, Dec 16, 2016 at 11:01 AM, Pok Gu <pokgoo002(a)gmail.com> wrote:
>>
>>> Thaiphoon Burner is the most popular and probably the only tool for
>>> modifying the spd.bin. It has many built-in templates and you only need to
>>> select the speed, clocking, and voltage you like (e.g., DDR3-1688/DDR3-1333
>>> and CL10/CL11 and 1.35V/1.5V) and it will do all the rest calculations and
>>> generate the spd.bin for you. Then, the only thing you need to do is to
>>> flash the spd.bin to the spd chip on the ram, and reboot, and your ram
>>> stick is overclocked.
>>>
>>> Unfortunately, it is only available on Windows. So I have one harddisk
>>> loaded with Windows 7 and Thaiphoon Burner specialized for this work.
>>>
>>> I haven't found any Linux software that can do this, unless you
>>> calculate and modify the spd.bin by hand. (If anyone has found one,
>>> let me know.)
>>>
>>
>> Cool - I never knew about that tool. Seems like a good investment if the
>> goal is to generate an spd.bin from scratch.
>>
>> Sebastien - What are you trying to do, exactly? If you only need to
>> change one or two parameters, then a hex editor and a calculator should be
>> sufficient. The relevant specification for SPDs is JEDEC Standard No. 21-C
>> Annex K for DDR3 and Annex L for DDR4. (you'll need to register with
>> jedec.org to download the specs)
>>
>> --
>> David Hendricks (dhendrix)
>> Systems Software Engineer, Google Inc.
>>
>
>
--
Sébastien Basset