[coreboot] Add coreboot storage driver

Zoran Stojsavljevic zoran.stojsavljevic at gmail.com
Mon Feb 13 09:21:48 CET 2017


Hello Andrey,

> Does any of that make sense? Perhaps somebody thought of this before?
Let's see what may be other ways to deal with this challenge.

No, it does not. What you are proposing, in fact, is to turn the boot loader
into a quasi-OS: adding a scheduler and HW multithreading, sans MMU (actually
dealing, for now, with two HW threads). And you have chosen coreboot to
implement this.

I would suggest that what you are proposing first be done in a true BIOS, so
IBVs can work on this proposal and see how much BIOS boot-up time improves
from this parallelism. Besides, BIOS is much slower (UEFI BIOSes boot in
the range of 30 seconds) and should be faster. And... BIOS is closed
source, so there is a major business task that goes with it: project
management and a few million USD to be spent on this project. Paid for by
Intel and Intel BIOS vendors. ;-)

Besides, who knows what the next challenge will be (repeating your
words): *"...this
is just the tip of the iceberg and there are packs of other issues we would
need to deal with."*

Very soon you'll run into shared HW resources, and then you'll need to
implement semaphores, atomic operations, and God knows what else!?
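To make the shared-resource point concrete: even the most basic mutual
exclusion between two early HW threads needs an atomic test-and-set. A
minimal sketch using C11 atomics follows; the `fw_spinlock` names are
hypothetical, not coreboot API, and real pre-memory code would also need an
architecture-specific pause hint in the spin loop.

```c
#include <assert.h>
#include <stdatomic.h>

/* Hypothetical early-firmware spinlock sketch, not coreboot code. */
struct fw_spinlock {
	atomic_flag locked;
};

static inline void fw_spinlock_init(struct fw_spinlock *l)
{
	atomic_flag_clear(&l->locked);
}

static inline void fw_spinlock_lock(struct fw_spinlock *l)
{
	/* Spin until this thread is the one that flipped the flag. */
	while (atomic_flag_test_and_set_explicit(&l->locked,
						 memory_order_acquire))
		;	/* a PAUSE hint would go here on x86 */
}

static inline void fw_spinlock_unlock(struct fw_spinlock *l)
{
	atomic_flag_clear_explicit(&l->locked, memory_order_release);
}
```

The acquire/release ordering is the part that matters: it keeps accesses to
the protected resource from being reordered outside the critical section.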

My two cents' thinking (after all, this is only me, Zoran, an independent
self-contributor),
Zoran

On Mon, Feb 13, 2017 at 8:19 AM, Andrey Petrov <andrey.petrov at intel.com>
wrote:

> Hi there,
>
> tl;dr:
> We are considering adding early parallel code execution in coreboot. We
> need to discuss how this can be done.
>
> Nowadays we see firmware getting more complicated. At the same time CPUs
> do not necessarily catch up. Furthermore, recent increases in performance
> can be largely attributed to parallelism and stuffing more cores on die
> rather than sheer core computing power. However, firmware typically runs on
> just one CPU and is effectively barred from all the parallelism goodies
> available to OS software.
>
> For example, Apollolake is struggling to finish firmware boot with all the
> bells and whistles (vboot, TPM and our friendly, ever-vigilant TXE) under
> one second. Interestingly, a great deal of the tasks that need to be done
> are not even computation-bound; they are IO-bound. In the case of SDHCI
> below, it is possible to train the eMMC link to switch from the default
> low-frequency single data rate mode (sdr50) to a high-frequency dual data
> rate mode (hs400). This link training increases eMMC throughput by a
> factor of 15-20. As a result, the time it takes to load the kernel in
> depthcharge goes down from 130ms to 10ms. However, the training sequence
> requires constant, frequent CPU attention. As a result, it doesn't make
> any sense to try to turn on the higher-frequency modes, because you don't
> get any net win. We also experimented with starting this work in the
> current MP init code. Unfortunately, it starts pretty late in the game and
> we do not have enough parallel time to reap a meaningful benefit.
>
> In order to address this problem, we can do the following things:
> 1. Add a scheduler, early or not
> 2. Add early MP init code
>
> For [1], I am aware of one scheduler discussion in 2013, but that was a
> long time ago and things may have moved a bit. I do not want to be a
> necromancer and reanimate an old discussion, but does anybody see it as a
> useful/viable thing to do?
>
> For [2], we have been working on a prototype for Apollolake that does
> pre-memory MP init. We've gotten to a stage where we can run C code on
> another core before DRAM is up (please do not try that at home, because
> you'd need custom experimental ucode). However, there are many questions
> about what model to use and how to create the infrastructure to run code
> in parallel at such an early stage. Shall we just add a "run this (mini)
> stage on this core" concept? Or shall we add tasklet/worklet structures
> that would allow code to live and run, and, when the migration to DRAM
> happens, have the infrastructure take care of managing the context and
> potentially resuming it? One problem is that code running with CAR needs
> to stop by the time the system is ready to tear down CAR and migrate to
> DRAM. We don't want to delay that by waiting on such a task to complete.
> At the same time, certain tasks may have largely fluctuating run times,
> so you would want to continue them. It may actually be possible to do
> just that, if we use the same address space for CAR and DRAM. But come to
> think of it, this is just the tip of the iceberg and there are packs of
> other issues we would need to deal with.
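The "run this (mini) stage on this core" concept could look roughly like the
sketch below. Every name here is hypothetical; on real hardware
`mini_stage_start()` would wake an AP (e.g. via INIT/SIPI) instead of running
the entry point inline, and the `done` flag would be set by the AP itself.

```c
#include <assert.h>

/* Hypothetical "mini stage" primitive, invented for illustration. */
typedef void (*mini_stage_fn)(void *arg);

struct mini_stage {
	mini_stage_fn entry;
	void *arg;
	volatile int done;	/* set when the stage completes */
};

static void mini_stage_start(struct mini_stage *s, int target_cpu)
{
	(void)target_cpu;	/* would select which AP to wake */
	s->done = 0;
	s->entry(s->arg);	/* stand-in for execution on the AP */
	s->done = 1;
}

/* The BSP must join all mini stages before CAR teardown and the
 * migration to DRAM -- this wait is the delay the mail worries about. */
static void mini_stage_join(const struct mini_stage *s)
{
	while (!s->done)
		;
}

/* Example stage, standing in for early storage init. */
static void bump(void *arg)
{
	(*(int *)arg)++;
}
```

The join-before-CAR-teardown rule is the crux: either every mini stage is
short enough to finish in time, or the infrastructure has to snapshot and
resume its context after the move to DRAM.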
>
> Does any of that make sense? Perhaps somebody thought of this before?
> Let's see what may be other ways to deal with this challenge.
>
> thanks
> Andrey
>
>
>
> On 01/25/2017 03:16 PM, Guvendik, Bora wrote:
>
>> Port the sdhci and mmc drivers from depthcharge to coreboot. The purpose
>> is to speed up boot time by starting storage initialization on another
>> CPU in parallel. On the Apollolake systems we checked, we found that the
>> CPU can take up to 300ms sending CMD1s to the HW, so we can avoid this
>> delay by parallelizing.
>>
>> - Why not add this parallelization in the payload instead?
>>
>>   There is potentially more time to parallelize things in coreboot.
>>   Payload execution is much faster, so we don't get much parallel
>>   execution time.
>>
>> - Why not send CMD1 once in coreboot to trigger power-up and let the HW
>>   initialize using only one CPU?
>>
>>   The JEDEC spec requires the CPU to keep sending CMD1s while the
>>   hardware is busy (section 6.4.3). We tested with real-world hardware,
>>   and it indeed didn't work with a single CMD1.
>>
>>
>> - Why did you port the driver from depthcharge?
>>
>>   I wanted to use a driver that is already proven, to avoid bugs. It is
>>   also easier to apply patches back and forth.
>>
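The JEDEC CMD1 requirement discussed above boils down to a polling loop:
keep issuing CMD1 (SEND_OP_COND) until the OCR power-up-complete bit (bit
31) comes back set. A sketch follows; the controller hook and all names are
hypothetical, not taken from the ported driver under review.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* eMMC OCR bit 31: card power-up status (0 = still busy). */
#define OCR_POWER_UP_DONE	(1u << 31)

/* Hypothetical controller hook: issues CMD1 and returns the OCR. */
typedef uint32_t (*send_cmd1_fn)(void *ctrl);

/* Keep sending CMD1 while the device reports busy (JEDEC 6.4.3),
 * up to a retry budget. A real driver would udelay() between tries;
 * this loop is the work that can eat up to 300ms on Apollolake. */
static bool emmc_wait_power_up(send_cmd1_fn send_cmd1, void *ctrl,
			       int max_tries)
{
	for (int i = 0; i < max_tries; i++) {
		if (send_cmd1(ctrl) & OCR_POWER_UP_DONE)
			return true;
	}
	return false;
}

/* Fake controller for illustration: busy for a few polls, then ready. */
static uint32_t fake_cmd1(void *ctrl)
{
	int *polls_left = ctrl;
	return --(*polls_left) <= 0 ? OCR_POWER_UP_DONE : 0;
}
```

Because the loop is all waiting and no computation, it is exactly the kind
of work the proposal wants to push onto a second core.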
>> https://review.coreboot.org/#/c/18105
>>
>>
>>
>> Thanks
>>
>> Bora
> --
> coreboot mailing list: coreboot at coreboot.org
> https://www.coreboot.org/mailman/listinfo/coreboot
>