[coreboot] Add coreboot storage driver

Andrey Petrov andrey.petrov at intel.com
Mon Feb 13 08:19:05 CET 2017


Hi there,

tl;dr:
We are considering adding early parallel code execution in coreboot. We 
need to discuss how this can be done.

Nowadays we see firmware getting more complicated, while CPUs do not 
necessarily keep up. Furthermore, recent performance gains come largely 
from parallelism, from stuffing more cores on the die, rather than from 
raw per-core computing power. However, firmware typically runs on just 
one CPU and is effectively barred from all the parallelism goodies 
available to OS software.

For example, Apollolake is struggling to finish firmware boot with all 
the bells and whistles (vboot, TPM and our friendly, ever-vigilant TXE) 
under one second. Interestingly, a great deal of the work that needs to 
be done is not even compute-bound; it is IO-bound. In the case of the 
SDHCI driver below, it is possible to train the eMMC link to switch 
from the default low-frequency single-data-rate mode (sdr50) to a 
high-frequency dual-data-rate mode (hs400). This link training 
increases eMMC throughput by a factor of 15-20. As a result, the time 
it takes depthcharge to load the kernel goes down from 130ms to 10ms. 
However, the training sequence requires constant, frequent CPU 
attention, so if it runs serially on the boot CPU the training cost 
eats the gain and there is no net win from turning on the 
higher-frequency modes. We also experimented with starting this work 
from the current MP init code. Unfortunately, it starts pretty late in 
the game, so there is not enough parallel time left to reap a 
meaningful benefit.
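
To make the IO-bound pattern concrete, here is a rough sketch of the 
kind of polling loop we are talking about (the CMD1 power-up poll Bora 
describes below). The controller type and emmc_send_cmd1() helper are 
made up for illustration; only the stopwatch/udelay helpers are meant 
to be coreboot's:

/*
 * Hypothetical sketch: poll CMD1 until the card reports that power-up
 * is complete (OCR bit 31 set), as JEDEC requires. The CPU does almost
 * no computation here; it mostly waits on the hardware.
 */
#define OCR_POWER_UP_DONE       (1u << 31)

static int emmc_poll_cmd1(struct sdhci_ctrlr *ctrlr)
{
        struct stopwatch sw;

        stopwatch_init_msecs_expire(&sw, 300);
        do {
                /* emmc_send_cmd1() is a made-up helper returning OCR. */
                uint32_t ocr = emmc_send_cmd1(ctrlr);
                if (ocr & OCR_POWER_UP_DONE)
                        return 0;
                udelay(1000);   /* the CPU just idles between polls */
        } while (!stopwatch_expired(&sw));
        return -1;
}

A loop like this is a perfect candidate to keep a second core busy 
while the BSP goes on with the rest of the boot flow.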

In order to address this problem we can do the following:
1. Add scheduler, early or not
2. Add early MPinit code

For [1], I am aware of one scheduler discussion in 2013, but that was a 
long time ago and things may have moved a bit. I do not want to be a 
necromancer and reanimate an old discussion, but does anybody see this 
as a useful/viable thing to do?
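
For reference, such a scheduler would not need to be preemptive; a 
cooperative round-robin over small state-machine tasks would already 
cover the IO-bound cases above. A rough sketch (all names made up, not 
an existing coreboot API):

/*
 * Hypothetical sketch of a tiny cooperative scheduler: each task does
 * a bounded amount of work per step() call and reports when it is
 * finished, so nothing ever blocks the loop.
 */
struct task {
        int (*step)(void *arg); /* returns 1 when finished */
        void *arg;
        int done;
};

static void run_tasks(struct task *tasks, size_t num)
{
        size_t remaining = num;

        while (remaining > 0) {
                for (size_t i = 0; i < num; i++) {
                        if (tasks[i].done)
                                continue;
                        if (tasks[i].step(tasks[i].arg)) {
                                tasks[i].done = 1;
                                remaining--;
                        }
                }
        }
}

A CMD1 poll, for instance, would fit in one such step() that sends a 
command, checks the OCR and immediately returns.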

For [2], we have been working on a prototype for Apollolake that does 
pre-memory MP init. We have gotten to a stage where we can run C code 
on another core before DRAM is up (please do not try that at home, 
because you would need custom experimental ucode). However, there are 
many open questions about what model to use and how to create 
infrastructure to run code in parallel at such an early stage. Shall we 
just add a "run this (mini) stage on this core" concept? Or shall we 
add tasklet/worklet structures that would allow code to keep running, 
and have the infrastructure manage the context, and potentially resume 
it, when the migration to DRAM happens? One problem is that code 
running in CAR needs to stop by the time the system is ready to tear 
down CAR and migrate to DRAM, and we don't want to delay that by 
waiting for such a task to complete. At the same time, certain tasks 
may have widely fluctuating run times, so you would want to let them 
continue. It may actually be possible to do just that if we use the 
same address space for CAR and DRAM. But come to think of it, this is 
just the tip of the iceberg, and there are packs of other issues we 
would need to deal with.
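
To make that less abstract, the "run this (mini) stage on this core" 
primitive could look roughly like the sketch below. Everything here is 
hypothetical, including the CAR-teardown barrier:

/*
 * Hypothetical sketch: the BSP hands an AP an entry point and must
 * synchronize with it before CAR teardown, since the task's stack and
 * data live in CAR.
 */
struct early_task {
        void (*entry)(void *arg);
        void *arg;
        volatile int done;      /* set by the AP when entry() returns */
};

/* Kick off the task on the given AP; made-up function. */
int early_task_start(unsigned int cpu, struct early_task *task);

/* Called right before CAR teardown / migration to DRAM. */
static void early_task_sync(struct early_task *task)
{
        /*
         * If tasks were instead allowed to survive the migration, this
         * barrier would be replaced by a context save/resume performed
         * by the infrastructure.
         */
        while (!task->done)
                ;       /* spin; a real version would need a timeout */
}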

Does any of that make sense? Has somebody perhaps thought about this 
before? Let's see what other ways there may be to deal with this 
challenge.

thanks
Andrey


On 01/25/2017 03:16 PM, Guvendik, Bora wrote:
> Port the sdhci and mmc drivers from depthcharge to coreboot. The
> purpose is to speed up boot time by starting storage initialization
> on another CPU in parallel. On the Apollolake systems we checked, we
> found that the CPU can spend up to 300ms sending CMD1s to the
> hardware, so we can avoid this delay by parallelizing.
>
> - Why not add this parallelization in the payload instead?
>
>                 There is potentially more time to parallelize things
>                 in coreboot. Payload execution is much faster, so we
>                 don't get much parallel execution time.
>
> - Why not send CMD1 once in coreboot to trigger power-up and let HW
>   initialize using only 1 CPU?
>
>                 The JEDEC spec requires the CPU to keep sending CMD1s
>                 while the hardware is busy (section 6.4.3). We tested
>                 with real-world hardware and it indeed didn't work
>                 with a single CMD1.
>
> - Why did you port the driver from depthcharge?
>
>                 I wanted to use a driver that is already proven, to
>                 avoid bugs. It is also easier to apply patches back
>                 and forth.
>
> https://review.coreboot.org/#/c/18105
>
> Thanks
> Bora


