Yes Patrick, you understood me correctly.
Why should we spread work that can be done centrally by the I2C
controller driver across several slave drivers?
I was concerned about several issues on the I2C bus caused by the
following policy: I2C is a slow, low-bandwidth bus, so let us do the
individual transfers byte-wise in a background task.
This has two disadvantages:
1. If you have a serious bug and have to look at the physical bus
level to get a clue what is going on, you have to capture a long time
window with an oscilloscope, and the individual transfers will be
interrupted by huge gaps (because the transfers are split up and other
things happen in between). This makes it really hard to analyze.
2. There is a bigger chance of messing up the slave. If that happens,
you can get a hanging I2C bus, and in the worst case you will have to
power-cycle the slave (and in most cases the board as well) to get the
slave back into a working state.
From my point of view, it is much better to keep the transfers on the
I2C bus as close together as possible to avoid such errors.
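
To make that concrete, here is a minimal sketch of a slave driver
handing a complete register read (pointer write plus data read) to the
bus driver in one call, so the controller can run the segments
back-to-back. The names (struct i2c_seg, i2c_transfer) are assumptions
for illustration and not necessarily the exact coreboot interface:

#include <stddef.h>
#include <stdint.h>

/* Hypothetical segment descriptor; illustrative only. */
struct i2c_seg {
	int read;	/* 0 = write segment, 1 = read segment */
	uint8_t chip;	/* 7-bit slave address */
	uint8_t *buf;
	size_t len;
};

/* Provided by the I2C controller (bus) driver: executes all segments
 * as one transaction, with a repeated start between segments. */
int i2c_transfer(unsigned int bus, struct i2c_seg *segments, int count);

/* Slave driver side: describe the whole register read up front and let
 * the bus driver schedule it, instead of issuing single bytes from a
 * background task with gaps in between. */
static int read_reg(unsigned int bus, uint8_t chip, uint8_t reg,
		    uint8_t *val)
{
	struct i2c_seg seg[2] = {
		{ .read = 0, .chip = chip, .buf = &reg, .len = 1 },
		{ .read = 1, .chip = chip, .buf = val,  .len = 1 },
	};

	return i2c_transfer(bus, seg, 2);
}

With all ordering and timing handled inside the bus driver, the bytes
appear back-to-back on the wire, which also keeps an oscilloscope
capture short.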
Werner
On 20.02.2015 at 17:30, Patrick Georgi via coreboot wrote:
> 2015-02-19 21:12 GMT+01:00 Peter Stuge <peter(a)stuge.se>:
>>> I think the question is really what we would gain from this.
>> I think it's less about performance and more about an accurate and
>> clean model being available to mainboard code when needed.
> From discussing things with Werner, one of his concerns (as I
> understood them) was higher stability in light of picky I2C devices:
> When you schedule the entire communication in one pass, the
> (sufficiently capable) controller makes sure that things happen in the
> right order and at the right time. If some of that is arbitrated by
> CPU code, there's more room for error.
>
> Even for the I2C controllers that essentially bitbang things with no
> help from the controller chip, that should help avoid mistakes, since
> all the nasty warts of I2C (of which there seem to be many) are managed
> in the bus driver, not in every single slave driver.
>
>
> Patrick