See patch
Stefan Reinauer wrote:
Add support for the SMSC LPC47n227 SuperI/O chip
 Kconfig                            |   2
 Makefile.inc                       |   1
 lpc47n227/Config.lb                |  21 ++
 lpc47n227/Makefile.inc             |  20 ++
 lpc47n227/chip.h                   |  29 +++
 lpc47n227/lpc47n227.h              |  29 +++
 lpc47n227/lpc47n227_early_serial.c | 150 +++++++++++++++
 lpc47n227/superio.c                | 352 +++++++++++++++++++++++++++++++++++++
 8 files changed, 604 insertions(+)
Signed-off-by: Stefan Reinauer <stepan@coresystems.de>
Acked-by: Peter Stuge <peter@stuge.se>
Hi Stefan,
Congrats for the great work, having a newly supported laptop is wonderful!
Is this SuperIO chip very different from the generic SMSC ones whose support code can be found in src/superio/smscsuperio? For avoiding code duplication it would be nice to move it there if it's not too hard.
Regards, Cristi
Dear Cristi,
Is this SuperIO chip very different from the generic SMSC ones whose support code can be found in src/superio/smscsuperio? For avoiding code duplication it would be nice to move it there if it's not too hard.
Unfortunately, the generic SMSC driver is not suitable, as it assumes a SuperI/O chip with logical devices, which the LPC47N227 does not have.
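To illustrate the difference: in the PnP scheme the generic driver relies on, every function is a numbered logical device that is selected through configuration register 0x07 and then programmed through the uniform 0x30/0x60/0x61 register set, whereas a flat-register chip has dedicated per-function configuration registers and nothing to select. A rough sketch of both shapes, assuming coreboot's outb() from <arch/io.h>; the flat-register index and address encoding below are made-up placeholders, not values from the LPC47N227 datasheet:

#include <stdint.h>
#include <arch/io.h>	/* coreboot port I/O: outb(value, port) */

/* Write one indexed configuration register of a Super I/O chip. */
static void sio_write(uint16_t port, uint8_t index, uint8_t value)
{
	outb(index, port);
	outb(value, port + 1);
}

/* Logical-device style, as assumed by the generic smscsuperio driver:
 * select a logical device number (LDN) first, then use the standard
 * PnP registers (0x60/0x61 base address, 0x30 activate). */
void enable_uart_ldn(uint16_t port, uint8_t ldn, uint16_t iobase)
{
	outb(0x55, port);			/* enter configuration mode (SMSC key) */
	sio_write(port, 0x07, ldn);		/* select the logical device */
	sio_write(port, 0x60, iobase >> 8);	/* I/O base, high byte */
	sio_write(port, 0x61, iobase & 0xff);	/* I/O base, low byte */
	sio_write(port, 0x30, 0x01);		/* activate the device */
	outb(0xaa, port);			/* exit configuration mode */
}

/* Flat-register style: no logical devices to select; every function has
 * its own dedicated configuration register.  CR_UART1_BASE and the
 * address encoding are placeholders only. */
#define CR_UART1_BASE 0x24
void enable_uart_flat(uint16_t port, uint16_t iobase)
{
	outb(0x55, port);
	sio_write(port, CR_UART1_BASE, (iobase >> 2) & 0xff);
	outb(0xaa, port);
}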
I'm generally not a big fan of the accumulative drivers in coreboot, such as the smscsuperio or the i82801xx driver.
While the approach definitely makes sense at the OS level, certain revisions of certain chips may need different workarounds than others. Look at the i945 and i82801gx drivers for some examples: a lot of the work coreboot drivers have to do these days is coping with such peculiarities of the hardware, and supporting those errata workarounds and fixes across all revisions of a given chip is quite hard. If we also have to take care of cross-dependencies between whole groups of chips, the code will become unmaintainable and buggy. I think the i82801xx driver is an excellent example of the problems such a driver can bring.
What it boils down to is that at time X the driver works for hardware A; then someone sends a patch for hardware B, and at time Y only hardware B is operational. Testing this on one chip is hard but possible, especially if the maintainer of that component has access to the system. But if I were to apply an erratum workaround for the LPC47N227, how could I possibly make sure it won't break 25 other SuperIO chips that I don't have? I could be very careful when writing the driver and guard every line with if (hw_id == XX && hw_revision == YY) { ... }, but then that's just two drivers in one: it increases the amount of code that has to be loaded at boot time from slow flash, and a decision that I can easily and safely make at compile time has to be redone at run time on every boot. It also increases the burden on the code maintainer with every change, making improvements to the code very hard.
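To make that concrete, here is a rough sketch of the two shapes such code takes. The chip IDs and fixup functions are invented for the example, and the Kconfig symbol follows this patch's naming only as an assumption:

#include <stdint.h>

void fixup_lpc47n227(void) { /* chip-specific erratum workaround */ }
void fixup_other_smsc(void) { /* workaround for a different chip */ }

/* Run-time guarding: one accumulative driver carries the quirks of
 * every chip it claims to support, and every board loads and executes
 * all of the checks, on every boot. */
void apply_errata_runtime(uint8_t hw_id, uint8_t hw_revision)
{
	if (hw_id == 0x5a && hw_revision >= 0x02)	/* made-up IDs */
		fixup_lpc47n227();
	if (hw_id == 0x7c)				/* made-up ID */
		fixup_other_smsc();
	/* ...one branch per supported chip, all linked into every image. */
}

/* Compile-time selection: the board's Kconfig picks exactly one driver,
 * so the image only contains the workarounds it actually needs. */
void apply_errata_compiletime(void)
{
#if defined(CONFIG_SUPERIO_SMSC_LPC47N227)
	fixup_lpc47n227();
#else
	fixup_other_smsc();
#endif
}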
I guess in the end this is a matter of where you draw the line between unification and specialization. In my experience, using the same driver for many revisions of a chip worked pretty well while using the same driver for several generations of chips from a given vendor introduced breakage of some kind or another.
Best regards, Stefan
On Sun, Jan 17, 2010 at 3:51 PM, Stefan Reinauer <stepan@coresystems.de> wrote:
I'm generally not a big fan of the accumulative drivers in coreboot, such as the smscsuperio or the i82801xx driver.
I agree.
What it boils down to is that at time X the driver works for hardware A; then someone sends a patch for hardware B, and at time Y only hardware B is operational. Testing this on one chip is hard but possible, especially if the maintainer of that component has access to the system. But if I were to apply an erratum workaround for the LPC47N227, how could I possibly make sure it won't break 25 other SuperIO chips that I don't have? I could be very careful when writing the driver and guard every line with if (hw_id == XX && hw_revision == YY) { ... }, but then that's just two drivers in one: it increases the amount of code that has to be loaded at boot time from slow flash, and a decision that I can easily and safely make at compile time has to be redone at run time on every boot. It also increases the burden on the code maintainer with every change, making improvements to the code very hard.
This has actually happened in coreboot. Older boards with older VIA northbridge or southbridge parts stopped working when the shared chip code was modified for newer boards. It's very easy to break chip support with innocent-looking changes that "can't hurt". Chips are prone to change from rev to rev or part to part. Chips are not software.
I think for chip support forking the support software may look bad from a "software engineering" point of view, but is at times the only practical option because (to be as polite as possible) the implementation of many of these chips is not that great.
thanks ron
Ron Minnich wrote:
This has actually happened in coreboot. Older boards with older VIA northbridge or southbridge parts stopped working when the shared chip code was modified for newer boards. It's very easy to break chip support with innocent-looking changes that "can't hurt". Chips are prone to change from rev to rev or part to part. Chips are not software.
Absolutely.
I think for chip support forking the support software may look bad from a "software engineering" point of view, but is at times the only practical option because (to be as polite as possible) the implementation of many of these chips is not that great.
We should also keep in mind that these problems occur because people make newer versions of their chips in a way comparable to how software designers write software. If I remember correctly, the Unix libraries libfoo.so.1 and libfoo.so.2 may have different APIs, while libfoo.so.1.0 and libfoo.so.1.1 should not change the API. It's just that for chips, the "API" consists of hardware features and electrical characteristics. In the case of our closed-source antagonists, the silicon vendor usually writes the silicon support code (a.k.a. initialization code/BIOS), too, so they see no significant advantage in making their chips compatible on the initialization-code level across generations. I think the hilarious part of this is that hardware designers want to make their hardware as generic as possible, and that's exactly what causes the work on the firmware side. We shouldn't lose the twinkle in our eye when we call that "a hardware design issue".
Stefan
ron minnich wrote:
I think for chip support forking the support software may look bad from a "software engineering" point of view, but is at times the only practical option
Hopefully we can find a good balance, so that general things which are indeed the same across many parts can be implemented only once.
//Peter
On 18.01.2010 00:57, ron minnich wrote:
I think for chip support forking the support software may look bad from a "software engineering" point of view, but is at times the only practical option because (to be as polite as possible) the implementation of many of these chips is not that great.
While this may be true for coreboot, this method caused huge pains in flashrom because people didn't even read the code before they copied and modified it. The worst cases were 100% copy-paste jobs where only a search/replace of the function names had been done. Over time, some of the copied files got modified and others stayed the same. A sizable chunk of the old drivers contradicts the datasheets because people forgot to actually adapt the driver to the chip after copying. Fortunately, a detailed analysis of the code against the datasheets allowed us to overcome the duplication, and thanks to the improved infrastructure ~80% of the old chip drivers can now be killed.
I do recognize that coreboot support for a chipset is way harder to write and check than flashrom support for flash chips and agree that "looks safe" changes to unified multi-chipset code in coreboot can be a huge source of bugs.
What strikes me as odd is the fact that many chipset files in the coreboot source tree just got a search/replace treatment from another chipset, even if some feature is not even present on the new chipset. Would a dedicated "template" chipset directory help people implementing support for completely new chipsets?
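For what it's worth, such a template could be as small as a skeleton superio.c with every board-specific value replaced by a loud TODO. A sketch of what that might look like, modelled from memory on the Super I/O drivers currently in the tree; all VENDOR/MODEL names are placeholders, and the pnp_info entries would have to be copied from a real, current driver rather than from this sketch:

/* superio.c -- template skeleton, not a working driver. */
#include <device/device.h>
#include <device/pnp.h>
#include "chip.h"

static void vendor_model_init(device_t dev)
{
	/* TODO: per-device init (keyboard controller, hardware monitor, ...). */
}

static struct device_operations ops = {
	.read_resources   = pnp_read_resources,
	.set_resources    = pnp_set_resources,
	.enable_resources = pnp_enable_resources,
	.enable           = pnp_enable,
	.init             = vendor_model_init,
};

/* TODO: one entry per logical device (or per function on a flat-register
 * chip), taken from the datasheet -- do not copy another chip's list. */
static struct pnp_info pnp_dev_info[] = {
};

static void enable_dev(struct device *dev)
{
	pnp_enable_devices(dev, &ops, ARRAY_SIZE(pnp_dev_info), pnp_dev_info);
}

struct chip_operations superio_vendor_model_ops = {
	CHIP_NAME("VENDOR MODEL Super I/O")
	.enable_dev = enable_dev,
};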
Regards, Carl-Daniel