Greg Watson <gwatson@lanl.gov> writes:
> We now seem to have a chip keyword, in addition to config, device, driver and object keywords. We have a device_operations structure, a cpu_device_id structure and a cpu_driver structure as well as a pci_driver structure. The cpu tree has been reorganized to include vendor directories as well as architecture directories. We now seem to have a drivers directory tree.
All of the northbridge/southbridge/superio keywords were condensed into a single keyword, chip. A simplification.
chip specifies where the code lives; generally this is a subdirectory with code for a specific ASIC. device describes a logical subdevice within a chip, and creates an entry in the device tree.
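As a sketch of how the two keywords nest (the chip paths below are examples; check an existing mainboard's Config.lb for the exact syntax):

```
chip northbridge/intel/i440bx          # where the code lives
  device pci 0.0 on end                # a logical device within the chip
  chip southbridge/intel/i82371eb
    device pci 7.0 on end
  end
end
```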
And of course the device tree shows up in static.c like it has done for a long while.
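For illustration, the generated static.c amounts to a set of statically initialized device structures wired together into a tree. The struct below is a minimal stand-in for the real struct device, and the entries are hand-written, not actual tool output:

```c
#include <stddef.h>

/* Minimal stand-in for the real struct device. */
struct device {
	const char *name;
	struct device *sibling;   /* next device on the same bus */
	struct device *children;  /* first device on the subordinate bus */
};

/* Hand-written equivalent of what the config tool emits from Config.lb. */
static struct device dev_southbridge = { "pci 7.0", NULL, NULL };
static struct device dev_northbridge = { "pci 0.0", &dev_southbridge, NULL };
struct device dev_root = { "root", NULL, &dev_northbridge };
```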
As for the CPUs: on x86 they were reorganized so that, as much as possible, they work like any other device, since in many cases multiple CPUs can be plugged into the same motherboard. The design is for per-architecture code to figure out which CPU it is running on and then set up the methods appropriately.
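A sketch of how that lookup can work: a cpu_driver pairs an ID table with a set of methods, and the per-architecture code matches the running CPU against the table. The ops type and the vendor/signature values below are simplified stand-ins, not the real declarations:

```c
#include <stddef.h>

struct cpu_device_id {
	unsigned vendor;
	unsigned device;   /* e.g. the processor signature from CPUID */
};

/* Simplified stand-in for the methods a cpu driver provides. */
struct cpu_ops {
	void (*init)(void);
};

struct cpu_driver {
	struct cpu_ops *ops;
	const struct cpu_device_id *id_table;
};

static void example_cpu_init(void) { /* set up MTRRs, microcode, ... */ }

static const struct cpu_device_id example_ids[] = {
	{ 0x756e6547, 0x0f24 },   /* made-up vendor/signature pair */
	{ 0, 0 },                 /* table terminator */
};

static struct cpu_ops example_ops = { example_cpu_init };
static struct cpu_driver example_driver = { &example_ops, example_ids };

/* Walk a driver's id table; return its ops if the running CPU matches. */
struct cpu_ops *find_cpu_ops(struct cpu_driver *drv,
                             unsigned vendor, unsigned device)
{
	const struct cpu_device_id *id;
	for (id = drv->id_table; id->vendor || id->device; id++) {
		if (id->vendor == vendor && id->device == device)
			return drv->ops;
	}
	return NULL;
}
```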
driver and object are essentially the same keyword. The difference is that something marked object is only linked in if there is an external reference to it from elsewhere, which is good for library functions. Something marked driver is always linked in.
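In Config.lb terms the distinction looks like this (the object file names are invented):

```
driver mainboard.o       # always linked into the image
object debug.o           # linked only if some symbol in it is referenced
```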
> Can someone please provide the following information:
>
> - What does each keyword actually do now?
> - How do the keywords relate to the data structures?
> - What files are now auto-generated, and how is everything linked together?
> - What keywords/data structures are actually needed, and by which devices?
> - How/where are the data structures used?
> - What is the rationale behind the reorganization of the cpu tree, and how is it supposed to work?
> - What is the purpose of the drivers tree, how do drivers differ from devices, and why is the console not included?
There is no drivers tree, just the device tree. The code works as it has for a long time in v2: some motherboard-specific code runs, and then we enter hardwaremain. In hardwaremain we look at the specified set of devices, run the code for those devices to see if we can find some more, and then we set up the devices.
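The "find some more" step can be pictured as a recursive walk over the tree, where any device that controls a bus gets a chance to probe it and append children. A simplified sketch, with an invented scan hook and a toy bridge/leaf pair:

```c
#include <stddef.h>

struct device {
	struct device *sibling;
	struct device *children;
	/* Probe the bus below this device; may append to children. */
	void (*scan_bus)(struct device *dev);
	int enumerated;
};

/* Visit every device we know about; let bridges discover more. */
void enumerate(struct device *dev)
{
	for (; dev; dev = dev->sibling) {
		dev->enumerated = 1;
		if (dev->scan_bus)
			dev->scan_bus(dev);  /* may add children */
		enumerate(dev->children);    /* recurse into what we found */
	}
}

/* Toy setup: a bridge whose probe "discovers" one leaf child. */
static struct device leaf = { NULL, NULL, NULL, 0 };

static void probe_adds_leaf(struct device *dev)
{
	dev->children = &leaf;
}

static struct device bridge = { NULL, NULL, probe_adds_leaf, 0 };

int demo_scan(void)
{
	enumerate(&bridge);
	return bridge.enumerated && leaf.enumerated;
}
```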
There has been an effort to remove special cases. We are currently left with the timer subsystem, the console, and hard_reset, all of which are optional. I don't see a way for those things to be both useful and not be special cases.
> The configuration process now seems so complicated that I can't see how we could expect someone to port to a new board without some basic description of how everything is supposed to work.

Which is reasonable. Working examples may help as well.

> I have a major vendor interested in doing this, but at the moment I can't offer them any help.
The big change from v1 to v2 is that in v1 everything was handled top down, while in v2 the code is structured to encourage building it bottom up, which tends to be more reusable.
So in v2 the basic model is: we detect that a piece of hardware exists. We find an ID for that hardware. With that ID we find a driver for that hardware. With that driver we configure the resources for that device. After the resources are configured we enable them. With everything set up we make a last pass and do device-specific initialization.
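That sequence maps naturally onto a table of per-device methods, in the spirit of device_operations and the pci_driver ID tables. The structures below are simplified stand-ins with a made-up PCI ID, not the real declarations:

```c
#include <stddef.h>

struct device;

/* Simplified stand-in for device_operations: one hook per phase. */
struct device_operations {
	void (*read_resources)(struct device *dev);
	void (*set_resources)(struct device *dev);
	void (*enable_resources)(struct device *dev);
	void (*init)(struct device *dev);
};

struct device {
	unsigned vendor, device;        /* the ID we discovered */
	struct device_operations *ops;  /* filled in by driver lookup */
	int phase;                      /* last phase completed, for the demo */
};

/* Simplified pci_driver-style entry: ID -> ops. */
struct drv_entry {
	unsigned vendor, device;
	struct device_operations *ops;
};

static void trace_read(struct device *d)   { d->phase = 1; }
static void trace_set(struct device *d)    { d->phase = 2; }
static void trace_enable(struct device *d) { d->phase = 3; }
static void trace_init(struct device *d)   { d->phase = 4; }

static struct device_operations trace_ops = {
	trace_read, trace_set, trace_enable, trace_init,
};

static struct drv_entry drivers[] = {
	{ 0x8086, 0x1234, &trace_ops },  /* made-up vendor/device ID */
};

/* ID -> driver lookup, then run the phases in order. */
int bring_up(struct device *dev)
{
	size_t i;
	for (i = 0; i < sizeof(drivers) / sizeof(drivers[0]); i++) {
		if (drivers[i].vendor == dev->vendor &&
		    drivers[i].device == dev->device)
			dev->ops = drivers[i].ops;
	}
	if (!dev->ops)
		return -1;  /* no driver found */
	dev->ops->read_resources(dev);
	dev->ops->set_resources(dev);
	dev->ops->enable_resources(dev);
	dev->ops->init(dev);
	return 0;
}

int demo_bring_up(void)
{
	struct device d = { 0x8086, 0x1234, NULL, 0 };
	if (bring_up(&d))
		return -1;
	return d.phase;
}
```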
If a device cannot be discovered by the generic code, it needs to be present in the static device tree. And if a device is always present on the motherboard, it is recommended that it be in the static tree.
The last big round of changes was largely simplifications to expose the primary model of how things work. We generate a device tree directly from Config.lb without an intermediate chip tree. CPUs were pushed into the device tree. All of this was done with an eye towards the fact that things were getting too complex. Once the dust clears, the infrastructure should be fairly stable from here on out.
Eric