I've decided to take on the resource allocator. I think it can be written to work in three passes that will be clearly defined.
Here's a summary of how it works right now (as best I can tell):
There's a function called compute_allocate_resource in device/device.c that does the bulk of the allocation work.
It takes a bus, a resource, a resource-type mask, and a type.
1. It calls read bus resources on the bus.
2. In a loop, it goes through all the resources (largest to smallest) and assigns them values. It marks them assigned, checks that they don't conflict with PCI, etc.
3. As it assigns values, it increments the base address it uses.
4. After it assigns the values, it sets the resource's size.
Part of what makes this function confusing is how often it gets called.
It gets called in the device code during phase4, but it also gets called every time a PCI bus resource is "recorded." So it gets called many times, which is why the have_resources flag (meaning "have read resources") is needed.
My proposal is to split it into distinct pieces. I've implemented most of it but it's not quite there.
1. read_resources - Just read resources. No setting PCI config values, etc.
2. compute_resource_needs - Go through using the same algorithm as compute_allocate_resources, but don't assign the values. Just sum them up. This is also the time to propagate minimum resource alignments, etc. It is a depth-first dts traversal.
3. assign_resource_values - Go through and assign actual values to the resources, and make sure everything is reachable. This is also mostly the same as compute_allocate_resources, but it goes through breadth-first instead of depth-first.
4. set_resources
I'm hoping to move the resource-specific code out of device/pci_device.c, and much of the PCI-specific code out of device/device.c. The allocation code shouldn't know or care how to disable any specific type of resource when it has size 0. Same thing for the resource code: it shouldn't fiddle with start and end values. The have_resources flag is completely gone, since resources only get read once at the beginning.
Right now I'm trying to understand the legacy PCI decoding problem. I understand that legacy PCI only decoded 10 bits, so there are aliasing problems. I just don't know how much of the address space is affected. The check in the code right now just tests whether either of bits 8 and 9 (0x300) is set.
	if (resource->flags & IORESOURCE_IO) {
		/* Don't allow potential aliases over the legacy PCI
		 * expansion card addresses. The legacy PCI decodes
		 * only 10 bits, uses 0x100 - 0x3ff. Therefore, only
		 * 0x00 - 0xff can be used out of each 0x400 block of
		 * I/O space.
		 */
		if ((base & 0x300) != 0) {
			base = (base & ~0x3ff) + 0x400;
		}
		/* Don't allow allocations in the VGA I/O range.
		 * PCI has special cases for that.
		 */
		else if ((base >= 0x3b0) && (base <= 0x3df)) {
			base = 0x3e0;
		}
	}
That makes it so that an allocation at 0x1100 triggers this check. Is it really the whole I/O address space that has this problem?
The other thing I'm not sure about is prefetchable memory. I see that it has its own resource in k8, but it's not clear how it was meant to be used. For right now it's unimportant, but I really need it to work correctly later.
I'll post some logs when it's closer to being ready. I just needed to write this down before the weekend.
Thanks for listening,
Myles