On Thu, Apr 19, 2012 at 04:08:41PM +0200, Vasilis Liaskovitis wrote:
> The memory device generation is guided by qemu paravirt info. Seabios
> first uses the info to setup SRAT entries for the hotplug-able memory slots.
> Afterwards, build_memssdt uses the created SRAT entries to generate
> appropriate memory device objects. One memory device (and corresponding SRAT
> entry) is generated for each hotplug-able qemu memslot. Currently no SSDT
> memory device is created for initial system memory (the method can be
> generalized to all memory though).
>
> Signed-off-by: Vasilis Liaskovitis <vasilis.liaskovitis(a)profitbricks.com>
> ---
> src/acpi.c | 151 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++--
> 1 files changed, 147 insertions(+), 4 deletions(-)
>
> diff --git a/src/acpi.c b/src/acpi.c
> index 30888b9..5580099 100644
> --- a/src/acpi.c
> +++ b/src/acpi.c
> @@ -484,6 +484,131 @@ build_ssdt(void)
> return ssdt;
> }
>
> +static unsigned char ssdt_mem[] = {
> + 0x5b,0x82,0x47,0x07,0x4d,0x50,0x41,0x41,
This patch looks like it uses the SSDT generation mechanism that was
present in SeaBIOS v1.6.3. Since then, however, the runtime AML code
generation has been improved to be more dynamic. Any runtime
generated AML code should be updated to use the newer mechanisms.
-Kevin
On Mon, Apr 23, 2012 at 02:31:15PM +0200, Vasilis Liaskovitis wrote:
> Hi,
>
> On Sun, Apr 22, 2012 at 05:20:59PM +0300, Gleb Natapov wrote:
> > On Sun, Apr 22, 2012 at 05:13:27PM +0300, Avi Kivity wrote:
> > > On 04/22/2012 05:09 PM, Gleb Natapov wrote:
> > > > On Sun, Apr 22, 2012 at 05:06:43PM +0300, Avi Kivity wrote:
> > > > > On 04/22/2012 04:56 PM, Gleb Natapov wrote:
> > > > > > start. We will need it for migration anyway.
> > > > > >
> > > > > > > hotplug-able memory slots i.e. initial system memory is not modeled with
> > > > > > > memslots. The concept could be generalized to include all memory though, or it
> > > > > > > could more closely follow kvm-memory slots.
> > > > > > OK, I hope final version will allow for memory < 4G to be hot-pluggable.
> > > > >
> > > > > Why is that important?
> > > > >
> > > > Because my feeling is that people who want to use this kind of feature
> > > > want to start using it with VMs smaller than 4G. Of course not all
> > > > memory has to be hot-unpluggable. Making the first 1M or even the first
> > > > 128M not unpluggable makes perfect sense.
> > >
> > > Can't you achieve this with -m 1G, -device dimm,size=1G,populated=true
> > > -device dimm,size=1G,populated=false?
> > >
> > From this:
> >
> > (for hw/pc.c PCI hole is currently [below_4g_mem_size, 4G), so
> > hotplugged memory should start from max(4G, above_4g_mem_size).
> >
> > I understand that hotpluggable memory can start only above 4G. With
> > the config above we will have a memory hole from 1G up to the PCI memory hole.
> > Maybe it's not a big problem, but I do not see a technical reason for the constraint.
> >
> The 440fx spec mentions: "The address range from the top of main DRAM to 4
> Gbytes (top of physical memory space supported by the 440FX PCIset) is normally
> mapped to PCI. The PMC forwards all accesses within this address range to PCI."
>
> What we probably want is that the initial memory map creation takes into account
> all dimms specified (both populated/unpopulated)
Yes.
> So "-m 1G, -device dimm,size=1G,populated=true -device dimm,size=1G,populated=false"
> would create a system map with top of memory and start of PCI-hole at 2G.
>
What does -m 1G mean on this command line? Isn't it redundant?
Maybe we should make -m create a non-unpluggable, populated slot starting
at address 0. Then your config above would specify 3G of memory with 2G
populated (the first of which is not removable) and 1G unpopulated. The PCI
hole starts above 3G.
> This may require some shifting of physical address offsets around
> 3.5GB-4GB - is this the minimum PCI hole allowed?
Currently it is 1G in QEMU code.
>
> E.g. if we specify 4x1GB DIMMs (only the first initially populated)
> -m 1G, -device dimm,size=1G,populated=true -device dimm,size=1G,populated=false
> -device dimm,size=1G,populated=false -device dimm,size=1G,populated=false
>
> we create the following memory map:
> dimm0: [0,1G)
> dimm1: [1G, 2G)
> dimm2: [2G, 3G)
> dimm3: [4G, 5G) or dimm3 is split into [3G, 3.5G) and [4G, 4.5G)
>
> does either of these options sound reasonable?
>
We shouldn't split dimms IMO; that's just unnecessary complication. Better to
make a bigger PCI hole.
--
Gleb.
On Mon, Apr 23, 2012 at 01:27:40PM +0200, Vasilis Liaskovitis wrote:
> On Sun, Apr 22, 2012 at 04:58:47PM +0300, Gleb Natapov wrote:
> > On Thu, Apr 19, 2012 at 04:08:46PM +0200, Vasilis Liaskovitis wrote:
> > > Hotplugged memory is not persistent in the e820 memory maps. After hotplugging
> > > a memslot and rebooting the VM, the hotplugged device is not present.
> > >
> > > A possible solution is to add an e820 entry for the new memslot in the acpi_piix4
> > > hot-add handler. On a reset, Seabios (see next patch in series) will enable all
> > > memory devices for which it finds an e820 entry that covers the device's address
> > > range.
> > >
> > > On hot-remove, the acpi_piix4 handler will try to remove the e820 entry
> > > corresponding to the device. This will work when no VM reboots happen
> > > between hot-add and hot-remove, but it is not a sufficient solution in
> > > general: Seabios and GuestOS merge adjacent e820 entries on machine reboot,
> > > so the sequence hot-add/ rebootVM / hot-remove will fail to remove a
> > > corresponding e820 entry at the hot-remove phase.
> > >
> > Why do you need this patch and the next one? The BIOS can restore the state
> > of memslots and build the e820 map by reading mems_sts.
>
> I see, that is a simpler solution. Since qemu currently creates most ram e820map
Quite the contrary. Qemu creates only one entry that Seabios can't
figure out by itself.
> entries and passes them to seabios, I tried to follow the same approach. But
> your suggestion makes things easier and we don't have to worry about merged e820
> entries on hot-remove. I 'll rework it.
> thanks,
>
> Vasilis
--
Gleb.
Hello All,
I have a small problem with call32: after I make the call and the
32-bit code is executed, the parameter I am sending is incorrect, so
evidently I am doing something wrong.
Here is my 16bit code:
dprintf(1,"Operation 16bit drive at %p\n",op->drive_g);
extern void _cfunc32flat_xen_blk_op_read(struct disk_op_s *);
return call32(_cfunc32flat_xen_blk_op_read,op,DISK_RET_EPARAM);
Here is the 32Bit code:
int VISIBLE32INIT
xen_blk_op_read(struct disk_op_s *op)
{
    dprintf(1, "Xen Disk buffer %x lba %x count %d command %d\n",
            op->buf_fl, op->lba, op->count, op->command);
    dprintf(1, "Searching for drive, loc: %p\n", op->drive_g);
    // extract the global struct
    struct xendrive_s *xendrive = container_of(op->drive_g,
                                               struct xendrive_s, drive);
    if (xendrive == NULL)
        return -1;
    ...
}
I know something is wrong because the output is like this:
Operation 16bit drive at 0x0000d630
DEBUG call32: func 0x0f7e90a7 eax 832 err 1
Xen Disk buffer f000ff53 lba f000ff53 count -268370093 command 65363
Searching for drive, loc: 0xf000ff53
As you can see, op->drive_g is located at different addresses in 16-bit
and 32-bit modes.
Can someone help me find where the mistake is? The code seems
to be valid, so I guess I am passing the pointer incorrectly.
Thanks,
Daniel
--
This space intentionally blank for notetaking.
Daniel Castro,
Consultant/Programmer.
U Andes
On 04/19/2012 04:08 PM, Vasilis Liaskovitis wrote:
> Extend the DSDT to include methods for handling memory hot-add and hot-remove
> notifications and memory device status requests. These functions are called
> from the memory device SSDT methods.
>
> Eject has only been tested with level gpe event, but will be changed to edge gpe
> event soon, according to recent master patch for other ACPI hotplug events.
>
> Signed-off-by: Vasilis Liaskovitis <vasilis.liaskovitis(a)profitbricks.com>
> ---
> src/acpi-dsdt.dsl | 68 +++++++++++++++++++++++++++++++++++++++++++++++++++-
> 1 files changed, 66 insertions(+), 2 deletions(-)
>
> diff --git a/src/acpi-dsdt.dsl b/src/acpi-dsdt.dsl
> index 4bdc268..184daf0 100644
> --- a/src/acpi-dsdt.dsl
> +++ b/src/acpi-dsdt.dsl
> @@ -709,9 +709,72 @@ DefinitionBlock (
> }
> Return(One)
> }
> - }
>
> + /* Objects filled in by run-time generated SSDT */
> + External(MTFY, MethodObj)
> + External(MEON, PkgObj)
> +
> + Method (CMST, 1, NotSerialized) {
> + // _STA method - return ON status of memdevice
> + // Local0 = MEON flag for this cpu
> + Store(DerefOf(Index(MEON, Arg0)), Local0)
> + If (Local0) { Return(0xF) } Else { Return(0x0) }
> + }
> + /* Memory eject notify method */
> + OperationRegion(MEMJ, SystemIO, 0xaf40, 32)
> + Field (MEMJ, ByteAcc, NoLock, Preserve)
> + {
> + MPE, 256
> + }
> +
> + Method (MPEJ, 2, NotSerialized) {
> + // _EJ0 method - eject callback
> + Store(ShiftLeft(1,Arg0), MPE)
> + Sleep(200)
> + }
MPE is write-only and only one memslot is ejected at a time, so why is a
256-bit field here? Could we use just 1 byte, write the slot number into it,
and save some I/O address space this way?
> +
> + /* Memory hotplug notify method */
> + OperationRegion(MEST, SystemIO, 0xaf20, 32)
It's more of a suggestion: move it a bit farther to allow for maybe 1024 CPUs
in the future. That will prevent a compatibility headache if we decide to
expand support to more than 256 CPUs.
Or even better, make this address configurable at run-time and build this
variable along with the SSDT (converting along the way all other hard-coded
I/O ports to the same generic run-time interface). This wish is out of scope
for this patch set, but what do you think about the idea?
> + Field (MEST, ByteAcc, NoLock, Preserve)
> + {
> + MES, 256
> + }
> +
> + Method(MESC, 0) {
> + // Local5 = active memdevice bitmap
> + Store (MES, Local5)
> + // Local2 = last read byte from bitmap
> + Store (Zero, Local2)
> + // Local0 = memory device iterator
> + Store (Zero, Local0)
> + While (LLess(Local0, SizeOf(MEON))) {
> + // Local1 = MEON flag for this memory device
> + Store(DerefOf(Index(MEON, Local0)), Local1)
> + If (And(Local0, 0x07)) {
> + // Shift down previously read bitmap byte
> + ShiftRight(Local2, 1, Local2)
> + } Else {
> + // Read next byte from memdevice bitmap
> + Store(DerefOf(Index(Local5, ShiftRight(Local0, 3))), Local2)
> + }
> + // Local3 = active state for this memory device
> + Store(And(Local2, 1), Local3)
>
> + If (LNotEqual(Local1, Local3)) {
> + // State change - update MEON with new state
> + Store(Local3, Index(MEON, Local0))
> + // Do MEM notify
> + If (LEqual(Local3, 1)) {
> + MTFY(Local0, 1)
> + } Else {
> + MTFY(Local0, 3)
> + }
> + }
> + Increment(Local0)
> + }
> + Return(One)
> + }
> + }
> /****************************************************************
> * General purpose events
> ****************************************************************/
> @@ -732,7 +795,8 @@ DefinitionBlock (
> Return(\_SB.PRSC())
> }
> Method(_L03) {
> - Return(0x01)
> + // Memory hotplug event
> + Return(\_SB.MESC())
> }
> Method(_L04) {
> Return(0x01)
--
-----
Igor
Hi,
This patch series redesigns the existing pciinit.c code and introduces
linked list operations.
The changes are more about structure definitions than functionality.
There are no more arrays of bases and counts in the new implementation.
The new implementation is based on dynamic allocation of pci_region_entry
structures.
Each pci_region_entry structure can be a PCI BAR or a downstream PCI
region (bridge). Each entry has a set of attributes: type (IO, MEM,
PREFMEM), is64bit, size, base address, PCI device owner, and a
pointer to the pci_bus it belongs to.
1. Introduce List operations
2. Add pci_region_entry and linked lists operations in place while still
using the array system to do the allocations.
3. Switch to lists operations
4. Get rid of the size element from the bus->r structure; we now use
entry->size for the same purpose
src/pci.h | 5 -
src/pciinit.c | 258 ++++++++++++++++++++++++++-------------------------------
src/util.h | 21 +++++
3 files changed, 137 insertions(+), 147 deletions(-)
On 04/19/2012 05:08 PM, Vasilis Liaskovitis wrote:
> Implement -memslot qemu-kvm command line option to define hotplug-able memory
> slots.
> Syntax: "-memslot id=name,start=addr,size=sz,node=nodeid"
>
> e.g. "-memslot id=hot1,start=4294967296,size=1073741824,node=0"
> will define a 1G memory slot starting at physical address 4G, belonging to numa
> node 0. Defining no node will automatically add a memslot to node 0.
start=4G,size=1G ought to work too, no?
--
error compiling committee.c: too many arguments to function