I'm trying to install OpenBSD on a ThinkPad X220 with
Coreboot/SeaBIOS, but I'm running into two problems: the ethernet device
doesn't work, and OpenBSD doesn't detect my HDD. dmesg says em0 wouldn't
attach because the EEPROM had an invalid signature. I have no idea why
OpenBSD doesn't see the HDD, though; strangely, everything works
fine under Linux. I also can't seem to mount a USB drive under the
OpenBSD installer to capture the dmesg output.
I originally posted this as a bug report to the bugs mailing list, but
Theo said it would be better suited for Coreboot's list and wasn't a bug in OpenBSD.
In that case, I'd also like to point you to the deadline for submitting
main-track talks, which is tomorrow(!).
Having a coreboot/LinuxBoot talk there would be awesome. Ron/David,
could you submit something or do you have someone in mind who can do that?
There are also lightning talks; the deadline for those is a bit later.
OK, I'm submitting a request for a stand. I need a backup contact for
the stand. Who is willing to do that? AFAICS we can still change the
backup contact later if life happens.
On 02.11.2018 20:48, David Hendricks wrote:
> On Fri, Nov 2, 2018 at 9:15 AM 'Ron Minnich' via linuxboot
> <linuxboot(a)googlegroups.com <mailto:firstname.lastname@example.org>> wrote:
> I'm leaning to yes, by which I mean if you do it, I'll show up.
> I can't believe I said that.
> On Fri, Nov 2, 2018 at 7:20 AM Carl-Daniel Hailfinger
> <mailto:email@example.com> wrote:
> > Hi!
> > FOSDEM next year will be on 2 & 3 February 2019.
> > The deadline for applying for a stand is today.
> > Do we want a coreboot/flashrom/LinuxBoot stand/booth?
> Same as what Ron said. I think someone from FB can be there to talk
> about coreboot/LinuxBoot stuff and perhaps bring some hardware to demo.
On 03.04.2018 20:03, Ivan Ivanov wrote:
> I have noticed that both coreboot and seabios are using the very old
> versions of LZMA SDK.
True. I introduced the lzma code in coreboot (back when it was called
LinuxBIOS) when we were working on OLPC XO-1 support.
> If we will upgrade our LZMA libraries from the
> outdated-by-12-years 4.42 to the current version 18.04 , speed and
> compression ratio should improve and maybe a few bugs will be fixed.
Do you have any numbers for this? An improved compression ratio and
improved speed would be nice indeed, but how does the size of the
decompression code change? If the decompression code grows more than the
size reduction from better compression, it would be a net loss. A
significantly reduced decompression speed would also be a problem.
Decompression speed would have to be measured both for stream
decompression (i.e. the decompressor receives the compressed data in
single-byte or multi-byte chunks) and for full-size decompression
(i.e. the decompressor can access all compressed data at once). We also
have to make sure that stream decompression still works after the change.
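If someone wants to gather rough numbers, Python's stdlib lzma module (which tracks a much newer liblzma than our in-tree 4.42 code) can at least sketch the methodology: measure the compression ratio, then check both one-shot and chunked "stream" decompression. This is only a methodology sketch; real measurements would have to use the in-tree decompressor on real coreboot images.

```python
# Methodology sketch: compression ratio plus one-shot vs. chunked
# ("stream") decompression, using Python's stdlib lzma as a stand-in
# for the decompressors actually under discussion.
import lzma

data = bytes(range(256)) * 4096            # 1 MiB of compressible test data
comp = lzma.compress(data)
print("ratio: %.1f:1" % (len(data) / len(comp)))

# Full-size decompression: all compressed data available at once.
assert lzma.decompress(comp) == data

# Stream decompression: feed the decompressor small chunks, as a
# firmware loader reading from flash might.
dec = lzma.LZMADecompressor()
out = b"".join(dec.decompress(comp[i:i + 64]) for i in range(0, len(comp), 64))
assert out == data
```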
> Do you think it should be done, or are you OK with using such an
> outdated version?
A size benefit for the resulting image is a good reason to switch.
Is there a branch that has all the outstanding KGPE-D16 changes merged? I'll be happy to test, but I fear I won't find the time to test each of those fixes in a separate branch. A specification of which tests need to be conducted would also help.
The outstanding KGPE-D16 bugs on my personal list are the boot failures that require turning AC power off and on, and the idle power-consumption issues that forced me to pull out one of the CPU packages and its RAM. But as far as I understood, these were not targeted by the recent changes, though they could have been "accidentally" fixed (e.g. by the removal of some old buggy code). Is that correct?
I have a question regarding the build process.
Since crossgcc has been updated and gcc is now at version 8.1, I always
encounter an error when building coreboot:
coreboot/src/console/vtxprintf.c:102: undefined reference to `__udivmoddi4'
Since coreboot uses crossgcc and its own libgcc libraries, I figured
that the __udivmoddi4 function has not yet been implemented.
Anyway, I have run into this failure several times, even with the
latest 4.9 release. Shouldn't I be able to build at least this latest
release using unmodified crossgcc, or am I missing something else?
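For context, __udivmoddi4 is the libgcc helper gcc emits for 64-bit unsigned division with remainder on targets without native 64-bit division (such as the 32-bit x86 code where vtxprintf runs). The sketch below only models what the helper computes, using the shift-and-subtract approach a soft-division routine typically takes; it is illustrative, not the libgcc implementation.

```python
# Illustrative model of libgcc's __udivmoddi4: 64-bit unsigned
# quotient and remainder via shift-and-subtract. Not the real
# libgcc code, just the algorithm such a helper typically uses.
def udivmoddi4(n, d):
    assert 0 < d and 0 <= n < 2**64
    q = r = 0
    for i in range(63, -1, -1):
        r = (r << 1) | ((n >> i) & 1)  # bring down the next bit of n
        if r >= d:
            r -= d
            q |= 1 << i                # this quotient bit is 1
    return q, r

print(udivmoddi4(1000000007, 12345))  # matches divmod(1000000007, 12345)
```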
Dne 30.5.2018 v 16:06 Mike Banon napsal(a):
> Hi Rudolf,
> Regarding this part:
> " To check if IMC is active check if PCI 0:14.3 0x40 bit7 set. "
> what command do I need to use to check this?
sudo setpci -s 14.3 40.b
Despite the command name, it prints the value rather than setting it.
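For interpreting the answer, here is a small helper (a sketch; the register offset and bit come from the note quoted above, and the sample byte is made up for illustration):

```python
# Interpret the byte printed by `sudo setpci -s 14.3 40.b`.
# Bit 7 set means the IMC is active (per the advice quoted above).
def imc_active(byte_hex):
    return bool(int(byte_hex, 16) & 0x80)

# Example: if setpci printed "80", bit 7 is set.
print("IMC active" if imc_active("80") else "IMC inactive")
```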
I tried to adapt coreboot to the HiFive Unleashed, booting bbl with coreboot and then running Linux.
My changes are as follows:
My code can run bbl, but it stops responding when bbl exits M-mode and enters Linux.
I use freedom-u-sdk to compile bbl. To avoid conflicting with coreboot's memory addresses, I execute the following command:
riscv64-elf-objcopy --change-addresses 0x200000 work/riscv-pk/bbl ../coreboot/payload.elf
I don't know what I missed. What should I do? I hope to get your help.
I've had a lot of access problems recently, both to the coreboot website and
its repositories. Was it a DDoS attack? How can I make sure that the
sources in the review coreboot repository haven't been modified during that time?
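One way to gain confidence, assuming you can get a trusted commit hash out of band (a signed tag or a release announcement): git's content-addressed storage means a commit hash pins the entire tree, so comparing your clone's HEAD against the announced hash detects tampering. A sketch (the repository path and hash are placeholders):

```python
# Sketch: verify a clone's HEAD against a hash obtained out of band.
# A matching hash pins the whole tree, since git commits hash their
# contents and all ancestry.
import subprocess

def head_matches(repo, expected_hash):
    head = subprocess.run(
        ["git", "-C", repo, "rev-parse", "HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    return head == expected_hash
```

On a real checkout you would compare against, e.g., the hash from a signed release tag (`git tag -v 4.9`).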
Season's greetings to everyone! :-)
I've been able to get my Asus KGPE-D16 running with coreboot 4.6 and Qubes 4 and I'm pleased to report it has been nice and stable over the holiday period, save for a few minor issues.
Suspend works fine on a fresh install of Qubes 4; however, applying the latest updates stops it from working. (It goes into suspend but does not resume properly.) I'm assuming this is now a Qubes issue.
My Intel Optane 900p NVMe drive does not work reliably with Qubes 4. It initially appears fine, but anywhere between 4 and 12 hours after boot the system panics and switches to a read-only file system; you can't run any commands without an "Input/output error", even in dom0. The only course of action when this happens appears to be a hard reset. Unsurprisingly, getting any kind of logs about what is actually happening has proven fruitless so far :-(
I have therefore switched to a normal SATA drive for the last week and this has been nice and stable. I suspect my best course of action to get a higher performing filesystem is to move to 4 SATA SSDs in RAID 10. If anyone has any other suggestions to get faster drive access with this platform (particularly good random read/write performance for using lots of concurrent VMs in Qubes) or possible approaches to fix my NVME drive issue, they would be much appreciated.
My only other minor issue at the moment is lack of fan control. If I run "sudo pwmconfig" in dom0 I get the message "There are no pwm-capable sensor modules installed". I suspect I need to enable some additional modules, so if anyone can provide me some explicit directions on how to do this, it would again be much appreciated.