dolphin-emu compile error: ambiguating new declaration of u64 _xgetbv(u32)

If you follow the official instructions for compiling the Dolphin game emulator yourself, you are likely to encounter this error at the ‘make’ stage in step 3:

dolphin/Source/Core/Common/x64CPUDetect.cpp:41:12: error: ambiguating new declaration of 'u64 _xgetbv(u32)'
static u64 _xgetbv(u32 index)

Googling this error, you will see some solutions from as far back as 2018. You don’t need to bother with these.

This error happens because you were actually reading the instructions (as opposed to skimming through them), and did ‘git checkout tags/5.0’ to get the “stable” version. Sadly, the 5.0 version is super outdated – it was last updated in 2016.

So, you were misled by an outdated wiki page that nobody bothered to fix. Here’s how to compile Dolphin nevertheless.

# Assuming you've already created and entered the 'Build' directory, do this:
cd ..; rm -rf Build
# Then, switch to the main branch, where up-to-date development actually happens
git checkout master
git submodule update --init # just in case! might not be needed

Then, start again from step 3 – ‘mkdir’, ‘cmake’ and all. The build should succeed.
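For reference, step 3 boils down to the usual out-of-source CMake build – this is a sketch assuming default options, not the wiki’s exact commands:

```shell
# From the root of the dolphin checkout, on the master branch:
mkdir Build && cd Build     # fresh build directory (we deleted the old one)
cmake ..                    # configure with default options
make -j"$(nproc)"           # build on all cores
sudo make install           # optional: install system-wide
```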

This compiles the latest development version. It might have bugs – however, it seems to me that Dolphin has given up on releasing ‘stable’ versions. My understanding is, this is as good as you’ll get.

Nobody with access to the Dolphin Wiki has fixed this old and quite outdated instruction yet. I hope that happens soon. In the meantime, this post shall serve as guidance for anyone encountering this exact error.

If someone with editing access to the Dolphin Wiki is reading this – remove the ‘checkout tags/5.0’ thing; the 5.0 tag is broken and won’t compile for anyone not running a system from 2016.

mullvad (or other wireguard) and tailscale coexistence – tailscale not pinging

my linux to linux tailscale connection would not ping, except that ‘tailscale ping’ itself worked. i remembered I also had mullvad. looking around a bit, I found this wonderful post with explanations of how it all actually works. the guy rebooted to make it work tho, and I didn’t want to reboot.

I did the systemctl fix he suggested (despite the “before” and “after” confusion between his conclusion and “fix” sections – weird), but restarting the units a few times in different orders (including the tailscaled unit) didn’t help, and the rules stayed the same as in his “broken” example: the tailscale rule after the mullvad rule in ‘ip rule’ order.

do read the post for insights, but still, in short, this helped:

sudo ip rule add preference 5207 from all lookup 52
sudo ip rule delete preference 5270

looks like this can be put in a script, too, and the rules stay consistent between installations? idk. like, check ‘ip rule’ output and script things at your own risk.
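if you do script it, a guarded version is probably safer than blindly re-running the two commands – this sketch uses the preference numbers from my machine, so check your own ‘ip rule’ output first:

```shell
#!/bin/sh
# Re-add Tailscale's catch-all rule (lookup 52) ahead of the Mullvad rules,
# but only if it's still sitting at its default preference 5270.
if ip rule | grep -q '^5270:'; then
    ip rule add preference 5207 from all lookup 52
    ip rule delete preference 5270
fi
```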

importing vm from virtualbox to qemu, and a small virt-aa-helper fight

I used Ubuntu 20.04 or something.

First, I used this tutorial to convert the vdi image to qcow2. I then installed qemu, libvirt and co on my new vm host system, moved the files there, and used some long command I no longer remember to create a new virtual machine through libvirt. I had to activate the “default” network for that command to work, IIRC.
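I don’t remember the exact command, but it was probably virt-install in import mode – this is my reconstruction, not the original command; the name, memory size and paths are made up:

```shell
# Hypothetical reconstruction – adjust name, memory, vcpus and paths to taste
virt-install \
  --name imported-vm \
  --memory 4096 \
  --vcpus 2 \
  --disk path=/var/lib/libvirt/images/converted.qcow2,format=qcow2 \
  --import \
  --os-variant ubuntu20.04 \
  --network network=default \
  --graphics spice
```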

virt-viewer helped me connect to the vm’s screen, and it showed “booting from hard disk” and got stuck there. Turned out I had to add a UEFI image, because that’s what VirtualBox also used or smth. I used “virsh edit” for that, IIRC, opening an xml file that was unexpectedly easy to read and modify, and this snippet in the <os> section helped:

<loader readonly="yes" type="pflash">/usr/share/OVMF/OVMF_CODE.fd</loader>

We did it, Reddit. This is all you need.
It’s tempting, but wrong, to also add an <nvram> element next to it.
You absolutely should not add the <nvram> section – only the <loader> one. Qemu will try to write into the file the <nvram> element points at, and if you give it the stock OVMF_VARS.fd file, virt-aa-helper will throw AppArmor-looking errors saying “error: skipped restricted file” and even “internal error: cannot load AppArmor profile”.

It might look like AppArmor is shouting at you, but if you look at the syslog and kern.log messages carefully, it’s actually virt-aa-helper – an additional layer of defense that steps in way before AppArmor even gets a chance to react. Adding or editing AppArmor profiles will do nothing!

Changing virt-aa-helper’s behaviour requires recompiling libvirt, but you don’t have to – just don’t add any nvram store and let qemu/libvirt add its own. It will appear by itself in the xml file the next time you launch the machine, I think.

From there… don’t touch it, I guess? Or touch it, I’m not a cop. Anyway, that’s how my 1h+ detour into AppArmor finished – with “just remove the nvram section with the /usr/share default file”. Makes sense that each guest gets its own nvram that’s assigned automatically, tbf.

Kdenlive WAV audio track muted after cutting

I was editing a small video clip and had to cut a WAV file I put in an audio track. Right after cutting off and removing its beginning with “Split at Playhead”, it became completely silent – both in the preview and in a rendered clip. In addition to that, it behaved weirdly when cutting off its end, playing a different portion of the file than the one where I cut. The solution was to convert the audio file into an mp3 file with ffmpeg, which worked flawlessly.
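In case it helps, the conversion is a one-liner – filenames here are placeholders:

```shell
# -qscale:a 2 gives high-quality VBR mp3; a plain "ffmpeg -i in.wav out.mp3" works too
ffmpeg -i input.wav -codec:a libmp3lame -qscale:a 2 output.mp3
```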

How to compile your own kernel for Debian/Ubuntu – without paying $20 and violating GPL

Update: there’s a tutorial I found on this same topic that covers a lot of the same ground, you should check it out too!

Turns out there’s a company that provides expensive-ish linux-next-built .debs for Ubuntu/Debian with some .config improvements – proprietary, of course. You need to pay $20 per machine or something; I haven’t even checked the website, really, who cares. It’s not clear if they’re violating GPLv2 by not providing the source, because nobody seems to have asked for it yet, but something tells me they won’t just give you the source code if you buy it. They are definitely violating GPLv2 by having you accept an EULA before you can use the kernel, and their way of bundling OpenZFS is specifically the way that big players avoid because it’d violate GPLv2… I feel pretty safe saying that’s a triple violation of GPLv2, and I’m not even a doctor… or whoever you need to be to diagnose GPL violations.

If you’d like to bother with asking them for source code – there are “free trial” kernels that shut down your PC after 3 hours of runtime. You can download one of those, then email them and request the source code for that exact kernel; since they’ve distributed the binary to you, GPLv2 requires them to provide it. I won’t bother, but you – knock yourself out! If you succeed (lol, good luck), please do post it online, you’re allowed to – I wouldn’t mind looking at their EULA check code, for one.

I’ve found an interesting comment, from a user who seems to have just created their account and only used it for this one comment under this specific post. Here it goes:

Is someone forcing you? Do you have the ability to build such a kernel and with such capabilities? Not? Then don’t bother people. A bunch of talkers can not to compiled kernel.

Thanks to assholes like you, they made their project non-public. Now are you going to compiled the kernels for us?

Not implying they’re a throwaway created by someone involved in the project who’s mad at this post or whatever. Let’s address the substance – can I compile a kernel that requires an EULA, does a hardware-fingerprinted license check and shuts down the machine after 3 hours of use? And then violate GPLv2 thrice while distributing it? Probably not – point taken.

Am I going to compile kernels for everyone? No – I have neither the processing power nor the time, so they got me there, too.

What I can certainly do is show you how to compile your own, latest kernel with minimal effort – nicely .deb-packaged, no less! It only takes 7 commands and about 10 minutes of preparation, plus however long it takes your machine to build a kernel (40 minutes total for my Ryzen 3500U laptop), and then you can just “dpkg -i” three packages and reboot.


Make a separate directory inside which you’d do all the work. It’s $HOME/kernel for me, you can just cd $HOME && mkdir kernel && cd kernel. This is needed to avoid cluttering your $HOME with .debs – you will see, just trust me, ok?


Go to kernel.org, right click on the latest “stable” kernel’s “Tarball” link and use “Copy link”. I know, this is not a command, but bear with me.

1. Wget it (the link you just copied – shown here for the 5.12.8 example used below):

wget https://cdn.kernel.org/pub/linux/kernel/v5.x/linux-5.12.8.tar.xz

2. Untar it:

tar xf linux-5.12.8.tar.xz

3. Cd into it:

cd linux-5.12.8/

4. Copy your current config over:

cp /boot/config-$(uname -r) .config

5. Update the config:

make oldconfig

This will present you with a slew of configuration options that got added in the time period between releases of your current kernel and the one you’re going to install. My own strategy is – answer “m” where that’s an option, answer “y” otherwise unless it’s a CONFIG_DEBUG option of some kind. Use ? and your search engine of choice liberally if you’d like to know what the options you’re adding actually stand for.
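If answering every prompt by hand gets old, kbuild also has a target that takes the default answer for every new option – I still prefer going through them manually, but it’s there:

```shell
# Accept the default for every option added since the copied .config
make olddefconfig
```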

BTW, if you need to add some kernel patches – that’s an option and this is the step where you can do it.

6. Build it as .deb

nice -n10 make -j8 bindeb-pkg

Vary the nice -n (process scheduling priority) and the make -j (thread count) parameters up/down if you’d like – lower niceness means higher priority, and the valid range is -20 (highest priority) to 19 (lowest). These two values are just what I use on my 4c8t Ryzen 3500U so that music playing in a YouTube tab doesn’t stutter.

7. Install the .deb files

Now that you’ve finished compiling, you have three .debs to install. Provided you’re installing a 5.12.8 kernel like this example mentions, do this to install them all in one go:

sudo dpkg -i *5.12.8*.deb

Now reboot – the default grub entry will boot you into the new kernel. You can use ‘uname -a’ in a console to check it’s really the new kernel after you’ve booted. And you can use the grub menu to boot an earlier version in case the new kernel fails to boot – it never does for me, but it’s an option if you need it.

TTP229 ghost keypresses because of ESP32 WiFi

When working with a self-designed TTP229 capacitive touch keypad for a project of mine, I started getting ghost keypresses. When I pressed one key, another key (or two) would be registered at the same time, or a tenth of a second later. No ghost keypresses appeared on their own; however, they appeared fairly often when I pressed a key – even within the same serial data read, so tick-based filtering (“keypress happening too quickly after the last one”) wouldn’t help.

The culprit was the WiFi of the ESP32 that the keypad was connected to – at some point during debugging, I realized the problem never appeared until I turned WiFi on. Turning WiFi off in the MicroPython REPL and re-testing confirmed this – the false keypresses only happened when WiFi was on (and connected, AFAIU). This will likely also apply to the ESP8266 and to the TTP223, TTP224 etc.

My solution is to keep the ESP32’s WiFi turned off at all times and only turn it on when I need to send data to my HTTP endpoint. Thankfully, my project is send-only and doesn’t need to poll anything over a network connection, so this works out quite nicely for me.
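The pattern, as a MicroPython sketch – the ‘network’ and ‘urequests’ modules are MicroPython-specific, and the SSID, password and endpoint URL are placeholders:

```python
import network
import urequests

wlan = network.WLAN(network.STA_IF)

def send_reading(payload):
    # Radio on only for the duration of the upload, so the TTP229
    # isn't exposed to WiFi-induced ghost presses during normal operation
    wlan.active(True)
    wlan.connect("my-ssid", "my-password")  # placeholder credentials
    while not wlan.isconnected():
        pass  # a real sketch should time out here instead of spinning forever
    try:
        urequests.post("http://192.168.1.2/endpoint", json=payload)  # placeholder URL
    finally:
        wlan.active(False)  # radio back off, keypad readable again
```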

Working around an A64 thermal sensor miscalibration – recompiling .dtb to change the kernel driver trip points

I got a SoPine (Pine64-based board) 2 years ago. Hooked it up just now, trying to make it boot, since I want to use it for a project. Flashed Armbian Focal onto an 8GB MicroSD (this version: Armbian_20.08.1_Pine64so_focal_current_5.8.5.img). It wouldn’t boot, so I connected to the board’s serial port, and after all the u-boot logs, here’s what I got after “Starting kernel”:

[    3.070744] thermal thermal_zone0: critical temperature reached (188 C), shutting down
[    3.082243] reboot: Power down

Everything was cold – all ICs on the SoPine module were cold, all ICs on the baseboard were cold. Whatever was supposedly 188 degrees hot, I couldn’t find it anywhere.

Trying to “bisect” the issue, I loaded an old Armbian Ubuntu 16.04 image from January of 2019 and it actually booted the board without instantly shutting down – that was a good start. What did I notice?

1) armbianmonitor -m actually gave reasonable temperatures, though it did throw some weird Bash errors:

22:37:39: 1152MHz  0.11   8%   1%   2%   0%   5%   0%  41°C  0/7
/usr/bin/armbianmonitor: line 385: read: read error: 0: Operation not permitted
/usr/bin/armbianmonitor: line 386: [: -ge: unary operator expected

2) dmesg had output that indicated the kernel couldn’t even read the thermal zone properly:
[ +1.967998] thermal thermal_zone0: failed to read out thermal zone 0
[ +1.968005] thermal thermal_zone0: failed to read out thermal zone 0
[ +1.967997] thermal thermal_zone0: failed to read out thermal zone 0
[ +1.967971] thermal thermal_zone0: failed to read out thermal zone 0

Well, perhaps it was just an old kernel, who knows.

After much talking on the Pine64 Discord, we found a relevant-ish bugtracker issue. Sounds like it’s possible for A64 temperature sensors to leave the factory without proper calibration, and that’s apparently what happened to me too. The image wouldn’t even start booting because the Linux thermal driver would initiate a shutdown as soon as the kernel came up and noticed the temps were outrageously high.

The bugtracker issue:

1) Mentioned a tool that let me see the register values that are, apparently, responsible for temperature calibration.

root@pine64so:~# ./regtool a64-sid
0x01c14234 : 07ab07b1
0x01c14238 : 000007b4

2) Mentioned that you could recompile the .dtb files to change the trip points that the kernel driver uses.

Well, I couldn’t guarantee that I would be able to both change the temperature calibration values and figure out the right values to make the A64 CPU show the right temperature – the factory probably uses some sort of algorithm to calculate those values and flash them into the CPU. However, I could definitely change the trip points.

I initially went the “set up a kernel compile environment and compile the dtb files from there” route; however, that’s not required. Just mount the OS SD card and go to its /boot/dts/. Decompile the SoPine file:

dtc -I dtb -O dts -f sun50i-a64-sopine-baseboard.dtb -o sun50i-a64-sopine-baseboard.dts

Ctrl+F (or Ctrl+W if you’re in nano) for “thermal” – you’ll see a “trips {” section. Start with cpu_alert0: change its “temperature” value (in millidegrees) to something large, in hexadecimal – say, 230000 (230 degrees) => 0x38270 – and also raise the next alerts to, say, 240000 (0x3a980) and 250000 (0x3d090).
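The decimal-to-hex conversions are easy to double-check – dts “temperature” values are in millidegrees Celsius, so 230000 means 230 °C:

```python
# Trip point values from the text, printed in the hex form dtc shows
for millidegrees in (230000, 240000, 250000):
    print(millidegrees, "->", hex(millidegrees))
```

This prints 230000 -> 0x38270, 240000 -> 0x3a980, 250000 -> 0x3d090.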

Then, compile the dts file back into a dtb file:

dtc -I dts -O dtb -o sun50i-a64-sopine-baseboard.dtb sun50i-a64-sopine-baseboard.dts

You might need to run dtc with sudo, since you’re replacing a file owned by root on the SD card’s filesystem. That sets the trip points to values larger than the bogus readings returned by the sensor.

In the end, I successfully booted into the latest OS!

root@pine64so:~# armbianmonitor -m
Stop monitoring using [ctrl]-[c]
Time CPU load %cpu %sys %usr %nice %io %irq CPU C.St.

13:16:25: 1008MHz 0.37 24% 2% 2% 0% 18% 0% 198.5°C 0/7
13:16:30: 1008MHz 0.34 1% 0% 0% 0% 1% 0% 198.2°C 0/7

That’s one hot CPU. Shame it can’t be used as a space heater – with how little SoPine consumes while heating up the CPU so much, I could really save on my heating bills. Oh, BTW – temps stayed at 199 even after an apt dist-upgrade that heated the CPU up quite a bit.

Oh, also – any OS upgrade (with apt dist-upgrade) might install a new version of dtb files that’ll set the trip points back to normal. I should consider some kind of long-term solution to this issue.

Also, I took my .dts files from the mainline kernel tree, and with them, I2C wouldn’t work on the SoPine – there was only /dev/i2c-0, and cat /sys/class/i2c-adapter/i2c-0/name showed that it’s the HDMI I2C port. I needed to 1) enable the I2C ports in the .dts files and recompile, 2) add external I2C pullup resistors, since my SoPine board didn’t have any on the Pi header I2C pins. The symptom was that “i2cdetect -y 1” ran painfully slow and didn’t show any devices attached, even with sensors hooked up to the pins.

dpkg error processing package fuse3

Using Debian Bullseye Testing? Your fuse3 package might fail to install like this:

Setting up fuse3 (3.4.1-1)
dpkg: error processing package fuse3 (--configure):
 installed fuse3 package post-installation script subprocess returned error exit status 1
Errors were encountered while processing:
 fuse3

No worries, though – you can just wget and install a newer version.
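The wget half would look something like this – the URLs assume Debian’s standard pool layout, and old package versions tend to vanish from the pool, so you may have to fetch them from snapshot.debian.org instead:

```shell
# Mirror, architecture and version are examples – adjust as needed
wget http://ftp.debian.org/debian/pool/main/f/fuse3/libfuse3-3_3.9.0-1_amd64.deb
wget http://ftp.debian.org/debian/pool/main/f/fuse3/fuse3_3.9.0-1_amd64.deb
```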

dpkg -i libfuse3-3_3.9.0-1_amd64.deb
dpkg -i fuse3_3.9.0-1_amd64.deb

Substitute “amd64” if needed; use another Debian repo if desired. Can’t remove the fuse3 package for some reason? Use this “nuclear option”:

dpkg --remove --force-remove-reinstreq fuse3

RAID sync speed slow when installing Debian

I’m installing Debian on a computer where the rootfs will be stored on two 250GB NVMe drives in RAID1. Unfortunately, I couldn’t figure out a good way to also RAID (and add some redundancy to) the ESP partition, and I don’t think that’s possible – though it would be cool to have a working ESP partition no matter which drive fails. In the end, I followed this tutorial and RAIDed together two 249GB partitions I made on each of the drives, using the TUI (ncurses) interface of the Debian installer. Then, I decided not to install the system until mdadm finished the sync, did Ctrl-Alt-F2 to switch to a terminal, and ran cat /proc/mdstat – only to see a 1000K/s speed and a “30 hours left” estimate. Given that the drives were NVMe, this was very weird.

However, it seems like some settings in the Debian installer environment artificially limit the RAID sync speed to 1000K/s. Following this tutorial, I removed the limit using this command:

echo 1000000 > /proc/sys/dev/raid/speed_limit_max

Then, all went well and the array synced at full speed (about 1GB/s in this case). Hope this helps you too!
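On an installed system, the same knob is also exposed through sysctl – handy if you ever want the change to stick around; the sysctl key is standard, but the file name under /etc/sysctl.d is my own choice:

```shell
# One-off, equivalent to writing /proc/sys/dev/raid/speed_limit_max (value in KB/s):
sudo sysctl -w dev.raid.speed_limit_max=1000000

# Persistent across reboots:
echo 'dev.raid.speed_limit_max = 1000000' | sudo tee /etc/sysctl.d/90-raid-sync.conf
```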

‘python3 setup.py sdist bdist_wheel’ fail – possible fix

error: can't create or remove files in install directory
The following error occurred while trying to add or remove files in the installation directory:
[Errno 13] Permission denied: '/usr/local/lib/python3.6/dist-packages/test-easy-install-32566.write-test'
The installation directory you specified (via --install-dir, --prefix, or the distutils default setting) was: 
Perhaps your account does not have write access to this directory? If the installation directory is a system-owned directory, you may need to sign in as the administrator or "root" account. If you do not have administrative access to this machine, you may wish to choose a different installation directory, preferably one that is listed in your PYTHONPATH environment variable. For information on other options, you may wish to consult the documentation at: Please make the appropriate changes for your system and try again.

You might want to update your setuptools – maybe also pip and so on:

sudo python3 -m pip install -U pip setuptools

That solved it for me.