This is not a complaint.
This is not a rant.
I do like SolydXK, otherwise I wouldn’t contribute by posting to this forum.
My intention is to report my findings and, maybe, give you some warnings.
So I was happily running SolydXK KDE Plasma v9 for some time as one of my 5 or 6 Linux distros, next to W10, OpenSUSE, LMDE, Q4OS and Kubuntu 20.04, when I saw DistroWatch report that v10 was out. OK, so I thought I should go for it!
Negative experience with the upgrade script
As you are being warned in the upgrade announcement, the upgrade script has not been tested thoroughly and should be used at your own risk, and only after making backups.
My own experience is negative. I have tested both the manual run (where you have to handle all identified configuration discrepancies yourself and confirm some other actions) and the unattended one. I was pretty amazed that the script pulls in loads of packages (I would say 5 to 6 GB), which obliged me to massively clean up my partition before the script could even finish. I got no severe warnings during the script execution; however, in none of my trials would the upgraded system boot beyond a black screen and an empty desktop, with only a bunch of KDE error messages popping up. So I decided to go for a clean reinstall.
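For anyone else attempting the script: given the 5 to 6 GB it pulled in for me, it may be worth checking headroom and clearing the apt cache beforehand. A minimal sketch (the cache-clearing command is commented out because it needs root):

```shell
# Check free space on the root filesystem before running the upgrade script
df -h /
# Clearing the apt package cache can reclaim several GB left over from
# earlier upgrades (run this yourself on the real system; it needs root):
#   sudo apt clean
```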
ISO not compatible with MultiSystem’s live USB
I usually copy ISOs onto my MultiSystem USB stick and install them from there. Before adding ISOs to the pool, MultiSystem can detect and refuse ISOs that are not compatible, so I was happy that the SolydK ISO was digested without complaints, and two new boot entries were created (one called “force vesa”). But neither option would boot. So I dd’ed the ISO onto a virgin stick and booted and installed it from there.
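The dd route boils down to a single command. The sketch below uses scratch files so it is safe to run as-is; on real hardware the output would be the stick’s device node, something like /dev/sdX (an assumption, double-check with lsblk first, as dd will silently overwrite whatever you point it at):

```shell
# Stand-in files for the real ISO and the real USB stick
printf 'dummy iso payload' > solydk-demo.iso
# bs=4M speeds up the copy; conv=fsync flushes writes before dd exits
dd if=solydk-demo.iso of=stick-demo.img bs=4M conv=fsync 2>/dev/null
# Verify the copy is byte-identical to the source
cmp -s solydk-demo.iso stick-demo.img && echo "copy verified"
```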
Installer formats / and swap, generating new UUIDs
I usually keep parts of the target partition intact from earlier installs, after renaming them so that they won’t cause conflicts (e.g. /home/user.old or /etc/fstab.old). This helps me get up and running quicker by re-using old content.
Unfortunately, the installer would not let me keep the target partition unformatted. I didn’t bother much since I had backed up the residual content elsewhere.
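The rename-and-reuse habit described above amounts to something like the following, demonstrated on a scratch directory so it is safe to execute; on the real target these paths would sit on the mounted root partition (the layout here is purely illustrative):

```shell
# Scratch stand-in for a previously installed root partition
mkdir -p target/home/user target/etc
echo 'some old fstab content' > target/etc/fstab
# Rename old content out of the way so a fresh install will not clash with it
mv target/home/user target/home/user.old
mv target/etc/fstab target/etc/fstab.old
ls target/home target/etc
```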
More dramatically, I must have overlooked that swap would also be reformatted (or forgot to unselect that option). As swap is shared among all my installed distros, this was almost fatal (continue reading).
However, I appreciated how easy it was to select a target partition for GRUB (instead of EFI), which is often more difficult to find.
One of the purposes of my multiboot setup is that even if I screw up one partition/distro, I can still work with the others and attempt a repair from the remaining one(s). Now obviously the shared swap partition is a single point of failure, and unfortunately SolydXK hit that spot. I also learned the hard way that the swap UUID is not only referred to in fstab, but (sometimes) also in the initramfs. So SolydXK’s formatting of swap left some distros no longer booting at all, while others would hang during boot for 30 or 90 seconds before resuming. Only OpenSUSE did not seem to bother, and booted normally.
I started to change all my fstabs before I realized that, in support of the kernel’s resume ability, the swap UUID is (at least sometimes) engraved in the initramfs and so would need to be changed by means of sudo update-initramfs -u -k all (see discussion here). Instead, I decided to revert the swap’s UUID to its earlier value, which I was able to find in one of my backups (see the 3rd answer here). With this recovery, all my systems worked normally again.
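For reference, the fstab-side repair I started with looks roughly like this. The sketch runs against a scratch copy so it is safe to execute; the UUIDs are made up for illustration, and the real-system commands are left as comments:

```shell
OLD_UUID="11111111-2222-3333-4444-555555555555"   # stale value still in fstab
NEW_UUID="aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"   # what 'sudo blkid' reports now
# Scratch stand-in for /etc/fstab with the old swap entry
printf 'UUID=%s none swap sw 0 0\n' "$OLD_UUID" > fstab-demo
# Swap in the new UUID
sed -i "s/$OLD_UUID/$NEW_UUID/" fstab-demo
cat fstab-demo
# On the real system, after editing /etc/fstab, refresh the initramfs so the
# resume hook also learns the new UUID:
#   sudo update-initramfs -u -k all
# Alternative (the route I took): give swap its old UUID back instead,
# which destroys current swap contents but spares every fstab/initramfs:
#   sudo mkswap -U "$OLD_UUID" /dev/sdXN
```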
Thanks for your feedback.
I'll take a look at the swap issue.