I was about to say, I’ve only come across that particular issue since moving to KDE, but I know what you mean about the lack of options, but then I looked in the settings, and found this:

It’s getting there!
That’s exactly why. You can manage users no problem. Multiple machines was never the paradigm.
90% of the current development effort (containers, virtualization) is about copying the working machine and giving it a nice safe space to run in, where no outside forces can reach in and disturb its peace.
Did you just try to theme my app? We’re opinionated software, and that’s bigotry.
Also Slimbook! They're offering machines similar to Tuxedo's, but my Executive 14 13700 was slightly cheaper than the equivalent Tuxedo InfinityBook Pro (same Clevo laptop base), and dropped the second NVMe slot for a full 99 Wh battery.
Additionally, I had no problems shipping from Spain to the US.


Oh, oh I know this one!
If your keyboard shortcut contains modifier keys, the typed input will be interpreted together with the modifiers you're still holding down for the shortcut: Alt+a, Super+b, etc.
Some keyboard shortcuts trigger on press; others trigger on release. This is why you need the sleep statement: it gives you time to release the keys before the typing starts. You want the shortcut to take effect only after release.
I can set that distinction in my window manager, but I'm not sure how to do it in (GNOME?) Ubuntu. Even if you can set the shortcut to run only on release, you'd still need to let go of all the keys instantly, so chaining with sleep is probably the best approach.
Chaining sleep and ydotool in bash works for me in my window manager. Consider using "&&" instead of ";" to run the ydotool type command: whatever follows "&&" only executes if the previous command (sleep 2) succeeds, whereas the ";" might be interpreted by the shortcut system as the end of the statement:
sleep 2 && ydotool type abcde12345
Or perhaps the shortcut system just executes programs directly, not through a bash shell. In that case you'd need to actually invoke bash to get the ";" or "&&" handling. Wrapping the lot in a bash command might look like this:
bash -c "sleep 2 && ydotool type abcde12345"
If that doesn't work, I see nothing wrong with running a script instead. You just need to get past whatever in the shortcut system is cutting the command off after the sleep statement.
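If the shortcut system will run a script, a minimal sketch might look like this. The ydotool call, the delay, and the sample text come from the thread; the fallback branch and the script name `type-snippet.sh` are my additions so the sketch degrades gracefully on machines without ydotool:

```shell
#!/usr/bin/env bash
# Hypothetical ~/bin/type-snippet.sh: wait for the shortcut keys to be
# released, then inject the text.
type_snippet() {
  sleep 2                               # time to release Alt/Super/etc.
  if command -v ydotool >/dev/null 2>&1; then
    ydotool type 'abcde12345'           # real keystroke injection
  else
    printf 'abcde12345\n'               # fallback where ydotool is absent
  fi
}

type_snippet
```

Binding the shortcut to the script path itself also sidesteps the parsing question entirely: no shell operators have to survive the shortcut system.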
Running ydotoold at user level is preferred and recommended. It keeps it inside your user context, which is better for security.
They’ll just take the ick out of his name when he assumes his PR/Community manager role
Long live the Executive 14 99Wh - you’ll pry mine from my cold dead hands


I once had to edit and dump a Cisco config from a 10 switch stack over 9600 baud.
It took ages, and then I realised my fancy new terminal still had a default scrollback limit set, and had to do it again.
Actual torture.


So the package is a specific driver version, which will keep you on the 580 driver series through updates. You'd install this package to provide the drivers, and it requires the matching utils package.
You would install this rather than the meta-package from the official repositories. As shown on the AUR page:
Conflicts: nvidia, NVIDIA-MODULE, nvidia-open-dkms
Provides: nvidia, NVIDIA-MODULE
This is also a DKMS package, so it will build against whatever kernel you're running, and you can keep using the module through regular system and kernel upgrades.
So, the idea would be: remove the nvidia drivers you have, install this one, and it'll be like the upgrade and support drop never happened. You won't get driver upgrades, but you wouldn't anyway. It's the mostly-safe way to version-pin the package without actually pinning it in pacman, which would count as a partial upgrade and is unsupported.
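As a sanity check after installing a DKMS driver package, you can ask dkms which kernels the module is built for. This is a sketch, not from the thread: the guard is mine, and the exact status output format varies by dkms version:

```shell
# Guarded so the sketch also runs on machines without dkms installed.
if command -v dkms >/dev/null 2>&1; then
  dkms status          # should list the nvidia module for the running kernel
else
  echo "dkms not installed"
fi
uname -r               # the kernel release the module must be built against
```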
I’m using stow, and then git for versioning. The only question I’m currently facing is whether to keep my stow packages as individual git repos (so I can switch branches for radically different configs or new setups) or treat the whole lot as one big repo and set the others up as subtrees.
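For anyone curious, the "one big repo" variant of that layout can be sketched like this. The package names and files are illustrative, and the stow invocation is left commented because it would touch $HOME:

```shell
dots="$(mktemp -d)"                     # stand-in for ~/dotfiles
mkdir -p "$dots/zsh" "$dots/tmux"       # one stow "package" per tool
printf 'export EDITOR=vi\n' > "$dots/zsh/.zshrc"
printf 'set -g mouse on\n'  > "$dots/tmux/.tmux.conf"
git -C "$dots" init -q                  # version the whole tree together
git -C "$dots" add .
# From ~/dotfiles you'd then symlink packages into $HOME:
#   stow -t "$HOME" zsh tmux
```

The per-package-repo variant would instead run `git init` inside each package directory, trading easy whole-tree history for per-tool branching.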


His instance and mine, “sh.itjust.works” was federated with lemm.ee
Mine (Thunder) doesn’t recognize tagging the code block as a specific syntax; it just shows it as a preformatted block, with no highlighting.


If you’ve ever been through the audit process CAs are subjected to, staying within the compliance controls and keeping everything audit-ready takes a massive chunk of your attention for much of the year.
Can I ask what client you’re using?


I assumed that the primary account had full control over secondary user profiles, will have to revisit and confirm - thanks for the tip!


I’m aware of what’s happening in the states. I’m talking from a resourcing perspective. You’d already have to know what you were after to confirm its absence from the phone, if the wipe can be done silently.
If you could load into your dummy profile while silently deleting the keys to your main profile (which could then be freed up as storage space), all with the right unlock password, that’d be pretty hard to prove in a way that warranted arresting everyone.
This would limit the charge to only those who announced it as a political statement, or who were already being specifically targeted.


What would be really cool is if it binned the storage keys for one user and not the other, silently. That way you could actually protect your data without being martyred.
They’d have to prove a lot in the first instance to warrant arresting you then and there, like that they knew you’d done it.
Looks like that might have changed, libc-gconv-modules-extra has an i386 package for 2.42-5 added at like midnight UTC+1. Given the sources only update every 6 hours, might be you found an unlucky update in between?
Struggled to find a time for the release, but the changelog has one; unsure how true that is to when the package actually became available:
glibc (2.42-5) unstable; urgency=medium

  [ Martin Bagge ]
  * Update Swedish debconf translation. Closes: #1121991.

  [ Aurelien Jarno ]
  * debian/control.in/main: change libc-gconv-modules-extra to Multi-Arch:
    same as it contains libraries.
  * debian/libc6.symbols.i386, debian/libc6-i386.symbols.{amd64,x32}: force
    the minimum libc6 version to >= 2.42, to ensure GLIBC_ABI_GNU_TLS is
    available, given symbols in .gnu.version_r section are currently not
    handled by dpkg-shlibdeps.

 -- Aurelien Jarno <aurel32@debian.org>  Sat, 06 Dec 2025 23:02:46 +0100

glibc (2.42-4) unstable; urgency=medium

  * Upload to unstable.

 -- Aurelien Jarno <aurel32@debian.org>  Wed, 03 Dec 2025 23:03:48 +0100
I thought about this for a long while, and realised I wasn’t sure why, just that most of my work has gravitated towards Arch for a while.
Eventually, I decided the reason for the move comes down to three specific issues that are really all the same problem: I don’t want to learn the Nix config language to do the things I want to do right now.
I’ve read lots of material on flakes, and even first modified and then wrote a flake to get not-yet-packaged Nvidia 5080 modules installed (for a corporate local LLM POC-turned-prod; I was very glad I could use Nix for it!). I still don’t really have an intuitive grasp of how all the pieces hang together, and my barrier is interest and time.
Lanzaboote for Secure Boot. I’m going to encrypt disks, and I’m going to use the TPM for unlocking after a measured UKI boot, despite the concerns about cold-boot attacks, because they aren’t a problem in my threat model. Like the Nvidia flake, I don’t really have an intuitive grasp of how it hangs together.
Home management and home-manager. The Nix config language is something I really want to understand, but I’ve been maintaining my home directory since before 2010, and I already have tools and methods for dealing with most things. The conversion would take more time than I’m prepared to devote.
Most of the benefits of nix are things I already have in some format, like configuration management and package tracking with git/stow, ansible for deployment, btrfs for snapshots, rollback and versioning. It’s not all integrated in one system, but it is all known to me, and that makes me resistant to change.
I know that if I had a week of personal time to dig in and learn, to shake off all the old fleas and crutch methods learned for admin on systems that aren’t declarative, I’d probably come away with a whole new appreciation for what my systems actually look like, and have them all reproducible from a readable config sheet. I’m just not able to make that time investment, especially for something that doesn’t solve more problems than I’ve already solved.
An interesting argument would be to require the training data to be shared to prove it was never exposed to the original source it’s ripping off.
It might help set a precedent that would make this sort of thing less attractive