Welcome to my web log. See the first post for an introduction. See the archive page for all posts. (There is an English-language feed if you don’t want to see the Finnish posts.)


Me on Mastodon, for anything that is too small to warrant a blog post.

All content outside of comments is copyrighted by Lars Wirzenius, and licensed under a Creative Commons Attribution-Share Alike 3.0 Unported License. Comments are copyrighted by their authors. (No new comments are allowed.)

Rant: year of Linux on the desktop

A rant about “year of Linux on the desktop” from a tired old man. I’ve been part of the Linux community since before Linux was called Linux. Over the years there have been many people telling me directly that Linux is silly or wrong or imperfect, or that free and open source software is foolish or pointless. A lot more people have, of course, pontificated along those lines in public without directing it at me. I’m not claiming to be persecuted, but I’ve been around and active for long enough that these things accumulate. It’s the end of a long year for me, and I thought I’d let off some steam myself. Hence this rant.

Over time, the goal posts of success keep being moved by the naysayers. I’m too tired to dig up all the important milestones, dates, or references, but here are the highlights of the timeline as I have experienced it (years may be a little off):

  • 1991: It’s not possible for a hobbyist to write their own operating system kernel.
  • 1992: Well OK, you have a kernel, but there’s no networking or a graphical desktop, so it’s not actually useful. Anyway, ain’t nobody got time to compile all their software.
  • 1993: Fine, there’s some kind of desktop, but it doesn’t support drag and drop so it’s a toy. Also, it won’t go anywhere unless it fully and equally supports every graphics card ever made for the PC. Oh, and these pre-built distributions are insecure. Don’t download code from the Internet and run that, that’s just stupid.
  • 1994: Linux is pointless, since it only runs on the PC and not on any other kind of computer.
  • 1997: All these Linux users are just hobbyists, nobody will use it for anything important.
  • 1998: Corporations will never put money into developing this free software thing.
  • 2000: Linux and free software are a cancer on the IT industry.
  • 2001: The dotcom bubble burst, so Linux will now die, since nobody can afford to continue to develop it, as there’s no profit in free software.
  • 2003: Uh, okay, Linux didn’t die. But it’s too hard to install.
  • 2005: Ubuntu is just a toy.
  • 2006: So Linux is used a lot, but only on servers. It will never work on phones or embedded devices.

(skipping ahead so I don’t drag up too many bad memories)

  • 2022: Linux runs all top 500 supercomputers, billions of personal devices, most servers on the Internet, on all continents, on all oceans, in the air, in orbit, and on Mars. Oh, and in the air on Mars. All big corporations use open source in some form.
  • Also 2022: Linux will always be a hobbyist toy, unless it solves all these new problems we’ve just thought of.

Next year, 2023, will be my thirtieth year of Linux on the desktop, and the thirtieth year of being told it’s not possible to use Linux on the desktop, or to only use Linux on the desktop. Some of the people telling me this weren’t born when I started using Linux on the desktop.

Despite everything, it’s been fun. I’ve been lucky to have been able to take part in this journey.

Can you run your test suite successfully 1000 times in a row?
seq 1000 | while read i; do echo "run $i"; ./run-test-suite || break; done  # ./run-test-suite stands for your real test command

All quality software should have an automated test suite: some way to ensure that at least the primary happy path works. Ideally, the test suite would verify much more than that, but at least that.

Unfortunately, it is the nature of software to be buggy, sometimes in weird ways. Automated test suites are software. A particularly unpleasant way for a test suite to fail is intermittently. A test will fail sometimes, but not always. It’s called a flaky test. This is particularly bad, because it tends to undermine trust in the test suite as a whole. If a test always fails, there’s clearly a bug in the test or the code under test, and it clearly needs to be fixed.

If a test fails only every now and then, the problem might be in the test, in the code under test, in the environment, or somewhere else. Who knows? The threshold for just disabling or deleting the test can be low.

A test can be flaky for any number of reasons. How do you know if you have flaky tests?

An easy way to get some confidence is to run the test suite many times. The more times in a row you run the test suite successfully, the less likely it is that a flaky test is slipping past you by chance. A test might be flaky for other reasons, of course, but running the test suite 1000 times is a good baseline.
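To sketch that baseline, here is a small POSIX shell script that repeats a test command and counts the failures. The TEST_CMD and RUNS variables are placeholders of my own invention, not part of any particular project’s tooling:

```shell
#!/bin/sh
# Repeat a test command many times and count how often it fails.
# TEST_CMD and RUNS are placeholders: point TEST_CMD at your real test
# suite command (it defaults to "true" here, just so the sketch runs).
TEST_CMD=${TEST_CMD:-true}
RUNS=${RUNS:-1000}
failures=0
i=1
while [ "$i" -le "$RUNS" ]; do
    # Run the test command, discarding its output; count each failure.
    if ! $TEST_CMD >/dev/null 2>&1; then
        failures=$((failures + 1))
        echo "run $i: FAILED"
    fi
    i=$((i + 1))
done
echo "$failures of $RUNS runs failed"
```

If even one run out of a thousand fails, something nondeterministic is going on in the test, the code under test, or the environment, and it is worth finding out what.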

Every time I do this to a new project with a test suite of significant size, there are problems, which I then fix.

Can your test suite pass 1000 runs in a row? Are you sure?

Keyboardio Model100 keyboard: review at one month
Keyboardio Model100 with screwdriver, key cap puller, spare key switches, and other included parts.

I bought a new keyboard and have been using it for about a month now as my daily driver. It’s wonderful.

In 2019 I was given a Keyboardio Model01 keyboard as a gift. I had been curious about it for a while, so this gave me a chance to try one out without spending a big chunk of my own money. It’s a split ortholinear keyboard with custom designed key caps and a wooden enclosure in the shape of butterfly wings. It took me a day or two to start getting used to typing on it, and about three months to be comfortably fast. I’m a software developer, and most of my day consists of typing, so typing comfort and speed are important to me.

The switch to the new keyboard also involved switching to the US keyboard layout from my native Finnish one. I’d been touch typing with the Finnish keyboard since about 1983, so the layout change was also a big change. I switched so that typing program code would be easier. The Finnish layout hides some common characters (especially braces, brackets, and backslashes) behind modifier keys in a way that’s sometimes a little uncomfortable. The Model01 would’ve allowed me to keep using the Finnish layout almost without problems, I just chose not to. (The Swedish letter a-with-ring, or å, was a little difficult on the Model01, but I could’ve lived with it had I wanted to.)

The Model01 quickly became my favorite keyboard. Every time I used my laptop keyboard I found it quite uncomfortable, both mentally (“where is my Enter?!”) and physically (“oh my poor wrists!”). Part of the discomfort was due to the US/FI layout switch, but mostly it was how typing on a laptop keyboard feels, and how the keys are physically laid out, and how my hands and arms have to twist. There’s just no comparison with the Model01. For a while I even carried the Model01 to cafes and on overseas trips, but stopped doing that because it is quite a large extra thing to lug around. It’s easier to type with two fingers.

I’ve recently upgraded to the Model100, buying it from the Kickstarter campaign. It’s nearly identical to the Model01, but the key switches are different. I chose switches that are silent, but tactile, and it’s quite a quiet keyboard. My spouse appreciates the lack of noise. I appreciate that the typing feel is awesome.

The Model100 is without doubt the best keyboard I’ve ever used.

It’s not without issues. The wood enclosure will move with the seasons, like wood does. I had some issues with that with the Model01, so it’s not a new problem. I will cope, but I wish I could get an enclosure without this problem. Possibly one made out of plywood? Or metal?

I also wish the keyboard didn’t taunt me with all the possibilities that come from being able to, and encouraged to, change its firmware. I really want to, but I know it can become an endless time sink for a tinkering geek like myself, so I’m trying to resist.

Rust training for FOSS devs: how did it go?

For the past three Saturdays, I’ve been training half a dozen free and open source software developers in the basics of the Rust programming language. It’s gone well. The students have learned at least some Rust and should be able to continue learning on their own. I’ve gotten practice doing the training and have made the course clearer, tighter, and generally better.

The structure of the course was:

  • Session 1: quick start
    • what kind of language is Rust?
    • the cargo tool
    • using Rust libraries
    • error handling in Rust
    • evolution of an enterprise “hello, world” application
  • Session 2: getting things to work
    • memory management in Rust
    • the borrow checker
    • concurrency with threads
    • hands-on practice: compute sha256 checksums concurrently
  • Session 3: getting deeper
    • mob programming to implement some simple Unix tools in Rust

I am going to run the course again. I’ll give people on the waiting list first refusal, but if my proposed times don’t fit them, I’ll ask for more volunteers. Watch my blog to learn about that. If you want to get on the waiting list, follow instructions on the course page.

If you’d like me to teach you and others Rust, ask your employer to pay for it. I can do training online or on-site. See my paid course page.

Linus has just recently merged in initial support for Rust in the Linux kernel. If you’re a professional Linux developer and would like to learn Rust, please ask your employer to fund a course for you and your colleagues.

(I’m afraid I don’t publish my materials: they’re not useful on their own, and there’s a lot of really good Rust learning materials out there already.)

Rust training for FOSS programmers

Do you write code for free and open source projects? Would you like to learn the basics of the Rust programming language? I’m offering to teach the basics of Rust to free and open source software programmers, for free.

After the course, you will be able to:

  • understand what kind of language Rust is
  • make informed decisions about using Rust in a project
  • read code written in Rust
  • write simple command line programs in Rust
  • understand memory management in Rust
  • study Rust on your own

To be clear: this course will not make you an expert in Rust. The goal is to get you started. To become an expert takes a long time and much effort.

For more information, including on when it happens and how to sign up, see my Rust training for FOSS programmers page.

(Disclaimer: I’m doing this partly as advertising for my paid training: if you’d like your employer to pay me to run the course for their staff, point them at my training courses page.)

Not breaking things is hard

Building things that work for a long time requires a shift in thinking and in attitude and a lot of ongoing effort.

When software changes, it has repercussions for those using the software directly, or using software that builds on top of the software that changes. Sometimes, the repercussions are very minor and require little or no effort from those affected. More often, the people developing, operating, or using software are exposed to a torrential rain storm of changes. This thing changes, and that thing changes, and both of those mean that those things need to change. Living in a computerized world can feel like treading water: it’s exhausting, but you have to continue doing it, because if you stop, you drown.

The Linux kernel has a rule that changes to the kernel must never break software running on top of the kernel. A large part of the world’s computing depends on the Linux kernel: there are billions of devices, on all continents, in orbit, and also on Mars. All of those devices need to continue working. The Linux kernel developers are by no means perfect in this, but overall, upgrading to a new version is nearly a non-event: it requires installing the new version and rebooting to start using it, but everything usually just works. (Except when it doesn’t.)

Linux is not unique in this, of course. I use it as an example because it’s what I know.

Achieving that kind of stability takes a lot of care, and a lot of effort. This is not without cost. Sometimes it prevents fixing previous mistakes. Sometimes it turns a small change into a multi-year project to prevent breakage. For a project used as widely as Linux, the cost is worth it.

Most other software changes less carefully. For example, there is software that implements web sites and web applications: search engines, email, maps, shops, company marketing brochures, personal home pages, blogs, etc. A small number of web sites are used by such large numbers of people that they have a big impact: the people behind these sites take care when making changes. Most sites have little impact: if, say, https://liw.fi/training/rust-basics/ is down or renders badly for some people, it affects mostly just me. That means I can make changes more easily and with less care than, say, Amazon can change its web shop.

Much of the world’s software is code libraries. Applications, and web sites, build on top of those libraries to reduce development and maintenance effort. If a library already provides code to, for example, scale a photo to a smaller size, an application developer can use the library and not have to learn how to write that code themselves. This also raises quality: someone whose main focus is resizing photos can spend much more effort on how to do it well than someone whose main focus is making an email program that just happens to show thumbnails of attached images.

A well-made library that does something commonly useful might be used by thousands, even millions, of applications and web sites.

However, if the photo resizing code library changes often, and changes in ways that break the applications using it, all those thousands or millions of applications have to adapt. If they don’t adapt, they won’t benefit from other changes that they do want: say, a new way to resize that results in smaller image files with better clarity. They will also miss out on fixes for security issues.

There is an ongoing discussion among software developers about stability versus making changes more easily. Some people get really frustrated by how hard it can be to get new versions of their software to people who want it. For those people, and their users, the cost of stability is too high. They want something that takes less effort and goes faster, and are willing instead to pay the cost of things changing frequently and occasionally breaking.

Other people get really frustrated by everything breaking all the time. For those people, the cost of stability is worth it.

It’s a cost/benefit calculation that everyone needs to do for themselves. There is no one answer that serves everyone equally well. Telling other people that they’re wrong here is the only poor choice.

v-i version 0.2: non-interactive Debian installer for bare metal machines

I’ve just released version 0.2 of v-i, my non-interactive, fairly fast, unofficial, alternative installer of Debian for physical computers (“bare metal”). It’s what I use to install Debian on my PCs now. I blogged about it previously.

Below is a transcript of me installing Debian to my Thinkpad T480 laptop, from my desktop machine.

$ scp exolobe1-spec.yaml root@v-i:
exolobe1-spec.yaml                             100%  228   190.6KB/s   00:00
$ ssh root@v-i
Linux v-i 5.10.0-16-amd64 #1 SMP Debian 5.10.127-1 (2022-06-30) x86_64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Sun Aug  7 13:10:02 2022 from
root@v-i:~# time ./v-i exolobe1-spec.yaml
OK, done

real    1m11.003s
user    0m30.671s
sys 0m8.645s
root@v-i:~# reboot
Connection to v-i closed by remote host.
Connection to v-i closed.

The exolobe1-spec.yaml file contains:

hostname: exolobe1
drive: /dev/sda
extra_lvs:
  - name: home
    size: 300G
    mounted: /home
user_pub: |
  ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPQe6lsTapAxiwhhEeE/ixuK+5N8esCsMWoekQqjtxjP liw personal systems

To summarize what happened above:

  • I wrote the installer image to a USB drive, and booted the laptop with that drive
  • I logged into the installer, running on the laptop, using SSH, to run the installer
  • the installer asks no questions: I give it a file with all the information it needs to install Debian
  • the installation command took 1 minute 11 seconds
    • Disclaimer: this is the second install with the same USB drive. The first install takes about four minutes longer, because it runs debootstrap. Later installs use a cache of its output.
  • that time doesn’t include the time to write the USB drive, or to boot the target PC to the installer, or to boot the installed system
  • the installed system is fairly minimal, and does not have, say, a desktop system, a software stack to run web applications, or anything much else that’s actually useful
  • the installed system is, however, accessible over SSH, and can be provisioned with a configuration management system such as Ansible

See https://files.liw.fi/v-i/0.2/ for the installer image I used above, and some other bits and bobs. See the git repository to open issues or to contribute.

On home Internet routers

Recently, the power brick for my home Internet router PC failed. It had worked flawlessly for six years. To get something working as soon as possible, I bought a cheap consumer router from a local store. I’d managed to forget how awful they are.

I have a number of computers at home, both physical hardware and virtual machines. They act as servers, and I need to access them by name. My undead router PC runs dnsmasq, which provides both a DNS and a DHCP server, and populates DNS with host names from DHCP requests. This is quite comfortable.
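For illustration, a minimal dnsmasq configuration along those lines might look like the sketch below. The domain name and address range are made-up examples, not my actual network:

```
# /etc/dnsmasq.conf sketch: DNS and DHCP from one daemon.
# dnsmasq automatically adds host names from DHCP requests to its DNS.
domain=lan                # local domain appended to bare host names
expand-hosts              # qualify names from /etc/hosts and DHCP with the domain
local=/lan/               # answer queries for the local domain, don't forward them
dhcp-range=192.168.1.100,192.168.1.200,12h
dhcp-authoritative        # we are the only DHCP server on this network
```

With a setup like this, leasing an address to a machine that announces its host name is enough to make it resolvable by name on the local network.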

I’ve never had a consumer router do that, and I’ve never understood why. They all seem to require manually maintaining that kind of MAC/IP/name mapping, if they provide it at all. Some of them forget the mappings after a week or two.

I know it’s possible to install something like OpenWRT on some consumer routers, and that’s great, when it’s possible. But I don’t like OpenWRT either. For me, it makes too many compromises to fit into minuscule hardware resources.

There are many router software distributions out there. pfSense is perhaps the best known. For myself, I tend to just use the Debian Linux distribution, which I know much better than the FreeBSD that pfSense uses. I administer it via Ansible, which is how I like it.

After a week of swearing at the consumer router, I replaced it with an old laptop and a USB Ethernet adapter. Installing my Debian based router distribution (Puomi) took a few minutes: I’ve made my own installer that’s fully automated and fast. I then configured the installed system with Ansible to have the exact same setup as on my normal router PC. Keeping configurations in version control and automating installations and deployments feels like a super power.
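As a sketch of that kind of setup, a minimal Ansible playbook for a router might look like this. The host group, file paths, and tasks are hypothetical examples, not my actual configuration:

```
# playbook.yml: hypothetical sketch of configuring a Debian router with
# Ansible; the host group, files, and tasks are examples only.
- hosts: routers
  become: true
  tasks:
    - name: Install dnsmasq
      apt:
        name: dnsmasq
        state: present
    - name: Install dnsmasq configuration from version control
      copy:
        src: files/dnsmasq.conf
        dest: /etc/dnsmasq.conf
      notify: restart dnsmasq
  handlers:
    - name: restart dnsmasq
      service:
        name: dnsmasq
        state: restarted
```

Because the playbook and the configuration files live in version control, rebuilding the router on new hardware is just an install followed by one playbook run.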

I’ve demoted the cheap router to a wifi access point and placed it in the part of our home where wifi is most useful. We’d meant to do that anyway.

Now I just need to get a replacement power brick for my six-year-old little fanless PC. It’s surprisingly difficult, even from a store that claims to have over 200 of them in stock.

Obnam 0.8.0 - encrypting backup program

I’ve just pushed out version 0.8.0 of Obnam, an encrypting backup program. Below are the release notes.

Version 0.8.0, released 2022-07-24

Breaking changes

Breaking changes are ones that mean existing backups can’t be restored, or new backups can’t be created.

  • The list of backups is stored in a special “root chunk”. This means backups are explicitly ordered. This also paves the way for a future feature to delete backups: only the root chunk will need to be updated. Without a root chunk, the backups formed a linked list, and deleting from the middle of the list would require updating the whole list.

  • The server chunk metadata field sha256 is now called label. Labels include a type prefix, to allow for other chunk checksum types in the future.

  • The server API is now explicitly versioned, to allow future changes to cause less breakage.

New features

  • Users can now choose the backup schema version for new backups. A repository can have backups with different schemas, and any existing backup can be restored. The schema version only applies to new backups.

  • New command obnam inspect shows metadata about a backup. Currently only the schema version is shown.

  • New command obnam list-backup-versions shows all the backup schema versions that this version of Obnam supports.

  • Obnam now logs some basic performance measurements for each run: how many live files were found in total, how many were backed up, how many chunks were uploaded, how many existing chunks were reused, and how long various parts of the process took.

Other changes

  • The obnam show-generation command now outputs data in the JSON format. The output now includes data about the generation’s SQLite database size.

Thank you

Several people have helped with this release, with changes or feedback.

  • Alexander Batischev
  • Lars Wirzenius