
All content outside of comments is copyrighted by Lars Wirzenius, and licensed under a Creative Commons Attribution-Share Alike 3.0 Unported License. Comments are copyrighted by their authors. (No new comments are allowed.)

CI annoyances

I recently asked on the fediverse:

If you’re a programmer and use a continuous integration system to build, test, and maybe deliver or deploy the software, what’s your biggest annoyance or gripe? (Whatever CI system you use.)

I got a bunch of answers. Follow the link above for all of them; here is a biased summary:

  • it’s too hard to write the specification for how a project should be built, tested, delivered, and deployed (“the pipeline”)
    • it’s easy to make mistakes
    • when something breaks, debugging is hard, partly because it is done remotely
    • it’s easy to make mistakes, even catastrophic ones
  • even when everything goes well, it takes too long for a pipeline to run successfully
    • concurrency is hard to exploit, especially correctly
    • it’s hard to only build/test what is affected by changes, so many pipelines spend a lot of resources on building and testing everything
  • some people really don’t like YAML, especially large amounts of YAML

The summary is biased, as that’s what I was expecting to hear, based on personal experience.
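The "it's hard to only build/test what is affected by changes" gripe above can be made concrete with a small sketch. This assumes a repository laid out as one directory per component, each with its own ./test script; both the layout and the script name are assumptions for illustration, not a feature of any particular CI system.

```shell
# Reduce a list of changed file paths (one per line on stdin) to the
# unique top-level directories (components) they live in.
changed_components() {
    cut -d/ -f1 | sort -u
}

# In a CI pipeline you might feed it from git and run only the tests
# for the components that actually changed:
#
#   git diff --name-only origin/main |
#       changed_components |
#       while read -r dir; do
#           [ -x "$dir/test" ] && (cd "$dir" && ./test)
#       done
```

The hard part, which this sketch ignores, is knowing the dependencies between components: a change in one directory can break tests in another.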

I have been thinking about some solutions to annoyances like these, but I’m not yet ready to speak about them publicly. I hope to at least make a proof of concept prototype example demonstration of what I have in mind before I open up. (Which might mean I never do.)

If you're willing to pay my consulting rates for me to work on this, please get in touch privately.

'The early days of Linux' on LWN

I wrote an article The early days of Linux for LWN.net, with some of my memories of the start of Linux. It is now no longer behind a paywall, so it’s easy to access for anyone.

Please share the link with people you think might enjoy reading it.

Tricking your brain to get things done

I use the David Allen Getting Things Done system (GTD for short) to cope with all the many, conflicting, things the world throws at me. I find it especially useful to know when I can not do anything without being in trouble later. (I’ve written about my implementation of GTD here.)

Even so, I like to be able to do many things so that I can feel I’ve made the world a better place. My main obstacle is that my brain is a lazy, scheming bastard that will try to get away with doing less than I need or want it to do. Over time, I’ve learned some tricks which I use to con my brain into actually getting the right things done.

When I plan a project, I write down a description of what the world is like once the project is done: “when this is done, I am very confident that my desktop computer is fully backed up and I verify that by restoring the data monthly”. I don’t phrase the goal as a task (“set up backups”), because my brain is maliciously lazy, and will try to interpret any task in as minimal a way as possible. When I describe a desired state of the world, my brain doesn’t find as many loopholes. This simple trick lets me complete more things to my satisfaction.

When I do write down individual tasks, I write down actions in as much detail as I will need to understand them later, when my brain is tired and cranky. This way, when it's time to do the task, I don't need to think about what needs to be done or how to do it; I can just do it. Thus, I write "install this backup program and run an initial backup to the home file server over SSH", instead of "set up some backup program and try it out". If I first need to do some research before I can choose a backup program to install, that's a separate task. Not having to think makes it easier to do.

I also try to plan tasks so that they are very concrete, physical actions, and as short as I can reasonably make them. Thus, I might actually have a task to just "install this program from the package in Debian", as that's a single command. It's easier to start a task that I expect to take only minutes than one that will take an hour. Otherwise my brain will say "this is going to take a long time, so it doesn't matter if I start a bit later".

A related aspect is that I try to phrase things in a way that makes it as easy as possible for my little brain to cope. As an example, when keeping track of issues in my software, I try to report the issue, not the task or the steps to take to fix the issue. Thus, the issue is "backups aren't encrypted" instead of "implement encrypted backups". This is not just easier to understand; it also means that when it's time to evaluate whether a ticket can be closed, I evaluate whether the actual issue is solved, not whether the task initially written down as a solution is finished. Sometimes that task turns out to be incomplete, or entirely wrong. I find I achieve better results if I don't start planning the work while reporting an issue.

Whatever I write down, I assume that future me has a brain that’s been affected by the kind of mild memory loss brought on by debugging late into the night, excessive tea drinking, and other such vices. Thus, I try to include sufficient context and supporting information in an issue or task, rather than writing down just enough to trigger a recall of the salient information. In short, I’m writing for a future me that only has the memories of yesterday’s me.

None of this works all the time. Sometimes my brain is alert enough to realize that I'm trying to trick it into doing things in the future, and sabotages my best efforts. But my tricks work well enough that some of the time I can actually complete things, and that's enough for me.

Rant: year of Linux on the desktop

A rant about "year of Linux on the desktop" from a tired old man. I've been part of the Linux community since before Linux was called Linux. Over the years, many people have told me directly that Linux is silly or wrong or imperfect, or that free and open source software is foolish or pointless. A lot more people have, of course, pontificated along those lines in public without directing it at me. I'm not claiming to have been specially targeted, but I've been around and active long enough that these things accumulate. It's the end of a long year for me, and I thought I'd let off some steam myself. Hence this rant.

Over time, the goalposts of success keep being moved by the naysayers. I'm too tired to dig up all the important milestones, dates, and references, but here are the highlights of the timeline as I have experienced it (years may be a little off):

  • 1991: It’s not possible for a hobbyist to write their own operating system kernel.
  • 1992: Well OK, you have a kernel, but there’s no networking or a graphical desktop, so it’s not actually useful. Anyway, ain’t nobody got time to compile all their software.
  • 1993: Fine, there’s some kind of desktop, but it doesn’t support drag and drop so it’s a toy. Also, it won’t go anywhere unless it fully and equally supports every graphics card ever made for the PC. Oh, and these pre-built distributions are insecure. Don’t download code from the Internet and run that, that’s just stupid.
  • 1994: Linux is pointless, since it only runs on the PC and not on any other kind of computer.
  • 1997: All these Linux users are just hobbyists, nobody will use it for anything important.
  • 1998: Corporations will never put money into developing this free software thing.
  • 2000: Linux and free software are a cancer on the IT industry.
  • 2001: The dotcom bubble burst, so Linux will now die, since nobody can afford to continue to develop it, as there’s no profit in free software.
  • 2003: Uh, okay, Linux didn’t die. But it’s too hard to install.
  • 2005: Ubuntu is just a toy.
  • 2006: So Linux is used a lot, but only on servers. It will never work on phones or embedded devices.

(skipping ahead so I don’t drag up too many bad memories)

  • 2022: Linux runs all of the top 500 supercomputers, billions of personal devices, most servers on the Internet, on all continents, on all oceans, in the air, in orbit, and on Mars. Oh, and in the air on Mars. All big corporations use open source in some form.
  • Also 2022: Linux will always be a hobbyist toy, unless it solves all these new problems we’ve just thought of.

Next year, 2023, will be my thirtieth year of Linux on the desktop, and the thirtieth year of being told it’s not possible to use Linux on the desktop, or to only use Linux on the desktop. Some of the people telling me this weren’t born when I started using Linux on the desktop.

Despite everything, it’s been fun. I’ve been lucky to have been able to take part in this journey.

Can you run your test suite successfully 1000 times in a row?
seq 1000 | while read -r i; do echo "$i"; echo "run test suite"; echo; done

All quality software should have an automated test suite: some way to ensure that at least the primary happy path works. Ideally, the test suite would verify much more than that, but at least that.

Unfortunately, it is the nature of software to be buggy, sometimes in weird ways, and automated test suites are software. A particularly unpleasant way for a test suite to fail is intermittently: a test fails sometimes, but not always. This is called a flaky test. Flakiness is particularly bad because it tends to undermine trust in the test suite as a whole. If a test always fails, there's clearly a bug in the test or the code under test, and it clearly needs to be fixed.

If a test fails only every now and then, it might be a problem in the test, in the code under test, in the environment, or somewhere else entirely. Who knows? The threshold for simply disabling or deleting the test can be low.

A test can be flaky for any number of reasons. How do you know if you have flaky tests?

An easy way to get some confidence is to run the test suite many times. The more times you run it in a row, the less likely it is that a flaky test has passed every time by chance. The suite might still be flaky for other reasons, of course, but running it 1000 times is a good baseline.
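A minimal sketch of this: run the test suite repeatedly and stop at the first failure, so you know which run broke. The test command here is a placeholder you'd replace with your real one (for example "cargo test" or "make check").

```shell
# Run a command (the test suite) n times; stop and report on the first
# failure. The command is passed as the remaining arguments and is
# word-split deliberately, which is fine for a simple sketch like this.
run_many() {
    n="$1"
    shift
    i=1
    while [ "$i" -le "$n" ]; do
        if ! "$@"; then
            echo "run $i of $n failed" >&2
            return 1
        fi
        i=$((i + 1))
    done
    echo "passed all $n runs"
}
```

Usage: `run_many 1000 make check`. A failure partway through doesn't tell you why the test is flaky, but it tells you that it is, and on which repetition.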

Every time I do this to a new project with a test suite of significant size, there are problems. Which I then fix.

Can your test suite pass 1000 runs in a row? Are you sure?

Keyboardio Model100 keyboard: review at one month
Keyboardio Model100 with screwdriver, key cap puller, spare key switches, and other included parts.

I bought a new keyboard and have been using it for about a month now as my daily driver. It’s wonderful.

In 2019 I was given a Keyboardio Model01 keyboard as a gift. I had been curious about it for a while, so this gave me a chance to try one out without spending a big chunk of my own money. It’s a split ortholinear keyboard with custom designed key caps and a wooden enclosure in the shape of butterfly wings. It took me a day or two to start getting used to typing on it, and about three months to become comfortably fast. I’m a software developer, and most of my day consists of typing, so typing comfort and speed are important to me.

The switch to the new keyboard also involved switching to the US keyboard layout from my native Finnish one. I’d been touch typing with the Finnish keyboard since about 1983, so the layout change was also a big change. I switched so that typing program code would be easier. The Finnish layout hides some common characters (especially braces, brackets, and backslashes) behind modifier keys in a way that’s sometimes a little uncomfortable. The Model01 would’ve allowed me to keep using the Finnish layout almost without problems, I just chose not to. (The Swedish letter a-with-ring, or å, was a little difficult on the Model01, but I could’ve lived with it had I wanted to.)

The Model01 quickly became my favorite keyboard. Every time I used my laptop keyboard I found it quite uncomfortable, both mentally (“where is my Enter?!”) and physically (“oh my poor wrists!”). Part of the discomfort was due to the US/FI layout switch, but mostly it was how typing on a laptop keyboard feels, and how the keys are physically laid out, and how my hands and arms have to twist. There’s just no comparison with the Model01. For a while I even carried the Model01 to cafes and on overseas trips, but stopped doing that because it is quite a large extra thing to lug around. It’s easier to type with two fingers.

I’ve recently upgraded to the Model100, buying it from the Kickstarter campaign. It’s nearly identical to the Model01, but the key switches are different. I chose switches that are silent, but tactile, and it’s quite a quiet keyboard. My spouse appreciates the lack of noise. I appreciate that the typing feel is awesome.

The Model100 is without doubt the best keyboard I’ve ever used.

It’s not without issues. The wood enclosure will move with the seasons, like wood does. I had some issues with that with the Model01, so it’s not a new problem. I will cope, but I wish I could get an enclosure without this problem. Possibly one made out of plywood? Or metal?

I also wish the keyboard didn’t taunt me with all the possibilities that come from being able to, and encouraged to, change its firmware. I really want to, but I know it can become an endless time sink for a tinkering geek like myself, so I’m trying to resist.

Rust training for FOSS devs: how did it go?

For the past three Saturdays, I’ve been training half a dozen free and open source software developers in the basics of the Rust programming language. It’s gone well. The students have learned at least some Rust and should be able to continue learning on their own. I’ve gotten practice doing the training and have made the course clearer, tighter, and generally better.

The structure of the course was:

  • Session 1: quick start
    • what kind of language is Rust?
    • the cargo tool
    • using Rust libraries
    • error handling in Rust
    • evolution of an enterprise “hello, world” application
  • Session 2: getting things to work
    • memory management in Rust
    • the borrow checker
    • concurrency with threads
    • hands-on practice: compute sha256 checksums concurrently
  • Session 3: getting deeper
    • mob programming to implement some simple Unix tools in Rust

I am going to run the course again. I’ll give people on the waiting list first refusal, but if my proposed times don’t fit them, I’ll ask for more volunteers. Watch my blog to learn about that. If you want to get on the waiting list, follow instructions on the course page.

If you’d like me to teach you and others Rust, ask your employer to pay for it. I can do training online or on-site. See my paid course page.

Linus has just recently merged in initial support for Rust in the Linux kernel. If you’re a professional Linux developer and would like to learn Rust, please ask your employer to fund a course for you and your colleagues.

(I’m afraid I don’t publish my materials: they’re not useful on their own, and there’s a lot of really good Rust learning materials out there already.)

Rust training for FOSS programmers

Do you write code for free and open source projects? Would you like to learn the basics of the Rust programming language? I’m offering to teach the basics of Rust to free and open source software programmers, for free.

After the course, you will be able to:

  • understand what kind of language Rust is
  • make informed decisions about using Rust in a project
  • read code written in Rust
  • write simple command line programs in Rust
  • understand memory management in Rust
  • study Rust on your own

To be clear: this course will not make you an expert in Rust. The goal is to get you started. To become an expert takes a long time and much effort.

For more information, including on when it happens and how to sign up, see my Rust training for FOSS programmers page.

(Disclaimer: I’m doing this partly as advertising for my paid training: if you’d like your employer to pay me to run the course for their staff, point them at my training courses page.)

Not breaking things is hard

Building things that work for a long time requires a shift in thinking and in attitude and a lot of ongoing effort.

When software changes, it has repercussions for those using the software directly, or using software that builds on top of the software that changes. Sometimes, the repercussions are very minor and require little or no effort from those affected. More often, the people developing, operating, or using software are exposed to a torrential rain storm of changes. This thing changes, and that thing changes, and both of those mean that those things need to change. Living in a computerized world can feel like treading water: it’s exhausting, but you have to continue doing it, because if you stop, you drown.

The Linux kernel has a rule that changes to the kernel must never break software running on top of the kernel. A large part of the world’s computing depends on the Linux kernel: there are billions of devices, on all continents, in orbit, and also on Mars. All of those devices need to continue working. The Linux kernel developers are by no means perfect in this, but overall, upgrading to a new version is nearly a non-event: it requires installing the new version and rebooting to start using it, but everything usually just works. (Except when it doesn’t.)

Linux is not unique in this, of course. I use it as an example because it’s what I know.

Achieving that kind of stability takes a lot of care, and a lot of effort. This is not without cost. Sometimes it prevents fixing previous mistakes. Sometimes it turns a small change into a multi-year project to prevent breakage. For a project used as widely as Linux, the cost is worth it.

Most other software changes less carefully. For example, there is software that implements web sites and web applications: search engines, email, maps, shops, company marketing brochures, personal home pages, blogs, etc. A small number of web sites are used by such large numbers of people that they have a big impact: the people behind these sites take care when making changes. Most sites have little impact: if, say, https://liw.fi/training/rust-basics/ is down or renders badly for some people, it affects mostly just me. That means I can make changes more easily and with less care than, say, Amazon can change its web shop.

Much of the world’s software is code libraries. Applications, and web sites, build on top of those libraries to reduce development and maintenance effort. If a library already provides code to, for example, scale a photo to a smaller size, an application developer can use the library and not have to learn how to write that code themselves. This also raises quality: someone whose main focus is resizing photos can spend much more effort on how to do it well than someone whose main focus is making an email program that just happens to show thumbnails of attached images.

A well-made library that does something commonly useful might be used by thousands, even millions, of applications and web sites.

However, if the photo resizing code library changes often, and changes in ways that breaks the applications using it, all those thousands or millions of applications have to adapt. If they don’t adapt, they won’t benefit from other changes that they do want: say, a new way to resize that results in smaller image files with better clarity. They will also miss out on fixes for security issues.

There is an ongoing discussion among software developers about stability versus making changes more easily. Some people get really frustrated by how hard it can be to get new versions of their software to the people who want it. For those people, and their users, the cost of stability is too high. They want something that takes less effort and goes faster, and are willing instead to pay the cost of things changing frequently and occasionally breaking.

Other people get really frustrated by everything breaking all the time. For those people, the cost of stability is worth it.

It’s a cost/benefit calculation that everyone needs to do for themselves. There is no one answer that serves everyone equally well. Telling other people that they’re wrong here is the only poor choice.