Summary: I'd like help maintaining vmdb2, my software for creating virtual machine images with Debian installed.
In 2011 I needed to create six similar Debian virtual machines, differing
in Debian release and computer architecture. This was tedious, and so it
needed to be automated. I wrote vmdebootstrap, which worked OK for a few
years, but was not very flexible. It had a fixed sequence of operations
that could only be slightly varied using options. When it worked, it was
fine, but increasingly it didn't work. I was facing an ever-growing set
of options, some of which would be mutually incompatible. With N options,
you need to test N² combinations. That did not appeal.
In 2017 I got tired of the growing complexity and wrote vmdb2, which didn't have a fixed sequence of operations. Instead, it read an input file that listed the operations to do, and their order. This was much more flexible. Combinatorial explosion averted.
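To give a flavor of what that looks like: a vmdb2 input file is YAML listing the steps to run, in order. The sketch below is from memory and only illustrative; check the vmdb2 documentation for the real step names and fields.

steps:
  - mkimg: "{{ output }}"
    size: 4G
  - mklabel: msdos
    device: "{{ output }}"
  - mkpart: primary
    device: "{{ output }}"
    start: 0%
    end: 100%
    tag: root
  - mkfs: ext4
    partition: root
  - mount: root
  - debootstrap: bookworm
    mirror: http://deb.debian.org/debian
    target: root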
I still maintain vmdb2, but for many years now it has been in a "selfish maintainership" mode, where I only really fix or change anything if it affects me, or I have some other such reason to do something. I've done this to protect my free time and my sanity.
Despite this, there are a few people using it, and I think it's time to make sure vmdb2 has a better future.
The problem, from my point of view, with maintaining vmdb2 is that many people use it to build images for systems that are wildly different from what I originally built vmdebootstrap for: Intel architecture virtual machines. Indeed, I do that myself: I built a Debian installer on top of vmdb2 for bare metal PC hardware (https://v-i.liw.fi/).
I am not any kind of deep expert in boot loaders, UEFI, hardware support, or the layers close to these in a Linux operating system. Debugging problems with these is tedious and frustrating, as is reviewing changes related to them.
I also can't spend a ton more time on vmdb2, as I have an embarrassing plethora of other hobby projects.
Therefore, I'd like help maintaining vmdb2. If you use it, or if this area of system software interests you, and you'd like to help, please let me know.
If I can do something to make it easier for you to help, let me know.
My contact information is public. Email is preferred.
I develop CI systems as a hobby and for work. I want to gain
experience in running what I've built, by running a service for
others. I've set up a Radicle CI instance with my Ambient
engine to run CI for open source Rust projects that have a Radicle
repository. See callisto.liw.fi.
The offer:
- My server runs CI for your project for free.
- You get feedback on whether your project builds and its test suite runs successfully.
- If you can and want to, you tell me what you think of Ambient and Radicle CI.
- I find out if my CI system works for other people's projects, and learn about missing features and other problems.
The idea is that you do me a favor and I do you a favor. In the best case we both benefit. In the worst case you waste a small amount of time and effort to try a new system.
I can't promise much, but I intend to keep this running at least until the end of the year.
Some constraints:
- For ideological reasons, this offer is only open to open source projects.
- For technical reasons, your project must be in a Radicle repository and must be a Rust program. Radicle is how Ambient is notified that something has changed and that CI needs to run. Rust is required because Ambient downloads dependencies, and that is so far only implemented for Rust.
- You get pass/fail status and a log for each run.
- You don't get build artifacts. There is no delivery or deployment available. For now, I don't want to provide a service that publishes arbitrary files or that can access other servers. My server contains no secrets and has no access to anywhere else.
Some caveats:
- Ambient is not mature software. It is not polished at all. It's a hobby project. User-visible behavior in Ambient may change without warning. I try to avoid breaking anything, of course.
- When I update software on the server, CI runs in progress may be terminated. Sorry. You can trigger a new run.
- Counter caveat: I've been using Radicle with Ambient as my only CI system for most of this year, so it's probably not entirely useless, maybe, possibly, I hope, but this experiment is to find out.
- The CI server is configured so it will run when the default branch of the Radicle repository changes or when a Radicle "patch" is created or modified. A patch corresponds to a PR or MR.
- CI runs in a virtual machine with no network access. The operating system is Debian 12 (bookworm), on the amd64 CPU architecture, with several Rust versions installed, 2 virtual CPUs, 12 GiB RAM, a few tens of GB of disk space, about 30 GB of cache, and a maximum run time of 20 minutes. If these limits aren't enough, I may be able to accommodate special requests, but I'm trying to have little variation between CI projects for now.
- Rust crate dependencies are downloaded before the VM starts and provided in the VM in /workspace/deps. If you need other dependencies that aren't in the VM, I'm going to say "sorry" for now.
- The lack of network access is part of the security design of Ambient.
- The server and service may go away at any time. This offer is an experiment and if I deem the experiment not worth continuing, I will terminate the service. Possibly without notice.
- I may need to remove data and projects from the server at any time, because hardware resources are limited. This might happen without warning.
- I may need to wipe the server and re-install it from scratch, to recover from bad mistakes on my part. This too may happen without warning.
- The above has a lot of warnings, sorry. I'm trying to manage expectations.
Selection process:
- If you'd like your open source Rust project to use my server, post a message on the fediverse mentioning me (@liw@toot.liw.fi) with a short explanation of your project, and a link to its public Git repository. You can also email me (liw@liw.fi). If the project is already in a Radicle repository, tell me the repository ID. You can create a Radicle repository after I tell you I'd like to select your project, if you prefer to wait.
- I select some number of projects using nebulous and selfish criteria, add your repository to my CI server node, and you can watch https://callisto.liw.fi/ for run information, including run logs. I'm likely to select all projects that seem benign, while the server has spare capacity.
Communication:
- You follow me on the fediverse to get updates, or follow my blog. You can send me a direct message or email, if you prefer, while the experiment is running.
- The Radicle Zulip chat system is also available, if you're willing to create an account. See the #radicle-ci channel there.
Documentation:
- Instructions on callisto
- I name my computers after Finnish heavy metal bands. This server is named after Callisto.
Two of the original ideas behind Unix are that each program should do one thing, and that programs should be combinable so that they consume each other's output. This led to the convention and tradition that Unix command line programs produce output that's relatively easy for other programs to parse.
In practice, this meant that output was line based: one record per line, with the columns on a line separated by white space or other characters that were easy to match on, such as colons. In simple cases this is very easy, and so it's common, but as the world gets more complicated, simple cases are sometimes not enough.
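For instance, parsing the classic colon-separated /etc/passwd format takes only a few lines. A minimal Rust sketch:

use std::fs;

fn main() -> std::io::Result<()> {
    // One record per line, fields separated by colons.
    let passwd = fs::read_to_string("/etc/passwd")?;
    for line in passwd.lines() {
        let fields: Vec<&str> = line.split(':').collect();
        // Field 0 is the user name, field 2 the numeric UID.
        if fields.len() >= 3 {
            println!("user {} has UID {}", fields[0], fields[2]);
        }
    }
    Ok(())
}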
Today, it's a common request that a Unix command line program should optionally format its output in a structured format, such as JSON.
Luckily, this is easy enough to do in most languages. In the Rust language, the powerful serde set of libraries makes this particularly easy.
However, adding JSON output support to an existing program can be tedious. A very common implementation approach is to mix the logic for figuring out what to output with the logic for how to format the output. If there's only one output format, mixing these concerns is often the simplest path forward. In very resource-constrained environments it can be the only way, if there isn't enough memory to store all of the data to be formatted at once.
When multiple output formats need to be supported, and it's possible to store all of the output data in memory at once, I prefer to separate the concerns. First I collect all the data to be output, then I produce output in the desired output format.
As an example in the Rust language:
#![allow(dead_code)]

use serde::Serialize;

fn main() {
    let output = Output {
        name: "example".into(),
        values: vec![
            OutputValue {
                key: "foo".into(),
                value: "bar".into(),
            },
            OutputValue {
                key: "yo".into(),
                value: "yoyo".into(),
            },
        ],
    };

    println!("========================================");
    println!("humane output:");
    println!();
    println!("name: {}", output.name);
    println!("values:");
    for v in output.values.iter() {
        println!("  {}={}", v.key, v.value);
    }

    println!("========================================");
    println!("debug output:");
    println!();
    println!("{output:#?}");

    println!("========================================");
    println!("JSON output:");
    println!();
    println!("{}", serde_json::to_string_pretty(&output).unwrap());
}

#[derive(Debug, Serialize)]
struct Output {
    name: String,
    values: Vec<OutputValue>,
}

#[derive(Debug, Serialize)]
struct OutputValue {
    key: String,
    value: String,
}
This is a very simplistic example, of course, but it shows how the two concerns can be separated.
I've converted a few programs to this style over the years. The hard part is always teasing apart the data collection and the output formatting. It needs to be done carefully to avoid breaking anything that depends on the existing output format.
In any new programs I write, I separate the concerns from the beginning to be kind to my future self.
I've been using the Debian Linux distribution since the mid-1990s. I still use it. Early on, I briefly explored other Linux distributions. The exploration was brief partly because there were so few of them.
My first Linux installation was done by Linus, to develop and test the installation method. He'd never installed Linux, because it had grown on his PC on top of an existing Minix installation. He used my PC to figure out how to install Linux. This was in 1991.
I then tried subsequent boot+root floppy images, by Linus or others, and the MCC Interim Linux distribution, and SLS. Possibly one or two others.
Then in 1993, Debian was announced. I think I tried it first in 1994, and was hooked. Debian was interesting in particular because it was a community project: I could join and help. So I did. I became a Debian developer in 1996.
I've since used Ubuntu (while working for Canonical), and Baserock (while working for Codethink), and I've looked at several others, but I always return to Debian.
I like Debian for several reasons:
- I know it very well.
- I know the community of Debian developers.
- I trust the community of Debian developers.
- I trust the Debian project to follow its Social Contract.
- I trust the Debian project to vet the software freedom of what it packages.
- I trust the Debian project to update its packages for security fixes.
- I trust the Debian project to keep the privacy of its users in mind.
The key word here is trust. Over thirty years, I've built very strong trust in Debian doing the right thing, from my point of view. That's a pretty high bar for any other distribution to clear.
I'm building riki, my partial ikiwiki clone, from the bottom up. My previous two attempts have been more top down. I'm now thinking that, for this project at least, it makes sense to first build the fundamental building blocks that I know I'll need, and do the higher level logic on top of that later.
The first building block is a way to represent page names, and to resolve references from one page to another within a site. I'm trying to mimic what ikiwiki does, since I'm aiming to be compatible with it.
- A page is the unit of a site. A site consists of one or more pages.
  - Note that I consider "blobs" such as images to each be pages, but they're not parsed as wiki text, and are merely copied to the output directory as-is.
- A "page path name" is the file system pathname to the source file of a page, relative to the root of the site source directory.
- A "page name" is the path from the root of the site to a page. It refers to the logical page, not the source file or the generated file.
- A "link" is a path from one logical page to another.
Examples:
- A file index.mdwn (page path name) at the root of the source tree becomes the page / and the output file index.html.
- The file foo/bar.mdwn becomes the page foo/bar and the output file foo/bar/index.html.
- If the file foo/bar.mdwn links to (refers to) the page /bar, it refers to the page bar at the root of the site, which corresponds to the file bar.mdwn in the source tree and bar/index.html in the output tree.
I like the ikiwiki default of using a directory for each page in the output. In other words, the source file foo.mdwn becomes foo/index.html in the output, to represent the page foo.
I'm not fond of the .mdwn suffix for markdown files, which ikiwiki has been using for a very long time. I will make riki support it, but will later also support the .md suffix. Initially, I'll stick with the longer suffix only.
I've implemented a Rust type PageName to represent a page name, and RelativePath to represent the path from one page to another. I like to use Rust types to help me keep track of what's what and to help me avoid silly mistakes. These two types are basically slightly camouflaged strings.
More interestingly, I've also implemented a type Pages that represents the complete set of pages in a site. This exists only to allow me to implement the method resolve:
fn resolve(&self, source: &PageName, link: &str) -> Result<PageName, PageNameError>
This returns the name of the page that the link refers to. ikiwiki has a somewhat intricate set of linking rules, which this method implements. It will be used in many places: everywhere a page refers to another page on the site. Thus, this is truly a fundamental building block that has to be correct.
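To make the linking rules concrete, here is a toy, self-contained sketch of the kind of resolution logic involved. This is only my reading of the rules, using plain strings instead of riki's actual types; the real resolve method handles more cases.

use std::collections::HashSet;

// Toy resolver: a link starting with "/" is resolved from the site
// root; otherwise try the link relative to the source page, then
// relative to each of its parents, all the way up to the root.
fn resolve(pages: &HashSet<String>, source: &str, link: &str) -> Option<String> {
    if let Some(rooted) = link.strip_prefix('/') {
        return pages.get(rooted).cloned();
    }
    let mut dir = source;
    loop {
        let candidate = if dir.is_empty() {
            link.to_string()
        } else {
            format!("{dir}/{link}")
        };
        if pages.contains(&candidate) {
            return Some(candidate);
        }
        match dir.rfind('/') {
            Some(i) => dir = &dir[..i],
            None if !dir.is_empty() => dir = "",
            None => return None,
        }
    }
}

fn main() {
    let pages: HashSet<String> = ["index", "foo/bar", "bar"]
        .iter()
        .map(|s| s.to_string())
        .collect();
    // From page foo/bar, the link "/bar" resolves to the root-level page.
    assert_eq!(resolve(&pages, "foo/bar", "/bar"), Some("bar".into()));
    println!("resolved OK");
}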
The source code module implementing all of the above is in Git, if you want all the dirty details. I expect it to change, but I wanted to at least get the logic for linking rules done, and that was easier with everything in one module.
I'm going to try to re-implement part of ikiwiki. It's my third attempt, so the likelihood of failure is high. I'll blog about this for general amusement. "Software development as a comedic performance", if you will.
The name of the new program is riki: it's part of ikiwiki and it's written in Rust.
Motivation
I've been using ikiwiki for wikis, web sites, and blogs for about two decades. These days I only use it as a static site generator. I like it. I've grown comfortable with it.
I've tried several other static site generators, and I've written one myself, but I've not liked them.
Some of the reasons I like ikiwiki are:
- The power of the inline directive is astounding; it's how an ikiwiki site creates a blog, but I use it also to collect pages by tag, or by location on the site, or by other criteria. This is very flexible and lets me do some things basically no other site generator has enabled me to do.
- Much of the power of inline comes from the PageSpec mini-language for selecting what pages to include.
- In the generated HTML for a site, all internal links are relative to the page where the link is; this means the HTML isn't rooted at a specific base URL, and I can easily move it elsewhere.
- In other ways too, ikiwiki gives me as the site author the power and flexibility to do what I want to do; it is not opinionated. An opinionated tool is great if you share its opinions.
- There is a 1:1 mapping of input files to output files in their respective directory trees; I get to organize things in whatever way makes sense to me.
However, ikiwiki is of course not perfect:
- It can be slow; my biggest site is many thousands of pages, and can take up to about twenty minutes to build from scratch. That doesn't happen often, but it happens often enough that it annoys me.
- On a first build, on a fresh Git checkout, ikiwiki gets the timestamps of pages wrong, unless you run it with the --gettime option, which makes the build process even slower.
- To enable speedy incremental builds you have to manage the .ikiwiki cache carefully; this complicates things, and I keep making mistakes with it.
- Page templates are powerful, but also clumsy, and it takes a lot of work to style ikiwiki output more than a little.
- ikiwiki is too forgiving of errors: for example, if an internal link points to a page that doesn't exist, the site build doesn't fail.
- ikiwiki is written in the Perl language, which I don't know well, and that makes it hard for me to make changes.
Riki
What I'm aiming for with riki:
- Only a static site generator; I have no interest in riki being a wiki or otherwise supporting editing via a web browser.
- Speed; in my initial prototyping on that largest site of mine, a single-threaded Rust program processed it in less than ten seconds. This is fast enough that I don't care about caching or incremental builds.
- Intolerance of errors: if there's any problem in a site (bad link, wrong date format, etc.), fail the site build.
- As much of the output styling as possible can be done with CSS only, and the provided default CSS is acceptable to me.
- Hackability by me.
- ikiwiki compatibility for the sites I have; I'll be happy to review patches for additional features, but I'm unlikely to implement directives that I don't use myself, for example.
I'm happy to consider requests for ikiwiki features I don't use myself. I cannot promise to implement any of them, but I can at least promise to discuss them, and to try to structure what I do implement in such a way that it wouldn't be too invasive for someone else to implement the desired features.
I use Radicle, and the riki repository is already public. Radicle is a bit of a leap for most people, so if there's demand, I can make a mirror on codeberg.org. Until then I'm happy to receive issue reports via email, or patches (something that I can feed to git am) via email. Or any other channel.
(This is a re-post to my own blog of the article we just posted on the Radicle blog.)
In this blog post I show how I use Radicle and its CI support for my own software development. I show how I start a project, add it to Radicle, add CI support for it, and manage patches and issues.
I have been working full time on Radicle CI for a couple of years now. All my personal Git repositories are hosted on Radicle. Radicle CI is the only CI I now use.
There are instructions at the end of this post for installing the software I mention.
These days, I'm not a typical software developer. I usually work in Emacs and the command line instead of an IDE. In this blog post I'll concentrate on the parts of my development process that relate to Radicle, and not my other tooling.
Overview of Radicle
Radicle is a peer-to-peer, distributed, local-first, sovereign collaboration system for software development, built on Git. It's open source and does not use blockchain or other cryptocurrency stuff.
In 2024 I wrote an article for LWN, "Radicle: peer-to-peer collaboration with Git", which is still a reasonable introduction to Radicle. For an overview of Radicle, see that article and the guides on the Radicle web site.
Overview of Radicle CI
The Radicle node process opens a Unix domain socket to which it sends events describing changes in the node. One of these events represents changes to a repository in the node's storage.
Support for CI in Radicle is built around the repository change event.
The Radicle CI broker (cib) listens for the events and matches them against its configuration to decide when to run CI. The node operator gets to decide which repositories they run CI for.
The CI broker does not itself run CI. It invokes a separate program, the "adapter", which is given the event that triggered CI. The adapter either executes the run itself, or uses an external CI system to execute it. This allows Radicle to support a variety of CI systems, by writing a simple adapter for each.
I have written a CI engine for myself, Ambient, and the adapter for that (radicle-ci-ambient), and that is what I use.
There are adapters for running CI locally on the host or in a container, for GitHub Actions, Woodpecker, and several others. See the CI broker README.md and the integration documentation for a more complete list. The adapter interface is intentionally easy to implement: it needs to read one line of JSON and write up to two lines of JSON.
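As an illustration of how small an adapter can be, here is a hedged Rust sketch. The JSON field names below are invented for the example; the real message schema is described in the CI broker documentation.

use std::io::{self, BufRead, Write};

fn main() -> io::Result<()> {
    // Read the single line of JSON from the CI broker that describes
    // the event that triggered this run.
    let mut line = String::new();
    io::stdin().lock().read_line(&mut line)?;

    // A real adapter would parse the event and execute the run itself,
    // or hand it off to an external CI system. Here we pretend it succeeded.

    // Write up to two lines of JSON back to the broker: in this
    // made-up schema, one announcing the run, one reporting the result.
    let mut out = io::stdout().lock();
    writeln!(out, r#"{{"response": "triggered", "run_id": "example-run"}}"#)?;
    writeln!(out, r#"{{"response": "finished", "result": "success"}}"#)?;
    Ok(())
}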
The sample project
This blog post is about Radicle, so I'm going to use a "hello world" program as an example. This avoids getting mired in the details of implementing something useful.
First I create a Git repository with a Rust project. I choose Rust, because I like Rust, but the programming language is irrelevant here.
$ cargo init liw-hello
Creating binary (application) package
... some text removed
$ cd liw-hello
$ git add .
$ git commit -m "chore: cargo init"
[main (root-commit) 5037847] chore: cargo init
3 files changed, 10 insertions(+)
create mode 100644 .gitignore
create mode 100644 Cargo.toml
create mode 100644 src/main.rs
Then I edit the src/main.rs file to have some useful content, including unit tests:
fn main() {
    let greeting = Greeting::default()
        .greeting("hello")
        .whom("world");
    println!("{}", greeting.greet());
}

struct Greeting {
    greeting: String,
    whom: String,
}

impl Default for Greeting {
    fn default() -> Self {
        Self {
            greeting: "howdy".into(),
            whom: "partner".into(),
        }
    }
}

impl Greeting {
    fn greeting(mut self, s: &str) -> Self {
        self.greeting = s.into();
        self
    }

    fn whom(mut self, s: &str) -> Self {
        self.whom = s.into();
        self
    }

    fn greet(&self) -> String {
        format!("{} {}", self.greeting, self.whom)
    }
}

#[cfg(test)]
mod test {
    use super::*;

    #[test]
    fn default() {
        let g = Greeting::default();
        assert!(!g.greeting.is_empty());
        assert!(!g.whom.is_empty());
    }

    #[test]
    fn sets_greeting() {
        let g = Greeting::default().greeting("hi");
        assert_eq!(g.greet(), "hi partner");
    }

    #[test]
    fn sets_whom() {
        let g = Greeting::default().whom("there");
        assert_eq!(g.greet(), "howdy there");
    }
}
To commit that, I actually use Emacs with Magit, but I also often use the command line, which is what I show here.
git commit -am "feat: implement greeting"
Once I have a Git repository with at least one commit, I can create a Radicle repository for that. I do that on the command line. The rad init command asks the user some questions. The answers could be provided via options, which is useful for testing, but not something I usually do when using the program.
$ rad init
Initializing radicle 👾 repository in /home/liw/radicle/liw-hello..
✓ Name liw-hello
✓ Description Sample program for blog post about Radicle and its CI
✓ Default branch main
✓ Visibility public
✓ Repository liw-hello created.
Your Repository ID (RID) is rad:z3dhWQMH8J6nX3Qo97o5oSFMTfgyr.
You can show it any time by running `rad .` from this directory.
➤ Uploaded to z6MksCgjxU4VZt6qgtZntdikhtXFbsfvKRLPzpKtfCY4rAHR, 0 peer(s) remaining..
✓ Repository successfully synced to z6MksCgjxU4VZt6qgtZntdikhtXFbsfvKRLPzpKtfCY4rAHR
✓ Repository successfully synced to 1 node(s).
Your repository has been synced to the network and is now discoverable by peers.
Unfortunately, you were unable to replicate your repository to your preferred seeds.
To push changes, run `git push`.
There you go. I now have a Radicle repository to play with. As of publishing this blog post, the repository is alive on the Radicle network, if you want to look at it or clone it.
CI configuration in the repository
To use Radicle CI with Ambient, I need to create .radicle/ambient.yaml:
plan:
  - action: cargo_clippy
  - action: cargo_test
This tells Ambient to run cargo clippy and cargo test, albeit with additional command line arguments.
This is specific to Ambient, and to the Ambient adapter for Radicle CI, but similar files are needed for every CI system. The Radicle CI broker does not try to hide this variance: it's important that you, as the developer using a specific CI system, get full access to it, even when you use it through Radicle CI. If the CI broker added a layer above that, it would only cause confusion and irritation.
Running CI locally
I find the most frustrating part of using CI to be waiting for a CI run to finish on a server and then trying to deduce from the run log what went wrong. I've alleviated this by writing an extension to rad to run CI locally: rad-ci.
It can produce a huge amount of output, so I've abbreviated that below.
rad supports extensions like git does: if you run rad foo and foo isn't built into rad, then rad will try to run rad-foo instead. rad-ci can thus be invoked as rad ci, which I use in the example below.
$ rad ci
...
RUN: Action CargoClippy
SPAWN: argv=["cargo", "clippy", "--offline", "--locked", "--workspace", "--all-targets", "--no-deps", "--", "--deny", "warnings"]
cwd=/workspace/src (exists? true)
extra_env=[("CARGO_TARGET_DIR", "/workspace/cache"), ("CARGO_HOME", "/workspace/deps"), ("PATH", "/root/.cargo/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin")]
Checking liw-hello v0.1.0 (/workspace/src)
Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.15s
RUN: Action finished OK
RUN: Action CargoTest
SPAWN: argv=["cargo", "test", "--offline", "--locked", "--workspace"]
cwd=/workspace/src (exists? true)
extra_env=[("CARGO_TARGET_DIR", "/workspace/cache"), ("CARGO_HOME", "/workspace/deps"), ("PATH", "/root/.cargo/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin")]
Compiling liw-hello v0.1.0 (/workspace/src)
Finished `test` profile [unoptimized + debuginfo] target(s) in 0.18s
Running unittests src/main.rs (/workspace/cache/debug/deps/liw_hello-9c44d33bbe6cdc80)
running 3 tests
test test::default ... ok
test test::sets_greeting ... ok
test test::sets_whom ... ok
test result: ok. 3 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s
RUN: Action finished OK
RUN: Action TarCreate {
archive: "/dev/vde",
directory: "/workspace/cache",
}
RUN: Action finished OK
RUN: Action TarCreate {
archive: "/dev/vdd",
directory: "/workspace/artifacts",
}
RUN: Action finished OK
ambient-execute-plan ends
EXIT CODE: 0
[2025-07-04T05:48:23Z INFO ambient] ambient ends successfully
Everything went fine.
(I've used the voluminous output to help debug rad-ci, but now that it is stable, I should reduce the volume by default. A cobbler's children may have no shoes, but a programmer's tool has unnecessary debug output.)
I find this ability to emulate what happens in CI on a server to be very useful. To start with, I can use the resources I have locally, on my laptop. I don't need to compete with other people for the shared server. I don't have to wait for the CI server to have time for me. I also don't need to commit changes, which is another little source of friction removed from the edit-CI-debug cycle.
For Ambient I intend to add support so that when it's run locally (as rad-ci does) and there's a failure, the developer can log into the environment and have hands-on access. This will make debugging a failure under CI much easier than pushing changes to add more output to the run log to help figure out what the problem is. But that isn't implemented yet: I only have 86400 seconds per day, most days.
CI configuration on my CI node
I love being able to run CI locally, but it is not sufficient. One important aspect of a shared CI is that everyone uses the same environment, with the same versions of everything. A server can also deliver or deploy changes, as needed.
I've configured a second node, ci0, where I run the CI broker and Ambient for all the public projects I have or participate in. The actual server is a small desktop PC I have, which is quiet and uses fairly little power, especially when idle. The HTML report pages get published on a public server, for the amusement of others.
My CI broker configuration is such that I don't need to change it for every new project. I only need to make sure the repository is on the CI node, and that the repository has a .radicle/ambient.yaml file.
To seed, I run this on the CI node:
rad seed rad:z3dhWQMH8J6nX3Qo97o5oSFMTfgyr
That's the repository ID for my sample project. I run rad . in the working directory to find out what it is. Because finding out the ID is so easy, I never bother to make a note of it when creating a repository.
Reporting an issue
The rad tool can open issues from the command line, but for issue management I've moved to using the desktop application. In the screenshot below I show how I open a new issue for the sample repository, saying the greeting is not the usual "hello world" greeting.
Making a change
To make a change to the project, I make a branch, commit some changes, then create a Radicle patch.
$ git switch -c change
Switched to a new branch 'change'
$ git commit -am "feat: change greeting"
[change d19c898] feat: change greeting
1 file changed, 2 insertions(+), 2 deletions(-)
$ git push rad HEAD:refs/patches
✓ Patch fd552417cc9a66c6aac1b6c8c717996bea741bfd opened
✓ Synced with 11 seed(s)
* [new reference] HEAD -> refs/patches
The last command above pushes the branch to Radicle, via the special rad remote, and instructs the rad Git remote helper to create a Radicle patch instead of a branch. The refs/patches name is special and magic: the git-remote-rad helper program understands it as a request to create a new patch.
This makes a change in the local node, which by default then automatically syncs it with other nodes it's connected to, if they have the same repository. My laptop node is connected to the CI node, so that happens immediately.
As soon as the new patch lands in the CI node, the CI broker triggers a new CI run, which fails. I can go to the web page updated by the CI broker and see what the problem is. The patch diff is:
diff --git a/src/main.rs b/src/main.rs
index a79818f..216bab7 100644
--- a/src/main.rs
+++ b/src/main.rs
@@ -11,8 +11,8 @@ struct Greeting {
impl Default for Greeting {
fn default() -> Self {
Self {
- greeting: "howdy".into(),
- whom: "partner".into(),
+ greeting: "hello".into(),
+ whom: "world".into(),
}
}
}
The problem is that tests assume the original default:
---- test::sets_greeting stdout ----
thread 'test::sets_greeting' panicked at src/main.rs:50:9:
assertion `left == right` failed
left: "hi world"
right: "hi partner"
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
---- test::sets_whom stdout ----
thread 'test::sets_whom' panicked at src/main.rs:56:9:
assertion `left == right` failed
left: "hello there"
right: "howdy there"
failures:
test::sets_greeting
test::sets_whom
test result: FAILED. 1 passed; 2 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s
I change the tests, run the tests locally, run rad ci locally, and commit the fix.
I then push the fix to the patch. The push default for this branch was set to the Radicle patch, which makes pushing easier.
$ git push
✓ Patch fd55241 updated to revision 8d1f8c69dc0f8028d8b1bb9e336240febaf2d1f4
To compare against your previous revision 3180ddd, run:
git range-diff c3f02b43830578c93edd83a23ee2902899fdb159 17cda244d2e78bdeffd0647b20f315726bebf605 2a82eb0326179b60664ffeeac3ee062a5adfdcd6
✓ Synced with 13 seed(s)
https://app.radicle.xyz/nodes/ci0/rad:z3dhWQMH8J6nX3Qo97o5oSFMTfgyr/patches/fd552417cc9a66c6aac1b6c8c717996bea741bfd
To rad://z3dhWQMH8J6nX3Qo97o5oSFMTfgyr/z6MkgEMYod7Hxfy9qCvDv5hYHkZ4ciWmLFgfvm3Wn1b2w2FV
17cda24..2a82eb0 change -> patches/fd552417cc9a66c6aac1b6c8c717996bea741bfd
I wait for CI to run. It is a SUCCESS!
I still need to merge the fix to the main branch. This will also automatically mark the branch as merged for Radicle.
$ rad patch
╭────────────────────────────────────────────────────────────────────────────────────────────────╮
│ ●  ID       Title                                   Author     Reviews  Head     +    -  Updat… │
├────────────────────────────────────────────────────────────────────────────────────────────────┤
│ ●  fd55241  ci: add configuration Radicle + Ambient liw (you)  -        2a82eb0  +14  -4 1 min… │
╰────────────────────────────────────────────────────────────────────────────────────────────────╯
$ git switch main
Switched to branch 'main'
$ git merge change
Updating 54d2c9c..2a82eb0
Fast-forward
Cargo.lock | 7 +++++++
src/main.rs | 8 ++++----
2 files changed, 11 insertions(+), 4 deletions(-)
create mode 100644 Cargo.lock
$ git push
✓ Patch fd552417cc9a66c6aac1b6c8c717996bea741bfd merged
✓ Canonical head updated to 2a82eb0326179b60664ffeeac3ee062a5adfdcd6
✓ Synced with 13 seed(s)
https://app.radicle.xyz/nodes/ci0/rad:z3dhWQMH8J6nX3Qo97o5oSFMTfgyr/tree/2a82eb0326179b60664ffeeac3ee062a5adfdcd6
To rad://z3dhWQMH8J6nX3Qo97o5oSFMTfgyr/z6MkgEMYod7Hxfy9qCvDv5hYHkZ4ciWmLFgfvm3Wn1b2w2FV
c3f02b4..2a82eb0 main -> main
$ rad patch
Nothing to show.
$ delete-merged
Deleted branch change (was 2a82eb0).
(The last command is a little helper script that deletes any local branches that have been merged into the default branch. I don't like to have a lot of merged branches around to confuse me.)
I could have avoided this round trip via the server by running rad ci, or at least cargo test, before creating the patch, but I was confident that I couldn't make a mistake in an example this simple. This is why CI is needed: to keep in check the hubris of someone who has been programming for decades.
Installing
To install Radicle itself, the official instructions will get you rad and radicle-node. The Radicle desktop application has its own installation instructions.
There are instructions for installing Radicle CI (for Debian), but not other systems, since I only use Debian. I would very much appreciate help with expanding that documentation.
It's probably easiest to install rad-ci from source code or with cargo install, but I have a deb package for those using Debian or derivatives in my APT repository.
Conclusion
I've used CI systems since 2010, starting with Jenkins, just after it got renamed from Hudson. I've written about four CI engines myself, depending on how you count rewrites. With Radicle and Ambient I am finally getting to a development experience where CI is not actively irritating, even if it is not yet fun.
A CI system that's a joy to use, that sounds like a fantasy. What would it even be like? What would make using a CI system joyful to you?
Here's a short description of how I create command line programs in the Rust language. This post is something I can point people at when they ask questions. It will also inevitably provoke people to tell me of better ways.
I write command line programs often, and these days mostly in Rust. By often I mean at least one per week. They're usually throwaway experiments: I usually start with cargo init /tmp/foo, and only if it seems viable do I move it to my home directory. Too often they turn into long-lived projects that I have to maintain.
The most important thing in my tool box for this is the clap crate, and its derive feature.
cargo add clap --features derive
This allows me to define the command line syntax using Rust type declarations. The following struct defines an optional string argument, with a default value, and an option -f or --filename that takes a filename value. The code below is all I need to write.
use clap::Parser;
use std::path::PathBuf;

#[derive(Parser)]
struct Args {
    #[clap(default_value = "world")]
    whom: String,

    #[clap(short, long)]
    filename: Option<PathBuf>,
}
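Using the parsed arguments is then a one-liner in main. A minimal sketch:

fn main() {
    // clap generates the parser, --help, and error messages for us.
    let args = Args::parse();
    println!("hello, {}", args.whom);
    if let Some(filename) = &args.filename {
        println!("would use file: {}", filename.display());
    }
}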
clap also supports subcommands: for example, my command line password manager supports commands like the following:
sopass value list
sopass value show
sopass value add foo bar
I happen to find multiple levels of subcommands natural, even if they are a recent evolution in Unix command line conventions. clap allows them, but of course doesn't require subcommands at all, never mind multiple levels.
To implement subcommands, I define an enum with one variant per subcommand, where each variant contains a type that implements the subcommand.
use clap::{Parser, Subcommand};

#[derive(Parser)]
struct Args {
    #[clap(subcommand)]
    cmd: Cmd,
}

#[derive(Subcommand)]
enum Cmd {
    Greet(GreetCmd),
    ...
}

#[derive(clap::Args)]
struct GreetCmd {
    #[clap(default_value = "world")]
    whom: String,
}

impl GreetCmd {
    fn run(&self) -> Result<(), anyhow::Error> {
        println!("hello, {}", self.whom);
        Ok(())
    }
}
The main program then uses these:
let args = Args::parse();
match &args.cmd {
    Cmd::Greet(x) => x.run()?,
    ...
}
When I want to have multiple levels of subcommands, I define a trait for the lowest level, or leaf command:
pub trait Leaf {
    type Error;

    fn run(&self, config: &Config) -> Result<(), Self::Error>;
}
I implement that trait for every leaf command struct.
I define all the non-leaf commands in the main module, so they're conveniently in one place. Each non-leaf command needs to match on its subcommand type and call the run method for the value contained in each variant, like I did above; there's a sketch of this below.
This results in a bit of repetitive code, but it's not too bad. It's certainly not bad enough that I've ever wanted to either generate code in build.rs or define a macro.
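For illustration, here is a hedged sketch of what one such non-leaf command could look like. The names ValueCmd, ListCmd, and ShowCmd are made up, modeled on the sopass example above; this shows the shape of the code, not the actual sopass source.

#[derive(clap::Args)]
struct ValueCmd {
    #[clap(subcommand)]
    cmd: ValueSubcmd,
}

#[derive(clap::Subcommand)]
enum ValueSubcmd {
    List(ListCmd),
    Show(ShowCmd),
}

impl ValueCmd {
    // The non-leaf command only dispatches to its leaf commands, each
    // of which implements the Leaf trait; this assumes their Error
    // type converts into anyhow::Error.
    fn run(&self, config: &Config) -> Result<(), anyhow::Error> {
        match &self.cmd {
            ValueSubcmd::List(x) => x.run(config)?,
            ValueSubcmd::Show(x) => x.run(config)?,
        }
        Ok(())
    }
}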
This is what I do. I find it reasonably convenient, despite being a little repetitive. I'm sure there are other approaches that suit other people better.
In the hope that it helps anyone wondering which dock to get for their Framework laptop: I have a 13-inch AMD Framework laptop, bought in 2024. It has two full USB4 ports (the rear expansion module bays). It is not advertised as Thunderbolt, but it seems to work. I bought an HP Thunderbolt Dock G4, and it works fine. I have two 4K monitors at 60 Hz connected to the dock, and both seem to work just fine. A USB keyboard and a USB drive also work fine. LVFS updated the firmware fine, too.
When I wrote Why is Debian the way it is?, a year and a half ago, I was asked to also cover why Debian changes the software it packages. Here's a brief list of examples of why that happens:
- Software in Debian needs to follow certain policies, as set by Debian over the years and documented in the Debian Policy Manual. These are mostly mundane things, like system-wide configuration being in /etc, documentation in /usr/share/doc, and so on. Some of this is more intricate, like when names of executables can be the same in different packages.
- Programs included in Debian need to work together in other ways. This might require changing one or both. As an example, they might need to agree on where a Unix domain socket exists, or what Unix user account they should run under.
- Debian will remove code that "calls home" or tries to update software in a way that bypasses the Debian packaging system. This is done both for privacy reasons, and because updating software without going via the packaging system is usually problematic from a functional point of view, and always problematic from a security point of view.
- Debian may fix bugs before they're fixed upstream, or may backport a bug fix to an earlier version. The goal here is to make life better for users of Debian. Debian does this especially for fixes to security problems, but also for other problems.
- Debian avoids including anything in the main part of its package archive that it can't legally distribute. This applies to the source packages. This means Debian may strip out those parts of software that it doesn't think are free according to the Debian Free Software Guidelines. The stripped-out parts might be moved to another package in the "non-free" part of Debian. An example might be a manual that is licensed under the GNU Free Documentation License with immutable parts, or a logo that can't be changed.
- Debian has often added a manual page when the upstream doesn't provide one.
Thank you to Jonathan McDowell for help with this list. Opinions and mistakes are mine. Mine, I say!