Goal
Last time I tried, but failed, to implement the `shortcut` directive. I failed
because I was trying to change too many things at once. One of the problems
was that riki currently assumes parameters to directives are always named.
That is not actually true for all directives, `shortcut` included.
My goal today is to change riki so that parameters can be named without a
value, valued without a name, or be name/value pairs.
I'll have succeeded if riki can handle all the following forms, as far as
parsing is concerned. I don't care about actually implementing the directives
yet.
- `[[!foo bar]]`: positional parameter, could be a name or a value
- `[[!foo yo=yoyo]]`: name/value parameter
The usual unquoted, quoted, or triple quoted variants for positional parameters or values are expected. Parsing the variants is already implemented.
Plan
- Write down a detailed prose specification of how parameters should work. Add it to `doc/directives.md`.
- Start implementing the specification, first in the wikitext parser.
Notes
Specification
- At wikitext parsing time, name/value pairs are easy to recognize unambiguously.
- A quoted value without a name can also be distinguished.
- However, if a parameter is only a name or an unquoted value, the parser doesn't know which it is. The directive implementation needs to decide that.
- There are thus three cases:
- a name/value pair
- a quoted value
- a name or unquoted value
- Wrote that up in `doc/directives.md` in informal prose.
Wikitext parser
- The current `struct Parameter` type assumes each parameter has a name. It does not allow for positional parameters. This is the cause of all the pain in this area.
- I will change that to be an `enum` with variants for name/value pairs and positional parameters. The wikitext parser needs to be changed too, to allow quoted or triple quoted values without names as parameters. And the directive module needs to be changed for the new `enum`.
- This is too many changes at once. I'll do the minimum, which is to change the parameter type into an `enum`, but not the parser. I'll gut the directive module's parameter handling, for now. I'll need to change all the directive stuff anyway for the parameter handling changes.
- That made things easier.
- Adding parsing of value-only parameters. Easy enough, but it reminded me that I still don't have a good way to debug `winnow` parsers, except adding prints in opportune places.
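The three parameter cases above could be represented by an `enum` roughly like the following. This is a minimal sketch for illustration: the identifiers and the `classify` helper are my assumptions, not riki's actual code.

```rust
// Sketch of the three parameter cases described above. The names are
// illustrative assumptions, not riki's actual identifiers.
#[derive(Debug, PartialEq)]
pub enum Parameter {
    /// A name/value pair, unambiguous at parse time: [[!foo yo=yoyo]]
    NameValue(String, String),
    /// A quoted (or triple quoted) value without a name.
    QuotedValue(String),
    /// A bare word: a name or an unquoted value. Only the directive
    /// implementation can decide which it is.
    NameOrValue(String),
}

/// Classify one raw parameter token the way the specification describes.
pub fn classify(token: &str) -> Parameter {
    if let Some((name, value)) = token.split_once('=') {
        Parameter::NameValue(name.to_string(), value.trim_matches('"').to_string())
    } else if token.len() >= 2 && token.starts_with('"') && token.ends_with('"') {
        Parameter::QuotedValue(token[1..token.len() - 1].to_string())
    } else {
        Parameter::NameOrValue(token.to_string())
    }
}
```

The point of the `enum` is that the wikitext parser never has to guess: it records exactly what it saw, and the ambiguous bare-word case is passed on to the directive to resolve.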
Directives
- Most directives only have named parameters. They might not have values.
- Others will have only positional parameters, such as `tag`.
- For full generality, a directive needs to be able to iterate over the parameters, regardless of kind.
- but I'll add this only once it's needed
- The directive implementations need to easily check errors in parameters.
- I may later add a declarative way for directives to specify what parameters they support, but for now I'll stick with repetitive code.
- I may want to later distinguish between unnamed quoted and unquoted values. For now, the quoted value is treated as if it were a name.
Summary
Making one change at a time sure does make things easier. Merged.
This month in Radicle CI, 2026-01
This is a monthly newsletter about the current state of Radicle CI, what has happened recently, and near future plans.
Current status
Radicle CI is in production use. There are several CI nodes, and Lars runs a public one for open source Rust projects at https://callisto.liw.fi/.
After the Ambient release in December, the Ambient adapter for Radicle CI needs a release to allow the new actions to work. CI plans that don't use the new actions work fine.
Links
- Radicle home page
- Radicle CI home page, unofficial
- Radicle CI broker repository
- Radicle CI Ambient adapter repository
- Radicle CI integrations documentation
Office hours
Lars holds a weekly office hour for Radicle CI, in alternating weeks at 08:00 or 17:00 UTC, at https://meet.ffmuc.net/radicleciofficehour (it's Jitsi and you only need a web browser and a mic). Each office hour is announced on `#radicle-ci` on the Radicle Zulip, and on Lars' fediverse account. The office hour is a place where anyone can show up and ask questions about Radicle CI or Radicle or anything related to them. Sometimes Lars or someone else will have a demo or a short talk.

The next ones are January 23 at 17:00 UTC, and January 30 at 08:00 UTC.
The past month
Due to the end of year holidays, fairly little has happened in Radicle CI development over the past month.
Radicle CI broker version 0.24.0 on 2025-12-09
From NEWS.md:
The way `cib` looks up job COBs for specific commits has been optimized to use a cache. This speeds things up a lot in some circumstances: in one benchmark, creating 100 new runs for a commit went down from 600 seconds to 40.

The log messages for a CI run, at the INFO level, are now easier to follow without being familiar with the code base. Especially, the decision on whether an event filter should trigger CI to run is logged as one message and contains the whole decision tree as JSON.

When it loads the configuration file, `cib` now checks that a default adapter is set if the `filters` field is used. Previously, a missing default adapter was only noticed later, when starting a CI run. The new behavior exposes problems much earlier.
Ambient CI version 0.11.0 on 2025-12-20
From NEWS.md:
Breaking changes
The Rust Cargo related actions now use the `cargo-target` subdirectory in the workspace cache directory (`/ci/cache/cargo-target`). This is invisible to all common uses of Rust in Ambient projects, but allows dividing the cache between different kinds of use in the future. This does mean that existing caches become obsolete and should be deleted. That makes this technically a breaking change.

Ambient now checks for common problems when it loads the projects file. This is known as "the linter". Currently it checks that an `rsync` target has been configured if an `rsync` or `rsync2` action is used, that each file downloaded by a `http_get` action has a unique filename, and that all shell script snippets in `shell` actions are OK according to the `shellcheck` program.

Linting can be prevented by setting `lint` to `false` in the configuration. This may be necessary if, say, one of the checks is wrong. This is a breaking change because most shell script snippets will be found wanting.

The pre- and post-plan now only allow actions that are actually meant for them. Previously, both allowed the same set of actions. However, actions like `cargo_fetch` don't really make sense for the post-plan. Now the separation is stricter. This is technically a breaking change, but hopefully doesn't actually break anything for anyone. If you have a legitimate use for a pre- or post-plan action that is now not allowed, let us know.
Problems fixed
Some portability fixes for NixOS, by invoking the Bash shell by name instead of by path. NixOS does not put Bash at `/bin/bash`, so using a full path doesn't work reliably. Using the name should work anywhere.

Ambient now checks, when loading the projects file, that the source location for a project is a directory (following symlinks) and gives an error if it is not. This means the problem is found when Ambient starts, and not much later when it starts running CI for a specific project. If there are many projects, that might be hours later.
New features added
The workspace in the VM is now `/ci`. The old name `/workspace` will work indefinitely. The new name is shorter and arguably clearer. The workspace is set up by `ambient-execute-plan`, and so this change does not affect any VM images.

There is now a user guide for Ambient, published by CI at https://doc.liw.fi/ambient-ci/userguide.html and included in the `deb` package at `/usr/share/doc/ambient-ci/userguide.html`. The user guide contains a description of each action that a project CI plan can use. The guide is woefully lacking, but it's easier to add things to something that exists than to start from an empty directory.

The Ambient subplot document is formatted and published at https://doc.liw.fi/ambient-ci/ambient.html. It may be useful for checking how a specific aspect of Ambient is used. The subplot is the test suite that verifies most aspects. That means it's continually run and does not easily get out of date.

The new `setenv` action allows setting environment variables for later actions. Using the `shell` action does not work for this, because each shell forgets any changes to its environment variables when it terminates.
- action: setenv
set:
foo: bar
The new plan action `deb2` and post-plan actions `dput2` and `rsync2` use the subdirectories `debian` and `rsync` in the artifacts directory. This means that if a project builds both documentation and a `deb` package, they don't get mixed into the same directory. Instead, documentation goes into `/ci/artifacts/rsync` and the package into `/ci/artifacts/debian`.

The old actions `deb`, `dput`, and `rsync` continue to work as before and use the whole artifacts directory. The new actions were added to avoid changing the existing actions in an incompatible, breaking way. The old actions are not deprecated. The runnable plan versions of the old actions have changed: the plan and post-plan actions result in the same runnable plan action. Changes to runnable actions are not currently considered breaking changes in Ambient.

In the VM, the `git` command is now configured by default to have "Ambient CI" as the user name and "ambient@example.com" as the email address. This removes the need for each project to do that in their CI plan just to use `git`.

The new subcommand `ambient state` lists the projects in the Ambient state directory (configuration field `state`) and the sizes of the files and subdirectories they contain. The output looks like this:
{
  "projects": {
    "dummy": {
      "latest_commit": "09d6a5d81a5001bf210df2bf80e871e3731f6e9f",
      "run_log": 21370,
      "dependencies": 472923464,
      "cache": 2074946410,
      "artifacts": 4096
    },
  },
  "project_count": 6
}
The `ambient qemu` subcommand has been added to execute a runnable plan in a virtual machine, with or without networking. This is primarily a utility command to help develop Ambient by making it easier to experiment.

The configuration file now allows enabling UEFI use for an image. The `run` and `qemu` subcommands additionally have a `--uefi` option for that.

The `ambient qemu --persist` option allows creating a variant of an image. This can be used, for example, to change a generic cloud image from Debian or Arch Linux to boot fast even if the VM has no network access. Together with the optional UEFI support, this paves the way for using generic images instead of custom images for Ambient. That, in turn, should enable Ambient users to run CI under other operating systems in the VM. However, Ambient needs further changes to make this convenient.

Ambient now gives an error message if a virtual drive is too big. The virtual drives are created before the virtual machine starts. Previously, there was no helpful error message, only an "assert" error that only makes sense to Ambient developers.

The exported parts of the Ambient library now all have documentation. This makes the library usable from other programs, but more importantly, makes it harder for Lars to forget what a type or method is for. Many typo fixes and other changes were made to exported names.

Note that Ambient is probably not very useful to use as a library. If you use it that way, or would like to, please be in touch and let us know so we can try to avoid breaking it for you.
Ambient CI version 0.11.1 on 2026-01-14
From NEWS.md:
Bug fix:
- Always pass the UEFI OVMF firmware file to QEMU, not just when UEFI support is on. The change for optional UEFI support in 0.11.0 broke some uses of Ambient. This change restores functionality, at least for my own uses.
Future plans
Lars is still getting back up to speed with Radicle CI work and has no
concrete plans as yet. Two clear needs are making it easier to set up Radicle
CI, both on a server and for local use (see rad-ci), and automating the
release process.
New releases of the CI broker, Ambient adapter, and native CI adapter are pending and will happen soon.
The overall goal is to make CI a joy to use with Radicle.
Notes
If you have an open source Rust project and want to try Radicle and Ambient on
callisto, see https://callisto.liw.fi/callisto/ for instructions.
Quotes
No quotes yet, but please suggest something for the next issue.
Goal
My goal for today is to implement the `shortcut` directive from ikiwiki in
riki. The plugin lets the page `shortcuts` on a site define shortcuts that
can then be used on any page. A shortcut is defined like this:

[[!shortcut wikipedia https://en.wikipedia.org/wiki/%W]]

and used like this:

[[!wikipedia War_of_1812]]
The `shortcut` directive defines a new directive, which can be invoked as if
it were a normal directive. The first nameless parameter is the name of the
new directive, and the second is the URL pattern.

In the URL pattern, `%s` is replaced by the nameless argument to the
invocation of the directive added as the shortcut, with URL encoding. `%S`
(upper case) is the same, but without encoding. `%W` is encoded in a way
suitable for Wikipedia. The `desc` parameter in the definition or invocation
sets a description for the link, i.e., the link text.
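As a rough sketch, the pattern expansion could look like this. This only covers `%s` and `%S`; the Wikipedia-specific `%W` encoding is omitted, and the function names are my own, not riki's or ikiwiki's.

```rust
/// Percent-encode everything except RFC 3986 unreserved characters.
fn percent_encode(s: &str) -> String {
    s.bytes()
        .map(|b| match b {
            b'A'..=b'Z' | b'a'..=b'z' | b'0'..=b'9' | b'-' | b'_' | b'.' | b'~' => {
                (b as char).to_string()
            }
            _ => format!("%{:02X}", b),
        })
        .collect()
}

/// Expand a shortcut URL pattern: %s is the argument URL-encoded,
/// %S is the argument as-is. (%W is not handled in this sketch.)
fn expand(pattern: &str, arg: &str) -> String {
    pattern
        .replace("%s", &percent_encode(arg))
        .replace("%S", arg)
}
```

For example, `expand("https://en.wikipedia.org/wiki/%s", "War of 1812")` yields `https://en.wikipedia.org/wiki/War%20of%201812`.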
Plan
I don't quite like the way ikiwiki implements shortcuts, so my plan is to
be compatible, but different.
- The `shortcut` directive can be used anywhere on the site, not just on the page called `shortcuts`.
- It's an error to define a shortcut with the name of a directive, or of another shortcut.
- I don't want a dynamic set of directives. I think ikiwiki does things that way because it was easy to do in Perl. I'll implement something different.
- riki does not currently support nameless parameters. I will add those, but I'll tackle the shortcut directive first. While I do that, I'll require named parameters. Once I add support for nameless parameters, I'll change the riki directive to define shortcuts.
Notes
Add phases for directives
- Some directives depend on other directives having been executed first. The `shortcut` directive needs to be executed before any shortcut is processed. The `inline` directive requires other pages to have been processed into HTML first.
- I can deal with this by adding explicit dependencies between directives, and the pages using them, but that seems like a lot of work and prone to bugs.
- Instead, I'll add the concept of "phases" to the execution of directives. Each directive execution will be given the "phase number" currently executing. After phase 0 (the first phase), any invocation of a directive must be either a known builtin directive, or a known shortcut for the site.
- Each directive will return a value indicating it has been executed in a phase, and does not need to be executed again.
- The site build process will iterate over invocations of directives, executing them until all directives have been executed.
- Actually, that seems unnecessarily tricky. New plan: each directive can define three methods, `execute_0`, `execute_1`, and `execute_2`. The site build process will loop over directive invocations three times, calling a successive execution method in each loop. Further, all invocations on all pages are executed in each phase.
- Adding that was easy and quick.
- Next I'll implement a datatype for holding known shortcuts. I'll add a field to the type that represents the site to use this. The `shortcut` directive will be able to add to that.
- I don't know what the Wikipedia encoding is, so I'll skimp on that. In fact, I'll only implement `%s`, as that's the only thing I use. I'll get back to this later.
- All directives will need to get a mutable reference to the site.
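The three-phase scheme above can be sketched like this. The `Directive` trait, the driver function, and the toy `Counter` directive are all illustrative assumptions; only the `execute_0`..`execute_2` method names come from the notes.

```rust
// Minimal sketch of three-phase directive execution: each directive has
// one method per phase, with do-nothing defaults, and the build process
// loops over all invocations once per phase.
trait Directive {
    fn execute_0(&mut self) {}
    fn execute_1(&mut self) {}
    fn execute_2(&mut self) {}
}

fn run_phases(directives: &mut [&mut dyn Directive]) {
    for d in directives.iter_mut() { d.execute_0(); }
    for d in directives.iter_mut() { d.execute_1(); }
    for d in directives.iter_mut() { d.execute_2(); }
}

// A toy directive that counts how many phases it has been run in.
struct Counter { phases_run: u32 }

impl Directive for Counter {
    fn execute_0(&mut self) { self.phases_run += 1; }
    fn execute_1(&mut self) { self.phases_run += 1; }
    fn execute_2(&mut self) { self.phases_run += 1; }
}

fn demo_phase_count() -> u32 {
    let mut c = Counter { phases_run: 0 };
    let mut ds: [&mut dyn Directive; 1] = [&mut c];
    run_phases(&mut ds);
    c.phases_run
}
```

The appeal of this over explicit dependency tracking is that the scheduling is trivially predictable: every directive is visited in every phase, and a directive that has nothing to do in a phase simply leaves the default no-op method in place.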
Ran out of time implementing this today, will continue tomorrow.
Next morning
- I've coded myself into a corner. I want directives to be able to modify the site, but also the pages, and this results in more than one mutable reference to the site.
- I'll put the shortcut set in an `Arc` box, or something.
- But not this early in the morning.
- I'm juggling too many changes at the same time. That might be a signal that I should start over and do one thing at a time.
Summary
Things did not end well. Mistakes were made. I will try again another day.
Goal
I had code and tests for internal linking. I rewrote the code to be in a `Site`
type, but lost the tests, as they were hard to add. Now I don't know if my
code works. Actually, I know there's a problem.
The goal for today is to add tests for internal linking, following the
specification in doc/linking.md.
Plan
- I have helper functions to produce candidate pages for resolving a link on a page. Add unit tests for these. Each function is easy to test in isolation.
- Add unit tests for the `Site::resolve` method to test the order in which candidate target pages are tried.
Notes
- To make it easier to set up a `Site` for testing, add a `Site::fake_page` method, only usable in tests. Also add a testing-only `Site::empty` constructor.
- Sibling test: easy.
- Direct subpage: tricky. Apparently the code I have is broken. It's the code I rewrote from what I thought was working code, but I'm not convinced that actually worked either. This is why tests are important.
- A tricky bit: the origin page is, for example, `/src/foo.mdwn`, which means its internal name is `/foo`. There is also the target page `/src/foo/bar.mdwn`, with internal name `/foo/bar`. A link `bar` on the origin page should resolve to the target page. I can join `bar` (the link) with the origin internal name (`/foo`) to get `/foo/bar`. That's not a full page name, though. It lacks the fully qualified path to the source file: I can get the source directory from the `Site`, but not the suffix (`.mdwn`).
- Maybe the internal name of the target would be enough? I can have `Site::resolve` do a lookup with that. The hash map of pages uses internal names for lookup, anyway.
- Now that I think about it, the helper functions that construct possible targets for links should return a `PathBuf` representing the internal name, not a full `PageName`. The full names are trickier to construct in all cases. Ideally it'd be a dedicated type, but I'll start with the generic `PathBuf`.
- Actually, an optional `PathBuf`: sometimes it's not possible to construct the candidate.
- I'll throw away all the code and start over.
- I do, however, want to have my own type for the set of pages, instead of a raw `HashMap`, since I have to change every place it's used anyway. Future me will thank me, and so will present me.
- That was a lot of furious rewriting and refactoring, but it works now.
- Merged.
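The candidate helpers described above might look roughly like this. This is a sketch under my own assumptions: the function names are hypothetical, and the candidates are internal names as optional `PathBuf`s, as the notes suggest.

```rust
use std::path::PathBuf;

/// Candidate: the link as a direct subpage of the origin page.
/// Given origin internal name "/foo" and link "bar", the candidate
/// internal name is "/foo/bar". An absolute link is not a subpage
/// candidate, so return None for it.
fn subpage_candidate(origin: &str, link: &str) -> Option<PathBuf> {
    if link.starts_with('/') {
        return None;
    }
    Some(PathBuf::from(origin).join(link))
}

/// Candidate: the link as a sibling of the origin page, i.e. joined
/// to the origin's parent. The root has no parent, so no candidate.
fn sibling_candidate(origin: &str, link: &str) -> Option<PathBuf> {
    let parent = PathBuf::from(origin).parent()?.to_path_buf();
    Some(parent.join(link))
}
```

Returning `Option` keeps each helper trivially unit-testable in isolation, which is exactly the property the plan asks for, and a `resolve` method can then just try the candidates in order against the page map.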
I'm making progress with riki, although very slowly. It's a program I'd really like to have, but it's so far down my priorities that I don't work on it often. But I'm currently on holiday and have more degrees of freedom. Here are notes from today's development session.
Goal
My goal today is to make riki be able to render a simple site. I want this
so that I can start trying riki on my various ikiwiki sites.
riki can already render a single page. Today I want to make a command like
this work:
riki build src dest
This will read the directory src, load all files, process every .mdwn file
as Markdown with wikitext and produce HTML for that, and write the output to
dest. All other files get copied verbatim.
This is only about processing a directory tree. It's OK if links within
the site do not work in the output. It's OK to not implement any additional
directives or other functionality. Once I can try to render sites, I can run
riki on my various sites, and other chosen test sites, and find out what I
need to fix.
Plan
- Create a minimal test site to verify `riki build` works even for the easiest case.
- Implement the scaffolding for the `riki build` command.
- Implement scanning the source tree for files.
- Implement processing of `.mdwn` files into HTML.
- Implement writing out of output files.
Notes
Test site
- For the test site I want about two files. One is not enough. More than two seems unnecessary.
- `index.mdwn` and `other.mdwn` will both contain a `meta title` and some text, to make it easy to verify the page contains the right stuff.
Scan source folder
- I'll use the `walkdir` crate, as it's good and I'm familiar with it.
- I'll ignore everything that isn't a regular file. Directories will be created from source file paths, and symlinks are dangerous. I might add support for symlinks some day, if there's a request, but for now I'm ignoring them.
- I'll only process `.mdwn` files as wikitext; everything else is a blob.
- I'm reading blobs into memory. I may later only remember the pathname and copy files without keeping them in memory, but for now this is easier.
Process .mdwn files
- This is basically what the `render-page` command I implemented earlier does. A bit of copy-pasta, and done! I may later refactor this to avoid code duplication, but it's literally two lines, so I'm not too bothered today.
Write out files
For any input file `foo/bar/yo.mdwn` under the source directory, write `foo/bar/yo/index.html` in the output directory, creating the output directory and subdirectories as needed. But `index.mdwn` in a directory should result in `index.html` in the corresponding output directory.

Hmm, my plan was to hack up something quickly and then tidy it up. But this turns out to be ever so slightly more complicated than I thought. This is my third time trying to implement riki, and I've run into this kind of surprise complication before. But I'll push through. I should take notes of what the complexities are. Right now it's keeping track of page content (blob vs HTML), and the filename. I want to be consistent, and avoid possibilities for coding mistakes, when I can.

OK, I have something that works with my silly little test site. Did a little refactoring to simplify the code. Figuring out the output file name is a little tricky, but not too bad. A prime target for unit tests, too.
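The output-name rule above can be sketched as a small, easily unit-tested function. This is a hypothetical helper for illustration, not riki's actual code.

```rust
use std::path::{Path, PathBuf};

/// Map a source file path to its output file path: foo/bar/yo.mdwn
/// becomes foo/bar/yo/index.html, except that index.mdwn becomes
/// index.html in the corresponding output directory.
fn output_path(src: &Path) -> PathBuf {
    let dir = src.parent().unwrap_or(Path::new(""));
    match src.file_stem() {
        Some(stem) if stem != "index" => dir.join(stem).join("index.html"),
        _ => dir.join("index.html"),
    }
}
```

For example, `output_path(Path::new("foo/bar/yo.mdwn"))` gives `foo/bar/yo/index.html`, while `output_path(Path::new("foo/index.mdwn"))` gives `foo/index.html`.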
Trying this on my sites
- Some of my very simplest sites work.
- Others have problems:
  - failing to handle some Markdown combinations
  - missing directives: `inline`, `map`
- I've noted those in issues.
End
- I've merged the changes.
sartorial (comparative more sartorial, superlative most sartorial)
- (not comparable) Of or relating to the tailoring of clothing. Synonym: vestiary
- Of or relating to the quality of dress. "In his smart suit Jacob was by far the most sartorial of our party."
- (anatomy) Of or relating to the sartorius muscle.
(From Wiktionary)
Three years ago over the Christmas holidays I embarked on a sartorial adventure. It started with the realization that I found most of my clothes uncomfortable. For the longest time I'd been wearing cargo trousers and T-shirts, and they never fit me well, and for whatever reason I started thinking that maybe I didn't have to live with discomfort.
I also have a troubled relationship with my body. I don't like the way I look. I have self-confidence issues related to this. I don't like seeing myself. After I started thinking about changing what I wear, I realized that I could tackle those issues at the same time.
I wasn't raised to care about clothes or appearance, much. Clean clothes without many holes in them was the goal I took away into adulthood. It's what I've done most of my life.
After some online research, and a brief infatuation with the "corporate Goth" look, I decided that, given my overall tendencies, classic European men's style is the one I tend to like and would like to strive for. That means business suits and similar clothing from the past century.
I've taken it slow. I don't want to make sudden big changes and then regret them. I wanted to take my time to learn, from experience, what I actually like. I also need to learn the things about clothing and wearing it that boys growing up in the first half of the 20th century learned.
There is luckily a lot of good material about this online. Here is a condensed, highly biased summary of what I've learned so far.
Comfortable clothing means clothing that fits your body well and is made mostly of natural materials. Fit matters: clothes shouldn't be so tight that they squeeze, or so large that they snag or get in the way when you move. Materials matter for temperature control: artificial materials tend to make you sweat or fail to keep you warm, whereas natural ones keep you cool or warm as appropriate. I like wool, especially.
Belts are really uncomfortable, but suspenders (or braces) are much better. I have a large stomach, which makes belts especially bad, but for most people belts are at best mediocre at keeping trousers up. A pair of suspenders keeps trousers at the right height more securely.
If I keep nothing else from this adventure, it's suspenders.
I gain much confidence from wearing clothes I have carefully chosen for myself, rather than what everyone else is wearing. It doesn't even matter much if others like how I look, although it's nice when I am complimented. In fact, I've been complimented on my looks more these past three years than the preceding fifty. I felt the confidence boost before the compliments, though.
Taking care of clothes and shoes requires effort and knowledge: laundry, ironing, patching, etc. The benefit of artificial materials is that they tend to require less effort. I find the maintenance effort is worth it, but also that I need to burn some willpower to start. I am a lazy slob.
Given the shape of my body, I've ended up having most of my new clothes made to measure for me. Mass produced clothes rarely fit me well, and I now want something that does.
On terminology: clothing that's ready to wear when it's bought is called "off-the-rack" or "ready-to-wear". It often benefits from some alterations and adjustments, because there's more body shape variation than mass producers are willing to cater for.
Custom-made clothing is either "made-to-measure" or "tailor-made". For made to measure, the clothes are made by adjusting a generic pattern to the measurements of your body. Tailors create a pattern just for you. The generic pattern can be varied, but within limits. A completely new pattern has no limits.
On cost: the upfront cost of custom clothing is higher than for mass produced clothes, but the quality is so much higher that, over time, the cost is lower. Of course, you have to be able to afford the upfront cost. For shirts, trousers, and jackets, I've found made to measure ones are about two to three times the cost of off the rack ones.

Tailored clothing is several times more expensive still, and I've not tried that option.
Some garments are OK mass produced. For example, shoes, gloves, hats, overcoats. For these, either there's less variation or fit doesn't need to be as exact.
Colors turn out to be important. A monochromatic look can work, but more colors tends to be more visually interesting. I'm still learning and experimenting with this.
I don't wear a suit at all times. If I'm alone at home, I tend to go for sweat pants and T or polo shirts. This is partly because they're comfortable, and partly to save wear and tear on more expensive garments.
I've come to realize that I detest the fashion industry. I don't detest fashion as such: it's perfectly fine that there are changing trends in clothing. It would be boring if everything always stayed the same. The industry has weaponized this to pressure people to buy much more new clothes than they need, to the detriment of everyone and everything.
Organic trends and changes: yes. Cynical exploitative industry: no.
I'll mention this to be clear: this is my adventure and clothing for me. I don't care what you wear. If you're happy that's all that matters. If you prefer shorts, sandals, and no shirt, that's OK. I'm not interested in trying to influence what you wear. I'm sharing this in the hope someone else finds it interesting.
I collect links to web sites, publications, and companies related to men's style. This is primarily for my own benefit, but I'm happy if it helps anyone else.
This is a summary of the current state of Radicle CI and near future plans. The goal is to make it a monthly newsletter.
Radicle is an open source, peer-to-peer code collaboration stack built on Git. Unlike centralized code hosting platforms, there is no single entity controlling the network. Repositories are replicated across peers in a decentralized manner, and users are in full control of their data and workflow.
Radicle CI adds continuous integration support to Radicle. Any Radicle node can choose to run CI for any repository it has access to. Any project using Radicle can choose which CI nodes it trusts. Radicle CI has integrations with a number of CI systems, and making new ones is easy.
Ambient CI is a CI engine that makes it safe and secure to run CI on untrusted code.
History
As this is the first report, we'll start with a summary of the history of Radicle CI.
Work on Radicle CI started in the second half of 2023. We quickly picked the current architecture: a CI broker that listens to events from the node and runs an adapter program to actually run CI on the change. This keeps all the tricky parts in one program and makes it fairly easy to add support for new CI systems. At least it's easy from the Radicle side: some external CI systems make it tricky to integrate with them.
In 2024 the CI broker gained enough functionality to be usable. The first release was in April.
From the beginning, the CI broker was accompanied by a "native adapter", the simplest possible implementation of a CI system, which merely runs a shell snippet locally. This made Radicle CI feasible to use in some circumstances. The native adapter is, however, not very safe, because it provides no isolation at all. The main goal of the native adapter is to have some adapter available when developing the CI broker.
There were soon adapters for Concourse and other CI systems, and a generic webhook adapter to ease integration to external CI systems.
In January 2025 the Ambient adapter was created. This made Radicle CI a realistic standalone CI system. It no longer requires using external CI systems, but use of them continues to be supported.
Early in 2025 the rad-ci program was created. It emulates what happens in
a CI run on the local machine, initially for the native adapter, but soon
also for the Ambient one. This means a developer does not have to wait for
a CI node to have time to run CI for their change, they can just run it
locally. Due to the nature of CI systems, rad-ci only really works with some
adapters, because emulating a complicated CI system is a lot of work.
In mid-2025, Radicle job COBs were implemented in a production-ready way. A job COB lets Radicle nodes update the CI status for a commit: it carries information about which node has run CI for which commit, and whether it succeeded or failed. Lars had tried to implement them since 2023, but Fintan was the one who actually did it well. The COBs are created by the CI broker to notify other Radicle nodes that an automated process has been run for a specific commit. The Radicle desktop app shows CI status using them, and the web view is going to as well.
Also in mid-2025, the CI broker started supporting concurrent CI runs, although only one at a time per repository.
Current status
Radicle CI is in production use. It is not yet a joy to use. Much work remains to be done to get there. There are a few CI node instances, and Lars runs one using Ambient for open source Rust projects at https://callisto.liw.fi/.
Future plans
First of all, update this report monthly. The current rate of change probably doesn't warrant a weekly update.
Lars plans to concentrate on making it easier to set up and run Radicle CI at least for the rest of 2025.
Notes
If you have an open source Rust project and want to try Radicle and Ambient on
callisto, see https://callisto.liw.fi/callisto/ for instructions.
Links
- Radicle CI home page, unofficial
- Radicle CI broker repository
- Radicle CI Ambient adapter repository
- Radicle CI integrations documentation
Quotes
No quotes yet, but please suggest something for the next issue.
This blog post is a reaction to a blog post on Stopping bad guys. (I've shortened the title. Please follow the link to read the original.)
I write open source software for the sake of humanity. I want to live off my work, certainly, but more importantly, I want the software I build to make life better for other people in the future.
I don't think open source developers should use licenses to combat bad guys doing evil things. I think doing so will harm the ecosystem, but not prevent actual evil from happening.
It is becoming increasingly common for open source developers to be concerned about what other people do with their software. The blog post I linked to above is an example. Many developers object to their software being used by fossil fuel companies, oppressive law enforcement, or others. Some of these developers are trying to change their open source licenses to prevent those groups from using the software.
I concur with the goal: the modern day Gestapo in the US should be stopped, and so should companies who destroy the global ecosystem. Genocide should be stopped and prevented. All of these need to happen, but using licenses as a weapon for this is a bad idea. It doesn't actually work, and it poisons the open source ecosystem.
A fundamental reason why open source thrives at all is that it enables easy, low-friction use of existing software to build new software. I write a library, you combine parts of it and parts of other libraries to make an application, and you do not have to negotiate terms. When we did this tens of millions of times, we got the world of today, where every information system is at least partly made out of open source software.
Open source licenses are not all fully compatible with each other, but there are enough popular, compatible licenses that by and large it's nearly always possible to combine code from different sources to build something new.
I'm old and cynical: those who kidnap or murder people, or who are just immensely wealthy, don't really care if they violate open source licenses, because there is not really anyone with the will and resources to stop them. Changing the license of your open source project won't stop, say, the IDF, ICE, Shell, or Meta, who all find they can stand above the law.
The good guys who would build something benign on what you've done will, however, have to tread much more carefully and do much more work to do so. They're good people, and they do their best to follow the rules. A well-meaning, but incompatible, license may cause so much friction that the benign application is never created at all.
We, the non-evil parts of humanity, need other ways to resist evil people and organizations. I'm afraid I don't have an easy solution. You can speak up when you see a problem. You can refuse to help evildoers. You can resist bad things. You can take care of other people. You can do and build good things. These are all going to work in the long run, I'm sure, but they require a lot of courage, a lot of persistence, and a lot of time. In Finnish terms, they require a lot of sisu.
Another aspect that open source developers worry about is a large corporation making use of their software to make a profit without contributing back in any way. I can understand this worry, but I am myself in the lucky position where I don't need to care. I'm OK with others profiting from what I've built, as long as they don't cause trouble for me, or cause me to have to do more without compensation. If I were trying to, say, run a paid service and Amazon competed with me, I might think differently. But I care more about building things to help humanity.
I mostly don't care about for-profit companies: they can be a useful social construct, but people actually matter. As long as companies, meaning the people who run them, don't harm people, or the environment, and don't get in the way of making things better for people, I'm happy to not care.
If they want me to do something, either they pay me or I say no.
This is my opinion, and I'm fine with others disagreeing with me. I may well change my mind when I think about this further. I publish this on my blog so that I get it out of my head, and make room for my thoughts about this to grow.
Ambient CI is the CI engine I'm building for myself, and tentatively for Radicle CI. I last blogged about it as a project over a year ago. Since then, the focus and goal of the project has crystallized in my head as:
- You should be able to run CI for other people safely and securely, without much effort.
- We as software developers should be able to share our computing resources with each other to run CI for each other. This should be safe and secure for us to do.
- We should be able to run CI locally and get the same exact results as when a server does it.
In other words, safe and secure distributed and local CI for everyone.
The approach I've taken is that any code from the software under test, or any of its dependencies, none of which can inherently be trusted, is only ever run in an isolated, network-less virtual machine. I've proven to myself that this works well enough. It will not, of course, work for any software project that inherently requires network access to build or test, but those are rare exceptions.
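To make the isolation idea concrete, here is a hedged sketch of how a CI engine might construct a network-less QEMU invocation for running untrusted build code. `-nic none` is a standard QEMU option that disables networking; the image path and memory size are hypothetical placeholders, and this is not Ambient's actual code. The command is only built here, not executed.

```rust
use std::process::Command;

// Sketch: build a QEMU command line for an isolated, network-less VM.
// The flags used are standard QEMU options; the image path is a
// placeholder. Nothing is spawned here.
fn isolated_vm_command(image: &str) -> Command {
    let mut cmd = Command::new("qemu-system-x86_64");
    cmd.arg("-nic")
        .arg("none") // no network interface at all
        .arg("-drive")
        .arg(format!("file={image},format=qcow2"))
        .arg("-m")
        .arg("2048") // memory in MiB
        .arg("-display")
        .arg("none"); // headless
    cmd
}

fn main() {
    let cmd = isolated_vm_command("/path/to/build-vm.qcow2");
    let args: Vec<String> = cmd
        .get_args()
        .map(|a| a.to_string_lossy().into_owned())
        .collect();
    // The crucial property: the VM has no network.
    assert!(args.windows(2).any(|w| w == ["-nic", "none"]));
    println!("qemu args: {args:?}");
}
```

The point of building the command programmatically is that the "no network" property can be asserted in tests, rather than hoped for.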
For now, I mostly care about automatically building software, and running its automated tests.
Ambient can publish build artifacts with rsync and deb packages with dput; other delivery methods are easy to add, too.
I have not worried about deployment, yet.
Deployment seems a tricky problem in a distributed system, but I'll worry about it when the integration parts work well.
Ambient is not there yet. It will take a lot more work, but there's been some progress.
- Most importantly for me, I've integrated the Ambient engine with the Radicle CI subsystem. This is important to me because Radicle, the distributed Git forge, handles all the boring server parts and I don't need to implement them. I get paid to work on Radicle CI and we're experimenting with using Ambient as the default CI engine.
- I've used Ambient, with Radicle, as my primary CI system this year. The combination has worked well. I also use GitLab CI on gitlab.com for one or two projects where I collaborate with people who don't want to use Radicle.
- I've set up https://callisto.liw.fi/, a host that runs CI with Ambient+Radicle for open source Rust projects. See https://callisto.liw.fi/callisto/ for instructions, and https://blog.liw.fi/posts/2025/callisto/ for the announcement. I do this to get more experience with running CI for other people.
There are, of course, problems (see open issues). Apart from the recorded issues, I worry about what I don't know about. Please educate me.
Two of the big problems I know about are:
- Before the actual build happens, in a network-less virtual machine, build dependencies need to be downloaded. I've only implemented this for Rust crates, and even that needs improvement. Other languages and other build dependencies need to be supported too. This is an area where I will certainly need help, as I don't know most language ecosystems.
- The dependency downloading and the delivery actions are run on the host system. Ambient needs to isolate them, too, into being run in a virtual machine. If nothing else, this relieves the host system from having to have language tool chains installed.
- So far I'm using a custom-built Debian image for the virtual machine. I write my own VM image building software, so this is no big deal for me. However, Ambient really should be able to use published "cloud images" for any operating system as a base image. I only deal with Debian, really, so I'll need help with this, even for other Linux distributions. From a software architecture point of view, Ambient requires fairly little from the VM image. Maybe some day someone will add support for Windows and macOS, too.
- Ambient runs the VM using QEMU, and needs to unlock support for more architectures by not assuming the VM image is in the host architecture. Emulating a foreign architecture is not very fast, but slow is better than impossible. Imagine being able to produce binaries for x86_64, aarch64, riscv64, and any of the other architectures QEMU supports.
- I need to learn how to count. I also need to learn to resist the temptation to make a tired joke about hard problems in computer science, but that keeps dropping out of my short term memory.
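The build-graph idea in the list above (generate artifacts once, fan out across architectures, then deliver) can be sketched as steps with dependencies, executed in an order where each step runs only after the steps it depends on. The step names are invented for illustration; this is not Ambient's actual run plan format.

```rust
use std::collections::{HashMap, HashSet};

// Illustrative sketch: compute an execution order for a build graph,
// where each step lists the steps it depends on. Panics on a cycle.
fn topo_order(deps: &HashMap<&str, Vec<&str>>) -> Vec<String> {
    let mut done: HashSet<&str> = HashSet::new();
    let mut order = Vec::new();
    while done.len() < deps.len() {
        let mut progressed = false;
        for (step, needs) in deps {
            if !done.contains(step) && needs.iter().all(|n| done.contains(n)) {
                done.insert(*step);
                order.push(step.to_string());
                progressed = true;
            }
        }
        assert!(progressed, "cycle in build graph");
    }
    order
}

fn main() {
    // Generate artifacts once, build per architecture, then deliver.
    let mut deps = HashMap::new();
    deps.insert("build-artifacts", vec![]);
    deps.insert("build-x86_64", vec!["build-artifacts"]);
    deps.insert("build-aarch64", vec!["build-artifacts"]);
    deps.insert("deliver", vec!["build-x86_64", "build-aarch64"]);
    let order = topo_order(&deps);
    assert_eq!(order.first().map(String::as_str), Some("build-artifacts"));
    assert_eq!(order.last().map(String::as_str), Some("deliver"));
    println!("{order:?}");
}
```

A real system would also want to run independent steps concurrently, possibly on different machines; the ordering constraint is the same either way.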
That's where Ambient stands currently. It works, at least in my simple use, and it's distributed.
If CI systems interest you, give Ambient a look. Let me know what you think, by email or on the fediverse.
I've just released version 0.6.0 for sopass, my
command line password manager that I use instead of pass.
Version 0.6.0, released 2025-10-31
If I were of the American persuasion, this would be a spooky release. But I'm not, so it's a comfy release that doesn't scare anyone.
The sopass value generate command generates a new random value for a name.
There have also been other changes. A deb package is built and published by CI for every merge into the main branch. The documentation of acceptance criteria is published at https://doc.liw.fi/sopass/sopass.html. Lars has decided not to work on cross-device syncing, as it's not something he needs, even though it's an interesting technical problem.
