Recent changes to this wiki:

Publish log entry
diff --git a/posts/2018/11/18/retiring_from_debian.mdwn b/posts/2018/11/18/retiring_from_debian.mdwn
new file mode 100644
index 0000000..643acdd
--- /dev/null
+++ b/posts/2018/11/18/retiring_from_debian.mdwn
@@ -0,0 +1,47 @@
+[[!meta title="Retiring from Debian"]]
+[[!meta date="2018-11-18 18:32"]]
+[[!tag debian retiring]]
+
+I've started the process of retiring from Debian. Again. This will be
+my third time. It'll take a little while, as I need to take care of
+things to do this cleanly: uploading packages to set Maintainer to
+QA, removing myself from Planet Debian, sending the retirement email
+to -private,
+etc.
+
+I've had a rough year, and Debian has also stopped being fun for me.
+There are a number of Debian people saying and doing things that I find
+disagreeable, and the process of developing Debian is not nearly as
+nice as it could be. There's way too much friction pretty much
+everywhere.
+
+For example, when a package maintainer uploads a package, the package
+goes into an upload queue. The upload queue gets processed every few
+minutes, and the packages get moved into an incoming queue. The
+incoming queue gets processed every fifteen minutes, and packages get
+imported into the master archive. Changes to the master archive get
+pushed to main mirrors every six hours. Websites like
+lintian.debian.org, the package tracker, and the Ultimate Debian
+Database get updated at some point. (Or their updates get triggered,
+but it may take longer for the updates to actually happen. Who knows;
+there's almost no transparency.)
+
+The developer gets notified, by email, when the upload queue gets
+processed, and when the incoming queue gets processed. If they want to
+see current status on the websites (to see if the upload fixed a
+problem, for example), they may have to wait for many more hours,
+possibly even a couple of days.
+
+This was fine in the 1990s. It's not fine anymore.
+
+That's not why I'm retiring. I'm just tired. I'm tired of dragging
+myself through high-friction Debian processes to do anything. I'm
+tired of people who should know better tearing open old wounds. I'm
+tired of all the unconstructive and aggressive whinging, from Debian
+contributors and users alike. I'm tired of trying to make things
+better and running into walls of negativity. (I realise I'm not being
+the most constructive with this blog post and with my retirement. I'm
+tired.)
+
+I wish everyone else a good time making Debian better, however. Or
+whatever else they may be doing. I'll probably be back. I always have
+been, when I've retired before.

Publish log entry
diff --git a/posts/2018/10/24/idea_for_a_debian_qa_service_monitoring_install_size_with_dependencies.mdwn b/posts/2018/10/24/idea_for_a_debian_qa_service_monitoring_install_size_with_dependencies.mdwn
new file mode 100644
index 0000000..fe1a3c9
--- /dev/null
+++ b/posts/2018/10/24/idea_for_a_debian_qa_service_monitoring_install_size_with_dependencies.mdwn
@@ -0,0 +1,40 @@
+[[!meta title="Idea for a Debian QA service: monitoring install size with dependencies"]]
+[[!meta date="2018-10-24 10:42"]]
+[[!tag debian]]
+
+This is an idea. I don't have the time to work on it myself, but I
+thought I'd throw it out in case someone else finds it interesting.
+
+When you install a Debian package, it pulls in its dependencies and
+recommended packages, and those pull in theirs. For simple cases, this
+is all fine, but sometimes there are surprises. Installing mutt on a
+base system pulls in libgpgme, which pulls in gnupg, which pulls in a
+pinentry package, which can pull in all of GNOME. Or so people claim,
+at least.
+
+It strikes me that it'd be cool for someone to implement a QA service
+for Debian that measures, for each package, how much installing it
+adds to the system. It should probably do this in various scenarios:
+
+* A base system, i.e., the output of debootstrap.
+* A build system, with build-essential installed.
+* A base GNOME system, with gnome-core installed.
+* A full GNOME system, with gnome installed.
+* Similarly for KDE and each other desktop environment in Debian.
+
+The service would do the installs regularly (daily?), and produce
+reports. It would also send alerts, such as notifying the maintainers
+when a package's installed size grows too much compared to stable, or
+to a previous run in unstable. For example, if installing mutt
+suddenly
+installs 100 gigabytes more than yesterday, it's probably a good idea
+to alert interested parties.
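The alert check itself would be simple. A rough sketch in Rust, with made-up package names and sizes (a real service would get the numbers from actual install runs):

```rust
use std::collections::HashMap;

// Compare installed sizes (in bytes) between two runs and report
// packages that grew by more than `threshold` bytes.
fn grown_packages(
    previous: &HashMap<String, u64>,
    current: &HashMap<String, u64>,
    threshold: u64,
) -> Vec<String> {
    let mut grown: Vec<String> = current
        .iter()
        .filter(|(name, &size)| {
            previous
                .get(name.as_str())
                .map_or(false, |&old| size > old + threshold)
        })
        .map(|(name, _)| name.clone())
        .collect();
    grown.sort();
    grown
}

fn main() {
    let previous = HashMap::from([("mutt".to_string(), 50_000_000u64)]);
    let current = HashMap::from([("mutt".to_string(), 150_000_000u64)]);
    println!("alert: {:?}", grown_packages(&previous, &current, 10_000_000));
}
```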
+
+Implementing this should be fairly easy, since the actual test is just
+running debootstrap, and possibly apt-get install. Some
+experimentation with configuration, caching, and eatmydata may be
+useful to gain speed. Possibly actual package installation can be
+skipped, and the whole thing could be implemented just by analysing
+package metadata.
+
+Maybe it even exists, and I just don't know about it. That'd be cool,
+too.

Fix: order of paragraphs
diff --git a/posts/2018/10/15/rewrote_summain_from_python_to_rust.mdwn b/posts/2018/10/15/rewrote_summain_from_python_to_rust.mdwn
index 81334fd..247a59d 100644
--- a/posts/2018/10/15/rewrote_summain_from_python_to_rust.mdwn
+++ b/posts/2018/10/15/rewrote_summain_from_python_to_rust.mdwn
@@ -17,11 +17,11 @@ Results:
 * Input is a directory tree with 8.9 gigabytes of data in 9650 files
   and directories.
 * Each file gets stat'd, and regular files get SHA256 computed.
+* Run on a Thinkpad X220 laptop with a rotating hard disk. Two CPU
+  cores, 4 hyperthreads. Mostly idle, but desktop-y things running in
+  the background. (Not a very systematic benchmark.)
 * Python version: 123 seconds wall clock time, 54 seconds user, 6
   second system time.
-* Run on a Thinkpad X220 laptop with a rotating hard disk. Mostly
-  idle, but desktop-py things running in the background. (Not a very
-  systematic benchmark.)
 * Rust version: 61 seconds wall clock (50% of the Python time), 56
   seconds user (104%), and 4 seconds system time (67%).
 

creating tag page tag/rust
diff --git a/tag/rust.mdwn b/tag/rust.mdwn
new file mode 100644
index 0000000..6e12944
--- /dev/null
+++ b/tag/rust.mdwn
@@ -0,0 +1,4 @@
+[[!meta title="pages tagged rust"]]
+
+[[!inline pages="tagged(rust)" actions="no" archive="yes"
+feedshow=10]]

Publish log entry
diff --git a/posts/2018/10/15/rewrote_summain_from_python_to_rust.mdwn b/posts/2018/10/15/rewrote_summain_from_python_to_rust.mdwn
new file mode 100644
index 0000000..81334fd
--- /dev/null
+++ b/posts/2018/10/15/rewrote_summain_from_python_to_rust.mdwn
@@ -0,0 +1,31 @@
+[[!meta title="Rewrote summain from Python to Rust"]]
+[[!meta date="2018-10-15 10:59"]]
+[[!tag rust summain]]
+
+[learning Rust]: https://blog.liw.fi/learning-rust/
+[summain]: http://git.liw.fi/cgi-bin/cgit/cgit.cgi/summain/
+[summainrs]: http://git.liw.fi/cgi-bin/cgit/cgit.cgi/summainrs/
+
+I've been [learning Rust][] lately. As part of that, I rewrote my
+[summain][] program from Python to Rust (see [summainrs][]). It's not
+quite a 1:1 rewrite: the Python version outputs RFC822-style records,
+the Rust one uses YAML. The Rust version is my first attempt at using
+multithreading, something I never added to the Python version.
+
+Results:
+
+* Input is a directory tree with 8.9 gigabytes of data in 9650 files
+  and directories.
+* Each file gets stat'd, and regular files get SHA256 computed.
+* Python version: 123 seconds wall clock time, 54 seconds user, 6
+  second system time.
+* Run on a Thinkpad X220 laptop with a rotating hard disk. Mostly
+  idle, but desktop-py things running in the background. (Not a very
+  systematic benchmark.)
+* Rust version: 61 seconds wall clock (50% of the Python time), 56
+  seconds user (104%), and 4 seconds system time (67%).
+
+A nice speed improvement, I think. Especially since the difference
+between the single-threaded and multithreaded versions of the Rust
+program is four characters (`par_iter` instead of `iter` in the
+`process_chunk` function).
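For the curious, rayon's `par_iter` hides all the thread plumbing. As a rough, std-only illustration of the same idea, per-file work can be chunked across scoped threads (function names here are illustrative, not summain's actual code):

```rust
use std::thread;

// Stand-in for the per-file work (summain computes SHA256 per file).
fn process_one(n: u64) -> u64 {
    n.wrapping_mul(2_654_435_761).rotate_left(13)
}

// Sequential: files.iter().map(process_one). Parallel: split the input
// into chunks and give each to a scoped thread; the threads may borrow
// from `files` because the scope joins them before returning.
fn process_parallel(files: &[u64], chunks: usize) -> Vec<u64> {
    let c = chunks.max(1);
    let size = ((files.len() + c - 1) / c).max(1);
    thread::scope(|s| {
        let handles: Vec<_> = files
            .chunks(size)
            .map(|chunk| {
                s.spawn(move || chunk.iter().copied().map(process_one).collect::<Vec<_>>())
            })
            .collect();
        handles.into_iter().flat_map(|h| h.join().unwrap()).collect()
    })
}

fn main() {
    let input: Vec<u64> = (0..100).collect();
    let seq: Vec<u64> = input.iter().copied().map(process_one).collect();
    assert_eq!(process_parallel(&input, 4), seq);
    println!("parallel result matches sequential");
}
```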

Change: publish
diff --git a/posts/2018/10/11/on_flatpak_snap_distros_and_software_distribution.mdwn b/posts/2018/10/11/on_flatpak_snap_distros_and_software_distribution.mdwn
index 33173e9..a874830 100644
--- a/posts/2018/10/11/on_flatpak_snap_distros_and_software_distribution.mdwn
+++ b/posts/2018/10/11/on_flatpak_snap_distros_and_software_distribution.mdwn
@@ -1,6 +1,6 @@
 [[!meta title="On flatpak, snap, distros, and software distribution"]]
 [[!meta date="2018-10-11 10:12"]]
-[[!tag draft debian flatpak distribution]]
+[[!tag debian flatpak distribution]]
 
 [Flatpak]: https://flatpak.org/
 [Snappy]: https://en.wikipedia.org/wiki/Snappy_(package_manager)

Fix: spelling
diff --git a/posts/2018/10/11/on_flatpak_snap_distros_and_software_distribution.mdwn b/posts/2018/10/11/on_flatpak_snap_distros_and_software_distribution.mdwn
index 425e303..33173e9 100644
--- a/posts/2018/10/11/on_flatpak_snap_distros_and_software_distribution.mdwn
+++ b/posts/2018/10/11/on_flatpak_snap_distros_and_software_distribution.mdwn
@@ -34,7 +34,7 @@ The website also raises the point that a number of flatpaks themselves
 contain unfixed security problems. I find this to be more worrying
 than an imperfect sandbox. A security problem inside a perfect sandbox
 can still be catastrophic: it can leak sensitive data, join a
-distributed denial of service attack, use exessive CPU and power, and
+distributed denial of service attack, use excessive CPU and power, and
 otherwise cause mayhem. The sandbox may help in containing the problem
 somewhat, but to be useful for valid use, the sandbox needs to allow
 things that can be used maliciously.
@@ -87,7 +87,7 @@ changes in LibreOffice for the newer version to work.
 
 For example, imagine LO uses a library to generate PDFs. A new version
 of the library reduces CPU consumption by 10%, but requires changes,
-becase the library's API (programming interface) has changed
+because the library's API (programming interface) has changed
 radically. The API changes are necessary to allow the speedup. Should
 LibreOffice upgrade to the new version or not? If 10% isn't enough of
 a speedup to warrant the effort to make the LO changes, is 90%? An

Fix: wording
diff --git a/posts/2018/10/11/on_flatpak_snap_distros_and_software_distribution.mdwn b/posts/2018/10/11/on_flatpak_snap_distros_and_software_distribution.mdwn
index 31f6f6d..425e303 100644
--- a/posts/2018/10/11/on_flatpak_snap_distros_and_software_distribution.mdwn
+++ b/posts/2018/10/11/on_flatpak_snap_distros_and_software_distribution.mdwn
@@ -104,7 +104,8 @@ to fix it, to communicate that there is a fix, and to upgrade the
 dependency. Some projects have partial solutions for that, but there
 seems to be nothing universal.
 
-I'm sure most of this can be solved, some day, in some manner. I don't
-have a solution yet, but I do think it's much too simplistic to say
-"Flatpaks will solve everything", or "the distro approach is best", or
-"just use the cloud".
+I'm sure most of this can be solved, some day, in some manner. It's
+definitely an interesting problem area. I don't have a solution, but I
+do think it's much too simplistic to say "Flatpaks will solve
+everything", or "the distro approach is best", or "just use the
+cloud".

Fix: wording
diff --git a/posts/2018/10/11/on_flatpak_snap_distros_and_software_distribution.mdwn b/posts/2018/10/11/on_flatpak_snap_distros_and_software_distribution.mdwn
index 18c8984..31f6f6d 100644
--- a/posts/2018/10/11/on_flatpak_snap_distros_and_software_distribution.mdwn
+++ b/posts/2018/10/11/on_flatpak_snap_distros_and_software_distribution.mdwn
@@ -89,10 +89,10 @@ For example, imagine LO uses a library to generate PDFs. A new version
 of the library reduces CPU consumption by 10%, but requires changes,
 becase the library's API (programming interface) has changed
 radically. The API changes are necessary to allow the speedup. Should
-LibreOffice upgrade to the new version of not? If 10% isn't enough, is
-90%? An automated system could upgrade the library, but that would
-then break the LO build, resulting in something that doesn't work
-anymore.
+LibreOffice upgrade to the new version or not? If 10% isn't enough of
+a speedup to warrant the effort to make the LO changes, is 90%? An
+automated system could upgrade the library, but that would then break
+the LO build, resulting in something that doesn't work anymore.
 
 Security updates are easier, since they usually don't involve API
 changes. An automated system could upgrade dependencies for security

Fix: wording
diff --git a/posts/2018/10/11/on_flatpak_snap_distros_and_software_distribution.mdwn b/posts/2018/10/11/on_flatpak_snap_distros_and_software_distribution.mdwn
index 1cd5090..18c8984 100644
--- a/posts/2018/10/11/on_flatpak_snap_distros_and_software_distribution.mdwn
+++ b/posts/2018/10/11/on_flatpak_snap_distros_and_software_distribution.mdwn
@@ -80,10 +80,10 @@ it is). It's a huge piece of software, and it needs a very large
 number of libraries and other dependencies to work. These need to be
 provided inside the LibreOffice Flatpak, or by one or more of the
 Flatpak "runtimes", which are bundles of common dependencies. Making
-sure all of the dependencies can be partly automated, but not fully:
-someone, somewhere, needs to make the decision that a newer version is
-worth upgrading to right now, even if it requires changes in
-LibreOffice for the newer version to work.
+sure all of the dependencies are up to date can be partly automated,
+but not fully: someone, somewhere, needs to make the decision that a
+newer version is worth upgrading to right now, even if it requires
+changes in LibreOffice for the newer version to work.
 
 For example, imagine LO uses a library to generate PDFs. A new version
 of the library reduces CPU consumption by 10%, but requires changes,

Fix: wording
diff --git a/posts/2018/10/11/on_flatpak_snap_distros_and_software_distribution.mdwn b/posts/2018/10/11/on_flatpak_snap_distros_and_software_distribution.mdwn
index 3c53425..1cd5090 100644
--- a/posts/2018/10/11/on_flatpak_snap_distros_and_software_distribution.mdwn
+++ b/posts/2018/10/11/on_flatpak_snap_distros_and_software_distribution.mdwn
@@ -41,13 +41,13 @@ things that can be used maliciously.
 
 As a user, I want software that's...
 
+* easy to install and update
 * secure to install (what I install is what the developers delivered)
 * always up to date with security fixes, including for any
   dependencies (embedded in the software or otherwise)
 * reasonably up to date with other bug fixes
 * sufficiently up to date with features I want (but I don't care
   about newer features that I don't have a use for)
-* easy to install and update
 * protective of my freedoms and privacy and other human rights, which
   includes (but is not restricted to) being able to self-host services
   and work offline
@@ -59,10 +59,9 @@ As a software developer, I additionally want my own software to be...
   my users
 * easy to deliver to my users
 * easy to debug
-* isolated from any changes to build and runtime dependencies that
-  break my software, or at least make such changes be extremely
-  obvious, meaning they result in a build error or at least an error
-  during automated tests
+* not be broken by changes to build and runtime dependencies, or at
+  least make such changes be extremely obvious, meaning they result in
+  a build error or at least an error during automated tests
 
 These are requirements that are hard to satisfy. They require a lot of
 manual effort, and discipline, and I fear the current state of

Fix: wording
diff --git a/posts/2018/10/11/on_flatpak_snap_distros_and_software_distribution.mdwn b/posts/2018/10/11/on_flatpak_snap_distros_and_software_distribution.mdwn
index d2cfbbe..3c53425 100644
--- a/posts/2018/10/11/on_flatpak_snap_distros_and_software_distribution.mdwn
+++ b/posts/2018/10/11/on_flatpak_snap_distros_and_software_distribution.mdwn
@@ -26,7 +26,9 @@ interesting.
 
 The website raises the issue that Flatpak's sandboxing is not as good
 as it should be. This seems to be true. Some of Flatpak's defenders
-respond that it's an evolving technology, which seems fair.
+respond that it's an evolving technology, which seems fair. It's not
+necessary to be perfect; it's important to be better than what came
+before, and to constantly improve.
 
 The website also raises the point that a number of flatpaks themselves
 contain unfixes security problems. I find this to be more worrying

Fix: wording
diff --git a/posts/2018/10/11/on_flatpak_snap_distros_and_software_distribution.mdwn b/posts/2018/10/11/on_flatpak_snap_distros_and_software_distribution.mdwn
index 04253d8..d2cfbbe 100644
--- a/posts/2018/10/11/on_flatpak_snap_distros_and_software_distribution.mdwn
+++ b/posts/2018/10/11/on_flatpak_snap_distros_and_software_distribution.mdwn
@@ -18,11 +18,11 @@ distributions. There's also, Snappy, which is [Canonical][]'s similar
 thing.
 
 The discussion started with the launch of a new website attacking
-Flatpak as a technology. I'm not going to link to it, since it's too
-much of an anonymous attack and rant, and less than constructive. I'd
-rather have a constructive discussion. I'm also not going to link to
-rebuttals, and will just present my own view, which I hope is
-different enough to be interesting.
+Flatpak as a technology. I'm not going to link to it, since it's an
+anonymous attack and rant, and not constructive. I'd rather have a
+constructive discussion. I'm also not going to link to rebuttals, and
+will just present my own view, which I hope is different enough to be
+interesting.
 
 The website raises the issue that Flatpak's sandboxing is not as good
 as it should be. This seems to be true. Some of Flatpak's defenders

Fix: wording
diff --git a/posts/2018/10/11/on_flatpak_snap_distros_and_software_distribution.mdwn b/posts/2018/10/11/on_flatpak_snap_distros_and_software_distribution.mdwn
index 555c47b..04253d8 100644
--- a/posts/2018/10/11/on_flatpak_snap_distros_and_software_distribution.mdwn
+++ b/posts/2018/10/11/on_flatpak_snap_distros_and_software_distribution.mdwn
@@ -3,19 +3,19 @@
 [[!tag draft debian flatpak distribution]]
 
 [Flatpak]: https://flatpak.org/
-[snappy]: https://en.wikipedia.org/wiki/Snappy_(package_manager)
+[Snappy]: https://en.wikipedia.org/wiki/Snappy_(package_manager)
 [Canonical]: https://www.canonical.com/
 
-I don't think flatpaks, snaps, traditional Linux distros,
-non-traditional Linux distros, containers, online services, or other
-forms of software distribution are a good solution for all users. They
-all fail in some way, and each of them requires continued, ongoing
-effort to be acceptable even within their limitations.
+I don't think any of [Flatpak][], [Snappy][], traditional Linux
+distros, non-traditional Linux distros, containers, online services,
+or other forms of software distribution are a good solution for all
+users. They all fail in some way, and each of them requires continued,
+ongoing effort to be acceptable even within their limitations.
 
-This week, there's been some discussion about [Flatpak][], a software
+This week, there's been some discussion about Flatpak, a software
 distribution approach that's (mostly) independent of traditional Linux
-distributions. There's also, [snappy][], which is [Canonical][]'s
-similar thing.
+distributions. There's also Snappy, which is [Canonical][]'s similar
+thing.
 
 The discussion started with the launch of a new website attacking
 Flatpak as a technology. I'm not going to link to it, since it's too

creating tag page tag/flatpak
diff --git a/tag/flatpak.mdwn b/tag/flatpak.mdwn
new file mode 100644
index 0000000..b8bb42a
--- /dev/null
+++ b/tag/flatpak.mdwn
@@ -0,0 +1,4 @@
+[[!meta title="pages tagged flatpak"]]
+
+[[!inline pages="tagged(flatpak)" actions="no" archive="yes"
+feedshow=10]]

creating tag page tag/distribution
diff --git a/tag/distribution.mdwn b/tag/distribution.mdwn
new file mode 100644
index 0000000..76b86c9
--- /dev/null
+++ b/tag/distribution.mdwn
@@ -0,0 +1,4 @@
+[[!meta title="pages tagged distribution"]]
+
+[[!inline pages="tagged(distribution)" actions="no" archive="yes"
+feedshow=10]]

Publish log entry
diff --git a/posts/2018/10/11/on_flatpak_snap_distros_and_software_distribution.mdwn b/posts/2018/10/11/on_flatpak_snap_distros_and_software_distribution.mdwn
new file mode 100644
index 0000000..555c47b
--- /dev/null
+++ b/posts/2018/10/11/on_flatpak_snap_distros_and_software_distribution.mdwn
@@ -0,0 +1,109 @@
+[[!meta title="On flatpak, snap, distros, and software distribution"]]
+[[!meta date="2018-10-11 10:12"]]
+[[!tag draft debian flatpak distribution]]
+
+[Flatpak]: https://flatpak.org/
+[snappy]: https://en.wikipedia.org/wiki/Snappy_(package_manager)
+[Canonical]: https://www.canonical.com/
+
+I don't think flatpaks, snaps, traditional Linux distros,
+non-traditional Linux distros, containers, online services, or other
+forms of software distribution are a good solution for all users. They
+all fail in some way, and each of them requires continued, ongoing
+effort to be acceptable even within their limitations.
+
+This week, there's been some discussion about [Flatpak][], a software
+distribution approach that's (mostly) independent of traditional Linux
+distributions. There's also, [snappy][], which is [Canonical][]'s
+similar thing.
+
+The discussion started with the launch of a new website attacking
+Flatpak as a technology. I'm not going to link to it, since it's too
+much of an anonymous attack and rant, and less than constructive. I'd
+rather have a constructive discussion. I'm also not going to link to
+rebuttals, and will just present my own view, which I hope is
+different enough to be interesting.
+
+The website raises the issue that Flatpak's sandboxing is not as good
+as it should be. This seems to be true. Some of Flatpak's defenders
+respond that it's an evolving technology, which seems fair.
+
+The website also raises the point that a number of flatpaks themselves
+contain unfixed security problems. I find this to be more worrying
+than an imperfect sandbox. A security problem inside a perfect sandbox
+can still be catastrophic: it can leak sensitive data, join a
+distributed denial of service attack, use exessive CPU and power, and
+otherwise cause mayhem. The sandbox may help in containing the problem
+somewhat, but to be useful for valid use, the sandbox needs to allow
+things that can be used maliciously.
+
+As a user, I want software that's...
+
+* secure to install (what I install is what the developers delivered)
+* always up to date with security fixes, including for any
+  dependencies (embedded in the software or otherwise)
+* reasonably up to date with other bug fixes
+* sufficiently up to date with features I want (but I don't care
+  about newer features that I don't have a use for)
+* easy to install and update
+* protective of my freedoms and privacy and other human rights, which
+  includes (but is not restricted to) being able to self-host services
+  and work offline
+
+As a software developer, I additionally want my own software to be...
+
+* effortless to build
+* automatically tested in a way that gives me confidence it works for
+  my users
+* easy to deliver to my users
+* easy to debug
+* isolated from any changes to build and runtime dependencies that
+  break my software, or at least make such changes be extremely
+  obvious, meaning they result in a build error or at least an error
+  during automated tests
+
+These are requirements that are hard to satisfy. They require a lot of
+manual effort, and discipline, and I fear the current state of
+software development isn't quite there yet. As an example, the Linux
+kernel development takes great care to never break userland, but that
+requires a lot of care when making changes, a lot of review, and a lot
+of testing, and a willingness to go to extremes to achieve that. As a
+result, upgrading to a newer kernel version tends to be a low-risk
+operation. The glibc C library, used by most Linux distributions, has
+a similar track record.
+
+But Linux and glibc are system software. Flatpak is about desktop
+software. Consider instead LibreOffice, the office suite. There's no
+reason why it couldn't be delivered to users as a Flatpak (and indeed
+it is). It's a huge piece of software, and it needs a very large
+number of libraries and other dependencies to work. These need to be
+provided inside the LibreOffice Flatpak, or by one or more of the
+Flatpak "runtimes", which are bundles of common dependencies. Making
+sure all of the dependencies can be partly automated, but not fully:
+someone, somewhere, needs to make the decision that a newer version is
+worth upgrading to right now, even if it requires changes in
+LibreOffice for the newer version to work.
+
+For example, imagine LO uses a library to generate PDFs. A new version
+of the library reduces CPU consumption by 10%, but requires changes,
+becase the library's API (programming interface) has changed
+radically. The API changes are necessary to allow the speedup. Should
+LibreOffice upgrade to the new version of not? If 10% isn't enough, is
+90%? An automated system could upgrade the library, but that would
+then break the LO build, resulting in something that doesn't work
+anymore.
+
+Security updates are easier, since they usually don't involve API
+changes. An automated system could upgrade dependencies for security
+updates, and then trigger automated build, test, and publish of a new
+Flatpak. However, this is made difficult by the fact that there is
+often no way to automatically and reliably find out that a security
+fix has been released. Again, manual work is required to find the
+security problem,
+to fix it, to communicate that there is a fix, and to upgrade the
+dependency. Some projects have partial solutions for that, but there
+seems to be nothing universal.
+
+I'm sure most of this can be solved, some day, in some manner. I don't
+have a solution yet, but I do think it's much too simplistic to say
+"Flatpaks will solve everything", or "the distro approach is best", or
+"just use the cloud".

Publish log entry
diff --git a/posts/2018/10/10/new_job_wmf_release_engineering.mdwn b/posts/2018/10/10/new_job_wmf_release_engineering.mdwn
new file mode 100644
index 0000000..00489ae
--- /dev/null
+++ b/posts/2018/10/10/new_job_wmf_release_engineering.mdwn
@@ -0,0 +1,8 @@
+[[!meta title="New job: WMF release engineering"]]
+[[!meta date="2018-10-10 09:02"]]
+[[!tag ]]
+
+I've started my new job. I now work in the release engineering team at
+Wikimedia, the organisation that runs sites such as Wikipedia. We help
+put new versions of the software that runs the sites into production.
+My role is to help make that process more automated and frequent.

Publish log entry
diff --git a/posts/2018/10/01/gitr_parsing_text_with_nom_pain_points.mdwn b/posts/2018/10/01/gitr_parsing_text_with_nom_pain_points.mdwn
new file mode 100644
index 0000000..24924e2
--- /dev/null
+++ b/posts/2018/10/01/gitr_parsing_text_with_nom_pain_points.mdwn
@@ -0,0 +1,26 @@
+[[!meta date="2018-10-01 09:04"]]
+[[!meta title="GITR: Parsing Text with Nom; Pain points"]]
+[[!tag learning-rust]]
+
+# Parsing
+
+* Nom seems powerful and useful, but I'm going to skip this chapter
+  for now. I'll return to it, or read the docs directly, when I need
+  to do some parsing.
+
+# Pain points
+
+* "What the notation says is that the output strings live at least as
+  long as the input string." — this seems like a bug, I think.
+  Surely the output strings live at most as long as the input string?
+
+* This chapter has little new stuff, but reminders of things that Rust
+  programmers need to take care of.
+
+# The End
+
+I've now read through the entire GITR book, except the parsing
+chapter. It's time to start writing Rust code. My first real project
+will be to rewrite my [summain][] tool in Rust.
+
+[summain]: https://liw.fi/summain/

Publish log entry
diff --git a/posts/2018/09/30/gitr_object-orientation_in_rust.mdwn b/posts/2018/09/30/gitr_object-orientation_in_rust.mdwn
new file mode 100644
index 0000000..fa62566
--- /dev/null
+++ b/posts/2018/09/30/gitr_object-orientation_in_rust.mdwn
@@ -0,0 +1,17 @@
+[[!meta title="GITR: Object-Orientation in Rust"]]
+[[!tag learning-rust]]
+[[!meta date="2018-09-30 08:30"]]
+
+* I use Python in a heavily object-oriented manner, but I've learnt to
+  mostly avoid some aspects, such as inheritance. I don't need Rust to
+  be OO.
+
+* Traits seem like a better approach. It stresses the interface aspect
+  of inheritance, which is much nicer than the mess that often results
+  from the implementation sharing aspect.
+
+* First examples of Rust macros are introduced. These seem quite
+  powerful, and may well be a legshooting device. I shan't investigate
+  making macros of my own until I'm comfortable with the language
+  otherwise.
+

Publish log entry
diff --git a/posts/2018/09/29/gitr_threads_networking_and_sharing.mdwn b/posts/2018/09/29/gitr_threads_networking_and_sharing.mdwn
new file mode 100644
index 0000000..97a45e8
--- /dev/null
+++ b/posts/2018/09/29/gitr_threads_networking_and_sharing.mdwn
@@ -0,0 +1,79 @@
+[[!meta title="GITR: Threads, Networking and Sharing"]]
+[[!tag learning-rust]]
+[[!meta date="2018-09-29 10:14"]]
+
+* I'm excited about this. Mainstream CPUs have been gaining more cores
+  or hyperthreads since the early 2000s, but none of my programming
+  languages have been particularly good at making use of those. C is
+  just too complicated, and Python has the global interpreter lock,
+  which ruins concurrency a lot. Python's getting better, but not fast
+  enough. Concurrency was one of the reasons I wanted to learn Haskell
+  in 2003, but being a bear with a very small brain, I still haven't
+  learnt much Haskell.
+
+  Rust promises to make using threads much safer than it is in C, and
+  that would be a really good thing.
+
+* This chapter also introduces networking, which also is exciting.
+  Much of what I've done in recent years has been web API
+  implementations, and I am looking forward to doing that in Rust.
+
+* New thing: `std::cell::Cell` with methods `new`, `get`, and `set`.
+
+* New thing: `std::cell::RefCell` with methods `new`, `borrow`, and
+  `borrow_mut`. Borrow rules apply, but are checked at runtime. Mutable
+  borrows should be used sparingly.
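A minimal sketch of these two types (my own example, not from the book):

```rust
use std::cell::{Cell, RefCell};

// Cell: copy values in and out of an otherwise immutable slot.
fn cell_demo() -> i32 {
    let c = Cell::new(1);
    c.set(2); // replaces the stored value
    c.get()
}

// RefCell: the usual borrow rules, but enforced at runtime.
fn refcell_demo() -> usize {
    let r = RefCell::new(vec![1, 2]);
    r.borrow_mut().push(3); // exclusive borrow, released at end of statement
    let len = r.borrow().len(); // shared borrow
    len
}

fn main() {
    println!("{} {}", cell_demo(), refcell_demo());
}
```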
+
+* New thing: `std::rc::Rc` — essentially a reference counted
+  `Box`. Each `Rc` (clone of the original one) has an immutable
+  reference to a heap value, and together they manage the reference
+  count. When the count goes to zero, the heap value is freed. This is
+  a bit like manual memory management, but piggy-backing Rust's normal
+  memory management. Allows safe-ish data sharing for when the normal
+  Rust rules get too much in the way. Involves some runtime overhead.
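A small sketch of the reference counting in action (my own example):

```rust
use std::rc::Rc;

// Returns the reference counts before, during, and after a second handle.
fn rc_counts() -> (usize, usize, usize) {
    let a = Rc::new(String::from("shared"));
    let before = Rc::strong_count(&a);
    let b = Rc::clone(&a); // bumps the count; no deep copy of the String
    let during = Rc::strong_count(&a);
    drop(b); // count drops back; the heap value is freed when it hits zero
    (before, during, Rc::strong_count(&a))
}

fn main() {
    println!("{:?}", rc_counts());
}
```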
+
+* New thing: `std::thread`, especially `spawn`. A thread executes a
+  closure. Threads are objects like others, e.g., they can be kept in
+  a vector to be joined.
+
+* Important point: threads need to move values, not borrow. If they
+  borrow, the reference to the value they have may outlive the value
+  in the original thread, which would be a bug. Values with `'static`
+  lifetimes can be borrowed across threads. `Rc` is not thread safe
+  and can't be used to share references across threads.
+  `std::sync::Arc` is a thread-safe version, with more runtime
+  overhead.
+
+  This seems fundamental for making threaded code safe: tightly
+  controlled sharing of data and compile-time checks for violations of
+  such sharing.
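A sketch of moving `Arc` handles into spawned threads (my own example):

```rust
use std::sync::Arc;
use std::thread;

// Each thread gets its own Arc handle, moved into its closure;
// the vector itself is shared, not copied.
fn parallel_sums() -> Vec<i32> {
    let data = Arc::new(vec![1, 2, 3, 4]);
    let handles: Vec<_> = (0..2)
        .map(|i| {
            let data = Arc::clone(&data);
            thread::spawn(move || data.iter().sum::<i32>() + i)
        })
        .collect();
    handles.into_iter().map(|h| h.join().unwrap()).collect()
}

fn main() {
    println!("{:?}", parallel_sums());
}
```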
+
+* New thing: channels for inter-thread communication. A
+  multiple-producer, single-consumer solution. There's a variant for
+  synchronous channels, where senders block until their message has
+  been received.
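The multiple-producer, single-consumer shape looks roughly like this (my own example):

```rust
use std::sync::mpsc;
use std::thread;

// Several producers, one consumer; the receiving iterator ends
// only once every sender handle has been dropped.
fn gather() -> Vec<i32> {
    let (tx, rx) = mpsc::channel();
    for id in 0..3 {
        let tx = tx.clone();
        thread::spawn(move || tx.send(id * 10).unwrap());
    }
    drop(tx); // drop our own handle so rx.iter() can finish
    let mut received: Vec<i32> = rx.iter().collect();
    received.sort(); // arrival order is nondeterministic
    received
}

fn main() {
    println!("{:?}", gather());
}
```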
+
+* New thing: barriers for inter-thread synchronisation: all threads
+  wait for all threads to reach the barriers, and then all threads
+  continue. Presumably the barrier keeps track of how many threads
+  have references to it, and how many are currently waiting for the
+  barrier. This seems like a nice, easy synchronisation approach,
+  which may still take some time to get used to.
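A rough sketch; note that `std::sync::Barrier::new` is given the number of threads to wait for explicitly, rather than counting references:

```rust
use std::sync::{Arc, Barrier};
use std::thread;

fn main() {
    // Barrier::new takes the number of threads that must arrive
    // before any of them is released.
    let barrier = Arc::new(Barrier::new(3));
    let handles: Vec<_> = (0..3)
        .map(|i| {
            let barrier = Arc::clone(&barrier);
            thread::spawn(move || {
                println!("thread {} before the barrier", i);
                barrier.wait(); // blocks until all three have arrived
                println!("thread {} after the barrier", i);
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
}
```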
+
+* New thing: mutexes. For protecting a shared resource from concurrent
+  use. Note that this is different from barriers as it's about
+  allowing only one thread to access the resource at a time. While the
+  Rust mutex abstraction seems easier to use correctly than the
+  corresponding C stuff, it's still not super-easy to be safe.
+
+  Rust seems to provide, in the stdlib or as third-party crates,
+  higher level abstractions that make concurrency easier to use
+  safely. This is very good.
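A minimal mutex sketch: the lock is released automatically when the guard goes out of scope, which is a big part of what makes it easier to use correctly than the C equivalent.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    let counter = Arc::new(Mutex::new(0));
    let handles: Vec<_> = (0..4)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                // lock() returns a guard; the mutex unlocks when it drops
                let mut n = counter.lock().unwrap();
                *n += 1;
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    assert_eq!(*counter.lock().unwrap(), 4);
}
```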
+
+* I don't understand from the GITR description what `to_socket_addrs`
+  does for numeric addresses. Creates a socket that connects to the
+  remote server+port? If not, how does the example check for addresses
+  that are reachable hosts? What does `:0` mean for port addresses?
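As far as I can tell, `to_socket_addrs` does not create a socket or connect anywhere: it parses (and, for hostnames, resolves) a string into `SocketAddr` values, and `:0` is the usual "let the OS pick a free port" convention when binding. A quick sketch for the numeric case:

```rust
use std::net::ToSocketAddrs;

fn main() {
    // A numeric address needs no DNS lookup and no connection:
    // the string is just parsed into SocketAddr values.
    let addrs: Vec<_> = "127.0.0.1:8080".to_socket_addrs().unwrap().collect();
    assert_eq!(addrs.len(), 1);
    assert_eq!(addrs[0].port(), 8080);
    println!("{:?}", addrs);
}
```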
+
+* Overall, Rust seems to model networking based on traditional
+  networking concepts. Seems straightforward enough.

Publish log entry
diff --git a/posts/2018/09/26/gitr_error_handling.mdwn b/posts/2018/09/26/gitr_error_handling.mdwn
new file mode 100644
index 0000000..33308b2
--- /dev/null
+++ b/posts/2018/09/26/gitr_error_handling.mdwn
@@ -0,0 +1,32 @@
+[[!meta title="GITR: Error Handling"]]
+[[!meta date="2018-09-26 14:37"]]
+[[!tag learning-rust]]
+
+* Error handling is one of the aspects of programming that tends to be
+  most tedious and most error prone.
+
+* The `?` operator in Rust makes error handling quite easy, but
+  still requires declaring functions as returning a `Result`. That's
+  not too difficult. However, it's not enough to use the handy
+  operator for good error handling: at some point, **something** needs
+  to actually handle the error result, or the program will crash. This
+  is similar to exceptions in Python. But at least Rust makes it
+  explicit what functions can return an error, and strongly guides you
+  to handle those results.
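A small sketch of the shape (the function name `double` and the `ParseIntError` example are mine, not from the book): `?` returns early with the error, and the caller must eventually handle the `Result`.

```rust
use std::num::ParseIntError;

// `?` propagates the parse error to the caller instead of panicking.
fn double(text: &str) -> Result<i64, ParseIntError> {
    let n: i64 = text.trim().parse()?;
    Ok(n * 2)
}

fn main() {
    match double("21") {
        Ok(n) => println!("{}", n), // 42
        Err(e) => eprintln!("bad input: {}", e),
    }
    assert!(double("oops").is_err()); // the error is a value, not a crash
}
```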
+
+* New thing: `std::error::Error` for defining one's own errors.
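A minimal custom-error sketch (the `TooBig` type and `check` function are my own invented example): the `Error` trait needs `Debug` and `Display`, and the rest has default implementations.

```rust
use std::error::Error;
use std::fmt;

#[derive(Debug)]
struct TooBig(u32);

impl fmt::Display for TooBig {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "value {} is too big", self.0)
    }
}

impl Error for TooBig {} // default methods are enough

fn check(n: u32) -> Result<u32, TooBig> {
    if n > 100 { Err(TooBig(n)) } else { Ok(n) }
}

fn main() {
    assert!(check(7).is_ok());
    println!("{}", check(200).unwrap_err()); // value 200 is too big
}
```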
+
+* Thought: how does Rust handle out-of-memory errors? None of the
+  `new` functions return any kind of error, it seems to me, and new
+  "objects" get created all the time. Just crash?
+
+* I don't really understand `error-chain`.
+
+* `chain_err` seems like an interesting approach.
+
+* Overall, looks like Rust may have some useful machinery for handling
+  errors. More importantly, the type system makes it more difficult
+  to just ignore errors, which I like. I may have to use Rust for a
+  while to appreciate the error handling, though.
+
+

Publish log entry
diff --git a/posts/2018/09/22/gitr_standard_library_containers.mdwn b/posts/2018/09/22/gitr_standard_library_containers.mdwn
new file mode 100644
index 0000000..cfd6a72
--- /dev/null
+++ b/posts/2018/09/22/gitr_standard_library_containers.mdwn
@@ -0,0 +1,44 @@
+[[!meta title="GITR: Standard Library Containers"]]
+[[!meta date="2018-09-22 15:30"]]
+[[!tag learning-rust]]
+
+* Python is a "batteris included" language: the standard library that
+  comes with a standard Python installation is quite large. Rust's is
+  not as large, but that's OK: Cargo makes it easy to use third-party
+  libraries (or so they say). Still, the Rust stdlib is large enough
+  that an overview is good.
+
+* The API of an abstract type, such as a `Vec`, is not as simple as
+  for, say, C or Python. That's because Rust has constraints on types
+  for some methods. For example, a method for sorting is only defined
+  for vectors of types that can be ordered. This makes the stdlib much
+  more powerful, and allows it to include a lot more functionality
+  than would be obvious at first glance, but does seem to me to make
+  it harder to navigate the docs, and find things.
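A concrete instance of such a constraint: `sort` exists only for element types that implement `Ord`, so `f64` (which is not `Ord`, because of NaN) needs `sort_by` with an explicit comparison.

```rust
fn main() {
    // sort() is available because i32 is Ord
    let mut numbers = vec![3, 1, 2];
    numbers.sort();
    assert_eq!(numbers, vec![1, 2, 3]);

    // f64 is only PartialOrd, so sort() is not available;
    // sort_by with an explicit comparison works instead
    let mut floats = vec![0.5, 0.1, 0.3];
    floats.sort_by(|a, b| a.partial_cmp(b).unwrap());
    assert_eq!(floats, vec![0.1, 0.3, 0.5]);
}
```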
+
+* So far it seems like Rust's stdlib is good, but concentrates on
+  giving building blocks for doing higher level things, rather than
+  implementing such higher level things itself. The stdlib aims to be
+  (and remain) stable, not provide everything. This is probably good.
+  A firm, stable base on which to build things.
+
+* Slices and vectors are very closely related, but this is not because
+  there's hardcoded magic in the language, but rather the language
+  provides tools for implementing this. Contrast with C's pointers vs
+  arrays, which is deeply hardcoded into the language.
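A small illustration of that closeness (the function name `total` is mine): one function taking a slice works on arrays and vectors alike, via deref coercion rather than language magic.

```rust
// A slice parameter accepts both arrays and vectors,
// because both coerce to &[i32].
fn total(values: &[i32]) -> i32 {
    values.iter().sum()
}

fn main() {
    let array = [1, 2, 3];
    let vector = vec![4, 5, 6];
    assert_eq!(total(&array), 6);
    assert_eq!(total(&vector), 15);
}
```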
+
+* Methods one might expect for a container may be on an iterator
+  instead. Interesting design choice. I should study the iterator
+  methods.
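For example (my own tiny illustration), `max` and `filter` live on the iterator, not on `Vec` itself:

```rust
fn main() {
    let v = vec![1, 2, 3, 4];
    // filter and max are Iterator methods, reached via iter()
    let biggest_even = v.iter().filter(|&&n| n % 2 == 0).max();
    assert_eq!(biggest_even, Some(&4));
}
```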
+
+* There seems to be a lot of traits, and it will take some time to get
+  familiar with them all. A bit of a labyrinth.
+
+* I don't understand where `HashSet` in the "Here's a shortcut, just
+  as we defined for vectors" example comes from. Indeed, clicking on
+  the play button shows errors. Maybe it's meant to expand on previous
+  examples.
+
+* "If both the struct and the trait came from the same crate
+  (particularly, the stdlib) then such implementation would not be
+  allowed." I wonder if that should say "from different crates"?

Change: publish vmdb2 roadmap blog
diff --git a/posts/2018/09/20/vmdb2_roadmap.mdwn b/posts/2018/09/20/vmdb2_roadmap.mdwn
index 658d6ba..7ac1653 100644
--- a/posts/2018/09/20/vmdb2_roadmap.mdwn
+++ b/posts/2018/09/20/vmdb2_roadmap.mdwn
@@ -1,6 +1,6 @@
 [[!meta title="vmdb2 roadmap"]]
 [[!meta date="2018-09-20 10:58"]]
-[[!tag vmdb2 roadmap draft]]
+[[!tag vmdb2 roadmap]]
 
 I now have a rudimentary [roadmap][] for reaching 1.0 of [vmdb2][], my
 Debian image building tool.

creating tag page tag/roadmap
diff --git a/tag/roadmap.mdwn b/tag/roadmap.mdwn
new file mode 100644
index 0000000..b9c6334
--- /dev/null
+++ b/tag/roadmap.mdwn
@@ -0,0 +1,4 @@
+[[!meta title="pages tagged roadmap"]]
+
+[[!inline pages="tagged(roadmap)" actions="no" archive="yes"
+feedshow=10]]

Publish log entry
diff --git a/posts/2018/09/20/vmdb2_roadmap.mdwn b/posts/2018/09/20/vmdb2_roadmap.mdwn
new file mode 100644
index 0000000..658d6ba
--- /dev/null
+++ b/posts/2018/09/20/vmdb2_roadmap.mdwn
@@ -0,0 +1,47 @@
+[[!meta title="vmdb2 roadmap"]]
+[[!meta date="2018-09-20 10:58"]]
+[[!tag vmdb2 roadmap draft]]
+
+I now have a rudimentary [roadmap][] for reaching 1.0 of [vmdb2][], my
+Debian image building tool.
+
+[[!img vmdb2.svg alt="Visual roadmap"]]
+
+The visual roadmap is generated from the following YAML file:
+
+    vmdb2_1_0:
+      label: |
+        vmdb2 is production ready
+      depends:
+        - ci_builds_images
+        - docs
+        - x220_install
+
+    docs:
+      label: |
+        vmdb2 has a user
+        manual of acceptable
+        quality
+
+    x220_install:
+      label: |
+        x220 can install Debian
+        onto a Thinkpad x220
+        laptop
+
+    ci_builds_images:
+      label: |
+        CI builds and publishes
+        images using vmdb2
+      depends:
+        - amd64_images
+        - arm_images
+
+    amd64_images:
+      label: |
+        CI: amd64 images
+
+    arm_images:
+      label: |
+        CI: arm images of
+        various kinds
diff --git a/posts/2018/09/20/vmdb2_roadmap/vmdb2.svg b/posts/2018/09/20/vmdb2_roadmap/vmdb2.svg
new file mode 100644
index 0000000..2383f67
--- /dev/null
+++ b/posts/2018/09/20/vmdb2_roadmap/vmdb2.svg
@@ -0,0 +1,85 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN"
+ "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
+<!-- Generated by graphviz version 2.40.1 (20161225.0304)
+ -->
+<!-- Title: project Pages: 1 -->
+<svg width="676pt" height="245pt"
+ viewBox="0.00 0.00 675.93 244.69" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
+<g id="graph0" class="graph" transform="scale(1 1) rotate(0) translate(4 240.6934)">
+<title>project</title>
+<polygon fill="#ffffff" stroke="transparent" points="-4,4 -4,-240.6934 671.9308,-240.6934 671.9308,4 -4,4"/>
+<!-- vmdb2_1_0 -->
+<g id="node1" class="node">
+<title>vmdb2_1_0</title>
+<polygon fill="#f4bada" stroke="#000000" points="431.6932,-236.6934 267.6932,-236.6934 267.6932,-200.6934 431.6932,-200.6934 431.6932,-236.6934"/>
+<text text-anchor="middle" x="349.6932" y="-214.9934" font-family="Times,serif" font-size="14.00" fill="#000000">vmdb2 is production ready</text>
+</g>
+<!-- ci_builds_images -->
+<g id="node2" class="node">
+<title>ci_builds_images</title>
+<polygon fill="#f4bada" stroke="#000000" points="237.1932,-146.2168 92.1932,-146.2168 92.1932,-108.2168 237.1932,-108.2168 237.1932,-146.2168"/>
+<text text-anchor="middle" x="164.6932" y="-131.0168" font-family="Times,serif" font-size="14.00" fill="#000000">CI builds and publishes</text>
+<text text-anchor="middle" x="164.6932" y="-116.0168" font-family="Times,serif" font-size="14.00" fill="#000000">images using vmdb2</text>
+</g>
+<!-- vmdb2_1_0&#45;&gt;ci_builds_images -->
+<g id="edge1" class="edge">
+<title>vmdb2_1_0&#45;&gt;ci_builds_images</title>
+<path fill="none" stroke="#000000" d="M313.1431,-200.6206C284.2949,-186.356 243.8169,-166.3409 212.4139,-150.8132"/>
+<polygon fill="#000000" stroke="#000000" points="213.915,-147.6509 203.3996,-146.3558 210.8122,-153.9257 213.915,-147.6509"/>
+</g>
+<!-- docs -->
+<g id="node3" class="node">
+<title>docs</title>
+<ellipse fill="#ffffff" stroke="#000000" cx="349.6932" cy="-127.2168" rx="94.0904" ry="37.4533"/>
+<text text-anchor="middle" x="349.6932" y="-138.5168" font-family="Times,serif" font-size="14.00" fill="#000000">vmdb2 has a user</text>
+<text text-anchor="middle" x="349.6932" y="-123.5168" font-family="Times,serif" font-size="14.00" fill="#000000">manual of acceptable</text>
+<text text-anchor="middle" x="349.6932" y="-108.5168" font-family="Times,serif" font-size="14.00" fill="#000000">quality</text>
+</g>
+<!-- vmdb2_1_0&#45;&gt;docs -->
+<g id="edge2" class="edge">
+<title>vmdb2_1_0&#45;&gt;docs</title>
+<path fill="none" stroke="#000000" d="M349.6932,-200.6206C349.6932,-193.2057 349.6932,-184.237 349.6932,-175.1524"/>
+<polygon fill="#000000" stroke="#000000" points="353.1933,-174.923 349.6932,-164.9231 346.1933,-174.9231 353.1933,-174.923"/>
+</g>
+<!-- x220_install -->
+<g id="node4" class="node">
+<title>x220_install</title>
+<ellipse fill="#ffffff" stroke="#000000" cx="564.6932" cy="-127.2168" rx="103.4757" ry="37.4533"/>
+<text text-anchor="middle" x="564.6932" y="-138.5168" font-family="Times,serif" font-size="14.00" fill="#000000">x220 can install Debian</text>
+<text text-anchor="middle" x="564.6932" y="-123.5168" font-family="Times,serif" font-size="14.00" fill="#000000">onto a Thinkpad x220</text>
+<text text-anchor="middle" x="564.6932" y="-108.5168" font-family="Times,serif" font-size="14.00" fill="#000000">laptop</text>
+</g>
+<!-- vmdb2_1_0&#45;&gt;x220_install -->
+<g id="edge3" class="edge">
+<title>vmdb2_1_0&#45;&gt;x220_install</title>
+<path fill="none" stroke="#000000" d="M392.1703,-200.6206C419.4592,-189.0099 455.7025,-173.5894 487.9243,-159.8799"/>
+<polygon fill="#000000" stroke="#000000" points="489.5049,-163.0111 497.3363,-155.8753 486.7642,-156.5698 489.5049,-163.0111"/>
+</g>
+<!-- amd64_images -->
+<g id="node5" class="node">
+<title>amd64_images</title>
+<ellipse fill="#ffffff" stroke="#000000" cx="76.6932" cy="-26.8701" rx="76.8869" ry="18"/>
+<text text-anchor="middle" x="76.6932" y="-23.1701" font-family="Times,serif" font-size="14.00" fill="#000000">CI: amd64 images</text>
+</g>
+<!-- ci_builds_images&#45;&gt;amd64_images -->
+<g id="edge4" class="edge">
+<title>ci_builds_images&#45;&gt;amd64_images</title>
+<path fill="none" stroke="#000000" d="M147.7259,-107.8689C133.8662,-92.0646 114.164,-69.5981 99.0237,-52.3337"/>
+<polygon fill="#000000" stroke="#000000" points="101.6507,-50.0208 92.4258,-44.81 96.3878,-54.6362 101.6507,-50.0208"/>
+</g>
+<!-- arm_images -->
+<g id="node6" class="node">
+<title>arm_images</title>
+<ellipse fill="#ffffff" stroke="#000000" cx="253.6932" cy="-26.8701" rx="82.9636" ry="26.7407"/>
+<text text-anchor="middle" x="253.6932" y="-30.6701" font-family="Times,serif" font-size="14.00" fill="#000000">CI: arm images of</text>
+<text text-anchor="middle" x="253.6932" y="-15.6701" font-family="Times,serif" font-size="14.00" fill="#000000">various kinds</text>
+</g>
+<!-- ci_builds_images&#45;&gt;arm_images -->
+<g id="edge5" class="edge">
+<title>ci_builds_images&#45;&gt;arm_images</title>
+<path fill="none" stroke="#000000" d="M181.8533,-107.8689C193.7474,-94.4584 209.896,-76.251 223.8701,-60.4953"/>
+<polygon fill="#000000" stroke="#000000" points="226.5104,-62.7932 230.5273,-52.9894 221.2734,-58.1484 226.5104,-62.7932"/>
+</g>
+</g>
+</svg>

Fix: pub foo; -> mod foo;
diff --git a/posts/2018/09/18/gitr_modules_and_cargo.mdwn b/posts/2018/09/18/gitr_modules_and_cargo.mdwn
index a500b21..f622e80 100644
--- a/posts/2018/09/18/gitr_modules_and_cargo.mdwn
+++ b/posts/2018/09/18/gitr_modules_and_cargo.mdwn
@@ -29,14 +29,14 @@
       answer() ... }`, to define a module in the same file.
 
     * File `bar.rs` contains `pub fn answer()...`. Can be used from
-      `main.rs` with `pub bar; bar::answer()`
+      `main.rs` with `mod bar; bar::answer()`
 
     * File `yo/mod.rs` contains `pub fn answer()...`. Can be used from
-      `main.rs` with `pub yo; yo::answer()`. Note that `mod.rs` is the
+      `main.rs` with `mod yo; yo::answer()`. Note that `mod.rs` is the
       required filename.
 
     * File `yo/thing.rs` contains `pub fn answer()...` and `yo/mod.rs`
-      contains `pub mod thing;`. Can be used from `main.rs` with `pub
+      contains `pub mod thing;`. Can be used from `main.rs` with `mod
       yo; yo::thing::answer()`.
 
 * rustc can handle all of this. It builds everything from scratch by

Publish log entry
diff --git a/posts/2018/09/18/gitr_modules_and_cargo.mdwn b/posts/2018/09/18/gitr_modules_and_cargo.mdwn
new file mode 100644
index 0000000..a500b21
--- /dev/null
+++ b/posts/2018/09/18/gitr_modules_and_cargo.mdwn
@@ -0,0 +1,72 @@
+[[!meta title="GITR: Modules and Cargo"]]
+[[!meta date="2018-09-18 11:50"]]
+[[!tag learning-rust]]
+
+* New thing: modules. Mostly separate from source code files: a file
+  can contain any number of modules.
+
+* Names in a module, or struct, are private by default, and must be
+  declared with `pub` to be public. I like. It's much better than C
+  (public by default) or Python (only a convention that a leading
+  underscore makes a name private; the language doesn't enforce this).
+
+  This kind of thing becomes the more helpful the larger a code base
+  is, and the more code is shared between developers.
+
+* Also, a struct itself can be public, while its members are not.
+  Another good idea. Within a module struct members are public. This
+  seems convenient, but it strikes me that keeping modules small is
+  going to be a good idea.
+
+* A file `src/foo.rs` is a module, `mod foo;` is needed to use it.
+
+* I like that rustc (and cargo) handles building of modules
+  automatically, and that no Makefiles are needed.
+
+* Summary of how modules and separate source files work:
+
+    * Main function is in `main.rs`. It contains `pub mod foo { pub fn
+      answer() ... }`, to define a module in the same file.
+
+    * File `bar.rs` contains `pub fn answer()...`. Can be used from
+      `main.rs` with `pub bar; bar::answer()`
+
+    * File `yo/mod.rs` contains `pub fn answer()...`. Can be used from
+      `main.rs` with `pub yo; yo::answer()`. Note that `mod.rs` is the
+      required filename.
+
+    * File `yo/thing.rs` contains `pub fn answer()...` and `yo/mod.rs`
+      contains `pub mod thing;`. Can be used from `main.rs` with `pub
+      yo; yo::thing::answer()`.
+
+* rustc can handle all of this. It builds everything from scratch by
+  default, which is fine for me for now. Rustc can build libraries
+  (called crates) as well, reducing the need to build everything every
+  time.
+
+  Use `extern crate foo;` to use a separately built library.
+
+* Static linking is the default, at least for now. Makes for easier
+  development of Rust, since there's no need for a stable ABI, and no
+  need to rebuild everything after each Rust stable release. However,
+  I expect Debian to favour dynamic linking, but I haven't checked.
+
+* I can live with static linking, but it's awkward for security
+  updates.
+
+* The author says Go has a philosophical problem with dynamic linking.
+  I wonder what that is.
+
+* Cargo is the Rust workflow tool and package manager.
+  <https://crates.io/> is the site for sharing Rust libraries. I'll
+  want to see what security guarantees they provide.
+
+* New thing: raw string literals: `r#"..."#` (allows embedded
+  newlines, and backslashes lose meaning).
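A quick illustration of both properties:

```rust
fn main() {
    // Backslashes and quotes lose their special meaning in r#"..."#
    let pattern = r#"C:\temp\"quoted""#;
    assert_eq!(pattern, "C:\\temp\\\"quoted\"");

    // And the literal may span several lines
    let multi = r#"line one
line two"#;
    assert_eq!(multi.lines().count(), 2);
}
```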
+
+* Will need to think about how Cargo deals with versioned dependencies.
+
+* I'll need to experiment with the regex and chrono crates at some
+  point. No hurry, though. I'll wait until I need them. I will
+  probably want to play with the serde crates soon, though. They seem
+  much more likely to be useful soon in ick.

Publish log entry
diff --git a/posts/2018/09/15/a_filesystem_walker_as_an_iterator.mdwn b/posts/2018/09/15/a_filesystem_walker_as_an_iterator.mdwn
new file mode 100644
index 0000000..feacb7e
--- /dev/null
+++ b/posts/2018/09/15/a_filesystem_walker_as_an_iterator.mdwn
@@ -0,0 +1,18 @@
+[[!meta title="A filesystem walker as an iterator"]]
+[[!meta date="2018-09-15 18:03"]]
+[[!tag learning-rust]]
+
+Today I'm not reading another chapter. Instead I want to try my hand
+at writing some Rust. I want to have an iterator that scans a
+directory tree, and returns a `std::fs::DirEntry` for each file,
+directory, or other filesystem object. I want to call the iterator
+something like this:
+
+    let tree = DirTree::new("/");
+    for entry in tree {
+        println!("{:?}", entry.path.display());
+    }
+
+<http://git.liw.fi/rust/fswalk/> took me too long, but I got it
+working. Not pretty, and hides a source of errors, but good enough for
+today.

Change: inline manually
diff --git a/posts/2018/09/14/gitr_filesystem_and_processes.mdwn b/posts/2018/09/14/gitr_filesystem_and_processes.mdwn
index 4d3df40..fea9ca7 100644
--- a/posts/2018/09/14/gitr_filesystem_and_processes.mdwn
+++ b/posts/2018/09/14/gitr_filesystem_and_processes.mdwn
@@ -22,4 +22,27 @@
 
 # List files program
 
-[[list_files.rs]]
+    use std::env;
+    use std::io;
+    use std::path::Path;
+
+    fn main() -> io::Result<()> {
+        for filename in env::args().skip(1) {
+            let path = Path::new(&filename);
+            find_files(&path)?;
+        }
+        Ok(())
+    }
+
+    fn find_files(root: &Path) -> io::Result<()> {
+        for entry in root.read_dir()? {
+            let entry = entry?;
+            let path = entry.path();
+            if path.is_dir() {
+                find_files(&path)?;
+            } else {
+                println!("{:?}", entry.path().display());
+            }
+        }
+        Ok(())
+    }

Change: link to, don't inline
diff --git a/posts/2018/09/14/gitr_filesystem_and_processes.mdwn b/posts/2018/09/14/gitr_filesystem_and_processes.mdwn
index ee60e61..4d3df40 100644
--- a/posts/2018/09/14/gitr_filesystem_and_processes.mdwn
+++ b/posts/2018/09/14/gitr_filesystem_and_processes.mdwn
@@ -22,6 +22,4 @@
 
 # List files program
 
-[[!format txt """
-[[!inline pages="gitr_filesystem_and_processes/list_files.rs" raw="yes"]]
-"""]]
+[[list_files.rs]]

Change: specify subdir
diff --git a/posts/2018/09/14/gitr_filesystem_and_processes.mdwn b/posts/2018/09/14/gitr_filesystem_and_processes.mdwn
index 3725a6e..ee60e61 100644
--- a/posts/2018/09/14/gitr_filesystem_and_processes.mdwn
+++ b/posts/2018/09/14/gitr_filesystem_and_processes.mdwn
@@ -23,5 +23,5 @@
 # List files program
 
 [[!format txt """
-[[!inline pages="list_files.rs" raw="yes"]]
+[[!inline pages="gitr_filesystem_and_processes/list_files.rs" raw="yes"]]
 """]]

Change: inline as txt
diff --git a/posts/2018/09/14/gitr_filesystem_and_processes.mdwn b/posts/2018/09/14/gitr_filesystem_and_processes.mdwn
index 1f5d499..3725a6e 100644
--- a/posts/2018/09/14/gitr_filesystem_and_processes.mdwn
+++ b/posts/2018/09/14/gitr_filesystem_and_processes.mdwn
@@ -20,6 +20,8 @@
 * We're getting to interesting bits now: doing things with filesystems
   and processes.
 
-# list files program
+# List files program
 
+[[!format txt """
 [[!inline pages="list_files.rs" raw="yes"]]
+"""]]

Change: don't use !format, doesn't support Rust
diff --git a/posts/2018/09/14/gitr_filesystem_and_processes.mdwn b/posts/2018/09/14/gitr_filesystem_and_processes.mdwn
index ae377d4..1f5d499 100644
--- a/posts/2018/09/14/gitr_filesystem_and_processes.mdwn
+++ b/posts/2018/09/14/gitr_filesystem_and_processes.mdwn
@@ -22,6 +22,4 @@
 
 # list files program
 
-[[!format rust """
 [[!inline pages="list_files.rs" raw="yes"]]
-"""]]

Publish log entry
diff --git a/posts/2018/09/14/gitr_filesystem_and_processes.mdwn b/posts/2018/09/14/gitr_filesystem_and_processes.mdwn
new file mode 100644
index 0000000..ae377d4
--- /dev/null
+++ b/posts/2018/09/14/gitr_filesystem_and_processes.mdwn
@@ -0,0 +1,27 @@
+[[!meta title="GITR: Filesystem and Processes"]]
+[[!meta date="2018-09-14 10:33"]]
+[[!tag learning-rust]]
+
+* There's a lot of traits just for reading from files, or really
+  anything that implements those traits. This is a bit of a maze, and
+  it'll take me a while to learn to navigate it.
+
+* Do iterators need to be of a particular type or implement a
+  particular trait, or is it enough to just have a `next` method that
+  returns an Option?
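For the record: a bare `next` method is not enough for a `for` loop; the `std::iter::Iterator` trait must be implemented. A minimal sketch (the `CountDown` type is my own invented example):

```rust
// Implementing the Iterator trait is what makes a type usable
// in for loops and with collect(), map(), etc.
struct CountDown(u32);

impl Iterator for CountDown {
    type Item = u32;
    fn next(&mut self) -> Option<u32> {
        if self.0 == 0 {
            None
        } else {
            self.0 -= 1;
            Some(self.0 + 1)
        }
    }
}

fn main() {
    let collected: Vec<u32> = CountDown(3).collect();
    assert_eq!(collected, vec![3, 2, 1]);
}
```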
+
+* I like that I/O errors must be handled explicitly, and that `?`
+  makes that easy. I've tended to become a little complacent with
+  Python's exception handling.
+
+* I like that filenames have their own type, even if Unix would do
+  with byte strings. So refreshing after Python.
+
+* We're getting to interesting bits now: doing things with filesystems
+  and processes.
+
+# list files program
+
+[[!format rust """
+[[!inline pages="list_files.rs" raw="yes"]]
+"""]]
diff --git a/posts/2018/09/14/gitr_filesystem_and_processes/list_files.rs b/posts/2018/09/14/gitr_filesystem_and_processes/list_files.rs
new file mode 100644
index 0000000..fd4ac33
--- /dev/null
+++ b/posts/2018/09/14/gitr_filesystem_and_processes/list_files.rs
@@ -0,0 +1,24 @@
+use std::env;
+use std::io;
+use std::path::Path;
+
+fn main() -> io::Result<()> {
+    for filename in env::args().skip(1) {
+        let path = Path::new(&filename);
+        find_files(&path)?;
+    }
+    Ok(())
+}
+
+fn find_files(root: &Path) -> io::Result<()> {
+    for entry in root.read_dir()? {
+        let entry = entry?;
+        let path = entry.path();
+        if path.is_dir() {
+            find_files(&path)?;
+        } else {
+            println!("{:?}", entry.path().display());
+        }
+    }
+    Ok(())
+}

Publish log entry
diff --git a/posts/2018/09/13/new_website_for_vmdb2.mdwn b/posts/2018/09/13/new_website_for_vmdb2.mdwn
new file mode 100644
index 0000000..d4a7a0a
--- /dev/null
+++ b/posts/2018/09/13/new_website_for_vmdb2.mdwn
@@ -0,0 +1,9 @@
+[[!meta title="New website for vmdb2"]]
+[[!meta date="2018-09-13 19:43"]]
+[[!tag announce vmdb2]]
+
+I've set up a new [website][] for vmdb2, my tool for building Debian
+images (basically "debootstrap, except in a disk image"). As usual for
+my websites, it's ugly. Feedback welcome.
+
+[website]: https://vmdb2.liw.fi/

Change: add clarifications to blog post about GITR ch. 2
diff --git a/posts/2018/09/13/gitr_structs_enums_and_matching.mdwn b/posts/2018/09/13/gitr_structs_enums_and_matching.mdwn
index d379566..b7324a4 100644
--- a/posts/2018/09/13/gitr_structs_enums_and_matching.mdwn
+++ b/posts/2018/09/13/gitr_structs_enums_and_matching.mdwn
@@ -108,3 +108,31 @@ Rust](https://stevedonovan.github.io/rust-gentle-intro/), chapter 2,
 
 * New thing: `type` to create type aliases, like `typedef` in C.
 
+Edited to add, based on feedback from my friend:
+
+* `t.N` syntax for indexing tuples only works for constant N. Array
+  indexing, `a[i]`, works with any expression.
+
+* It seems I misunderstood associated functions. It seems an
+  associated function is just a function in the `impl` block, but not
+  a method. A method needs to also get the value (or a reference to
+  it) as its first argument: a `self`, or `&self`, or `&mut self`
+  argument. A method is an associated function, but an associated
+  function need not be a method.
+
+* Traits can provide default implementations for functions. This is
+  super-powerful.
+
+* Re the enum full glory example: Given a variable `x`, when a
+  function is called as `foo(x)` the value of `x` is **moved** into
+  the function, and can no longer be used by the caller. If the call is
+  `foo(&x)`, then the value is **borrowed** and so the caller can
+  still use `x`. In the book's `match` example, what doesn't work is
+  matching against the value, since that moves the value out of the
+  `Value`, and that fails, because the `Value` is itself borrowed from
+  the caller: moving anything out from a borrowed value breaks Rust's
+  rules for keeping track of what's owned by whom. Also: you're not
+  allowed to give away something you've borrowed, in real life,
+  either. Having the match return a reference instead means borrowing
+  further, and that's OK.
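A small sketch of the move-vs-borrow distinction in function calls (the function names `consume` and `inspect` are mine):

```rust
fn consume(s: String) -> usize {
    s.len() // s is moved in and dropped here
}

fn inspect(s: &String) -> usize {
    s.len() // only borrowed; the caller keeps ownership
}

fn main() {
    let a = String::from("hello");
    assert_eq!(inspect(&a), 5); // borrow: a is still usable afterwards
    assert_eq!(a.len(), 5);
    assert_eq!(consume(a), 5);  // move: a can no longer be used
    // println!("{}", a);       // would not compile: value moved
}
```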
+

Publish log entry
diff --git a/posts/2018/09/13/gitr_structs_enums_and_matching.mdwn b/posts/2018/09/13/gitr_structs_enums_and_matching.mdwn
new file mode 100644
index 0000000..d379566
--- /dev/null
+++ b/posts/2018/09/13/gitr_structs_enums_and_matching.mdwn
@@ -0,0 +1,110 @@
+[[!meta title="GITR: Structs, enums, and matching"]]
+[[!meta date="2018-09-13 10:02"]]
+[[!tag learning-rust]]
+
+Re-reading [Gentle introduction to
+Rust](https://stevedonovan.github.io/rust-gentle-intro/), chapter 2,
+"Structs, Enums and Matching".
+
+* Important concept: moving values. Rust assignment does not, by
+  default, copy a value, or add a reference to the value, but moves
+  it. This means that if a variable is assigned to another, the
+  original variable can no longer be used. This can be controlled
+  by doing explicit copies (the `clone` method), or implicit ones
+  using the `Copy` trait.
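A quick sketch of all three cases, move, explicit clone, and implicit copy:

```rust
fn main() {
    let a = String::from("hello");
    let b = a;             // a is moved; using a now would not compile

    let c = b.clone();     // explicit copy: both b and c stay usable
    assert_eq!(b, c);

    let x = 42;
    let y = x;             // i32 implements Copy, so x is still valid
    assert_eq!(x + y, 84);
}
```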
+
+* This only affects "non-primitive types", basically anything that is
+  not a machine word or a small on-stack struct. It affects especially
+  (only?) things that are heap allocated.
+
+* It's all about managing memory management and borrowing.
+
+* I like that Rust makes this explicit and non-magic.
+
+* The rustdoc generated stuff have too many invisible links (titles,
+  etc), making it difficult to click on an area to get focus there.
+  This causes accidental navigation, and it's unnecessarily difficult
+  to get back, for some reason. (Also, WTF do I need to move keyboard
+  focus that way? Stupid web stuff.)
+
+* Important concept: variable scoping. Block scoping. Loop scope.
+  Scope ties into memory management: when execution leaves a scope,
+  all variables in that scope are "dropped", which can trigger heap
+  memory to be reclaimed. This is lovely.
+
+* New thing: tuples. Not very exciting, but indexing syntax is a
+  little unusual: `t.42`. Not sure if index can be any integral
+  expression or if it has to be constant.
+
+* I dislike the example that uses first and last name fields, even as
+  an example. It perpetrates the falsehood that everyone has first and
+  last names.
+
+* New thing: structs. Not exciting, as such, but very important.
+  Notably, these aren't classes, and Rust isn't an object oriented
+  language. I think I'm going to like that, even if it means
+  rearranging my brain a bit.
+
+* New thing: Associated functions and `impl` blocks. Very interesting.
+  This feels like it'll be crucial for making clean code. Having to
+  use them even for such common things as constructors could be a
+  little weird, but since a constructor is rarely going to be
+  the only associated function, using the same approach for everything
+  makes a lot of sense. I like that there is no magic name for the
+  constructor, that `new` is merely a convention.
+
+* The magic `&self` argument to associated functions is a little magic,
+  but it saves having to write out the full type, so it's OK.
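A minimal sketch of an `impl` block with both kinds of function (the `Point` type is my own example): `new` is an associated function used as a constructor by convention, and `norm` is a method because it takes `&self`.

```rust
struct Point {
    x: f64,
    y: f64,
}

impl Point {
    // associated function: no self, called as Point::new(...)
    fn new(x: f64, y: f64) -> Point {
        Point { x, y }
    }

    // method: takes &self, called as p.norm()
    fn norm(&self) -> f64 {
        (self.x * self.x + self.y * self.y).sqrt()
    }
}

fn main() {
    let p = Point::new(3.0, 4.0);
    assert_eq!(p.norm(), 5.0);
}
```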
+
+* New thing: `#[derive(Debug)]` to automatically add the `Debug`
+  trait. I expect this will become part of the boilerplate for most
+  structs, but it's useful to have it not be mandatory, to save on
+  code size.
+
+* Important thing: lifetimes. For Rust to manage heap values
+  correctly, it needs to know how long each value needs to live. This
+  is handled by allowing the programmer to specify the lifetime.
+  Enables better correctness analysis by compiler, leading to fewer
+  programming errors.
+
+* Important thing: traits. These provide the kind of functionality in
+  Rust that inheritance provides in OO languages. A bit like
+  interfaces. A trait defines an interface for a type, meaning
+  functions that can operate on values of that type. The functions can
+  then be implemented for different types, and Rust keeps track of
+  which implementation is called, by virtue of static typing.
+
+* Also, interestingly, one can add new methods to existing types by
+  defining new traits and implementing them. Including built-in types
+  like integers.
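A tiny sketch of that (the `Doubled` trait is my own invented example): define a trait, implement it for `i32`, and the new method is available on plain integers.

```rust
// Adding a method to a built-in type via a new trait.
trait Doubled {
    fn doubled(&self) -> Self;
}

impl Doubled for i32 {
    fn doubled(&self) -> i32 {
        self * 2
    }
}

fn main() {
    assert_eq!(21i32.doubled(), 42);
}
```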
+
+* I should eventually study the Rust std basic traits.
+
+* Traits are used by Rust itself a lot. For example, to implement
+  an iterator, you implement the `std::iter::Iterator` trait.
+
+* New thing: associated type for traits. Type parameters.
+
+* New thing: trait bounds. Essentially requirements on type
+  parameters. More things to tell the compiler, so it can save me from
+  making mistakes. Like. At the same time I foresee that this will
+  require me to learn a lot of details.
+
+* New thing: enums. Much nicer than C enums.
+
+* I'm not sure I understand the last two examples in [Enums in their
+  full glory](https://stevedonovan.github.io/rust-gentle-intro/2-structs-enums-lifetimes.html#enums-in-their-full-glory).
+  Why is it OK to return the extracted string value in an Option?
+
+* Closures and borrowing seems complicated. I may want to stick to
+  very simple closures for now.
+
+* Interesting point: Rust speed requires programmer to type more, to
+  be more explicit about types and lifetimes and so on. Javascript,
+  Python, etc, are terser languages, but suffer runtime speed
+  penalties for that. I am OK with Rust's tradeoff.
+
+* New thing: `Box`.
+
+* New thing: `type` to create type aliases, like `typedef` in C.
+

Publish log entry
diff --git a/posts/2018/09/12/gitr_introduction_basics.mdwn b/posts/2018/09/12/gitr_introduction_basics.mdwn
new file mode 100644
index 0000000..6f86a8b
--- /dev/null
+++ b/posts/2018/09/12/gitr_introduction_basics.mdwn
@@ -0,0 +1,174 @@
+[[!meta title="GITR: Introduction, Basics"]]
+[[!meta date="2018-09-12 10:36"]]
+[[!tag learning-rust]]
+
+Re-reading [Gentle introduction to Rust][] (GITR for short), the
+introduction and Chapter 1, "Basics". I'll be taking notes, to help me
+remember things. I'll note things that seem important or interesting,
+or new things I'm learning, or thoughts provoked by the reading
+material.
+
+[Gentle introduction to Rust]: https://stevedonovan.github.io/rust-gentle-intro/readme.html
+
+Introduction
+-----------------------------------------------------------------------------
+
+* GITR doesn't aim to cover all aspects of Rust, but to "get enough
+  feeling for the power of the language to want to go deeper". Also,
+  to make it easier to understand other documentation, especially the
+  Rust Programming Language book.
+
+* GITR doesn't seem to be dumbed down, but it does skip some of the
+  details. That's fine.
+
+* Points at the [Rust Users Forum](https://users.rust-lang.org/), and
+  the [Rust subreddit](https://www.reddit.com/r/rust/). I've already
+  subscribed to the subreddit RSS feed. I don't think I want to follow
+  a web discussion board.
+
+* Also points at the [FAQ](https://www.rust-lang.org/en-US/faq.html),
+  which I shall browse later.
+
+* While I don't disagree with the sentiment, GITR disses C a little
+  more than I'd like.
+
+* Unifying principles of Rust:
+
+    * strictly enforcing safe borrowing of data
+    * functions, methods and closures to operate on data
+    * tuples, structs and enums to aggregate data
+    * pattern matching to select and destructure data
+    * traits to define behaviour on data
+
+* GITR recommends installing Rust via the rustup script. I do wish
+  Rust would get past that being the default.
+
+* I've already previously configured my Emacs to use rust-mode for .rs
+  files. Seems to be OK.
+
+* Not entirely sure using `rustc` directly is a good idea in the long
+  run, `cargo` seems like a better user experience, but it can't hurt
+  to be familiar with both.
+
+* Refers to Zed Shaw. Ugh, even if the actual advice is OK in this
+  instance.
+
+Chapter 1: Basics
+-----------------------------------------------------------------------------
+
+* I don't think copy-pasting examples is a good habit, so I'll be
+  typing in any code myself.
+
+* Note to self: `println!` adds the newline, don't include it in the
+  string.
+
+* Compile-time errors for misspelt variable names? What sort of
+  wonderful magic is this? After a couple of decades of Python, this
+  is what I want.
+
+* For loops and ifs have fewer parens: nice.
+
+* New thing (I'd forgotten): ranges.
+
+* New thing: nearly everything is an expression and has a value,
+  including ifs. Nice.
+
+* New thing: variables are immutable by default. This is another thing
+  I like that Python lacks. It'll make it easier to manage large
+  programs, where unintended mutation is a frequent source of errors
+  for me. I expect it'll be a little annoying at first, but that I'll
+  get used to it. I also expect it to save me at least 12765 hours of
+  debugging in the next year.
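
A minimal sketch of my own (not from GITR) to remind myself how the default immutability works:

```rust
// Bindings are immutable unless declared with `mut`.
fn increment(mut count: u32) -> u32 {
    count += 1; // allowed: `count` is declared mutable
    count
}

fn main() {
    let x = 42;
    // x = 43; // would not compile: cannot assign twice to immutable variable
    println!("x = {}, count = {}", x, increment(0));
}
```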
+
+* New thing: type inference is also nice, but coming from
+  dynamically-typed Python it feels like less of a win than it would
+  if I came to Rust straight from C. However, I know from the little
+  Haskell I learnt that type inference is crucial for a statically and
+  strongly typed language to be comfortable.
+
+* New thing: traits. These will be covered properly later, but I
+  already know enough (from a previous partial read of GITR) that
+  they're powerful and interesting. A bit like method overloading and
+  subclassing, but not as messy. Traits also make things like operator
+  overloading generic, and not as built-in as Python's `__foo__`
+  methods.
+
+* I like that Rust avoids implicit data type conversions, even from
+  "safe" cases like integer to float.
+
+* Not sure I'm going to like the implicit return from functions, where
+  the last executed expression in a function is its return value.
+  We'll see.
+
+* Interesting: you can pass a reference to a constant?
+
+* I haven't got Rust docs installed, but I'll be using the online
+  versions of those: <https://doc.rust-lang.org/>.
+
+* Arrays have a fixed size. Elements may be mutated, but the array size
+  is fixed at creation time. Array size is part of its type: arrays of
+  different size have different types. Bounds checking for arrays can
+  happen at least partially at compile-time.
+
+* New thing: `[u32; 4]` is the type of an array of four thirty-two-bit
+  integers.
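
A tiny sketch of mine to remember the notation:

```rust
// [u32; 4] is "array of four u32 values"; the length is part of the type.
fn sum4(xs: [u32; 4]) -> u32 {
    xs.iter().sum()
}

fn main() {
    let xs: [u32; 4] = [1, 2, 3, 4];
    // let ys: [u32; 3] = xs; // would not compile: different types
    println!("sum = {}", sum4(xs));
}
```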
+
+* Slices are views into an array (or vector?). They're easier to pass
+  as function arguments. Slices know their length, but since they're
+  not fixed in size, different calls to a function may get slices of
+  different sizes. A slice can be a view to only part of an array.
+  Bounds-checking for slices is (at least primarily) at run-time.
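
My own minimal example of the above, not from the book:

```rust
// &[u32] accepts a view into an array (or vector) of any length.
fn sum(xs: &[u32]) -> u32 {
    xs.iter().sum()
}

fn main() {
    let arr = [10, 20, 30, 40];
    println!("{}", sum(&arr));       // the whole array, borrowed as a slice
    println!("{}", sum(&arr[1..3])); // a view of just part of it: [20, 30]
}
```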
+
+* New thing: `&foo` in Rust means "borrow foo".
+
+* New thing: `slice.get(i)` method to access a slice's element without
+  panicking. Returns a value of type `Option`, which means it can
+  return a reference to a value (`Some(&value)`) or an indication of no
+  value (`None`). This is safer than Python's `None` as the caller
+  must always deal with both cases, in pattern matching. Also, safer
+  than throwing and catching exceptions. However, calling
+  `option.unwrap()` can still panic, so that's a potential trap. The
+  `unwrap_or` method avoids that, though. The `expect` method allows
+  giving a custom error message.
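
A sketch I wrote myself to check I understood `get` and `unwrap_or`:

```rust
// slice.get(i) returns Option<&T> instead of panicking on a bad index.
fn element_or_zero(xs: &[i32], i: usize) -> i32 {
    match xs.get(i) {
        Some(&v) => v, // the pattern forces me to handle both cases
        None => 0,
    }
}

fn main() {
    let xs = [1, 2, 3];
    println!("{}", element_or_zero(&xs, 1)); // index in range
    println!("{}", element_or_zero(&xs, 7)); // out of range: no panic
    // xs.get(7).unwrap() would still panic; unwrap_or avoids that:
    println!("{}", xs.get(7).cloned().unwrap_or(0));
}
```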
+
+* Vectors are re-sizeable arrays. They seem to "become slices" when
+  passed as an argument to a function that expects a slice. The borrow
+  operator does the type coercion. Vectors have handy methods: `pop`,
+  `extend`, `sort`, `dedup`. The docs have more:
+  <https://doc.rust-lang.org/std/vec/struct.Vec.html>.
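
A quick sketch of mine of the coercion and a few of those methods:

```rust
// &v coerces a Vec<i32> into a &[i32] for a function expecting a slice.
fn sum(xs: &[i32]) -> i32 {
    xs.iter().sum()
}

// Vectors are resizable: push, sort, and dedup all mutate in place.
fn cleaned(mut v: Vec<i32>) -> Vec<i32> {
    v.push(4);
    v.sort();
    v.dedup();
    v
}

fn main() {
    let v = cleaned(vec![3, 1, 2, 2]);
    println!("{:?} sums to {}", v, sum(&v)); // the borrow does the coercion
}
```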
+
+* Memory management in Rust is one of its stronger attractions for me.
+
+* New thing: iterators. A powerful concept I already know from Python.
+  I have the impression they're even more powerful and used even more
+  than in Python. It's interesting that GITR makes the point that
+  iterators are more efficient than looping, due to fewer bounds
+  checks. I like that Rust values efficiency.
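
A small iterator chain of my own, to see the style:

```rust
// Iterator chains avoid explicit indexing, so no per-element bounds checks.
fn sum_of_odd_squares(upto: i32) -> i32 {
    (1..=upto).filter(|n| n % 2 == 1).map(|n| n * n).sum()
}

fn main() {
    println!("{}", sum_of_odd_squares(5)); // 1 + 9 + 25
}
```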
+
+* Strings are complicated. No surprise: they're complicated by their
+  very nature, because humans. Rust strings are UTF-8 strings. Byte
+  strings are a separate type.
+
+* `format!` is a handy macro.
+
+* Having access to command line arguments and file I/O this early in
+  the book is good.
+
+* New thing: closures. Similar to lambdas in Python. Powerful.
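
My own tiny closure example, to compare with Python's lambda:

```rust
// A closure can capture variables from its environment, like a Python lambda.
fn scaled(factor: i32, xs: Vec<i32>) -> Vec<i32> {
    let scale = |n| n * factor; // captures `factor` from the enclosing scope
    xs.into_iter().map(scale).collect()
}

fn main() {
    println!("{:?}", scaled(3, vec![1, 2, 3]));
}
```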
+
+* New thing: `match`. I liked pattern matching in Haskell. This is
+  similar, but different. I like that you must handle all cases.
+
+* New thing: `if let`.
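
A sketch of my own combining `match` and `if let`:

```rust
// match must handle every case; if let handles just one pattern.
fn describe(n: i32) -> &'static str {
    match n {
        0 => "zero",
        n if n < 0 => "negative",
        _ => "positive", // without this arm, the compiler refuses to build
    }
}

fn main() {
    println!("{}", describe(7));
    let maybe = Some(42);
    if let Some(n) = maybe {
        println!("got {}", n); // runs only when the pattern matches
    }
}
```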
+
+* New thing: inclusive ranges, with three dots.
+
+* New thing: `Result` type. Similar to `Option`, but better suited for
+  errors, where an error is not just `None`. The "built-in" `Result`
+  wraps two values, one for `Ok`, the other `Err`. This gets a bit
+  repetitive, so `std::io::Result<T>` is a type alias for `Result<T,
+  std::io::Error>`. Shorter, easier to type.
+
+* New thing: the `?` operator to handle `Result` checking in a terser
+  fashion that doesn't obscure the happy path. Only usable in
+  functions that return a `Result` value.
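
My own minimal sketch of `?` (the file name is just an example):

```rust
use std::fs;
use std::io;

// The ? operator returns the Err early, keeping the happy path readable.
fn first_line(path: &str) -> io::Result<String> {
    let text = fs::read_to_string(path)?; // on error: return Err(...) here
    Ok(text.lines().next().unwrap_or("").to_string())
}

fn main() {
    match first_line("/etc/hostname") {
        Ok(line) => println!("first line: {}", line),
        Err(e) => println!("error: {}", e),
    }
}
```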

Publish log entry
diff --git a/posts/2018/09/11/a_plan_to_learn_rust.mdwn b/posts/2018/09/11/a_plan_to_learn_rust.mdwn
new file mode 100644
index 0000000..ed3a677
--- /dev/null
+++ b/posts/2018/09/11/a_plan_to_learn_rust.mdwn
@@ -0,0 +1,67 @@
+[[!meta title="A plan to learn Rust"]]
+[[!meta date="2018-09-11 16:49"]]
+[[!tag learning-rust]]
+
+I want to learn the Rust programming language. I have several reasons,
+but the primary ones are: it's been years since I learnt a new
+language and it'd be good for me to learn one; and I'm tired of Python
+and its limitations. I'd like a language that supports building large
+programs, which means it should have a strong type system, a good
+module system, and a healthy ecosystem. Rust has those, it seems. It
+also has good performance, and support for safer multitasking. It's
+also a systems language, and systems programming is what I usually
+like to do.
+
+I've read a little about Rust already:
+
+* skimmed through [The Rust Programming
+  Language](https://doc.rust-lang.org/book/index.html) book
+* read part of the [Gentle
+  introduction](https://stevedonovan.github.io/rust-gentle-intro/readme.html#a-gentle-introduction-to-rust)
+
+I've also written a few learning toys in Rust: hello, echo, cat, wc,
+wordfreq. Nothing challenging yet.
+
+Here is an initial plan for learning Rust:
+
+* Make a schedule for reading the "Gentle introduction" book. Start a
+  public, but non-syndicated section in my blog for it. For each
+  chapter, write a summary of the salient points: what new stuff it
+  introduces, and any insights I have from learning the chapter. Plus
+  put any code I write based on reading the chapter on my git server.
+
+  Don't overload myself: one chapter a day is enough. This is not a
+  crash course.
+
+* Once I'm done with that, pick interesting parts of the RPL book, and
+  do the same thing with those.
+
+* Start connecting with the Rust community: join IRC channels, mailing
+  lists, follow blogs, etc.
+
+* Explore standard Rust library docs to find interesting stuff, and
+  just to learn what's there, and to become familiar with the way Rust
+  reference docs are structured.
+
+* Explore [crates.io](https://crates.io/) to find interesting
+  libraries.
+
+* Start writing real programs: programs that do something useful that
+  I actually use. Ideas for these:
+
+    * Rewrite [summain](https://liw.fi/summain/), using as much
+      concurrency as allowed by the hardware. The Rust version should
+      be faster than the Python version, especially when doing
+      checksums.
+
+    * Write a web API that converts hex numbers into words using the
+      PGP word list. This involves porting my [Python
+      library](https://liw.fi/py_pgpwordlist/) to Rust, plus learning
+      how to write Rust HTTP APIs.
+
+    * Rewrite [distix](https://distix.eu/) in Rust. Possibly re-design
+      it at the same time.
+
+* Find out how to package Rust applications and libs well for Debian.
+  Integrate Rust into my personal development flow, with CI building
+  Debian packages of my own stuff, and installing it using those
+  packages.

creating tag page tag/learning-rust
diff --git a/tag/learning-rust.mdwn b/tag/learning-rust.mdwn
new file mode 100644
index 0000000..4024682
--- /dev/null
+++ b/tag/learning-rust.mdwn
@@ -0,0 +1,4 @@
+[[!meta title="pages tagged learning-rust"]]
+
+[[!inline pages="tagged(learning-rust)" actions="no" archive="yes"
+feedshow=10]]

Publish log entry
diff --git a/posts/2018/09/11/learning_rust.mdwn b/posts/2018/09/11/learning_rust.mdwn
new file mode 100644
index 0000000..b610786
--- /dev/null
+++ b/posts/2018/09/11/learning_rust.mdwn
@@ -0,0 +1,9 @@
+[[!meta title="Learning Rust"]]
+[[!meta date="2018-09-11 16:45"]]
+[[!tag learning-rust]]
+
+I've decided to take a more serious approach to learning [Rust][]. I
+will be making notes about this in my blog, but in a category that
+doesn't get included in my normal RSS/Atom feeds, and thus won't be
+syndicated to Planet Debian. There's no point in boring everyone with
+this.

Change: add a learning-rust feed, don't include its posts elsewhere
diff --git a/englishfeed.mdwn b/englishfeed.mdwn
index 961022a..1342b4f 100644
--- a/englishfeed.mdwn
+++ b/englishfeed.mdwn
@@ -1,5 +1,6 @@
 Feed for Planet Debian.
 
 [[!inline pages="page(posts/*) and !Discussion and !tagged(in-finnish) and 
-                 !tagged(draft) and created_after(posts/dd-again)"
+                 !tagged(draft) and created_after(posts/dd-again) and
+                 !tagged(learning-rust)"
           description="liw's English language blog feed"]]
diff --git a/index.mdwn b/index.mdwn
index 0343655..5d109f4 100644
--- a/index.mdwn
+++ b/index.mdwn
@@ -14,4 +14,4 @@ copyrighted by their authors. (No new comments are allowed.)
 
 ---
 
-[[!inline pages="page(posts/*) and !Discussion and !tagged(draft)"]]
+[[!inline pages="page(posts/*) and !Discussion and !tagged(draft) and !tagged(learning-rust)"]]
diff --git a/learning-rust.mdwn b/learning-rust.mdwn
new file mode 100644
index 0000000..aa0bcaf
--- /dev/null
+++ b/learning-rust.mdwn
@@ -0,0 +1,4 @@
+Feed for my posts about learning Rust. I don't want these syndicated
+to Planet Debian, so they have their own feed.
+
+[[!inline pages="page(posts/*) and tagged(learning-rust)"]]

Publish log entry
diff --git a/posts/2018/09/10/short-term_contracting_work.mdwn b/posts/2018/09/10/short-term_contracting_work.mdwn
new file mode 100644
index 0000000..705c089
--- /dev/null
+++ b/posts/2018/09/10/short-term_contracting_work.mdwn
@@ -0,0 +1,19 @@
+[[!meta title="Short-term contracting work?"]]
+[[!meta date="2018-09-08 09:12"]]
+[[!tag ]]
+
+I'm starting a new job in about a month. Until then, it'd be really
+helpful if I could earn some money via a short-term contracting or
+consulting job. If your company or employer could benefit from any of
+the following, please get in touch. I will invoice via a Finnish
+company, not as a person (within the EU, at least, this makes it
+easier for the clients). I also reside in Finland, if that matters
+(meaning, meeting outside of Helsinki gets tricky).
+
+* software architecture design and review
+* coding in Python, C, shell, or code review
+* documentation: writing, review
+* git training
+* help with automated testing: unit tests, integration tests
+* help with Ansible
+* packaging and distributing software as .deb packages

Change: publish "Federated CI" post
diff --git a/posts/2018/08/30/federated_ci.mdwn b/posts/2018/08/30/federated_ci.mdwn
index 1816b9c..e5acd97 100644
--- a/posts/2018/08/30/federated_ci.mdwn
+++ b/posts/2018/08/30/federated_ci.mdwn
@@ -1,6 +1,6 @@
 [[!meta title="Federated CI"]]
 [[!meta date="2018-08-30 17:09"]]
-[[!tag ick freedom federation ci draft]]
+[[!tag ick freedom federation ci]]
 
 In the modern world, a lot of computing happens on other people's
 computers. We use a lot of services provided by various parties. This

creating tag page tag/ci
diff --git a/tag/ci.mdwn b/tag/ci.mdwn
new file mode 100644
index 0000000..c6b64fa
--- /dev/null
+++ b/tag/ci.mdwn
@@ -0,0 +1,4 @@
+[[!meta title="pages tagged ci"]]
+
+[[!inline pages="tagged(ci)" actions="no" archive="yes"
+feedshow=10]]

creating tag page tag/federation
diff --git a/tag/federation.mdwn b/tag/federation.mdwn
new file mode 100644
index 0000000..b5bb3c4
--- /dev/null
+++ b/tag/federation.mdwn
@@ -0,0 +1,4 @@
+[[!meta title="pages tagged federation"]]
+
+[[!inline pages="tagged(federation)" actions="no" archive="yes"
+feedshow=10]]

Publish log entry
diff --git a/posts/2018/08/30/federated_ci.mdwn b/posts/2018/08/30/federated_ci.mdwn
new file mode 100644
index 0000000..1816b9c
--- /dev/null
+++ b/posts/2018/08/30/federated_ci.mdwn
@@ -0,0 +1,105 @@
+[[!meta title="Federated CI"]]
+[[!meta date="2018-08-30 17:09"]]
+[[!tag ick freedom federation ci draft]]
+
+In the modern world, a lot of computing happens on other people's
+computers. We use a lot of services provided by various parties. This
+is a problem for user freedom and software freedom. For example, when
+I use Twitter, the software runs on Twitter's servers, and it's
+entirely proprietary. Even if it were free software, even if it were
+using the Affero GPL license (AGPL), my freedom would be limited by
+the fact that I can't change the software running on Twitter's
+servers.
+
+If I could, it would be a fairly large security problem. If I could,
+then anyone could, and they might not be good people like I am.
+
+If the software were free, instead of proprietary, I could run it on
+my own server, or find someone else to run the software for me. This
+would make me more free.
+
+That still leaves the data. It would still be on Twitter's servers:
+all my tweets, direct messages, the lists of people I follow,
+or who follow me. Probably other things as well.
+
+For true freedom in this context, I would need to have a way to
+migrate my data from Twitter to another service. For practical
+freedom, the migration should not require excessive work or expense;
+it should not merely be possible in principle.
+
+For Twitter specifically, there are freer alternatives, such as
+Mastodon.
+
+For ick, my CI / CD engine, here is my current thinking: ick should
+not be a centralised service. It should be possible to pick and choose
+between instances of its various components: the controller, the
+workers, the artifact store, and Qvisqve (authentication server).
+Ditto for any additional components in the future.
+
+Since users and the components need to have some trust in each other,
+and there may be payment involved, this may need some co-ordination,
+and it may not be possible to pick entirely freely. However, as a
+thought experiment, let's consider a scenario.
+
+Alice has a bunch of non-mainstream computers she doesn't use herself
+much: Arm boards, RISCV boards, PowerPC Macs, Amigas, etc. All in good
+working condition. She'd be happy to set them up as build workers, and
+let people use them, for a small fee to cover her expenses.
+
+Bettina has a bunch of servers with lots of storage space. She'd be
+happy to let people use them as artifact stores, for a fee.
+
+Cecilia has a bunch of really fast x86-64 machines, with lots of RAM
+and very fast NVMe disks. She'd also be happy to rent them out as
+build workers.
+
+Dinah needs a CI system, but only has one small server, which would
+work fine as a controller for her own projects, but is too slow to
+comfortably do any actual building.
+
+Eliza also needs a CI system, but wants to keep her projects separate
+from Dinah's, so wants to have her own controller. (Eliza and Dinah
+can't tolerate each other and do not trust each other.)
+
+Fatima is trusted by everyone, except Eliza, and would be happy to run
+a secure server with Qvisqve.
+
+Georgina is like Fatima, except Eliza trusts her, and Dinah doesn't.
+
+The setup would be like this:
+
+* Alice and Cecilia run build workers. The workers trust both Fatima's
+  and Georgina's Qvisqves. All of their workers are registered with
+  both Qvisqves, and both Dinah's and Eliza's controllers.
+
+* Bettina's artifact store also trusts both Qvisqves.
+
+* Dinah creates an account on Fatima's Qvisqve. Eliza on Georgina's
+  Qvisqve. They each get an API token from the respective Qvisqve.
+
+* When Dinah's project builds, her controller uses the API token to
+  get an identity token from Fatima's Qvisqve, and gives that to each
+  worker used in her builds. The worker checks the ID token, and then
+  accepts work from Dinah's controller. The worker reports the time
+  used to do the work to its billing system, and Alice or Cecilia uses
+  that information to bill Dinah.
+
+* If a build needs to use an artifact store, the ID token is again
+  used to bill Dinah.
+
+* For Eliza, the same thing happens, except with another Qvisqve, and
+  costs from her builds go to her, not Dinah.
+
+This can be generalised to any number of ick components, which can be
+used criss-cross. Each component needs to be configured as to which
+Qvisqves it trusts.
+
+I think this would be a nicer thing to have than the centralised
+hosted ick I've been thinking about so far. Much more complicated, and
+much more work, of course. But interesting.
+
+There are some interesting and difficult questions about security to
+solve. I don't want to start thinking about the details yet, I'll play
+with the general idea first.
+
+What do you think? Send me your thoughts by email.

creating tag page tag/software-freedom
diff --git a/tag/software-freedom.mdwn b/tag/software-freedom.mdwn
new file mode 100644
index 0000000..ad32645
--- /dev/null
+++ b/tag/software-freedom.mdwn
@@ -0,0 +1,4 @@
+[[!meta title="pages tagged software-freedom"]]
+
+[[!inline pages="tagged(software-freedom)" actions="no" archive="yes"
+feedshow=10]]

Publish log entry
diff --git a/posts/2018/08/24/software_freedom_for_the_modern_era.mdwn b/posts/2018/08/24/software_freedom_for_the_modern_era.mdwn
new file mode 100644
index 0000000..14be109
--- /dev/null
+++ b/posts/2018/08/24/software_freedom_for_the_modern_era.mdwn
@@ -0,0 +1,16 @@
+[[!meta title="Software freedom for the modern era"]]
+[[!meta date="2018-08-24 23:20"]]
+[[!tag software-freedom]]
+
+I was watching the [Matthew Garret
+"Heresies"](https://debconf17.debconf.org/talks/177/) talk from
+Debconf17 today. The following thought struck me:
+
+True software freedom for this age: you can get the source code of a
+service you use, and can set it up on your own server. You can also
+get all your data from the service, and migrate it to another service
+(hosted by you or someone else). Further, all of this needs to be easy,
+fast, and cheap enough to be feasible, and there can't be "network
+effects" that lock you into a specific service instance.
+
+I will need to think hard what this means for my future projects.

Publish log entry
diff --git a/posts/2018/08/03/on_requiring_english_in_a_free_software_project.mdwn b/posts/2018/08/03/on_requiring_english_in_a_free_software_project.mdwn
new file mode 100644
index 0000000..433807c
--- /dev/null
+++ b/posts/2018/08/03/on_requiring_english_in_a_free_software_project.mdwn
@@ -0,0 +1,95 @@
+[[!meta title="On requiring English in a free software project"]]
+[[!meta date="2018-08-03 15:49"]]
+[[!tag ]]
+
+This week's issue of [LWN][] has a quote by Linus Torvalds on
+translating kernel messages to something else than English. He's
+against it:
+
+> Really. No translation. No design for translation. It's a nasty
+> nasty rat-hole, and it's a pain for everybody.
+> 
+> There's another reason I _fundamentally_ don't want translations for
+> any kernel interfaces. If I get an error report, I want to be able
+> to just 'git grep' it. Translations make that basically impossible.
+>
+> So the fact is, I want simple English interfaces. And people who
+> have issues with that should just not use them. End of story. Use
+> the existing error numbers if you want internationalization, and
+> live with the fact that you only get the very limited error number.
+
+I can understand Linus's point of view. The LWN readers are having a
+discussion about it, and [one of the comments][] there provoked this
+blog post:
+
+> It somewhat bothers me that English, being the lingua franca of
+> free software development, excludes a pretty huge part of the world
+> from participation. I thought that for a significant part of the
+> world, writing an English commit message has to be more difficult
+> than writing code.
+
+I can understand that point of view as well.
+
+Here's my point of view:
+
+* It is entirely true that if a project requires English for
+  communication within the project, it discriminates against those who
+  don't know English well.
+
+* Not having a common language within a project, between those who
+  contribute to the project, now and later, would pretty much destroy
+  any hope of productive collaboration. 
+
+  If I have a free software project, and you ask me to merge something
+  where commit messages are in Hindi, error messages in French, and
+  code comments in Swahili, I'm not going to understand any of them. I
+  won't merge what I don't understand.
+
+  If I write my commit messages in Swedish, my code comments in
+  Finnish, and my error messages by entering randomly chosen words
+  from /usr/share/dict/words into search engine, and taking the page
+  title of the fourteenth hit, then you're not going to understand
+  anything either. You're unlikely to make any changes to my project.
+
+  When Bo finds the project in 2038, and needs it to prevent the
+  apocalypse from 32-bit timestamps ending, and can't understand the
+  README, humanity is doomed.
+
+  Thus, on balance, I'm OK with requiring the use of a single language
+  for intra-project communication.
+
+* Users should not be presented with text in a language foreign to
+  them. However, this raises a support issue, where a user may
+  copy-paste an error message in their native language, and ask for
+  help, but the developers don't understand the language, and don't
+  even know what the error is. If they knew the error was "permission
+  denied", they could tell the user to run the chmod command to fix
+  the permissions. This is a dilemma.
+
+  I've solved the dilemma by having a unique error code for each error
+  message. If the user tells me "R12345X: Xscpft ulkacsx ggg:
+  /etc/obnam.conf!" I can look up R12345X and see that the error is
+  that /etc/obnam.conf is not in the expected file format.
+
+  This could be improved by making the "parameters" for the error
+  message easy to parse. Perhaps something like this:
+
+        R12345X: Xscpft ulkacsx ggg! filename=/etc/obnam.conf
+
+  Maintaining such error codes by hand would be quite tedious, of
+  course. I invented a [module][] for doing that. Each error message
+  is represented by a class, and the class creates its own error code
+  by taking the its Python module and class name, and computing and
+  MD5 of that. The first five hexadecimal digits are the code, and get
+  surrounded by R and X to make it easier to grep.
+
+  (I don't know if something similar might be used for the Linux
+  kernel.)
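
As a sketch of the idea (in Rust rather than Python, and substituting the standard library's DefaultHasher for MD5, so the codes produced will not match the real module's codes; this is only an illustration):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Derive a short, stable error code from an error's type name, so users
// can report a code that developers can look up regardless of the
// language the message was shown in.
fn error_code(type_name: &str) -> String {
    let mut hasher = DefaultHasher::new();
    type_name.hash(&mut hasher);
    // keep five hex digits, surrounded by R and X for easy grepping
    format!("R{:05X}X", hasher.finish() % 0x100000)
}

fn main() {
    println!("{}", error_code("obnamlib.structurederror.StructuredError"));
}
```

The same type name always produces the same code, so the lookup table can be generated mechanically from the error classes.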
+
+* Humans and inter-human communication are difficult. In many cases,
+  there is no solution that's good for everyone. But let's not give
+  up.
+
+[LWN]: https://lwn.net/Articles/761490/
+[module]: http://git.liw.fi/obnam/tree/obnamlib/structurederror.py
+[one of the comments]: https://lwn.net/Articles/761599/

Publish log entry
diff --git a/posts/2018/07/19/building_debian_packages_in_ci_ick.mdwn b/posts/2018/07/19/building_debian_packages_in_ci_ick.mdwn
new file mode 100644
index 0000000..90a4948
--- /dev/null
+++ b/posts/2018/07/19/building_debian_packages_in_ci_ick.mdwn
@@ -0,0 +1,239 @@
+[[!meta title="Building Debian packages in CI (ick)"]]
+[[!meta date="2018-07-19 18:57"]]
+[[!tag debian ick]]
+
+[ick]: https://ick.liw.fi/
+
+I've recently made the first release of [ick][], my CI engine, which
+was built by ick itself. It went OK, but the process needs
+improvement. This blog post is some pondering on how the process of
+building Debian packages should happen in the best possible taste.
+
+I'd appreciate feedback, preferably by email (liw@liw.fi).
+
+# Context
+
+I develop a number of (fairly small) programs, as a hobby. Some of
+them I also maintain as packages in Debian. All of them I publish as
+Debian packages in my own APT repository. I want to make the process
+for making a release of any of my programs as easy and automated as
+possible, and that includes building Debian packages and uploading
+them to my personal APT repository, and to Debian itself.
+
+My personal APT repository contains builds of my programs against
+several Debian releases, because I want people to have the latest
+version of my stuff regardless of what version of Debian they run.
+(This is somewhat similar to what OEMs that provide packages of their
+own software as Debian packages need to do. I think. I'm not an OEM
+and I'm extrapolating wildly here.)
+
+I currently don't provide packages for anything but Debian. That's
+mostly because Debian is the only Linux distribution I know well, or
+use, or know how to make packages for. I could do Ubuntu builds fairly
+easily, but supporting Fedora, RHEL, Suse, Arch, Gentoo, etc, is not
+something I have the energy for at this time. I would appreciate help
+in doing that, however.
+
+I currently don't provide Debian packages for anything other than the
+AMD64 (x86-64, "Intel 64-bit") architecture. I've previously provided
+packages for i386 (x86-32), and may in the future want to provide
+packages for other architectures (RISC-V, various Arm variants, and
+possibly more). I want to keep this in mind for this discussion.
+
+# Overview
+
+For the context of this blog post, let's assume I have a project Foo.
+Its source code is stored in `foo.git`. When I make a release, I tag
+it using a signed git tag. From this tag, I want to build several
+things:
+
+* A **release tarball**. I will publish and archive this. I don't
+  trust git, and related tools (tar, compression programs, etc) to be
+  able to reproducibly produce the same bit-by-bit compressed tarball
+  in perpetuity. There's too many things that can go wrong. For
+  security reasons it's important to be able to have the exact same
+  tarball in the future as today. The simplest way to achieve this is
+  to not try to reproduce, but to archive.
+
+* A **Debian source package**.
+
+* A **Debian binary package** built for each target version of Debian,
+  and each target hardware architecture (CPU, ABI, possibly toolchain
+  version). The binary package should be built from the source
+  package, because otherwise we don't know the source package can be
+  built.
+
+The release tarball should be put in a (public) archive. A digital
+signature using my personal PGP key should also be provided.
+
+The Debian source and binary packages should be uploaded to one or
+more APT repositories: my personal one, and selected packages also the
+Debian one. For uploading to Debian, the packages will need to be
+signed with my personal PGP key.
+
+(I am not going to give my CI access to my PGP key. Anything that
+needs to be signed with my own PGP key needs to be a manual step.)
+
+## Package versioning
+
+In Debian, packages are uploaded to the "unstable" section of the
+package archive, and then automatically copied into the "testing"
+section, and from there to the "stable" section, unless there are
+problems in a specific version of a package. Thus all binary packages
+are built against unstable, using versions of build dependencies in
+unstable. The process of copying via testing to stable can take years,
+and is a core part of how Debian achieves quality in its releases.
+(This is simplified and skips consideration like security updates and
+other updates directly to stable, which bypass unstable. These details
+are not relevant to this discussion, I think.)
+
+In my personal APT repository, no such copying takes place. A package
+built for unstable does not get copied into section with packages
+built for a released version of Debian, when Debian makes a release.
+
+Thus, for my personal APT repository, there may be several builds of
+any one version of Foo available.
+
+* foo 1.2, built for unstable
+* foo 1.2, built for Debian 9
+* foo 1.2, built for Debian 8
+
+In the future, that list may be expanded by having builds for several
+architectures:
+
+* foo 1.2, built for unstable, on amd64
+* foo 1.2, built for Debian 9, on amd64
+* foo 1.2, built for Debian 8, on amd64
+
+* foo 1.2, built for unstable, on riscv
+* foo 1.2, built for Debian 9, on riscv
+* foo 1.2, built for Debian 8, on riscv
+
+When I or my users upgrade our Debian hosts, say from Debian 8 to
+Debian 9, any packages from my personal APT archive should be updated
+accordingly. When a host running Debian 8, with foo 1.2 built for
+Debian 8, gets upgraded to Debian 9, foo should be upgraded to the
+version of 1.2 built for Debian 9.
+
+Because the Debian package manager works on combinations of package
+name and package version, that means that the version built for Debian
+8 should have a different, and lesser, version than the one built for
+Debian 9, even if the source code is identical except for the version
+number. The easiest way to achieve this is probably to build a
+different source package for each target Debian release. That source
+package has no other differences than the debian/changelog entry with
+a new version number, so it doesn't necessarily need to be stored
+persistently.
+
+(This is effectively what Debian's "binary NMU" uploads do: use the
+same source package version, but do a build varying only the version
+number. Debian does this, among other reasons, to force a re-build of
+a package using a new version of a build dependency, for which it is
+unnecessary to do a whole new sourceful upload. For my CI build
+purposes, it may be useful to have a new source package, for cases
+where there are other changes than the version number. This will need
+further thought and research.)
+
+Thus, I need to produce the following source and binary packages:
+
+* `foo_1.2-1.dsc` &mdash; source package for unstable
+* `foo_1.2-1.orig.tar.xz` &mdash; upstream tarball
+* `foo_1.2-1.debian.tar.xz` &mdash; Debian packaging and changes
+* `foo_1.2-1_amd64.deb` &mdash; binary package for unstable, amd64
+* `foo_1.2-1_riscv.deb` &mdash; binary package for unstable, riscv
+
+* `foo_1.2-1~debian8.dsc` &mdash; source package for Debian 8
+* `foo_1.2-1~debian8.debian.tar.xz` &mdash; Debian packaging and changes
+* `foo_1.2-1~debian8_amd64.deb` &mdash; binary package for Debian 8, amd64
+* `foo_1.2-1~debian8_riscv.deb` &mdash; binary package for Debian 8, riscv
+
+* `foo_1.2-1~debian9.dsc` &mdash; source package for Debian 9
+* `foo_1.2-1~debian9.debian.tar.xz` &mdash; Debian packaging and changes
+* `foo_1.2-1~debian9_amd64.deb` &mdash; binary package for Debian 9, amd64
+* `foo_1.2-1~debian9_riscv.deb` &mdash; binary package for Debian 9, riscv
+
+The `orig.tar.xz` file is a bit-by-bit copy of the upstream release
+tarball. The `debian.tar.xz` files have the Debian packaging files,
+plus any Debian specific changes. (For simplicity, I'm assuming a
+specific Debian source package format. The actual list of files may
+vary, but the `.dsc` file is crucial, and references the other files
+in the source package. Again, these details don't really matter for
+this discussion.)
+
+To upload to Debian, I would upload the `foo_1.2-1.dsc` source package
+from the list above, after downloading the files and signing them with
+my PGP key. To upload to my personal APT repository, I would upload
+all of them.
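The second upload path needs a target definition for the personal repository. A hypothetical `~/.dput.cf` stanza might look like this (the host name, method, and incoming directory are made-up examples, not my actual setup):

```ini
# Hypothetical dput target for a personal APT repository.
[myrepo]
fqdn = code.liw.fi
method = scp
incoming = /srv/apt/incoming
allow_unsigned_uploads = 0
```

With that in place, the upload would be roughly `debsign foo_1.2-1~debian9_amd64.changes` followed by `dput myrepo foo_1.2-1~debian9_amd64.changes`.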
+
+# Where should Debian packaging be stored in version control?
+
+There seems to be no strong consensus in Debian about where the
+packaging files (the debian/ subdirectory and its contents) should be
+stored in version control. Several approaches are common. The examples
+below use git as the version control system, as it's clearly the most
+common one now.
+
+* The "upstream does the packaging" approach: upstream's `foo.git`
+  also contains the Debian packaging. Packages are built using that.
+  This seems to be especially common for programs, where upstream and
+  the Debian package maintainer are the same entity. That's also the
+  OEM model.
+
+* The "clone upstream and add packaging" approach: the Debian package
+  maintainer clones the upstream repository, and adds the packaging
+  files in a separate branch. When upstream makes a release, the
+  master branch in the packaging repository is updated to match the
+  upstream's master branch, and the packaging branch is rebased on top
+  of that.
+
+* The "keep it separate" approach: the Debian packager puts the
+  packaging files in their own repository, and the source tree is
+  constructed from both the upstream repository and the packaging
+  repository.
+
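The "clone upstream and add packaging" workflow can be sketched with throwaway repositories, here driven from Python so each step is explicit. All repository and branch names are hypothetical, and it assumes git 2.28 or newer (for `git init -b`):

```python
# Sketch: clone upstream, keep packaging on its own branch, and
# rebase that branch when upstream releases. Hypothetical names.
import os
import subprocess
import tempfile

def git(*args, cwd):
    # Run git with a fixed identity so commits work anywhere.
    subprocess.run(["git", *args], cwd=cwd, check=True,
                   env={**os.environ,
                        "GIT_AUTHOR_NAME": "demo",
                        "GIT_AUTHOR_EMAIL": "demo@example.com",
                        "GIT_COMMITTER_NAME": "demo",
                        "GIT_COMMITTER_EMAIL": "demo@example.com"})

tmp = tempfile.mkdtemp()
upstream = os.path.join(tmp, "upstream")
packaging = os.path.join(tmp, "packaging")

# A stand-in for upstream's foo.git.
os.mkdir(upstream)
git("init", "-q", "-b", "master", cwd=upstream)
git("commit", "-q", "--allow-empty", "-m", "upstream release 1.2",
    cwd=upstream)

# The maintainer clones it and adds packaging on a separate branch.
git("clone", "-q", upstream, packaging, cwd=tmp)
git("checkout", "-q", "-b", "debian", cwd=packaging)
os.mkdir(os.path.join(packaging, "debian"))
with open(os.path.join(packaging, "debian", "changelog"), "w") as f:
    f.write("foo (1.2-1) unstable; urgency=medium\n")
git("add", "debian", cwd=packaging)
git("commit", "-q", "-m", "Add Debian packaging", cwd=packaging)

# Upstream releases 1.3: update master, rebase the packaging branch.
git("commit", "-q", "--allow-empty", "-m", "upstream release 1.3",
    cwd=upstream)
git("checkout", "-q", "master", cwd=packaging)
git("pull", "-q", cwd=packaging)
git("rebase", "-q", "master", "debian", cwd=packaging)
```

After the rebase, the `debian` branch carries both upstream releases plus the packaging commit on top.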
+For my own use, I prefer the "upstream does packaging" approach, as

(Diff truncated)
Change: publish ick 0.53 announcement
diff --git a/posts/2018/07/18/ick_version_0_53_released_ci_engine.mdwn b/posts/2018/07/18/ick_version_0_53_released_ci_engine.mdwn
index 73f861e..d55bdf6 100644
--- a/posts/2018/07/18/ick_version_0_53_released_ci_engine.mdwn
+++ b/posts/2018/07/18/ick_version_0_53_released_ci_engine.mdwn
@@ -1,6 +1,6 @@
 [[!meta title="Ick version 0.53 released: CI engine"]]
 [[!meta date="2018-07-18 18:14"]]
-[[!tag draft announcement ick]]
+[[!tag announcement ick]]
 
 I have just made a new release of ick, my CI system. The new version number
 is 0.53, and a summary of the changes is below. The source code is pushed

Fix: markup
diff --git a/posts/2018/07/18/ick_version_0_53_released_ci_engine.mdwn b/posts/2018/07/18/ick_version_0_53_released_ci_engine.mdwn
index 8d521ba..73f861e 100644
--- a/posts/2018/07/18/ick_version_0_53_released_ci_engine.mdwn
+++ b/posts/2018/07/18/ick_version_0_53_released_ci_engine.mdwn
@@ -5,7 +5,7 @@
 I have just made a new release of ick, my CI system. The new version number
 is 0.53, and a summary of the changes is below. The source code is pushed
 to my git server (git.liw.fi), and Debian packages to my APT repository
-(code.liw.fi/debian). See <https://ick.liw.fi/download/ for instructions.
+(code.liw.fi/debian). See <https://ick.liw.fi/download/> for instructions.
 
 See the website for more information: <https://ick.liw.fi/>
 

Publish log entry
diff --git a/posts/2018/07/18/ick_version_0_53_released_ci_engine.mdwn b/posts/2018/07/18/ick_version_0_53_released_ci_engine.mdwn
new file mode 100644
index 0000000..8d521ba
--- /dev/null
+++ b/posts/2018/07/18/ick_version_0_53_released_ci_engine.mdwn
@@ -0,0 +1,45 @@
+[[!meta title="Ick version 0.53 released: CI engine"]]
+[[!meta date="2018-07-18 18:14"]]
+[[!tag draft announcement ick]]
+
+I have just made a new release of ick, my CI system. The new version number
+is 0.53, and a summary of the changes is below. The source code is pushed
+to my git server (git.liw.fi), and Debian packages to my APT repository
+(code.liw.fi/debian). See <https://ick.liw.fi/download/ for instructions.
+
+See the website for more information: <https://ick.liw.fi/>
+
+A notable change from previous releases should be invisible to users: the
+release is built by ick2 itself, instead of my old mostly-manual CI script.
+This means I can abandon the old script and live in a brave, new world with
+tea, frozen-bubble, and deep meaningful relationships with good people.
+
+Version 0.53, released 2018-07-18
+------------------------------------
+
+* Notification mails now include controller URL, so it's easy to see
+  which ick instance they come from. They also include the exit code
+  (assuming the notification itself doesn't fail), and a clear SUCCESS
+  or FAILURE in the subject.
+
+* Icktool shows a more humane error message if getting a token fails,
+  instead of a Python stack trace.
+
+* Icktool will now give a more humane error message if user triggers
+  the build of a project that doesn't exist, instead of a Python stack
+  trace.
+
+* Icktool now looks for credentials using both the controller URL, and
+  the authentication URL.
+
+* Icktool can now download artifacts from the artifact store, with the
+  new `get-artifact` subcommand.
+
+* The `archive: workspace` action now takes an optional `globs` field,
+  which is a list of Unix filename globs, for what to include in the
+  artifact. Also, optionally the field `name_from` can be used to
+  specify the name of a project parameter, which contains the name of
+  the artifact. The default is the `artifact_name` parameter.
+
+* A Code of Conduct has been added to the ick project.
+  <https://ick.liw.fi/conduct/> has the canonical copy.

Publish log entry
diff --git a/posts/2018/06/21/ick_alpha-6_released_ci_cd_engine.mdwn b/posts/2018/06/21/ick_alpha-6_released_ci_cd_engine.mdwn
new file mode 100644
index 0000000..81d07d0
--- /dev/null
+++ b/posts/2018/06/21/ick_alpha-6_released_ci_cd_engine.mdwn
@@ -0,0 +1,28 @@
+[[!meta title="Ick ALPHA-6 released: CI/CD engine"]]
+[[!meta date="2018-06-21 19:28"]]
+[[!tag announcement ick]]
+
+It gives me no small amount of satisfaction to announce the ALPHA-6
+version of [ick][], my fledgling continuous integration and deployment
+engine. Ick has now been deployed and used by people other than
+myself.
+
+Ick can, right now:
+
+* Build system trees for containers.
+* Use system trees to run builds in containers.
+* Build Debian packages.
+* Publish Debian packages via its own APT repository.
+* Deploy to a production server.
+
+There are still many missing features. Ick is by no means ready to
+replace your existing CI/CD system, but if you'd like to have a look
+at ick, and help us make it the CI/CD system of your dreams, now is a
+good time to give it a whirl.
+
+(Big missing features: web UI, building for multiple CPU
+architectures, dependencies between projects, good documentation, a
+development community. I intend to make all of these happen in due
+time. Help would be welcome.)
+
+[ick]: https://ick.liw.fi/

Fix: bad link
diff --git a/posts/welcome.mdwn b/posts/welcome.mdwn
index b632217..4b62ecd 100644
--- a/posts/welcome.mdwn
+++ b/posts/welcome.mdwn
@@ -2,7 +2,7 @@
 [[!tag meta]]
 [[!meta date="2010-11-29 09:18:04 +0000"]]
 
-Once upon a time, I had a [diary on Advogato](http://advogato.org/person/liw/), 
+Once upon a time, I had a diary on Advogato,
 but rarely used that. Then I wrote my own web log engine, and [used that for 
 many years](http://liw.iki.fi/liw/log/). Then I switched my entire site to 
 [Ikiwiki](http://ikiwiki.info), but other things made 

Fix: add missing links
diff --git a/posts/2018/06/09/hacker_noir_developments.mdwn b/posts/2018/06/09/hacker_noir_developments.mdwn
index 604a25e..bf92e90 100644
--- a/posts/2018/06/09/hacker_noir_developments.mdwn
+++ b/posts/2018/06/09/hacker_noir_developments.mdwn
@@ -11,3 +11,8 @@ happens.
 The Assault chapter was hard to write. It's based on something that
 happened to me earlier this year. The Ambush chapter was much more
 fun.
+
+[Hacker Noir]: https://noir.liw.fi/
+[Patreon]: https://blog.liw.fi/posts/2018/03/08/new_chapter_of_hacker_noir_on_patreon/
+[Assault]: https://noir.liw.fi/assault/
+

Publish log entry
diff --git a/posts/2018/06/09/hacker_noir_developments.mdwn b/posts/2018/06/09/hacker_noir_developments.mdwn
new file mode 100644
index 0000000..604a25e
--- /dev/null
+++ b/posts/2018/06/09/hacker_noir_developments.mdwn
@@ -0,0 +1,13 @@
+[[!meta title="Hacker Noir developments"]]
+[[!meta date="2018-06-09 21:41"]]
+[[!tag noir]]
+
+I've been slowly writing on my would-be novel, [Hacker Noir][]. See also
+my [Patreon][] post. I've just pushed out a new public chapter,
+[Assault][], to the public website, and a patron-only chapter to
+Patreon: "Ambush", where the Team is ambushed, and then something bad
+happens.
+
+The Assault chapter was hard to write. It's based on something that
+happened to me earlier this year. The Ambush chapter was much more
+fun.

Fix: tag for Noir post
diff --git a/posts/2018/03/08/new_chapter_of_hacker_noir_on_patreon.mdwn b/posts/2018/03/08/new_chapter_of_hacker_noir_on_patreon.mdwn
index 155e0af..60580da 100644
--- a/posts/2018/03/08/new_chapter_of_hacker_noir_on_patreon.mdwn
+++ b/posts/2018/03/08/new_chapter_of_hacker_noir_on_patreon.mdwn
@@ -1,6 +1,6 @@
 [[!meta title="New chapter of Hacker Noir on Patreon"]]
 [[!meta date="2018-03-08 13:20"]]
-[[!tag hacker-noir patreon announcement]]
+[[!tag noir noir patreon announcement]]
 
 For the 2016 [NaNoWriMo][] I started writing a novel about software
 development, "Hacker Noir". I didn't finish it during that November,

Publish log entry
diff --git a/posts/2018/04/09/architecture_aromas.mdwn b/posts/2018/04/09/architecture_aromas.mdwn
new file mode 100644
index 0000000..7a84d52
--- /dev/null
+++ b/posts/2018/04/09/architecture_aromas.mdwn
@@ -0,0 +1,15 @@
+[[!meta title="Architecture aromas"]]
+[[!meta date="2018-04-09 09:12"]]
+[[!tag ]]
+
+"Code smells" are a well-known concept: they're things that make you
+worried your code is not of good quality, without necessarily being
+horribly bad in and of themselves. Like a bad smell in a kitchen that
+looks clean.
+
+I've lately been thinking of "architecture aromas", which indicate
+there might be something good about an architecture, but don't
+guarantee it.
+
+An example: you can't think of any component that could be removed
+from an architecture without sacrificing main functionality.

creating tag page tag/secuity
diff --git a/tag/secuity.mdwn b/tag/secuity.mdwn
new file mode 100644
index 0000000..9d3c218
--- /dev/null
+++ b/tag/secuity.mdwn
@@ -0,0 +1,4 @@
+[[!meta title="pages tagged secuity"]]
+
+[[!inline pages="tagged(secuity)" actions="no" archive="yes"
+feedshow=10]]

Publish log entry
diff --git a/posts/2018/04/07/storing_passwords_in_cleartext_don_t_ever.mdwn b/posts/2018/04/07/storing_passwords_in_cleartext_don_t_ever.mdwn
new file mode 100644
index 0000000..dad6c83
--- /dev/null
+++ b/posts/2018/04/07/storing_passwords_in_cleartext_don_t_ever.mdwn
@@ -0,0 +1,49 @@
+[[!meta title="Storing passwords in cleartext: don't ever"]]
+[[!meta date="2018-04-07 10:28"]]
+[[!tag rant secuity]]
+
+[T-mobile]: https://motherboard.vice.com/en_us/article/7xdeby/t-mobile-stores-part-of-customers-passwords-in-plaintext-says-it-has-amazingly-good-security
+[Qvisqve]: http://www.qvarn.org/qvisqve/
+[ick]: http://ick.liw.fi/
+[source]: http://git.qvarnlabs.net/qvisqve/tree/qvisqve_secrets/secrets.py
+
+This year I've implemented a rudimentary authentication server for
+work, called [Qvisqve][]. I am in the process for also using it for my
+current hobby project, [ick][], which provides HTTP APIs and needs
+authentication. Qvisqve stores passwords using scrypt: [source][].
+It's not been audited, and I'm not claiming it to be perfect, but it's
+at least not storing passwords in cleartext. (If you find a problem,
+do email me and tell me: liw@liw.fi.)
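To illustrate the alternative to cleartext storage, here is a minimal sketch of salted scrypt password hashing using only Python's standard library. This is an illustration of the idea, not Qvisqve's actual code, and the cost parameters are example values:

```python
# Minimal sketch of storing a password as a salted scrypt hash
# instead of cleartext. Illustrative only: not Qvisqve's code,
# and the cost parameters (n, r, p) are example values.
import hashlib
import hmac
import os

def hash_password(password):
    salt = os.urandom(16)  # unique random salt per user
    key = hashlib.scrypt(password.encode(), salt=salt,
                         n=2**14, r=8, p=1)
    return salt, key       # store these, never the password

def check_password(password, salt, key):
    candidate = hashlib.scrypt(password.encode(), salt=salt,
                               n=2**14, r=8, p=1)
    # Constant-time comparison, to avoid timing side channels.
    return hmac.compare_digest(candidate, key)

salt, key = hash_password("hunter2")
print(check_password("hunter2", salt, key))  # True
print(check_password("wrong", salt, key))    # False
```

Even if the stored salt and key leak, an attacker has to brute-force each password through a deliberately slow, memory-hard function.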
+
+This week, two news stories have reached me about service providers
+storing passwords in cleartext. One is a Finnish system for people
+starting a new business. The password database has leaked, with about
+130,000 cleartext passwords. The other is about [T-mobile][] in
+Austria bragging on Twitter that they store customer passwords in
+cleartext, and some people not liking that.
+
+In both cases, representatives of the company claim it's OK, because
+they have "good security". I disagree. Storing passwords in cleartext
+is itself shockingly bad security, regardless of how good your other
+security measures are, and whether your password database leaks or not.
+Claiming it's ever OK to store user passwords in cleartext in a
+service is incompetence at best.
+
+When you have large numbers of users, storing passwords in cleartext
+becomes more than just a small "oops". It becomes a security risk for
+all your users. It becomes gross incompetence.
+
+A bank is required to keep their customers' money secure. They're not
+allowed to store their customers' cash in a suitcase on the pavement
+without anyone guarding it. Even with a guard, it'd be negligent,
+incompetent, to do that. The bulk of the money gets stored in a vault,
+with alarms, and guards, and the bank spends much effort on making
+sure the money is safe. Everyone understands this.
+
+Similar requirements should be placed on those storing passwords, or
+other such security-sensitive information of their users.
+
+Storing passwords in cleartext, when you have large numbers of users,
+should be criminal negligence, and should carry legally mandated
+sanctions. This should happen when the situation is realised, even if
+the passwords haven't leaked.

Publish
diff --git a/posts/2018/03/08/new_chapter_of_hacker_noir_on_patreon.mdwn b/posts/2018/03/08/new_chapter_of_hacker_noir_on_patreon.mdwn
index 5d95a8c..155e0af 100644
--- a/posts/2018/03/08/new_chapter_of_hacker_noir_on_patreon.mdwn
+++ b/posts/2018/03/08/new_chapter_of_hacker_noir_on_patreon.mdwn
@@ -1,6 +1,6 @@
 [[!meta title="New chapter of Hacker Noir on Patreon"]]
 [[!meta date="2018-03-08 13:20"]]
-[[!tag draft hacker-noir patreon announcement]]
+[[!tag hacker-noir patreon announcement]]
 
 For the 2016 [NaNoWriMo][] I started writing a novel about software
 development, "Hacker Noir". I didn't finish it during that November,
@@ -18,10 +18,11 @@ Patreon.
 I don't expect to make a lot of money, but I am hoping having active
 supporters will motivate me to keep writing.
 
-I'm writing the first draft. It's likely to be as horrific as every
-first-time author's first draft is. If you'd like to read it as raw as
-it gets, please do. Once the first draft is finished, I expect to read
-it myself, and be horrified, and throw it all away, and start over.
+I'm writing the first draft of the book. It's likely to be as horrific
+as every first-time author's first draft is. If you'd like to read it
+as raw as it gets, please do. Once the first draft is finished, I
+expect to read it myself, and be horrified, and throw it all away, and
+start over.
 
 Also, I should go get some training on marketing.
 

creating tag page tag/hacker-noir
diff --git a/tag/hacker-noir.mdwn b/tag/hacker-noir.mdwn
new file mode 100644
index 0000000..785f19a
--- /dev/null
+++ b/tag/hacker-noir.mdwn
@@ -0,0 +1,4 @@
+[[!meta title="pages tagged hacker-noir"]]
+
+[[!inline pages="tagged(hacker-noir)" actions="no" archive="yes"
+feedshow=10]]

creating tag page tag/patreon
diff --git a/tag/patreon.mdwn b/tag/patreon.mdwn
new file mode 100644
index 0000000..0389d5c
--- /dev/null
+++ b/tag/patreon.mdwn
@@ -0,0 +1,4 @@
+[[!meta title="pages tagged patreon"]]
+
+[[!inline pages="tagged(patreon)" actions="no" archive="yes"
+feedshow=10]]

Publish log entry
diff --git a/posts/2018/03/08/new_chapter_of_hacker_noir_on_patreon.mdwn b/posts/2018/03/08/new_chapter_of_hacker_noir_on_patreon.mdwn
new file mode 100644
index 0000000..5d95a8c
--- /dev/null
+++ b/posts/2018/03/08/new_chapter_of_hacker_noir_on_patreon.mdwn
@@ -0,0 +1,32 @@
+[[!meta title="New chapter of Hacker Noir on Patreon"]]
+[[!meta date="2018-03-08 13:20"]]
+[[!tag draft hacker-noir patreon announcement]]
+
+For the 2016 [NaNoWriMo][] I started writing a novel about software
+development, "Hacker Noir". I didn't finish it during that November,
+and I still haven't finished it. I had a year long hiatus, due to work
+and life being stressful, when I didn't write on the novel at all.
+However, inspired by both the [Doctorow method][] and the
+[Seinfeld method][], I have recently started writing again.
+
+I've just published a new chapter. However, unlike last year, I'm
+publishing it on my [Patreon][] only, for the first month, and only
+for patrons. Then, next month, I'll be putting that chapter on the
+book's public site ([noir.liw.fi][]), and another new chapter on
+Patreon.
+
+I don't expect to make a lot of money, but I am hoping having active
+supporters will motivate me to keep writing.
+
+I'm writing the first draft. It's likely to be as horrific as every
+first-time author's first draft is. If you'd like to read it as raw as
+it gets, please do. Once the first draft is finished, I expect to read
+it myself, and be horrified, and throw it all away, and start over.
+
+Also, I should go get some training on marketing.
+
+[NaNoWriMo]: https://nanowrimo.org/
+[Doctorow method]: http://www.locusmag.com/Features/2009/01/cory-doctorow-writing-in-age-of.html
+[Seinfeld method]: https://www.writersstore.com/dont-break-the-chain-jerry-seinfeld/
+[Patreon]: https://www.patreon.com/user?u=7874725
+[noir.liw.fi]: http://noir.liw.fi/

Publish log entry
diff --git a/posts/2018/03/05/dpkg_maintainer_script_containerisation.mdwn b/posts/2018/03/05/dpkg_maintainer_script_containerisation.mdwn
new file mode 100644
index 0000000..f96b0df
--- /dev/null
+++ b/posts/2018/03/05/dpkg_maintainer_script_containerisation.mdwn
@@ -0,0 +1,52 @@
+[[!meta title="dpkg maintainer script containerisation"]]
+[[!meta date="2018-03-05 12:58"]]
+[[!tag debian]]
+
+Random crazy Debian idea of today: add support to dpkg so that it uses
+containers (or namespaces, or whatever works for this) for running
+package maintainer scripts (pre- and postinst, pre- and postrm), to
+prevent them from accidentally or maliciously writing to unwanted
+parts of the filesystem, or from doing unwanted network I/O.
+
+I think this would be useful for third-party packages, but also for
+packages from Debian itself. You heard it here first! Debian package
+maintainers have been known to make mistakes.
+
+Obviously there needs to be ways in which these restrictions can be
+overridden, but that override should be clear and obvious to the user
+(sysadmin), not something they notice because they happen to be
+running strace or tcpdump during the install.
+
+Corollary: dpkg could restrict where a .deb can place files based on
+the origin of the package.
+
+Example: Installing chrome.deb from Google installs a file in
+`/etc/apt/sources.list.d`, which is a surprise to some. If dpkg were
+to not allow that (as a file in the .deb, or a file created in
+postinst), unless the user was told and explicitly agreed to it, it
+would be less of a nasty surprise.
+
+Example: Some [stupid][] Debian package maintainer is very busy at
+work and does Debian hacking when they should really be sleeping, and
+types the following into their postrm script, while being asleep:
+
+    #!/bin/sh
+
+    PKG=perfectbackup
+    LIB="/var/lib/ $PKG"
+
+    rm -rf "$LIB"
+
+See the mistake? Ideally, this would be found during automated testing
+_before_ the package gets uploaded, but that assumes said package
+maintainer uses tools like piuparts.
+
+I think it'd be better if we didn't rely only on infallible,
+indefatigable people with perfect workflows and processes for safety.
+
+Having dpkg make the whole filesystem read-only, except for the parts
+that clearly belong to the package, based on some sensible set of
+rules, or based on a suitable override, would protect against mistakes
+like this.
+
+[stupid]: https://contributors.debian.org/contributor/lars@debian/

Publish log entry
diff --git a/posts/2018/02/17/what_is_debian_all_about_really_or_friction_packaging_complex_applications.mdwn b/posts/2018/02/17/what_is_debian_all_about_really_or_friction_packaging_complex_applications.mdwn
new file mode 100644
index 0000000..e244da0
--- /dev/null
+++ b/posts/2018/02/17/what_is_debian_all_about_really_or_friction_packaging_complex_applications.mdwn
@@ -0,0 +1,419 @@
+[[!meta title="What is Debian all about, really? Or: friction, packaging complex applications"]]
+[[!meta date="2018-02-17 15:05"]]
+[[!tag debian opinion]]
+
+[discussion]: https://lists.debian.org/debian-devel/2018/02/msg00295.html
+[blog post]: https://apebox.org/wordpress/linux/1229
+
+Another weekend, another big mailing list thread
+=============================================================================
+
+This weekend, those interested in Debian development have been having
+a [discussion][] on the debian-devel mailing list about "What can
+Debian do to provide complex applications to its users?". I'm
+commenting on that in my blog rather than the mailing list, since this
+got a bit too long to be usefully done in an email.
+
+directhex's recent [blog post][] "Packaging is hard. Packager-friendly
+is harder." is also relevant.
+
+
+The problem
+=============================================================================
+
+To start with, I don't think the email that started this discussion
+poses the right question. The problem is not really about complex
+applications, we already have those in Debian. See, for example,
+LibreOffice. The discussion is really about how Debian should deal
+with the way some types of applications are developed upstream these
+days. They're not all complex, and they're not all big, but as usual,
+things only get interesting when _n_ is big.
+
+A particularly clear example is the whole nodejs ecosystem, but it's
+not limited to that and it's not limited to web applications. This is
+also not the first time this topic arises, but we've never come to any
+good conclusion.
+
+My understanding of the problem is as follows:
+
+> A current trend in software development is to use programming
+> languages, often interpreted high level languages, combined with
+> heavy use of third-party libraries, and a language-specific package
+> manager for installing libraries for the developer to use, and
+> sometimes also for the sysadmin installing the software for
+> production to use. This bypasses the Linux distributions entirely.
+> The benefit is that it has allowed ecosystems for specific
+> programming languages where there is very little _friction_ for
+> using libraries written in that language to be used by developers,
+> speeding up development cycles a lot.
+
+
+When I was young(er) the world was horrible
+=============================================================================
+
+In comparison, in the old days, which for me means the 1990s, and
+before Debian took over my computing life, the cycle was something
+like this: 
+
+> I would be writing an application, and would need to use a library
+> to make some part of my application easier to write. To use that
+> library, I would download the source code archive of the latest
+> release, and laboriously decipher and follow the build and
+> installation instructions, fix any problems, rinse, repeat. After
+> getting the library installed, I would get back to developing my
+> application. Often the installation of the dependency would take
+> hours, so not a thing to be undertaken lightly.
+
+
+Debian made some things better
+=============================================================================
+
+With Debian, and apt, and having access to hundreds upon hundreds of
+libraries packaged for Debian, this become a much easier process.
+But only for the things packaged for Debian.
+
+For those developing and publishing libraries, Debian didn't make the
+process any easier. They would still have to publish a source code
+archive, but also hope that it would eventually be included in Debian.
+And updates to libraries in the Debian stable release would not get
+into the hands of users until the next Debian stable release. This is
+a lot of friction. For C libraries, that friction has traditionally
+been tolerable. The effort of making the library in the first place is
+considerable, so any friction added by Debian is small by comparison.
+
+
+The world has changed around Debian
+=============================================================================
+
+In the modern world, developing a new library is much easier, and so
+also the friction caused by Debian is much more of a hindrance. My
+understanding is that things now happen more like this:
+
+> I'm developing an application. I realise I could use a library. I
+> run the language-specific package manager (pip, cpan, gem, npm,
+> cargo, etc), it downloads the library, installs it in my home
+> directory or my application source tree, and in less than the time
+> it takes to have a sip of tea, I can get back to developing my
+> application.
+
+This has a lot less friction than the Debian route. The attraction to
+application programmers is clear. For library authors, the process is
+also much streamlined. Writing a library, especially in a high-level
+language, is fairly easy, and publishing it for others to use is quick
+and simple. This can lead to a virtuous cycle where I write a useful
+little library, you use and tell me about a bug or a missing feature,
+I add it, publish the new version, you use it, and we're both happy as
+can be. Where this might have taken weeks or months in the old days,
+it can now happen in minutes.
+
+
+The big question: why Debian?
+=============================================================================
+
+In this brave new world, **why would anyone bother with Debian
+anymore?** Or any traditional Linux distribution, since this isn't
+particularly specific to Debian. (But I mention Debian specifically,
+since it's what I know best.)
+
+A number of things have been mentioned or alluded to in the
+[discussion][] mentioned above, but I think it's good for the
+discussion to be explicit about them. As a computer user, software
+developer, system administrator, and software freedom _enthusiast_, I
+see the following reasons to continue to use Debian:
+
+* The freeness of software included in Debian has been vetted. I have
+  a **strong guarantee** that software included in Debian is free
+  software. This goes beyond the licence of that particular piece of
+  software, but includes practical considerations like the software
+  can actually be built using free tooling, and that I have access to
+  that tooling, because the tooling, too, is included in Debian.
+
+    * There was a time when Debian debated (with itself) whether it
+      was OK to include a binary that needed to be built using a
+      proprietary C compiler. We decided that it isn't, or not in the
+      main package archive.
+
+    * These days we have the question of whether "minimised
+      Javascript" is OK to be included in Debian, if it can't be
+      produced using tools packaged in Debian. My understanding is
+      that we have already decided that it's not, but the discussion
+      continues. To me, this seems equivalent to the above case.
+
+* I have a **strong guarantee** that software in a stable Debian
+  release won't change underneath me in incompatible ways, except in
+  special circumstances. This means that if I'm writing my application
+  and targeting Debian stable, the library API won't change, at least
+  not until the next Debian stable release. Likewise for every other
+  bit of software I use. Having things to continue to work without
+  having to worry is a good thing.
+
+    * Note that a side-effect of the low friction of library
+      development in current ecosystems sometimes results in the library
+      API changing. This would mean my application would need to
+      change to adapt to the API change. That's friction for my work.
+
+* I have a **strong guarantee** that a dependency won't just
+  disappear. Debian has a large mirror network of its package archive,
+  and there are easy tools to run my own mirror, if I want to. While
+  running my own mirror is possible for other package management
+  systems, each one adds to the friction.
+
+    * The nodejs NPM ecosystem seems to be especially vulnerable to
+      this. More than once packages have gone missing, causing other
+      projects, which depend on the missing packages, to start
+      failing.
+
+    * The way the Debian project is organised, it is almost impossible
+      for this to happen in Debian. Not only are package removals
+      carefully co-ordinated, packages that are depended on by other
+      packages aren't removed.
+
+* I have a **strong guarantee** that a Debian package I get from a
+  Debian mirror is the official package from Debian: either the actual
+  package uploaded by a Debian developer or a binary package built by
+  a trusted Debian build server. This is because Debian uses
+  cryptographic signatures of the package lists and I have a trust
+  path to the Debian signing key.
+
+    * At least some of the language specific package managers fail to
+      have such a trust path. This means that I have no guarantees
+      that the library package I download today, was the same code
+      uploaded by the library author.
+
+    * Note that https does not help here. It protects the transfer
+      from the package manager's web server to me, but makes absolutely
+      no guarantees about the validity of the package. There's been
+      enough cases of the package repository having been attacked that
+      this matters to me. Debian's signatures protect against malicious
+      changes on mirror hosts.
+
+* I have a **reasonably strong guarantee** that any problem I find can
+  be fixed, by me or someone else. This is not a strong guarantee,
+  because Debian can't do anything about insanely complicated code,
+  for example, but at least I can rely on being able to rebuild the
+  software. That's a basic requirement for fixing a bug.

(Diff truncated)
Fix: add missing link to Qvisqve
diff --git a/posts/2018/02/09/qvisqve_-_an_authorisation_server_first_alpha_release.mdwn b/posts/2018/02/09/qvisqve_-_an_authorisation_server_first_alpha_release.mdwn
index 1c21af9..7def4f9 100644
--- a/posts/2018/02/09/qvisqve_-_an_authorisation_server_first_alpha_release.mdwn
+++ b/posts/2018/02/09/qvisqve_-_an_authorisation_server_first_alpha_release.mdwn
@@ -3,9 +3,10 @@
 [[!tag announce qvisqve]]
 
 [QvarnLabs Ab]: http://qvarnlabs.com/
+[Qvisqve]: http://qvarn.org/qvisqve
 
 My company, [QvarnLabs Ab][], has today released the first alpha
-version of our new product, Qvisqve. Below is the press release. I
+version of our new product, [Qvisqve][]. Below is the press release. I
 wrote pretty much all the code, and it's free software (AGPL+).
 
 ---

creating tag page tag/qvisqve
diff --git a/tag/qvisqve.mdwn b/tag/qvisqve.mdwn
new file mode 100644
index 0000000..937a9c5
--- /dev/null
+++ b/tag/qvisqve.mdwn
@@ -0,0 +1,4 @@
+[[!meta title="pages tagged qvisqve"]]
+
+[[!inline pages="tagged(qvisqve)" actions="no" archive="yes"
+feedshow=10]]

Publish log entry
diff --git a/posts/2018/02/09/qvisqve_-_an_authorisation_server_first_alpha_release.mdwn b/posts/2018/02/09/qvisqve_-_an_authorisation_server_first_alpha_release.mdwn
new file mode 100644
index 0000000..1c21af9
--- /dev/null
+++ b/posts/2018/02/09/qvisqve_-_an_authorisation_server_first_alpha_release.mdwn
@@ -0,0 +1,39 @@
+[[!meta title="Qvisqve - an authorisation server, first alpha release"]]
+[[!meta date="2018-02-09 16:30"]]
+[[!tag announce qvisqve]]
+
+[QvarnLabs Ab]: http://qvarnlabs.com/
+
+My company, [QvarnLabs Ab][], has today released the first alpha
+version of our new product, Qvisqve. Below is the press release. I
+wrote pretty much all the code, and it's free software (AGPL+).
+
+---
+
+Helsinki, Finland - 2018-02-09. QvarnLabs Ab is happy to announce the
+first public release of Qvisqve, an authorisation server and identity
+provider for web and mobile applications. Qvisqve aims to be secure,
+lightweight, fast, and easy to manage. "We have big plans for Qvisqve,
+and helping customers manage cloud identities," says Kaius Häggblom,
+CEO of QvarnLabs.
+
+In this alpha release, Qvisqve supports the OAuth2 client credentials
+grant, which is useful for authenticating and authorising automated
+systems, including IoT devices. Qvisqve can be integrated with any web
+service that can use OAuth2 and JWT tokens for access control.
+
+Future releases will provide support for end-user authentication by
+implementing the OpenID Connect protocol, with a variety of
+authentication methods, including username/password, U2F, TOTP, and
+TLS client certificates. Multi-factor authentication will also be
+supported. "We will make Qvisqve flexible for any serious use case,"
+says Lars Wirzenius, software architect at QvarnLabs. "We hope Qvisqve
+will be useful to the software freedom ecosystem in general," Wirzenius
+adds.
+
+Qvisqve is developed and supported by QvarnLabs Ab, and works together
+with the Qvarn software, which is award-winning free and open-source
+software for managing sensitive personal information. Qvarn is in
+production use in Finland and Sweden and manages over a million
+identities. Both Qvisqve and Qvarn are released under the Affero
+General Public Licence.
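The client credentials grant mentioned in the press release above can be sketched briefly. This is a minimal illustration of the OAuth2 client credentials flow (RFC 6749, section 4.4), not Qvisqve's actual API; the token URL and credentials are hypothetical:

```python
import urllib.parse

def client_credentials_request(token_url, client_id, client_secret):
    # Form body for an OAuth2 client credentials grant. An automated
    # client, such as an IoT device, POSTs this to the authorisation
    # server's token endpoint and receives an access token (a JWT,
    # in Qvisqve's case) in the response.
    body = urllib.parse.urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
    })
    headers = {"Content-Type": "application/x-www-form-urlencoded"}
    return token_url, headers, body

# Hypothetical endpoint and credentials, for illustration only.
url, headers, body = client_credentials_request(
    "https://auth.example.com/token", "my-iot-device", "s3cret")
```

The resulting token is then presented as a `Bearer` token to any web service that uses JWTs for access control.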

Publish log entry
diff --git a/posts/2018/01/22/ick_a_continuous_integration_system.mdwn b/posts/2018/01/22/ick_a_continuous_integration_system.mdwn
new file mode 100644
index 0000000..6c4c6bb
--- /dev/null
+++ b/posts/2018/01/22/ick_a_continuous_integration_system.mdwn
@@ -0,0 +1,137 @@
+[[!meta title="Ick: a continuous integration system"]]
+[[!meta date="2018-01-22 20:11"]]
+[[!tag announcement ick]]
+
+**TL;DR:** Ick is a continuous integration or CI system. See
+<http://ick.liw.fi/> for more information.
+
+More verbose version follows.
+
+First public version released
+-----------------------------------------------------------------------------
+
+The world may not need yet another continuous integration system (CI),
+but I do. I've been unsatisfied with the ones I've tried or looked at.
+More importantly, I am interested in a few things that are more
+powerful than what I've ever even heard of. So I've started writing my
+own.
+
+My new personal hobby project is called ick. It is a CI system, which
+means it can run automated steps for building and testing software.
+The home page is at <http://ick.liw.fi/>, and the [download][] page
+has links to the source code and .deb packages and an Ansible playbook
+for installing it.
+
+[download]: http://ick.liw.fi/download/
+
+I have now made the first publicly advertised release, dubbed ALPHA-1,
+version number 0.23. It is of alpha quality, which means it doesn't
+have all the intended features, and if any of the features it does
+have work, you should consider yourself lucky.
+
+Invitation to contribute
+-----------------------------------------------------------------------------
+
+Ick has so far been my personal project. I am hoping to make it more
+than that, and invite contributions. See the [governance][] page for
+the constitution, the [getting started][] page for tips on how to
+start contributing, and the [contact][] page for how to get in touch.
+
+[governance]: http://ick.liw.fi/governance/
+[getting started]: http://ick.liw.fi/getting-started/
+[contact]: http://ick.liw.fi/contact/
+
+Architecture
+-----------------------------------------------------------------------------
+
+Ick has an architecture consisting of several components that
+communicate over HTTPS using RESTful APIs and JSON for structured
+data. See the [architecture][] page for details.
+
+[architecture]: http://ick.liw.fi/architecture/
+
+Manifesto
+-----------------------------------------------------------------------------
+
+Continuous integration (CI) is a powerful tool for software
+development.
+It should not be tedious, fragile, or annoying. It should be quick and
+simple to set up, and work quietly in the background unless there's a
+problem in the code being built and tested.
+
+A CI system should be simple, easy, clear, clean, scalable, fast,
+comprehensible, transparent, reliable, and boost your productivity to
+get things done. It should not require a lot of effort to set up, a
+lot of hardware just for the CI, or frequent attention to keep it
+working, and developers should never have to wonder why something
+isn't working.
+
+A CI system should be flexible to suit your build and test needs. It
+should support multiple types of workers, as far as CPU architecture
+and operating system version are concerned.
+
+Also, like all software, CI should be fully and completely free
+software and your instance should be under your control.
+
+(Ick is little of this yet, but it will try to become all of it. In
+the best possible taste.)
+
+Dreams of the future
+-----------------------------------------------------------------------------
+
+In the long run, I would like ick to have features like the ones
+described below. It may take a while to get all of them implemented.
+
+* A build may be triggered by a variety of events. Time is an obvious
+  event, as is the project's source code repository changing. More
+  powerfully, any build dependency changing, regardless of whether the
+  dependency comes from another project built by ick, or a package
+  from, say, Debian: ick should keep track of all the packages that
+  get installed into the build environment of a project, and if any of
+  their versions change, it should trigger the project build and tests
+  again.
+
+* Ick should support building in (or against) any reasonable target,
+  including any Linux distribution, any free operating system, and any
+  non-free operating system that isn't brain-dead.
+
+* Ick should manage the build environment itself, and be able to do
+  builds that are isolated from the build host or the network. This
+  partially works: one can ask ick to build a container and run a
+  build in the container. The container is implemented using
+  systemd-nspawn. This can be improved upon, however. (If you think
+  Docker is the only way to go, please contribute support for that.)
+
+* Ick should support any workers that it can control over ssh or a
+  serial port or other such neutral communication channel, without
+  having to install an agent of any kind on them. Ick won't assume
+  that it can have, say, a full Java runtime, so that the worker can
+  be, say, a microcontroller.
+
+* Ick should be able to effortlessly handle very large numbers of
+  projects. I'm thinking here that it should be able to keep up with
+  building everything in Debian, whenever a new Debian source package
+  is uploaded. (Obviously whether that is feasible depends on whether
+  there are enough resources to actually build things, but ick itself
+  should not be the bottleneck.)
+
+* Ick should optionally provision workers as needed. If all workers of
+  a certain type are busy, and ick's been configured to allow using
+  more resources, it should do so. This seems like it would be easy to
+  do with virtual machines, containers, cloud providers, etc.
+
+* Ick should be flexible in how it can notify interested parties,
+  particularly about failures. It should allow an interested party to
+  ask to be notified over IRC, Matrix, Mastodon, Twitter, email, SMS,
+  or even by a phone call and speech synthesiser. "Hello, interested
+  party. It is 04:00 and you wanted to be told when the hello package
+  has been built for RISC-V."
+
+
+Please give feedback
+-----------------------------------------------------------------------------
+
+If you try ick, or even if you've just read this far, please share
+your thoughts on it. See the [contact][] page for where to send it.
+Public feedback is preferred over private, but if you prefer private,
+that's OK too.
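The dependency-triggered rebuild idea above amounts to: record the package versions installed in a project's build environment, and trigger a new build when any of them differs. A minimal sketch of that comparison (the snapshot data is hypothetical, and this is not ick's actual implementation):

```python
def changed_packages(recorded, current):
    # Compare two snapshots of a build environment: mappings from
    # package name to installed version. A package counts as changed
    # if its version differs, or if it appeared or disappeared.
    names = set(recorded) | set(current)
    return sorted(n for n in names if recorded.get(n) != current.get(n))

def should_rebuild(recorded, current):
    # Trigger a new build and test run if anything changed.
    return bool(changed_packages(recorded, current))

# Hypothetical snapshots taken at the previous build and now.
previous = {"gcc": "7.2.0-19", "libc6": "2.25-5", "make": "4.1-9.1"}
now = {"gcc": "7.2.0-20", "libc6": "2.25-5", "make": "4.1-9.1"}
```

Here `should_rebuild(previous, now)` would be true, since gcc's version changed, regardless of whether gcc came from another ick-built project or from a Debian upload.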

Publish: pull request commentary
diff --git a/posts/2018/01/09/on_using_github_and_a_pr_based_workflow.mdwn b/posts/2018/01/09/on_using_github_and_a_pr_based_workflow.mdwn
index 2392d4b..f224e68 100644
--- a/posts/2018/01/09/on_using_github_and_a_pr_based_workflow.mdwn
+++ b/posts/2018/01/09/on_using_github_and_a_pr_based_workflow.mdwn
@@ -1,6 +1,6 @@
 [[!meta title="On using Github and a PR based workflow"]]
 [[!meta date="2018-01-09 17:25"]]
-[[!tag draft git github pull-request workflow]]
+[[!tag git github pull-request workflow]]
 
 In mid-2017, I decided to experiment with using pull-requests (PRs) on
 Github. I've read that they make development using git much nicer. The

Fix: typos and other language fixes
diff --git a/posts/2018/01/09/on_using_github_and_a_pr_based_workflow.mdwn b/posts/2018/01/09/on_using_github_and_a_pr_based_workflow.mdwn
index 1dd37c8..2392d4b 100644
--- a/posts/2018/01/09/on_using_github_and_a_pr_based_workflow.mdwn
+++ b/posts/2018/01/09/on_using_github_and_a_pr_based_workflow.mdwn
@@ -2,18 +2,18 @@
 [[!meta date="2018-01-09 17:25"]]
 [[!tag draft git github pull-request workflow]]
 
-In 2017, I decided to experiment with using pull-requests (PRs) on
+In mid-2017, I decided to experiment with using pull-requests (PRs) on
 Github. I've read that they make development using git much nicer. The
 end result of my experiment is that I'm not going to adopt a PR based
 workflow.
 
-The project is [vmdb2][], a tool for generating disk images with
-Debian. I put it up on Github, and invited people to send pull
-requests or patches, as they wished. I got a bunch of PRs, mostly from
-two people. For a little while, there was a flurry of activity. It has
-has now calmed down, I think primarily because the software has
-reached a state where the two contributors find it useful and don't
-need it to be fixed or have new features added.
+The project I chose for my experiment is [vmdb2][], a tool for
+generating disk images with Debian. I put it up on Github, and invited
+people to send pull requests or patches, as they wished. I got a bunch
+of PRs, mostly from two people. For a little while, there was a flurry
+of activity. It has has now calmed down, I think primarily because the
+software has reached a state where the two contributors find it useful
+and don't need it to be fixed or have new features added.
 
 [vmdb2]: https://github.com/larswirzenius/vmdb2
 
@@ -23,7 +23,7 @@ PRs and a workflow based on them:
 
 * they reduce some of the friction of contributing, making it easier
   for people to contribute; from a contributor point of view PRs
-  certainly seems like a better way than sending patches over email or
+  certainly seem like a better way than sending patches over email or
   sending a message asking to pull from a remote branch
 * merging a PR in the web UI is very easy
 
@@ -35,14 +35,14 @@ I also found some bad things:
   basic "something happened" notification, which prompt me to check
   the web UI
 * PRs are a centralised feature, which is something I prefer to avoid;
-  further, thery's tied to Github, which is something I object to on
+  further, they're tied to Github, which is something I object to on
   principle, since it's not free software
   - note that Gitlab provides support for PRs as well, but I've not
     tried it; it's an "open core" system, which is not fully free
     software in my opinion, and so I'm wary of Gitlab; it's also a
     centralised solution
   - a "distributed PR" system would be nice
-* mering a PR is perhaps too easy, and I worry that it leads me to
+* merging a PR is perhaps too easy, and I worry that it leads me to
   merging without sufficient review (that is of course a personal
   flaw)
 

creating tag page tag/github
diff --git a/tag/github.mdwn b/tag/github.mdwn
new file mode 100644
index 0000000..831b39d
--- /dev/null
+++ b/tag/github.mdwn
@@ -0,0 +1,4 @@
+[[!meta title="pages tagged github"]]
+
+[[!inline pages="tagged(github)" actions="no" archive="yes"
+feedshow=10]]

creating tag page tag/workflow
diff --git a/tag/workflow.mdwn b/tag/workflow.mdwn
new file mode 100644
index 0000000..4028013
--- /dev/null
+++ b/tag/workflow.mdwn
@@ -0,0 +1,4 @@
+[[!meta title="pages tagged workflow"]]
+
+[[!inline pages="tagged(workflow)" actions="no" archive="yes"
+feedshow=10]]

creating tag page tag/pull-request
diff --git a/tag/pull-request.mdwn b/tag/pull-request.mdwn
new file mode 100644
index 0000000..34c1e03
--- /dev/null
+++ b/tag/pull-request.mdwn
@@ -0,0 +1,4 @@
+[[!meta title="pages tagged pull-request"]]
+
+[[!inline pages="tagged(pull-request)" actions="no" archive="yes"
+feedshow=10]]

Publish log entry
diff --git a/posts/2018/01/09/on_using_github_and_a_pr_based_workflow.mdwn b/posts/2018/01/09/on_using_github_and_a_pr_based_workflow.mdwn
new file mode 100644
index 0000000..1dd37c8
--- /dev/null
+++ b/posts/2018/01/09/on_using_github_and_a_pr_based_workflow.mdwn
@@ -0,0 +1,55 @@
+[[!meta title="On using Github and a PR based workflow"]]
+[[!meta date="2018-01-09 17:25"]]
+[[!tag draft git github pull-request workflow]]
+
+In 2017, I decided to experiment with using pull-requests (PRs) on
+Github. I've read that they make development using git much nicer. The
+end result of my experiment is that I'm not going to adopt a PR based
+workflow.
+
+The project is [vmdb2][], a tool for generating disk images with
+Debian. I put it up on Github, and invited people to send pull
+requests or patches, as they wished. I got a bunch of PRs, mostly from
+two people. For a little while, there was a flurry of activity. It has
+has now calmed down, I think primarily because the software has
+reached a state where the two contributors find it useful and don't
+need it to be fixed or have new features added.
+
+[vmdb2]: https://github.com/larswirzenius/vmdb2
+
+This was my first experience with PRs. I decided to give it until the
+end of 2017 until I made any conclusions. I've found good things about
+PRs and a workflow based on them:
+
+* they reduce some of the friction of contributing, making it easier
+  for people to contribute; from a contributor point of view PRs
+  certainly seems like a better way than sending patches over email or
+  sending a message asking to pull from a remote branch
+* merging a PR in the web UI is very easy
+
+I also found some bad things:
+
+* I really don't like the Github UI or UX, in general or for PRs in
+  particular
+* especially the emails Github sends about PRs seemed useless beyond a
+  basic "something happened" notification, which prompt me to check
+  the web UI
+* PRs are a centralised feature, which is something I prefer to avoid;
+  further, thery's tied to Github, which is something I object to on
+  principle, since it's not free software
+  - note that Gitlab provides support for PRs as well, but I've not
+    tried it; it's an "open core" system, which is not fully free
+    software in my opinion, and so I'm wary of Gitlab; it's also a
+    centralised solution
+  - a "distributed PR" system would be nice
+* mering a PR is perhaps too easy, and I worry that it leads me to
+  merging without sufficient review (that is of course a personal
+  flaw)
+
+In summary, PRs seem to me to prioritise making life easier for
+contributors, especially occasional contributors or "drive-by"
+contributors. I think I prefer to care more about frequent
+contributors, and myself as the person who merges contributions. For
+now, I'm not going to adopt a PR based workflow.
+
+(I expect people to mock me for this.)

Fix: drop meta author
diff --git a/posts/2017/12/17/the_proof_is_in_the_pudding.mdwn b/posts/2017/12/17/the_proof_is_in_the_pudding.mdwn
index 44f18a3..655cf28 100644
--- a/posts/2017/12/17/the_proof_is_in_the_pudding.mdwn
+++ b/posts/2017/12/17/the_proof_is_in_the_pudding.mdwn
@@ -1,6 +1,5 @@
 [[!meta title="The proof is in the pudding"]]
 [[!meta date="2017-12-17 11:09"]]
-[[!meta author=liw]]
 [[!tag philosophical sofware-development success]]
 
 I wrote these when I woke up one night and had trouble getting back to

Fix: meta directive
diff --git a/posts/2017/12/17/the_proof_is_in_the_pudding.mdwn b/posts/2017/12/17/the_proof_is_in_the_pudding.mdwn
index 6bf542c..44f18a3 100644
--- a/posts/2017/12/17/the_proof_is_in_the_pudding.mdwn
+++ b/posts/2017/12/17/the_proof_is_in_the_pudding.mdwn
@@ -1,6 +1,6 @@
 [[!meta title="The proof is in the pudding"]]
 [[!meta date="2017-12-17 11:09"]]
-[[!mete author=liw]]
+[[!meta author=liw]]
 [[!tag philosophical sofware-development success]]
 
 I wrote these when I woke up one night and had trouble getting back to

creating tag page tag/sofware-development
diff --git a/tag/sofware-development.mdwn b/tag/sofware-development.mdwn
new file mode 100644
index 0000000..5e2fc1e
--- /dev/null
+++ b/tag/sofware-development.mdwn
@@ -0,0 +1,4 @@
+[[!meta title="pages tagged sofware-development"]]
+
+[[!inline pages="tagged(sofware-development)" actions="no" archive="yes"
+feedshow=10]]

creating tag page tag/philosophical
diff --git a/tag/philosophical.mdwn b/tag/philosophical.mdwn
new file mode 100644
index 0000000..86f3dae
--- /dev/null
+++ b/tag/philosophical.mdwn
@@ -0,0 +1,4 @@
+[[!meta title="pages tagged philosophical"]]
+
+[[!inline pages="tagged(philosophical)" actions="no" archive="yes"
+feedshow=10]]

creating tag page tag/success
diff --git a/tag/success.mdwn b/tag/success.mdwn
new file mode 100644
index 0000000..ad3a5fc
--- /dev/null
+++ b/tag/success.mdwn
@@ -0,0 +1,4 @@
+[[!meta title="pages tagged success"]]
+
+[[!inline pages="tagged(success)" actions="no" archive="yes"
+feedshow=10]]

Publish log entry
diff --git a/posts/2017/12/17/the_proof_is_in_the_pudding.mdwn b/posts/2017/12/17/the_proof_is_in_the_pudding.mdwn
new file mode 100644
index 0000000..6bf542c
--- /dev/null
+++ b/posts/2017/12/17/the_proof_is_in_the_pudding.mdwn
@@ -0,0 +1,25 @@
+[[!meta title="The proof is in the pudding"]]
+[[!meta date="2017-12-17 11:09"]]
+[[!mete author=liw]]
+[[!tag philosophical sofware-development success]]
+
+I wrote these when I woke up one night and had trouble getting back to
+sleep, and spent a while in a very philosophical mood thinking about
+life, success, and productivity as a programmer.
+
+Imagine you're developing a piece of software.
+
+* You don't know it works, unless you've used it.
+
+* You don't know it's good, unless people tell you it is.
+
+* You don't know you can do it, unless you've already done it.
+
+* You don't know it can handle a given load, unless you've already tried it.
+
+* The real bottlenecks are always a surprise, the first time you measure.
+
+* It's not ready for production until it's been used in production.
+
+* Your automated tests always miss something, but with only manual
+  tests, you always miss more.

creating tag page tag/analogy
diff --git a/tag/analogy.mdwn b/tag/analogy.mdwn
new file mode 100644
index 0000000..37520cc
--- /dev/null
+++ b/tag/analogy.mdwn
@@ -0,0 +1,4 @@
+[[!meta title="pages tagged analogy"]]
+
+[[!inline pages="tagged(analogy)" actions="no" archive="yes"
+feedshow=10]]

Publish log entry
diff --git a/posts/2017/11/13/unit_and_integration_testing_an_analogy_with_cars.mdwn b/posts/2017/11/13/unit_and_integration_testing_an_analogy_with_cars.mdwn
new file mode 100644
index 0000000..a542de2
--- /dev/null
+++ b/posts/2017/11/13/unit_and_integration_testing_an_analogy_with_cars.mdwn
@@ -0,0 +1,21 @@
+[[!meta title="Unit and integration testing: an analogy with cars"]]
+[[!meta date="2017-11-13 00:09"]]
+[[!tag programming analogy testing]]
+
+A unit is a part of your program you can test in isolation. You write
+unit tests to test all aspects of it that you care about. If all your
+unit tests pass, you should know that your unit works well.
+
+Integration tests are for testing that when your various well-tested,
+high quality units are combined, integrated, they work together.
+Integration tests test the integration, not the individual units.
+
+You could think of building a car. Your units are the ball bearings,
+axles, wheels, brakes, etc. Your unit tests for the ball bearings
+might test, for example, that they can handle a billion rotations, at
+various temperatures, etc. Your integration test would assume the ball
+bearings work, and should instead test that the ball bearings are
+installed in the right way, so that the car, as a whole, can run for
+many kilometers, accelerate and brake every kilometer, use only so
+much fuel, produce only so much pollution, and not kill passengers in
+case of a crash.
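The analogy can be made concrete with a toy example. The `Bearing` and `Car` classes and the rotations-per-kilometre figure below are invented purely for illustration:

```python
class Bearing:
    # The unit: testable entirely in isolation.
    def __init__(self):
        self.rotations = 0

    def rotate(self, n):
        self.rotations += n

class Car:
    # The integrated whole: built out of already-tested units.
    def __init__(self, bearings):
        self.bearings = bearings
        self.km = 0

    def drive(self, km):
        # Assume 500 rotations per kilometre, for illustration.
        for b in self.bearings:
            b.rotate(km * 500)
        self.km += km

# Unit test: the bearing, in isolation, handles many rotations.
b = Bearing()
b.rotate(10**6)
assert b.rotations == 10**6

# Integration test: assume bearings work; check they are installed so
# that the car as a whole drives the requested distance.
car = Car([Bearing() for _ in range(4)])
car.drive(2)
assert car.km == 2
assert all(b.rotations == 1000 for b in car.bearings)
```

Note that the integration test never re-tests rotation counting itself; it only checks that the assembled car behaves correctly.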

creating tag page tag/gdpr
diff --git a/tag/gdpr.mdwn b/tag/gdpr.mdwn
new file mode 100644
index 0000000..28e24c5
--- /dev/null
+++ b/tag/gdpr.mdwn
@@ -0,0 +1,4 @@
+[[!meta title="pages tagged gdpr"]]
+
+[[!inline pages="tagged(gdpr)" actions="no" archive="yes"
+feedshow=10]]

Publish log entry
diff --git a/posts/2017/10/10/debian_and_the_gdpr.mdwn b/posts/2017/10/10/debian_and_the_gdpr.mdwn
new file mode 100644
index 0000000..6d15a20
--- /dev/null
+++ b/posts/2017/10/10/debian_and_the_gdpr.mdwn
@@ -0,0 +1,54 @@
+[[!meta title="Debian and the GDPR"]]
+[[!meta date="2017-10-10 18:08"]]
+[[!tag debian gdpr privacy]]
+
+[GDPR][] is a new EU regulation for privacy. The name is short for
+"General Data Protection Regulation" and it covers all organisations
+that handle personal data of EU citizens and EU residents. It will
+become enforceable May 25, 2018 ([Towel Day][]). This will affect
+Debian. I think it's time for Debian to start working on compliance,
+mainly because the GDPR requires sensible things.
+
+[GDPR]: https://en.wikipedia.org/wiki/General_Data_Protection_Regulation
+[Towel Day]: https://en.wikipedia.org/wiki/Towel_Day
+
+I'm not an expert on GDPR legislation, but here's my understanding of
+what we in Debian should do:
+
+* do a privacy impact assessment, to review and **document** what
+  data we have and collect, and what risks a leak would pose to the
+  people whose personal data it is
+
+* only collect personal information for specific purposes, and only
+  use the data for those purposes
+
+* get explicit consent from each person for all collection and use of
+  their personal information; archive this consent (e.g., list
+  subscription confirmations)
+
+* allow each person to get a copy of all the personal information
+  we have about them, in a portable manner, and let them correct it if
+  it's wrong
+
+* allow people to have their personal information erased
+
+* maybe appoint one or more data protection officers (not sure this is
+  required for Debian)
+
+There's more, but let's start with those.
+
+I think Debian has at least the following systems that will need to be
+reviewed with regards to the GDPR:
+
+* db.debian.org - Debian project members, "Debian developers"
+* nm.debian.org
+* contributors.debian.org
+* lists.debian.org - **at least** membership lists, maybe archives
+* possibly irc servers and log files
+* mail server log files
+* web server log files
+* version control services and repositories
+
+There may be more; these are just off the top of my head.
+
+I expect that mostly Debian will be OK, but we can't just assume that.

Drop: comments from front page, comment feed
diff --git a/comments.mdwn b/comments.mdwn
deleted file mode 100644
index e22b50a..0000000
--- a/comments.mdwn
+++ /dev/null
@@ -1,10 +0,0 @@
-[[!sidebar content="""
-[[!inline pages="comment_pending(./posts/*)" feedfile=pendingmoderation
-description="comments pending moderation" show=-1]]
-Comments in the [[!commentmoderation desc="moderation queue"]]:
-[[!pagecount pages="comment_pending(./posts/*)"]]
-"""]]
-
-Recent comments on posts in the [[blog|index]]:
-[[!inline pages="./posts/*/Discussion or comment(./posts/*)"
-template="comment"]]
diff --git a/index.mdwn b/index.mdwn
index 65fdc03..0343655 100644
--- a/index.mdwn
+++ b/index.mdwn
@@ -1,9 +1,8 @@
 Welcome to my web log. See the [[first_post|posts/welcome]] for an
-introduction. See the [[archive|posts]] page for all posts, and
-[[comments]] for a feed of comments only. (There is an
+introduction. See the [[archive|posts]] page for all posts. (There is an
 [[english language feed|englishfeed]] if you don't want to see Finnish.)
 
-[[Archives|posts]] [[Tags|tag]] [[Recent Comments|comments]]
+[[Archives|posts]] [[Tags|tag]]
 [Moderation policy](http://liw.fi/moderation/)
 [Main site](http://liw.fi/)
 
@@ -11,12 +10,8 @@ All content outside of comments is copyrighted by Lars Wirzenius, and
 licensed under a <a rel="license"
 href="http://creativecommons.org/licenses/by-sa/3.0/">Creative Commons
 Attribution-Share Alike 3.0 Unported License</a>. Comments are
-copyrighted by their authors.
+copyrighted by their authors. (No new comments are allowed.)
 
 ---
 
 [[!inline pages="page(posts/*) and !Discussion and !tagged(draft)"]]
-
----
-
-For more, see [[the archive|posts]].

Publish: draft article on attracting contributors
diff --git a/posts/2017/10/01/attracting_contributors_to_a_new_project.mdwn b/posts/2017/10/01/attracting_contributors_to_a_new_project.mdwn
index 363f0ef..c462f51 100644
--- a/posts/2017/10/01/attracting_contributors_to_a_new_project.mdwn
+++ b/posts/2017/10/01/attracting_contributors_to_a_new_project.mdwn
@@ -1,6 +1,6 @@
 [[!meta title="Attracting contributors to a new project"]]
 [[!meta date="2017-10-01 10:21"]]
-[[!tag draft meta community]]
+[[!tag meta community]]
 
 How do you attract contributors to a new free software project?
 

creating tag page tag/community
diff --git a/tag/community.mdwn b/tag/community.mdwn
new file mode 100644
index 0000000..20c713f
--- /dev/null
+++ b/tag/community.mdwn
@@ -0,0 +1,4 @@
+[[!meta title="pages tagged community"]]
+
+[[!inline pages="tagged(community)" actions="no" archive="yes"
+feedshow=10]]