Welcome to my web log. See the first post for an introduction. See the archive page for all posts. (There is an English language feed if you don’t want to see Finnish.)


Me on Mastodon, for anything that is too small to warrant a blog post.

All content outside of comments is copyrighted by Lars Wirzenius, and licensed under a Creative Commons Attribution-Share Alike 3.0 Unported License. Comments are copyrighted by their authors. (No new comments are allowed.)


Billion file filesystem

For a lark I made an ext4 file system with a billion empty files. https://gitlab.com/larswirzenius/create-empty-files has the program I wrote for this. I’ve done this before, but this time I made it a little simpler for me to do it again: everything in one Rust program rather than clunky scripts.

The disk image starts out as a terabyte-size sparse file, taking no space, and a file system is put in that. Thus, the image is all zeroes except for what actually gets written to it by the file system. With a billion empty files, the image uses 276 GiB disk space. On my desktop machine it took about 26 hours to create.
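Creating such a sparse file is a one-liner on top of std; a minimal Rust sketch of the idea (this is my illustration, not the actual create-empty-files program; the path and helper name are made up):

```rust
use std::fs::File;
use std::io;

/// Create a sparse file of the given size: the file reports `size` bytes,
/// but no blocks are allocated until something is actually written.
fn create_sparse(path: &str, size: u64) -> io::Result<()> {
    let f = File::create(path)?;
    f.set_len(size)?; // extends the file with a hole, not real zero blocks
    Ok(())
}

fn main() -> io::Result<()> {
    // A terabyte-sized image to hold the file system (path is illustrative).
    create_sparse("/tmp/billion.img", 1 << 40)
}
```

The file system is then created inside the image, so only the blocks the file system actually writes take up disk space.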

Compressing makes it smaller:

command              size in GiB   size in bytes
gzip -1              20            20546385531
gzip -9              15            15948883594
xz -T0 -1            11            11389236780
xz -T0 -M 60GiB -9   10            10465717256

(I did not measure compression times, sorry. If that interests you, you’ll have to do the work yourself.)

If you have a use for such a filesystem, please get in touch. However, if you can spend a day, you can easily create one yourself and save me a bit of bandwidth.

If you’d like a different filesystem, it should be easy enough to adjust the program I wrote to use another filesystem type.

Of what use is a filesystem with many empty files? You could use it to benchmark tools that operate on all the files in a directory tree. For example, what is the fastest way to list all those files? Delete them? Back them up? Create an archive file with all of them? (These might be interesting projects for someone in university or college, maybe?)
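As a baseline for such benchmarks, here is a naive Rust sketch that recursively counts all the files in a tree; any cleverer traversal can then be timed against it (this is my sketch, not tuned for a billion files):

```rust
use std::fs;
use std::io;
use std::path::Path;
use std::time::Instant;

/// Recursively count regular files (and symlinks etc.) under `dir`.
fn count_files(dir: &Path) -> io::Result<u64> {
    let mut n = 0;
    for entry in fs::read_dir(dir)? {
        let entry = entry?;
        if entry.file_type()?.is_dir() {
            n += count_files(&entry.path())?;
        } else {
            n += 1;
        }
    }
    Ok(n)
}

fn main() -> io::Result<()> {
    let start = Instant::now();
    let n = count_files(Path::new("."))?;
    println!("{} files in {:?}", n, start.elapsed());
    Ok(())
}
```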

The XY solution

The “XY problem”:

The XY problem is asking about your attempted solution rather than your actual problem. This leads to enormous amounts of wasted time and energy, both on the part of people asking for help, and on the part of those providing help.

I am coining a new term: The “XY solution”.

The XY solution is when you’re building a solution for problem X, and other people think existing solution Y would help with that, because they share some similarities, so they push you to use Y, even though when you look at it carefully, Y is not actually suitable for solving X, especially with the constraints or requirements you have.

I had this when I first started talking publicly about implementing backup software. I had specific goals and requirements that had led me to a particular approach, and well-meaning strangers kept strongly pushing me towards building my backup solution on top of tar, zip, or rsync, or some other existing thing for copying files around. None of those were actually suitable for what I was trying to achieve.

I see this often in the software industry, because it’s obviously good to build on top of existing building blocks. However, it’s only good if the building blocks are actually suited for the task at hand. Making the judgment call on whether the blocks are good or not is an important skill in software development.

People suggesting Y to you is rarely a problem, unless they do it very forcefully. It’s good for you to consider many approaches. But it can be frustrating to have to reject the same suggestions repeatedly.

Example: Don’t write your own operating system kernel, use Minix/MS-DOS/Windows/Solaris/whatever instead.

Example: Don’t write your own version control system, use Subversion/CVS/tar balls/whatever instead.

Example: Don’t write your own git server, use github/gitlab/whatever instead.

De-duplicating $PATH

For various reasons, the way my $PATH environment variable gets constructed results in a long, repetitive list of directories.

This is, of course, silly. For those same reasons, it’s difficult to avoid constructing such a silly list, so instead I wrote a simple little program to remove duplicates.

The source code for pathdedup is at https://codeberg.org/liw/pathdedup.

Maybe it’s of use to someone else.
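The core of such a tool is tiny: keep the first occurrence of each directory and drop the rest. A sketch of the idea (my illustration, not pathdedup’s actual code):

```rust
use std::collections::HashSet;

/// Remove duplicates from a colon-separated directory list,
/// keeping the first occurrence of each directory.
fn dedup_path(path: &str) -> String {
    let mut seen = HashSet::new();
    path.split(':')
        .filter(|dir| seen.insert(dir.to_string()))
        .collect::<Vec<_>>()
        .join(":")
}

fn main() {
    let path = std::env::var("PATH").unwrap_or_default();
    println!("{}", dedup_path(&path));
}
```

A shell startup file could then set PATH to the program’s output.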

New PGP key, and practice around it
pub   ed25519/31DA8032081D901D 2023-11-12 [C] [expires: 2024-11-11]
      Key fingerprint = EA0B 7399 ECCF 9282 A74E  F8F8 31DA 8032 081D 901D
uid                 [ultimate] Lars Wirzenius
uid                 [ultimate] Lars Wirzenius <liw@liw.fi>
sub   ed25519/6766716690EC0D85 2023-11-12 [S]
sub   cv25519/2BC6E410BAD2972F 2023-11-12 [E]

I’ve created a new OpenPGP key, to replace my old one. The new key uses elliptic curve 25519, where the old key uses RSA and is 4096 bits. This means the new key is smaller, which is convenient for me.

I also get to start from a clean slate: I’ve made a bit of a mess with the way my old key and its subkeys are stored. I have subkeys that now only exist on a single Yubikey, which means when (not if) that stops working, there are files I can’t decrypt anymore. (I’ll have to make sure to decrypt, and maybe re-encrypt, all such data, as long as I have access to the key.)

I use an OpenPGP key for encryption and signing.

It would be possible for me to replace all of this with other methods, but I don’t want to. I know a lot of people don’t want to use OpenPGP. This is about me.

My threat model for OpenPGP, these days, is that I lose the device where the key is stored. I mitigate this by only ever storing key material on encrypted media, which is reasonably easy, as I routinely encrypt all my storage anyway. My threat model no longer includes people using violence to get access to my key, or protecting my secrets and privacy against well funded attackers without scruples, or otherwise targeting me and being willing and able to take extraordinary measures.

(That’s my threat model. Yours is probably different. I’m not giving you advice on defending yourself here.)

Here’s what I’m doing now:

  • A primary key for certification only.
    • only used to certify other people’s keys and my own subkeys
    • elliptic curve 25519
    • a software key, so I can back it up
    • stored on more than one encrypted USB drive, but not on my laptop
    • it’s OK that it’s a little cumbersome to use this key, as I do it rarely
    • expires in 12 months, but I will extend this before it does; I’ve set up some automation to remind me to do that using an innovative application of artif… a calendar
  • An encryption subkey.
    • used for encrypting and decrypting data so nobody else can read it
    • elliptic curve 25519
  • A signing subkey.
    • used to sign data so others know it’s from me
    • elliptic curve 25519
  • No authentication subkey.
    • I don’t want to use my OpenPGP key for SSH.

I’ve uploaded the new key to https://keys.openpgp.org/ and http://the.earth.li/. I’ve certified the new key with my old one, and vice versa. The new key is also available via WKD.

Overall, this allows me to use OpenPGP conveniently, but sufficiently securely for my personal needs.

Updated my GTD introduction guide

I’ve updated, by completely rewriting, my introductory guide and overview of the Getting Things Done (GTD) system, from a hacker’s point of view. It’s at https://gtdfh.liw.fi/.

The previous significant update was over a decade ago. I’ve since evolved my implementation of GTD, but mostly I’m back where I was back then.

Why is Debian the way it is?

Debian is a large, complex operating system, and a huge open source project. It’s thirty years old now. To many people, some of its aspects are weird. Most such things have a good reason, but it can be hard to find out what it is. This is an attempt to answer some such questions, without being a detailed history of the project.

What Debian wants to be

Debian wants to be a high-quality, secure general purpose operating system that consists only of free and open source software that runs on most kinds of computers that are in active use in the world.

By general purpose I mean Debian should be suitable for most people for most purposes. There will always be situations where it’s not suitable, for whatever reason, but it’s a good goal to aim for. Some other distributions aim for specific purposes: a desktop, a server, playing games, doing scientific research, etc. It’s fine to aim to be general purpose, or specific purpose, but the choice of goal leads to different decisions along the way.

For Debian, aiming to be general purpose means that Debian doesn’t choose what to package based on the purpose of the software. The only real choices Debian makes here are whether the software is free and whether it’s plausible for Debian to maintain a high-quality package.

The constitution, power structure, governance

Debian is one of the more explicitly democratic open source organizations. It has well-defined processes for making decisions, and elects a project leader every year. Further, the powers of the project leader are strictly constrained, and most powers usually associated with leadership are explicitly delegated to other people.

The historic background for this is that the first Debian project leaders were implicitly all-powerful dictators until they chose to step down. Then one project leader went too far, and a revolt threw them out, and democracy was introduced. As part of this, the project got a formal constitution, which defines rules for the project.

The reason Debian has the rules it has is that fewer rules, and less bureaucracy, didn’t work for Debian earlier in its history.

Social contract and Debian free software guidelines

In the mid-1990s, before the term open source had been introduced, what was “free software” was defined by the Free Software Foundation, but in a way that left much to be interpreted. Debian wanted to have clearer rules, and came up with the Debian Free Software Guidelines, and made them part of its Social Contract.

The social contract is Debian’s promise to itself and to the world at large about what Debian is and does. The DFSG is part of that. This is a foundation document for Debian, and changing it is intentionally made difficult in the Debian constitution.

The more detailed rules have made it clearer what Debian will accept, and have simplified discussions about this. There is still a lot to discuss, of course.

The DFSG was later the basis of the Open Source Definition.

Self-contained

Debian insists on being self-contained. Anything that is packaged in Debian, by Debian, must be built (compiled) using only dependencies in Debian. Also, everything in Debian must be built by Debian. This can cause a lot of extra work. For example, current programming language tooling often assumes it can download dependencies from online repositories at build time, and that is not acceptable to Debian.

The main reason for this is that a dependency might not be available later. Debian has no control over third party package repositories, and if a package, or entire repository, goes away, it might be impossible for Debian to rebuild the package. Debian needs to rebuild to upgrade to a new compiler, to fix a security problem, to port to a new architecture, or just to make some change to the packaged software, including bug fixes.

If Debian weren’t self-contained, it would be at the mercy of any of the tens of thousands of packages it has, and all their dependencies, being available when an urgent security fix needs to be released. This is not acceptable to Debian, and so Debian chooses to do the work of packaging all dependencies.

That means, of course, that for Debian to package something can be a lot of work.

No bundled libraries

Debian avoids using copies of libraries, or other dependencies, that are bundled with the software it packages. Many upstream projects find it easier to bundle or “vendor” dependencies, but for Debian, this means that there can be many copies of some popular libraries. When there is a need to fix a security or other severe problem in such a library, Debian would have to find all copies to fix them. This can be a lot of work, and if the security problem is urgent, it wastes valuable time to have to do that.

As an example: the zlib library is used by a very large number of projects. By its nature, it needs to process data that may be constructed to exploit a vulnerability in the library. This has happened. At one point, Debian found dozens of bundled copies of zlib in its archive, and spent considerable effort making sure only the packaged version of zlib is used by packages in Debian.

Thus, Debian chooses to do the work up front, before it’s urgent, while packaging the software, and make sure the package in Debian uses the version of the library packaged in Debian.

This is not always appreciated by upstream developers, who would prefer to only have to deal with the version of the library they bundle. That’s the version they’ve verified their own software with. This sometimes leads to friction with Debian.

Membership process

Given the size and complexity of Debian as an operating system, and its popularity, the project needs to trust its members. This especially means trusting those who upload new packages. Because of technical limitations in Linux in the 1990s, every Debian package has full root access during its installation. In other words, every Debian developer can potentially become the root user on any machine running Debian. With tens of millions of machines running Debian, that is potentially a lot of power.

Debian vets its new members in various ways. Ideally, every new member has been part of the Debian development community sufficiently long that they are known to others, and they’ve built trust within the community.

The process can be quite frustrating to those wanting to join Debian, especially to someone used to a smaller open source project.

Release code names

Debian assigns a code name to each of its major releases. This was originally done to make mirroring the Debian package archive less costly.

In the mid-1990s, when Debian was getting close to making its 1.0 release, code names weren’t used. Instead, the archive had a directory for each release, named after its version. Developing a new release takes a while, so the directory “1.0” was created well ahead of time. Unfortunately, a publisher of CD-ROMs prematurely mass-produced a disc they labeled 1.0, before Debian had actually finished making 1.0. This meant that people who got the Debian 1.0 CD-ROM got something that wasn’t actually 1.0.

An obvious solution to prevent this from happening again would have been to prepare the release in a directory called “1.0-not-released”, and rename the directory to “1.0” after the release was finished. However, this would’ve meant that all the mirrors would’ve had to re-download the release when the name of the directory changed. That would’ve been costly, given the massive size of Debian (hundreds of packages! tens of megabytes!). Thus, Debian chose to use code names instead.

Later, the “pool” structure was added to the Debian archive. With this, the files for all releases are in the same directory tree, and metadata files specify what files belong to each release. This makes mirroring easier. It might be possible to drop the code names and stick to versions, now, but I don’t know if Debian would be interested in that.

Changing slowly

As implied above, Debian is huge. It’s massive. It’s enormous. It’s really not very small at all, any more.

Large ships stop slowly. Large projects change slowly. Any change in Debian that affects a large portion of its packages may require hundreds of volunteers to do work. That is not going to happen quickly.

Sometimes the work can be done with just a small number of people, and Debian has processes to enable that. As an example, if a new version of the GNU C compiler is uploaded, the work of finding out what fixes in other packages need to be made can usually be done by a handful of people.

Often a change takes time because there’s a need to build consensus, and that requires extensive discussion, which takes time and can only rarely be short-circuited.

This all also means Debian developers tend to be conservative in technical decisions. They often prefer solutions that don’t require large scale changes.


To comment publicly, please use this fediverse thread.

Tickets, issues, tasks

Introduction

I have strong opinions, strongly held, about tickets, issues, and tasks. They’re based on my experience: they’re not hard facts based on extensive research. If you disagree with my opinions, especially with evidence from research, I would be interested in hearing how and why.

I explain things from the basics, to make it clear what I mean. I’ve found that people across the software industry do not always use words with the same meanings, and I want to be clear.

The context here is software development and maintenance, especially in an organization providing a service using the software. The viewpoint is from the developers at the service provider. I don’t have enough experience with customer service or management to have useful opinions about them.

Basic concepts: ticket, issue, and task

A ticketing system is a great way to keep track of issues and tasks. You create a ticket for every issue and task so that you don’t forget about them. Over time, you collect and update any information related to the issue or task in the ticket, and you track the state as well, so that you can instantly see if something is still an ongoing concern.

It matters how a ticketing system is used. Tickets should include a description of the issue or task that is clear and complete, and kept up to date over time. Tickets may be related to each other, and this should be recorded in the tickets as well, and the relationship information should also be kept up to date.

Common problems in ticketing systems

Some problems I’ve seen in several ticketing systems I’ve needed to use:

  • Tickets are not gardened. Resolved issues have tickets that are still open. The total number of open tickets is so large it’s not practically possible to get an overview of the state of the service, or software. This makes it difficult to find out anything useful. This can end up being so bad that the ticketing system is entirely waste.

  • It’s hard to find out the current situation of whatever the ticket represents. What’s the actual issue? What’s the status of the work? What information in the ticket is current and what’s obsolete, or entirely wrong? Sometimes all the information is in the ticket, but it’s spread out in a long chain of comments, requiring careful reading to get the picture. (A summary that is kept up to date helps a lot here.)

  • Tickets don’t actually capture all the relevant communication about an issue. A customer reports an issue, a ticket is created, and then further communication happens out of band, in person, on the phone, or in private email, and is not captured in the ticket. To find out what’s going on you have to ask the relevant people, who may not be available, or who have forgotten all the important parts.

  • Issues and tasks are conflated. Sometimes it’s clear what needs to be done to resolve an issue, and sometimes it’s clear what issue is being resolved from a description of a task. Often neither is clear. It leads to unnecessary cognitive burden to not be clear and explicit. Ideally, a ticket, or related tickets, would explain both what the issue is, and what needs to be done to resolve it.

Issues vs tasks vs tickets

Issues are not tasks, and conflating them gets confusing to readers and collaborators. A ticket can represent an issue or a task.

An issue is anything that bothers a user of a system. It may be caused by a bug in the system, a missing feature, a temporary glitch, a misunderstanding, or something else. The important point of an issue is that a user has a problem. An issue often results in a need for some work to be done, but that is a task.

It often happens that several people have the same issue: if there’s a bug, many people may run into it. However, sometimes superficially similar issues do not have the same cause, and in that case they should not be treated in the same way. For example, if there’s a bug that causes a beep every time the user saves a file, that doesn’t mean every spurious beep is caused by this bug.

An issue should be phrased as a description of a problem, in a way aimed at the user who has the problem. If the user opens the issue themselves, their description is not always clear, or even useful, and it behooves the service provider to clarify. The goal is to allow the user to usefully, constructively review the ticket. This is important: if the user doesn’t agree that the issue describes their actual problem, it becomes less likely that the issue can be resolved to their satisfaction. That would be a waste of everyone’s time. It is ideal if the user can confirm that their issue has actually been resolved, but this is not always possible, so it is also practical if the issue is described in a way that someone else can make that evaluation.

An issue might be phrased as follows:

When I click a link in my web browser, it takes several minutes for the page to open.

A task may be created to resolve an issue, or a group of related issues, but a task may be created without being related to issues.

A task should be phrased as a thing to do, aimed at the people who do the work, and also the people who need to review the work for acceptability and completeness.

A task might be phrased as follows:

Change the DNS resolver configuration to use a working resolver to avoid a two-minute DNS time out.

Often the task to resolve an issue is implicitly clear, at least after some thought. However, it is worth being explicit for the sake of the poor soul who may not have all the context to leap to the same implicit conclusion later.

It’s also much easier to do a task if it’s clear what needs to be done, how, and when one can consider it to be done.

Issue and task descriptions: checklists

Each ticket should make clear whether it’s an issue, a task, or both. How to do this depends on the ticketing system, but if nothing else, it can be done by careful wording.

For an issue description:

  • the primary audience is the user so that they can confirm it describes their actual problem
    • this means it needs to be written using “user terminology”
  • secondary audience is people at the service provider who need to be able to decide what needs to be done to deal with the issue, and to determine if the issue has been dealt with, if the user is not available
  • should explain how to reproduce the issue, when that is relevant
  • does not need to describe what needs to be done to remove the cause of the issue, unless it’s something the user needs to do
  • can include a workaround for the user until the root cause is dealt with

For a task description:

  • the primary audience is the people who need to perform the task
    • this means it needs to be written in “developer terminology”
    • it should always be clear why a task needs to be done: if it is to resolve an issue, the issue should be described, or referenced
  • the secondary audience is the people who need to evaluate the work: has it been completely done, and done in an acceptable way?
  • should probably start with a verb representing a physical action to take: “buy a floppy drive”, “install floppy drive in desktop PC”, “add logging to function xyzzy so that we can monitor for issue 12765 ever happening again”
    • it seems to be a mistake to phrase the task as a description of what the end result should look like, but it can be useful additional information in the task ticket
  • should explain the task in sufficient detail for the intended people to do the work
    • I find it useful to write the description as if the person doing the work will have suffered a highly localized, limited amnesia and not assume they remember the context of the task
    • such amnesia may be caused by a long vacation
  • should include all the information to complete the task without having to look elsewhere
    • including information by linking or by reference is fine; the goal is to avoid making the person working on the task spend extra time on research
    • this also makes it even remotely possible to roughly estimate how long the task will take
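One way to keep the issue/task distinction from blurring is to encode it in the ticket data model itself; a hypothetical Rust sketch (the type and field names are mine, not from any real ticketing system):

```rust
/// What a ticket represents; conflating these is what causes confusion.
#[derive(Debug, PartialEq)]
enum TicketKind {
    /// Something that bothers a user, phrased in user terminology.
    Issue,
    /// Work to be done, phrased in developer terminology, ideally
    /// referencing the issue it resolves.
    Task { resolves_issue: Option<u64> },
}

struct Ticket {
    id: u64,
    kind: TicketKind,
    /// A summary kept up to date as the situation evolves.
    summary: String,
}

fn main() {
    let issue = Ticket {
        id: 1,
        kind: TicketKind::Issue,
        summary: "Clicking a link takes minutes to open the page".into(),
    };
    let task = Ticket {
        id: 2,
        kind: TicketKind::Task { resolves_issue: Some(issue.id) },
        summary: "Change the DNS resolver configuration to a working resolver".into(),
    };
    println!("ticket {} is {:?}", task.id, task.kind);
}
```

Even when the ticketing system has no such field, the same separation can be expressed by wording, as described above.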

Regular review

Further, tickets should be reviewed from time to time. During the review, any needed updates should be made. Without review, tickets sometimes fall through the cracks: they don’t get updated when they need to be, and linger needlessly, littering the ticketing system. This leads to work that needs to be done either not being done, or taking longer than it needs to.

Experience has shown me that relying on everyone updating tickets promptly doesn’t work. Regular review is necessary to catch oversights.

On the effort needed

Obviously, maintaining tickets in the way I describe here takes effort. When there are many other demands on time and energy, it can be too much to ask to put in this effort in tickets. I’ve found that it pays off to do so, however, and raises the quality of the software and service provided, even if only a little effort can be afforded.

Comments

To comment publicly, please use this fediverse thread.

New job: Radicle

I’ve started a new job this month. I now work on and for Radicle, a distributed git hosting system and peer-to-peer code collaboration platform. I’ll be working on continuous integration support.

It’s open source and written in Rust, and is making git be #distributed again.

vmdb2 and v-i releases

I’ve recently made releases of vmdb2 and v-i. vmdb2 is my tool for creating a disk image with Debian installed. It’s useful for making virtual machine images. Gunnar Wolf uses it to build Raspberry Pi Debian images. Earlier this month I released version 0.28, with a lot of changes, mostly by other people. vmdb2 has been working for me for many years now.

v-i is my installer for Debian. It installs Debian on bare metal PCs. I’ve been using it now for a couple of years, and I like it, but I don’t know if it works for anyone else. I wrote v-i because I wanted a non-interactive, repeatable, and fast installation method. While the first install now takes around five minutes for me, subsequent ones take around a minute and a half. I need to give one command to start the installation, and then one to reboot the machine into the installed system.

Grossly simplifying:

  • vmdb2 = parted + mkfs + debootstrap + grub + some config & logic
  • v-i = vmdb2 + Ansible + some config & logic

At a very high level, both are very simple tools, mostly relying on other, more magical tools. At the nitty gritty detail level, both deal with sufficiently esoteric parts of an operating system (especially boot loaders) to be awful to develop and debug. But they now work for me.

Maybe they might work for you, too?

Using clap to build nice command line interfaces

This is a blog post version of a short talk I gave recently at a Rust Finland meetup.

Introduction

Command line programs are nicer to use if the command line interface (CLI) is nicer to use. This means, among other things:

  • the CLI conforms to the conventions of the underlying platform
    • I only use Linux, so what I talk about here may not apply to other systems
    • long and short options, and values for options
  • the CLI should have built-in help of some sort
    • the --help option is non-negotiable
    • the -h alias for --help would be nice
  • for complex programs, allow subcommands
    • with their own help
  • the program should check for errors the user may have made and give helpful error messages

Motivation

Why would you care about making a nice CLI?

  • your users will like it
    • you do like your users, don't you?
  • you will like it
    • you do use your own software, don't you?
  • a nice CLI tends to be less error prone to use
    • this means you get fewer support requests
  • if you have competition, a nicer user experience will give you a boost
  • it turns out that a nice CLI is easier and cheaper to maintain
    • a nice CLI requires a nice command line parser, and that, in turn, means the code to define the CLI is simpler and easier to get right

Program to greet

Below I will show a few ways to implement a CLI for a program that greets the user ("hello, world"). The program is used like this:

  • greet → "hello, world"
  • greet --whom Earth → "hello, Earth"
  • greet --whom=Earth → "hello, Earth"

For reasons of how the Unix command line conventions evolved, a long option value may be part of the argument (with an equals sign) or the next argument. Users expect this, but it complicates the command line parser.

CLI without dependencies beyond std

The code below uses only the std library, and parses the command line manually. Note that it is buggy: this usually happens when you write command line parsing manually.

fn main() {
    let mut whom = Some("world".to_string());
    let mut got_whom = false;
    let mut args = std::env::args();
    args.next(); // skip the program name
    for arg in args {
        if let Some(suffix) = arg.strip_prefix("--whom=") {
            whom = Some(suffix.to_string());
        } else if arg == "--whom" {
            got_whom = true;
        } else if got_whom {
            whom = Some(arg.to_string());
            got_whom = false;
        } else {
            eprintln!("usage error!");
            std::process::exit(1);
        }
    }
    println!("hello, {}", whom.unwrap());
}

CLI with clap, imperative

The clap crate is by far the most commonly used Rust library for command line parsing. A lot of effort has been put into making it both a pleasure to use for the programmer, and for the user.

The code below uses the traditional, imperative approach: you create a Command value, and configure it to know about the accepted command line arguments. This is straightforward, but scales badly for programs with very large numbers of options: curl, for example, has 255 options.

Note that this code lacks help texts, for brevity.

use clap::{Arg, Command};

fn main() {
    let matches = Command::new("greet")
        .arg(Arg::new("whom")
            .long("whom")
            .default_value("world"))
        .get_matches();
    let whom: &String = matches.get_one("whom").unwrap();
    println!("hello, {}", whom);
}

CLI with clap, declarative

The derive feature of clap allows a declarative approach for defining command line syntax. The code below does that. It still lacks help texts.

This style feels more magic, but is easy to work with and fully as powerful as the imperative style. Using the declarative style would be a good idea for anyone who wants to write a curl clone in a weekend.

use clap::Parser;

fn main() {
    let args = Args::parse();
    println!("{}", args.whom);
}

#[derive(Parser)]
struct Args {
    #[clap(long, default_value = "world")]
    whom: String,
}

CLI with clap, with subcommands (1/4)

The example program we're looking at could have a farewell mode as well as a greeting mode. This can be done using subcommands:

  • greet hello --whom=world
  • greet goodbye --whom=world

The code below demonstrates one approach for how to implement this using clap. The doc comments get turned into help text shown to the user.

use clap::{Parser, Subcommand};

fn main() {
    let args = Args::parse();
    match args.cmd {
        Command::Hello(x) => x.run(),
        Command::Goodbye(x) => x.run(),
    }
}

/// General purpose greet/farewell messaging.
#[derive(Parser)]
struct Args {
    #[command(subcommand)]
    cmd: Command,
}

#[derive(Subcommand)]
enum Command {
    Hello(Hello),
    Goodbye(Goodbye),
}

/// Greet someone.
#[derive(Parser)]
struct Hello {
    /// Whom should we greet?
    #[clap(long, default_value("world"))]
    whom: String,
}

impl Hello {
    fn run(&self) {
        println!("hello, {}", self.whom);
    }
}

/// Say good bye to someone.
#[derive(Parser)]
struct Goodbye {
    /// Whom should we say good bye to?
    #[clap(long, default_value("cruel world"))]
    whom: String,
}

impl Goodbye {
    fn run(&self) {
        println!("good bye, {}", self.whom);
    }
}

Output: top level help

$ cargo run -q -- --help
General purpose greet/farewell messaging

Usage: greet <COMMAND>

Commands:
  hello    Greet someone
  goodbye  Say good bye to someone
  help     Print this message or the help of the given subcommand(s)

Options:
  -h, --help  Print help

Output: help for a subcommand

$ cargo run -q -- hello --help
Greet someone

Usage: greet hello [OPTIONS]

Options:
      --whom <WHOM>  Whom should we greet? [default: world]
  -h, --help         Print help

Output: subcommands

$ cargo run -q -- hello
hello, world

$ cargo run -q -- hello --whom=Earth
hello, Earth

Error: missing subcommand

$ cargo run -q -- 
General purpose greet/farewell messaging

Usage: greet <COMMAND>

Commands:
  hello    Greet someone
  goodbye  Say good bye to someone
  help     Print this message or the help of the given subcommand(s)

Options:
  -h, --help  Print help

Error: extra argument

$ cargo run -q -- hello there
error: unexpected argument 'there' found

Usage: greet hello [OPTIONS]

For more information, try '--help'.

AD: I do Rust training for money

Basics of Rust
