Feed for Planet Debian.
We needed a router and wifi access point in the office, and simultaneously both I and my co-worker Ivan needed such a thing at our respective homes. After some discussion, and after reading articles in Ars Technica about building PCs to act as routers, we decided to do just that.
The PC solution seems to offer better performance, but that was not actually a major reason for us. We want to have systems we understand and can hack, and a standard x86 PC running Debian sounds ideal.
Why not a cheap commercial router? They tend to be opaque and mysterious, and can't be managed with standard tooling such as Ansible. They may or may not have good security support. Also, they may or may not have sufficient functionality for nice things, such as DNS for local machines, or the full power of iptables for firewalling.
Why not OpenWRT? Some models of commercial routers are supported by OpenWRT. Finding good hardware that is also supported by OpenWRT is a task in itself, and not the kind of task I especially like to do. Even if one goes this route, the environment isn't quite a standard Linux system, because of various hardware limitations. (OpenWRT is a worthy project, just not our preference.)
We got some hardware:
| Component | Model | Price |
|-----------|-------|-------|
| Barebone | Qotom Q190G4, VGA, 2x USB 2.0, 134x126x36mm, fanless | 130€ |
| CPU | Intel J1900, 2-2.4GHz quad-core | - |
| NIC | Intel WG82583, 4x 10/100/1000 | - |
| Memory | Crucial CT102464BF160B, 8GB DDR3L-1600 SODIMM 1.35V CL11 | 40€ |
| SSD | Kingston SSDNow mS200, 60GB mSATA | 42€ |
| WLAN | AzureWave AW-NU706H, Ralink RT3070L, 300M 802.11b/g/n, half mPCIe | 17€ |
| mPCIe adapter | Half to full mPCIe adapter | 3€ |
| Antennas | 2x 2.4/5GHz 6dBi, RP-SMA, U.FL Cables | 7€ |
These were bought at various online shops, including AliExpress and verkkokauppa.com.
After assembling the hardware, we installed Debian on them:
- Connect the PC to a monitor (VGA) and keyboard (USB), as well as power.
- I built a "factory image" to be put on the SSD, and a USB stick installer image, which includes the factory one. Write the installer image to a USB stick, boot off that, then copy the factory image to the SSD and reboot off the SSD.
- The router now runs a very bare-bones, stripped-down Debian system, which runs a DHCP server on eth3 (marked LAN4 on the box). You can log in as root on the console (no password), or via ssh, but for ssh you need to replace the /home/ansible/.ssh/authorized_keys file with one that contains only your public ssh key.
- Connect a laptop to the Ethernet port marked LAN4, and get an IP address with DHCP.
- Log in with ssh to firstname.lastname@example.org, and verify that sudo id works without a password. Except you can't do this, unless you put your ssh key in the authorized keys file mentioned above.
- Git clone the ansible playbooks, adjust their parameters in minipc-router.yml as wanted, and run the playbook. Then reboot the router again.
- You should now have wifi, routing (with NAT), and generally speaking be able to do networking.
There are a lot of limitations and problems:

- There's no web UI for managing anything. If you're not comfortable doing sysadmin via ssh (with or without ansible), this isn't for you.
- No IPv6. We didn't want to enable it until we understand it better. You can, if you want to.
- No real firewalling, but adjust roles/router/files/ferm.conf as you wish.
- The router factory image is 4 GB in size, and our SSD is 60 GB. That's a lot of wasted space.
- The router factory image embeds our public keys in the ansible user's authorized keys file for ssh. This is because we built this for ourselves first. If there's interest from others in using the images, we'll solve this.
- Probably a lot of stupid things. Feel free to tell us what they are (email@example.com would be a good address for that).
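For reference, a minimal ferm.conf along these lines could serve as a starting point for such adjustments. This is a sketch and not our actual configuration; the interface names (eth0 as WAN, eth3 as LAN) are assumptions:

```
# Sketch of a roles/router/files/ferm.conf for a small NAT router.
# Assumes eth0 is the WAN interface and eth3 the LAN interface.
table filter {
    chain INPUT {
        policy DROP;
        # Allow established connections, loopback, the LAN, and pings.
        mod state state (ESTABLISHED RELATED) ACCEPT;
        interface lo ACCEPT;
        interface eth3 ACCEPT;
        proto icmp icmp-type echo-request ACCEPT;
    }
    chain FORWARD {
        policy DROP;
        mod state state (ESTABLISHED RELATED) ACCEPT;
        # Let the LAN out to the world.
        interface eth3 outerface eth0 ACCEPT;
    }
    chain OUTPUT policy ACCEPT;
}

# NAT for the LAN behind the WAN address.
domain ip table nat chain POSTROUTING outerface eth0 MASQUERADE;
```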
If you'd like to use the images and Ansible playbooks, please do. We'd be happy to get feedback, bug reports, and patches. Send them to me (firstname.lastname@example.org) or my ticketing system (email@example.com).
A year ago I got tired of Jenkins and wrote a CI system for myself, Ick. It's served me well since, but it's a bit clunky and awkward and I have to hope nobody else wants to use it.
I've been thinking about re-architecting Ick from scratch, and so I wrote down some of my thinking about this. It's very raw, but just in case someone else might be interested, I put it online at ick2.
At this point I'm still thinking about very high level concepts. I've not written any code, and probably won't in the next couple of months. But I had to get this out of my brain.
I gave a talk about the early days of Linux at the jubilee symposium arranged by the University of Helsinki CS department. Below is an outline of what I meant to speak about, but the actual talk didn't follow it exactly. You can compare these to the video once it comes online.
- Linus and I met at uni, the only 2 Swedish speaking new students that year, so we naturally migrated towards each other.
- After a year away for military service, got back in touch, summer of
- C & Unix course fall of 1990; Minix.
- Linus didn't think atime updates in real time were plausible, but I showed him; funnily enough, atime updates have been an issue in Linux until fairly recently, since they slow things down (without being particularly useful)
- Jan 5, 1991 bought his first PC (i386 + i387 + 4 MiB RAM and a small hard disk); he had a Sinclair QL before that.
- Played Prince of Persia for a couple of months.
- Then wanted to learn i386 assembly and multitasking.
- A/B threading demo.
- Terminal emulation, Usenet access from home.
- Hard disk driver, mistaking hard disk for a modem.
- More ambition, announced Linux to the world for the first time
- first ever Linux installation.
- Upload to ftp.funet.fi, directory name by Ari Lemmke.
- Originally not free software, licence changed early 1992.
- First mailing list was created and introduced me to a flood of email (managed with VAX/VMS MAIL and later mush on Unix).
- I talked a lot with Linus about design at this time, but never really participated in the kernel work (partly because disagreeing with Linus is a high-stress thing).
- However, I did write the first sprintf for the kernel, since Linus hadn't learnt about varargs functions in C; he then ruined it and added the comment "Wirzenius wrote this portably..." (add google hit count for wirzenius+fucked).
- During 1992 Linux grew fast, and distros happened, and a lot of packaging and porting of software; porting was easier because Linus was happy to add/change things in the kernel to accommodate software
- A lot of new users during 1992 as well.
- End of 1992 I and a few others founded the Linux Documentation Project to help all the new users, some of whom didn't come from a Unix background.
- In fact, things progressed so fast in 1992 that Linus thought he'd release 1.0 very soon, resulting in a silly sequence of version numbers: 0.12, 0.95, 0.96, 0.96b, 0.96c, 0.96c++2.
- X server ported to Linux; almost immediate prediction of the year of the Linux desktop never happening unless ALL the graphics cards were supported immediately.
- Linus was of the opinion that you needed one process (not thread) per window in X; I taught him event driven programming.
- Bug in network code, resulting in ban on uni network.
- Pranks in the shared office room.
- We released 1.0 in an event at the CS dept in March, 1994; this included some talks and a ritual compilation of the release version during the event.
Today it is 23 years ago since Ian Murdock published his intention to develop a new Linux distribution, Debian. It is also about 20 years since I became a Debian developer and made my first package upload.
In the time since:
I've retired a couple of times, to pursue other interests, and then un-retired.
I've maintained a bunch of different packages, most importantly the PGP2 software in the 90s. (I now only maintain software for which I'm also upstream, in order to make jokes about my upstream being an unco-operative jerk, and my packager being unhelpful in the extreme.)
Got kicked out from the Debian mailing lists for insulting another developer. Not my proudest moment. I was allowed back later, and I've tried to be polite ever since. (See also rule 6.)
I've been to a few Debconfs (3, 5, 6, 9, 10, 15). I'm looking forward to going to many more in the future. It's clear that seeing many project members at least every now and then has a very big impact on project cohesion.
I had a gig where I was paid to improve the technical quality of Debian. After a few months of bug fixing (which isn't my favourite pastime), I wrote piuparts in order to find new bugs. (I gave that project away many years ago, but it seems to still be going strong.)
I almost ran for DPL twice, but I'm glad I didn't actually. I've carefully avoided any positions of power or responsibility in the project. (I live in fear that someone decides to nominate me for something where I'd actually have to make important decisions.)
Not being responsible means I can just ignore the project for a while when something annoying happens. (Or retire again.) With such a large project, eventually something really annoying does happen.
Came up with the DEP process with Zack and Dato. I also ran the second half of the DEP5 process to get the debian/copyright machine-readable format accepted. (I'm no longer involved, though, and I don't think DEP is used much now.)
I've taught several workshops about Debian packaging, including online for Debian-Women. It's always fun when others "get" how easy packaging really is, despite all the effort the large variety in tooling and random web pages puts into obscuring the fundamental simplicity.
Over the years I've enjoyed many of the things developed within Debian (without claiming any credit for myself):
the policy manual, perhaps the most important technical achievement of the project
the social contract and Debian free software guidelines, unarguably the most important non-technical achievements of the project
the whole package management system, but especially apt
debhelper's dh, which made the work of packaging simple cases so easy it's nearly a no-brainer
d-i made me not hate installing Debian (although I think the time is getting ripe to replace d-i with something new; catch me in a talkative mood at a party to hear more)
Debian-Women made an almost immediate improvement to the culture of the larger project (even if there are still far too few women developers)
the diversity statement made me a lot happier about being a project member.
I'd like to thank everyone who's worked on these and made them happen. These are important milestones in Debian.
I've opened my mouth in a lot of places over the years, which means a lot of people know of me, but nobody can actually point at anything useful I've actually done. Which is why when I've given talks at, say, FOSDEM, I get introduced as "the guy who shared an office with Linus Torvalds a long time ago".
I've made a number of friends via participation in Debian. I've found jobs via contacts in Debian, and have even started a side business with someone.
It's been a good twenty years. And the fun ain't over yet.
I write free software and I have some users. My primary support channels are over email and IRC, which means I do not have direct access to the system where my software runs. When one of my users has a problem, we go through one or more cycles of them reporting what they see and me asking them for more information, or asking them to try this thing or that thing and report results. This can be quite frustrating.
I want, nay, need to improve this. I've been thinking about this for a while, and talking with friends about it, and here's my current ideas.
First idea: have a script that gathers as much information as possible, which the user can run. For example, log files, full configuration, full environment, etc. The user would then mail the output to me. The information will need to be anonymised suitably so that no actual secrets are leaked. This would be similar to Debian's package specific reportbug scripts.
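As a sketch of what such a gathering script might do (the choice of facts and the anonymisation rules here are my own invention, not any existing reportbug script):

```python
import getpass
import os
import platform
import re


def gather_report(log_path=None):
    """Collect basic diagnostics, lightly anonymised.

    A sketch of the idea above: the set of facts collected and the
    anonymisation rules would be specific to each program.
    """
    home = os.path.expanduser("~")
    user = getpass.getuser()

    def anonymise(text):
        # Replace the obvious identifying strings with placeholders.
        return text.replace(home, "$HOME").replace(user, "$USER")

    lines = [
        "platform: %s" % platform.platform(),
        "python: %s" % platform.python_version(),
    ]
    for key in sorted(os.environ):
        if re.search(r"KEY|TOKEN|SECRET|PASS", key):
            # Don't leak credentials even in anonymised form.
            lines.append("env %s=<redacted>" % key)
        else:
            lines.append(anonymise("env %s=%s" % (key, os.environ[key])))
    if log_path and os.path.exists(log_path):
        lines.append("log tail:")
        with open(log_path) as f:
            lines.extend(anonymise(l.rstrip()) for l in f.readlines()[-50:])
    return "\n".join(lines)
```

The user would run this and mail me the output, having had a chance to review it first.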
Second idea: make it less likely that the user needs help solving their issue, with better error messages. This would require error messages to have sufficient explanation that a user can solve their problem. That doesn't necessarily mean a lot of text, but also code that analyses the situation when the error happens to include things that are relevant for the problem resolving process, and giving error messages that are as specific as possible. Example: don't just fail saying "write error", but make the code find out why writing caused an error.
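A sketch of what that could look like in code; the function and its messages are hypothetical, but show the idea of mapping a bare OSError to something actionable:

```python
import errno
import os


def explain_write_error(exc, path):
    """Turn a bare OSError from a failed write into an actionable message.

    A sketch of the second idea: inspect the situation at the time of
    the error, instead of just printing "write error".
    """
    if exc.errno == errno.ENOSPC:
        return "cannot write %s: the filesystem is full" % path
    if exc.errno == errno.EROFS:
        return "cannot write %s: the filesystem is read-only" % path
    if exc.errno == errno.EACCES:
        dirpath = os.path.dirname(path) or "."
        if not os.access(dirpath, os.W_OK):
            return ("cannot write %s: permission denied "
                    "(directory %s is not writable by you)" % (path, dirpath))
        return "cannot write %s: permission denied" % path
    # Fall back to the system's own description of the error.
    return "cannot write %s: %s" % (path, exc.strerror)
```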
Third idea: in addition to better error messages, I might provide diagnostic tools as well.
A friend suggested having a script that sets up a known-good set of operations and verifies they work. This would establish a known-working baseline, or smoke test, so that we can rule out things like "the software isn't completely installed".
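A minimal sketch of such a smoke test; the steps here are placeholders for whatever operations the real software actually depends on:

```python
import socket
import tempfile


def smoke_test():
    """Run a known-good sequence of basic operations.

    A sketch of the baseline idea suggested above: each step is tiny,
    so a failing step points at the environment rather than at the
    software being debugged.
    """
    results = {}

    def step(name, func):
        try:
            func()
            results[name] = "ok"
        except Exception as e:
            results[name] = "FAILED: %s" % e

    def write_and_read_back():
        with tempfile.TemporaryFile(mode="w+") as f:
            f.write("hello")
            f.seek(0)
            assert f.read() == "hello"

    step("temporary files work", write_and_read_back)
    step("hostname is known", socket.gethostname)
    return results
```

The user runs it, and the first failing step narrows down where the problem is.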
Do you have ideas? Mail me (firstname.lastname@example.org) or tell me on identi.ca (@liw) or Twitter (@larswirzenius).
Warning: This blog post includes instructions for a procedure that can lead you to lock yourself out of your computer. Even if everything goes well, you'll be hunted by dragons. Keep backups, have a rescue system on a USB stick, and wear flameproof clothing. Also, have fun, and tell your loved ones you love them.
I've recently gotten two U2F keys. U2F is an open standard for authentication using hardware tokens. It's probably mostly meant for website logins, but I wanted to have it for local logins on my laptop running Debian. (I also offer a line of stylish aluminium foil hats.)
Having two-factor authentication (2FA) for local logins improves security if you need to log in (or unlock a screen lock) in a public or potentially hostile place, such as a cafe, a train, or a meeting room at a client. If they have video cameras, they can film you typing your password, and get the password that way.
If you set up 2FA using a hardware token, your enemies will also need to lure you into a cave, where a dragon will use a precision flame to incinerate you in a way that leaves the U2F key intact, after which your enemies steal the key, log into your laptop and leak your cat GIF collection.
Looking up information for how to set this up, I found a blog post by Sean Brewer, for Ubuntu 14.04. That got me started. Here's what I understand:
PAM is the technology in Debian for handling authentication for logins and similar things. It has a plugin architecture.
Yubico (maker of Yubikeys) have written a PAM plugin for U2F. It is packaged in Debian as libpam-u2f, and the package includes documentation. By configuring PAM to use libpam-u2f, you can require both a password and the hardware token for logging into your machine.
Here are the detailed steps for Debian stretch, with minute differences from those for Ubuntu 14.04. If you follow these, and lock yourself out of your system, it wasn't my fault, you can't blame me, and look, squirrels! Also not my fault if you don't wear sufficient protection against dragons.
- As your normal user, mkdir ~/.config/Yubico. The list of allowed U2F keys will be put there.
- Insert your U2F key and run pamu2fcfg -u$USER > ~/.config/Yubico/u2f_keys, and press the button on your U2F key when the key is blinking.
- Edit /etc/pam.d/common-auth and append the line auth required pam_u2f.so cue.
- Reboot (or at least log out and back in again).
- Log in, type in your password, and when prompted and the U2F key is blinking, press its button to complete the login.
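For orientation, after that edit the tail of /etc/pam.d/common-auth might look roughly like this on a stretch system. This is a sketch: only the last line is the addition, and the existing pam_unix block will vary from system to system:

```
# /etc/pam.d/common-auth (sketch; your existing lines may differ)
auth    [success=1 default=ignore]      pam_unix.so nullok_secure
auth    requisite                       pam_deny.so
auth    required                        pam_permit.so
auth    required                        pam_u2f.so cue
```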
pamu2fcfg reads the hardware token and writes out its identifying data in a form that the PAM module understands; see the pam-u2f documentation for details. The data can be stored in the user's home directory (my preference) or in a central location for all users.
Once this is set up, anything that uses PAM for local authentication (console login, GUI login, sudo, desktop screen lock) will need to use the U2F key as well. ssh logins won't.
Next, add a second key to your
u2f_keys. This is important, because if
you lose your first key, or it's damaged, you'll otherwise have no way
to log in.
- Insert your second U2F key and run pamu2fcfg -n > second, and press the second key's button when prompted.
- Edit ~/.config/Yubico/u2f_keys and append the output in second to the line with your username.
- Verify that you can log in using your second key as well as the first key. Note that you should have only one of the keys plugged in at a time when logging in: the PAM module uses the first key it finds, so you can't test both keys plugged in at once.
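Editing that colon-separated line by hand is fiddly; a small helper along these lines could do the merge. This is hypothetical code, not part of pam-u2f, and it assumes the documented format of one line per user (username, then one or more keyhandle,publickey credentials, all colon-separated):

```python
def add_credential(u2f_keys_text, username, credential):
    """Append another credential to a user's line in a u2f_keys file.

    Assumes the pam-u2f file format: one line per user, of the form
    username:keyhandle,publickey[:keyhandle,publickey...].
    """
    out = []
    for line in u2f_keys_text.splitlines():
        if line.split(":", 1)[0] == username:
            # Join with a colon, tolerating stray separators on
            # either side of the join point.
            line = line.rstrip(":") + ":" + credential.lstrip(":")
        out.append(line)
    return "\n".join(out) + "\n"
```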
This is not too difficult, but rather fiddly, and it'd be nice if someone wrote at least a way to manage the list of U2F keys in a nicer way.
For those who use my
code.liw.fi/debian APT repository, please be
advised that I've today replaced the signing key for the repository.
The new key has the following fingerprint:
8072 BAD4 F68F 6BE8 5F01 9843 F060 2201 12B6 1C1F
I've signed the key with my primary key and sent the new key with signature to the key servers. You can also download it at http://code.liw.fi/apt.asc.
In March we started a new company, to develop and support the software whose development I led at my previous job. The software is Qvarn, and it's fully free software, licensed under AGPL3+. The company is QvarnLabs (no website yet). Our plan is to earn a living from this, and our hope is to provide software that is actually useful for helping various organisations handle data securely.
The first press release about Qvarn was sent out today. We're still setting up the company and getting operational, but a little publicity never hurts. (Even if it is more marketing-speak and self-promotion than I would normally put on my blog.)
So this is what I do for a living now.
Helsinki, Finland 10.05.2016 – With Privacy by Design, integrated Gluu access management and comprehensive support for regulatory data compliance, Qvarn is set to become the Europe-wide platform of choice for managing workforce identities and providing associated value-added services.
Construction industry federations in Sweden, Finland and the Baltic States have been using the Qvarn Platform (http://www.qvarn.org) since October 2015 to securely manage the professional digital identities of close to one million construction workers. Developed on behalf of these same federations, Qvarn is now free and open source software; making it a compelling solution for any organization that needs to manage a secure register of workers’ data.
"There is something universal and fundamental at the core of the Qvarn platform. And that’s trust," said Qvarn evangelist Kaius Häggblom. "We decided to make it free, open source and include Gluu access management because we wanted all those using Qvarn or contributing to its continued development to have the freedom to work with the platform in whatever way is best for them."
Qvarn has been designed to meet the requirements of the European Union’s new General Data Protection Regulation (GDPR), enabling organizations that use the platform to ensure their compliance with the new law. Qvarn has also incorporated the principles of Privacy by Design to minimize the disclosure of non-essential personal information and to give people more control over their data.
"Today, Qvarn is used by the construction industry as a way to manage the data of employees, many of whom frequently move across borders. In this way the platform helps to combat the grey economy in the building sector, thereby improving quality and safety, while simultaneously protecting the professional identity data of almost a million individuals," said Häggblom. "Qvarn is so flexible and secure that we envision it becoming the preferred platform for the provision of any value-added services with an identity management component, eventually even supporting monetary transactions."
Qvarn is a cloud based solution supported to run on both Amazon Web Services (AWS) and OpenStack. In partnership with Gluu, the platform delivers an out-of-the-box solution that uses open and standard protocols to provide powerful yet flexible identity and access management, including mechanisms for appropriate authentication and authorization.
"Qvarn's identity management and governance capabilities perfectly complement the Gluu Server's access management features," said Founder and CEO of Gluu, Michael Schwartz. "Free open source software (FOSS) is essential to the future of identity and access management. And the FOSS development methodology provides the transparency that is needed to foster the strong sense of community upon which a vibrant ecosystem thrives."
Qvarn’s development team continues to be led by recognized open source developer and platform architect Lars Wirzenius. He has been developing free and open source software for 30 years and is a renowned expert in the Linux environment, with a particular focus on the Debian distribution. Lars works at all levels of software development – from writing code to designing system architecture.
About the Qvarn Platform:
The Qvarn Platform is free and open source software for managing workforce identities. Qvarn is integrated with the Gluu Server’s access management features out of the box, using open and standard protocols to provide the platform with a single common digital identity and mechanisms for appropriate authentication and authorization. A cloud based solution, Qvarn is supported to run on both Amazon Web Services (AWS) and OpenStack. Privacy by Design is central to the architecture of Qvarn and the platform has been third party audited to a security level of HIGH.
For more information, please contact:
+358 40 161 5668
Today was my last day at Suomen Tilaajavastuu, where I worked on Qvarn. Tomorrow is my first day at my new job. The new job is for a new company, tentatively named QvarnLabs (registration is in process), to further develop and support Qvarn. The new company starts operation tomorrow, so you'll have to excuse me that there isn't a website yet.
Qvarn provides a secure, RESTful JSON HTTP API for storing and retrieving data, with detailed access control (and I can provide more buzzwords if necessary). If you operate in the EU, and store information about people, you might want to read up about the General Data Protection Regulation, and Qvarn may be a possible part of a solution you want to look into, once we have the website up.
In January and February of 2016 I ran an Obnam user survey. I'm not a statistician, but here is my analysis of the results.
Executive summary: Obnam is slow, buggy, and the name is bad. But they'd like to buy stickers and t-shirts.
I wrote up a long list of questions about things I felt were of interest to me. I used Google Forms to collect responses, and exported them as a CSV file, and analysed based on that.
I used Google Forms, even though it is not free software, as it was the easiest service I got to work that also seemed it'd be nice for people to use. I could have run the survey using Ikiwiki, but it wouldn't have been nearly as nice. I could have found and hosted some free software for this, but that would have been much more work.
Most questions had free form text responses, and this was both good and bad. It was good, because many of the responses included things I could never have expected. It was bad, because it took me a lot more time and effort to process those. I think next time I'll keep the number of free text responses down.
For some of the questions, I hand-processed the responses to a more or less systematic form, in order to count things with a bit of code. For others, I did not, and show the full list of responses (I'm lazy, we don't need a survey to determine that).
See http://code.liw.fi/obnam/survey-2016.html for the responses, after hand-processing.
For the questions for which it makes sense, a script has tabulated the various responses and calculated percentages. I haven't produced graphs, as I don't know how to do that easily. (Maybe next time I'll enlist the help of statisticians.)
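The script itself can be as simple as this sketch (the column name used here is hypothetical, matching whatever headers the CSV export produced):

```python
import collections
import csv
import io


def tabulate(csv_text, column):
    """Tally one column of the exported CSV and compute percentages.

    A sketch of the tabulation script: count each distinct response in
    the given column and report counts with rounded percentages,
    most common first.
    """
    reader = csv.DictReader(io.StringIO(csv_text))
    counts = collections.Counter(row[column].strip() for row in reader)
    total = sum(counts.values())
    return [
        (answer, n, round(100.0 * n / total, 1))
        for answer, n in counts.most_common()
    ]
```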
There were 263 responses in total. I have no way of knowing if the total number of Obnam users is about that, but the number correlates fairly well with the Debian popcon numbers, so I'm assuming Obnam has on the order of a few hundred users in total.
A larger number might be more impressive, but it'd also mean that I would be responsible for much more data loss if I make a horrible mistake. That said, it is probably time to start spending some effort on growing the developer base of Obnam.
People seem to hear about Obnam primarily from my blog posts, or by searching the web for backup software. Also, from the Arch Linux or Gentoo wikis, or Joey Hess.
People use Obnam mostly for personal machines, but also at work.
Those who have tried Obnam, but don't use it, rejected it primarily for speed or because it's unstable or buggy. I hope that the bad bugs have mostly been fixed, and I'm working on improving the speed.
People seem to use either the latest version, or the version included in the release of their operating system (e.g., Debian jessie). Other versions are relatively rare.
Most people started using Obnam in the past two years.
People use Obnam on a variety of Linux-based operating systems, but also others. Obnam users are especially skewed towards Debian and Ubuntu, which is not surprising, as I'm involved in Debian and have been publicising Obnam there, and provide packages for Debian myself.
About half the people have at least hundreds of thousands of files, containing hundreds of gigabytes of data. All extremes (very few or very many files, very little or very much data) are represented, though. A couple of people have at least a hundred million files, or at least ten terabytes of data.
Most people don't have a backup strategy, or at least not a documented one, and if they do, it's not regularly tested.
This isn't a good thing.
Most people had backed up within the past week as of the time of filling in the survey. This hopefully indicates that they back up frequently. Only one respondent said they'd never backed up.
Rather more people hadn't tested their backups, however, with about a fifth of the people having never tested their backup. This is also not good.
Most people only back up one machine to each repository, or at most a few. A total of 17 respondents reported that they don't have a backup, and do not fear clowns.
About half the people back up to a local drive, and nearly two thirds to an SFTP server.
People ask for more remote storage options, such as support for services like Amazon S3.
The things people like most about Obnam are on its list of core features: de-duplication, encryption, and ease of use / simplicity. FUSE is also well-liked, as are snapshot backups.
I didn't tabulate the reasons why people don't like Obnam, but performance and stability seem to be the most common reasons. My favourite response to this question is "the name obnam, does not sounds like a backup program".
Speed is also the pet bug people seem to have.
People seem to generally find Obnam documentation adequate. There's room for improvement, of course.
Nearly everyone finds it easy to get help if they have a problem with Obnam, but almost no-one uses the Obnam support mailing list or IRC channel.
Some people read the NEWS file, others do not. Few have sent patches, but some would like to. There's a bunch of suggestions for new features.
None of this is surprising to me, except perhaps that so many Obnam users actually do read the NEWS file, as it's been my experience in other projects that that's rare.
About half the people have heard of the green albatross. It's the name of the new way in which Obnam will be storing data on disk, which is a big factor in how fast or slow Obnam is. When the green albatross soars, Obnam will fly faster.
People use other backup software as well, which is sensible: no point in having all one's eggs in one basket. The top choices are rsync, duplicity, attic, and rsnapshot, but the list seems to mention most free backup software.
There's some interest in helping Obnam development, either by direct contributions, donations, paying for support or development, or by buying merchandise. Nearly no-one wants a printed version of the manual, but stickers and t-shirts might sell well enough.
A lot of people don't really want to, or are not able to, contribute, especially not by doing things, and that's OK. (They did contribute, however, by filling in the survey.)
When given an opportunity to say whatever they want to Obnam developers, most people say "thank you" in some form or another. This was very heartwarming.