I've been slowly writing on my would-be novel, Hacker Noir. See also my Patreon post. I've just pushed out a new public chapter, "Assault", to the website, and a patron-only chapter, "Ambush", to Patreon, in which the Team is ambushed, and then something bad happens.

The Assault chapter was hard to write. It's based on something that happened to me earlier this year. The Ambush chapter was much more fun.

Posted Sat Jun 9 21:41:00 2018

"Code smells" are a well-known concept: they're things that make you worried your code is not of good quality, without necessarily being horribly bad in and of themsleves. Like a bad smell in a kitchen that looks clean.

I've lately been thinking of "architecture aromas", which indicate there might be something good about an architecture, but don't guarantee it.

An example: you can't think of any component that could be removed from an architecture without sacrificing main functionality.

Posted Mon Apr 9 09:12:00 2018

This year I've implemented a rudimentary authentication server for work, called Qvisqve. I am in the process of also using it for my current hobby project, ick, which provides HTTP APIs and needs authentication. Qvisqve stores passwords using scrypt: source. It's not been audited, and I'm not claiming it's perfect, but it's at least not storing passwords in cleartext. (If you find a problem, do email me and tell me: liw@liw.fi.)
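
Hashing passwords with scrypt is straightforward. Here's a minimal sketch of the idea in Python, using the standard library's hashlib.scrypt; the cost parameters are illustrative, and not necessarily what Qvisqve uses:

import hashlib
import hmac
import os

def hash_password(password):
    # Store a random salt and the derived key, never the password itself.
    salt = os.urandom(16)
    key = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt + key

def check_password(password, stored):
    salt, key = stored[:16], stored[16:]
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    # Compare in constant time, to avoid leaking timing information.
    return hmac.compare_digest(candidate, key)

Even if the password database leaks, an attacker then has to brute-force each password separately, at considerable cost per guess.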

This week, two news stories have reached me about service providers storing passwords in cleartext. One is a Finnish system for people starting a new business. The password database has leaked, with about 130,000 cleartext passwords. The other is about T-Mobile in Austria bragging on Twitter that they store customer passwords in cleartext, and some people not liking that.

In both cases, representatives of the company claim it's OK, because they have "good security". I disagree. Storing passwords in cleartext is itself shockingly bad security, regardless of how good your other security measures are, and whether your password database leaks or not. Claiming it's ever OK to store user passwords in cleartext in a service is incompetence at best.

When you have large numbers of users, storing passwords in cleartext becomes more than just a small "oops". It becomes a security risk for all your users. It becomes gross incompetence.

A bank is required to keep their customers' money secure. They're not allowed to store their customers' cash in a suitcase on the pavement without anyone guarding it. Even with a guard, it'd be negligent, incompetent, to do that. The bulk of the money gets stored in a vault, with alarms, and guards, and the bank spends much effort on making sure the money is safe. Everyone understands this.

Similar requirements should be placed on those storing passwords, or other such security-sensitive information of their users.

Storing passwords in cleartext, when you have large numbers of users, should be treated as criminal negligence, and should carry legally mandated sanctions. The sanctions should apply as soon as the situation is discovered, even if the passwords haven't leaked.

Posted Sat Apr 7 10:28:00 2018

For the 2016 NaNoWriMo I started writing a novel about software development, "Hacker Noir". I didn't finish it during that November, and I still haven't finished it. I had a year-long hiatus, due to work and life being stressful, when I didn't write on the novel at all. However, inspired by both the Doctorow method and the Seinfeld method, I have recently started writing again.

I've just published a new chapter. However, unlike last year, I'm publishing it on my Patreon only, for the first month, and only for patrons. Then, next month, I'll be putting that chapter on the book's public site (noir.liw.fi), and another new chapter on Patreon.

I don't expect to make a lot of money, but I am hoping having active supporters will motivate me to keep writing.

I'm writing the first draft of the book. It's likely to be as horrific as every first-time author's first draft is. If you'd like to read it as raw as it gets, please do. Once the first draft is finished, I expect to read it myself, and be horrified, and throw it all away, and start over.

Also, I should go get some training on marketing.

Posted Thu Mar 8 13:20:00 2018

Random crazy Debian idea of today: add support to dpkg so that it uses containers (or namespaces, or whatever works for this) for running package maintainer scripts (pre- and postinst, pre- and postrm), to prevent them from accidentally or maliciously writing to unwanted parts of the filesystem, or from doing unwanted network I/O.

I think this would be useful for third-party packages, but also for packages from Debian itself. You heard it here first! Debian package maintainers have been known to make mistakes.

Obviously there needs to be ways in which these restrictions can be overridden, but that override should be clear and obvious to the user (sysadmin), not something they notice because they happen to be running strace or tcpdump during the install.

Corollary: dpkg could restrict where a .deb can place files based on the origin of the package.

Example: Installing chrome.deb from Google installs a file in /etc/apt/sources.list.d, which is a surprise to some. If dpkg were to not allow that (as a file in the .deb, or a file created in postinst), unless the user was told and explicitly agreed to it, it would be less of a nasty surprise.

Example: Some stupid Debian package maintainer is very busy at work and does Debian hacking when they should really be sleeping, and types the following into their postrm script, while being asleep:

#!/bin/sh

PKG=perfectbackup
LIB="/var/lib/ $PKG"

rm -rf "$LIB"

See the mistake? Ideally, this would be found during automated testing before the package gets uploaded, but that assumes said package maintainer uses tools like piuparts.

I think it'd be better if we didn't rely only on infallible, indefatigable people with perfect workflows and processes for safety.

Having dpkg make the whole filesystem read-only, except for the parts that clearly belong to the package, based on some sensible set of rules, or on a suitable override, would protect against mistakes like this.
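
As a rough illustration of the mechanism, here is how one might run the buggy postrm above in a sandbox using the existing bubblewrap tool, sketched in Python; the paths are made up for the example, and this is not a proposal for how dpkg should actually implement it:

import subprocess

# Make the whole filesystem read-only, allow writes only under the
# package's own state directory, and disable network access. The
# paths are hypothetical; dpkg would derive them from the package.
subprocess.run(
    ["bwrap",
     "--ro-bind", "/", "/",
     "--bind", "/var/lib/perfectbackup", "/var/lib/perfectbackup",
     "--dev", "/dev",
     "--proc", "/proc",
     "--unshare-net",
     "/var/lib/dpkg/info/perfectbackup.postrm", "remove"],
    check=True,
)

With something like this, the stray space in the example above would at worst make the script fail, instead of deleting files it has no business touching.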

Posted Mon Mar 5 12:58:00 2018

Another weekend, another big mailing list thread

This weekend, those interested in Debian development have been having a discussion on the debian-devel mailing list about "What can Debian do to provide complex applications to its users?". I'm commenting on that in my blog rather than the mailing list, since this got a bit too long to be usefully done in an email.

directhex's recent blog post "Packaging is hard. Packager-friendly is harder." is also relevant.

The problem

To start with, I don't think the email that started this discussion poses the right question. The problem is not really about complex applications; we already have those in Debian. See, for example, LibreOffice. The discussion is really about how Debian should deal with the way some types of applications are developed upstream these days. They're not all complex, and they're not all big, but as usual, things only get interesting when n is big.

A particularly clear example is the whole nodejs ecosystem, but it's not limited to that and it's not limited to web applications. This is also not the first time this topic arises, but we've never come to any good conclusion.

My understanding of the problem is as follows:

A current trend in software development is to use programming languages, often interpreted high-level languages, combined with heavy use of third-party libraries, and a language-specific package manager for installing libraries, both for the developer and sometimes also for the sysadmin deploying the software to production. This bypasses the Linux distributions entirely. The benefit is that it has created language-specific ecosystems in which there is very little friction in using libraries written in that language, which speeds up development cycles a lot.

When I was young(er) the world was horrible

In comparison, in the old days, which for me means the 1990s, and before Debian took over my computing life, the cycle was something like this:

I would be writing an application, and would need to use a library to make some part of my application easier to write. To use that library, I would download the source code archive of the latest release, and laboriously decipher and follow the build and installation instructions, fix any problems, rinse, repeat. After getting the library installed, I would get back to developing my application. Often the installation of the dependency would take hours, so not a thing to be undertaken lightly.

Debian made some things better

With Debian, and apt, and having access to hundreds upon hundreds of libraries packaged for Debian, this became a much easier process. But only for the things packaged for Debian.

For those developing and publishing libraries, Debian didn't make the process any easier. They would still have to publish a source code archive, but also hope that it would eventually be included in Debian. And updates to libraries in the Debian stable release would not get into the hands of users until the next Debian stable release. This is a lot of friction. For C libraries, that friction has traditionally been tolerable. The effort of making the library in the first place is considerable, so any friction added by Debian is small by comparison.

The world has changed around Debian

In the modern world, developing a new library is much easier, and so also the friction caused by Debian is much more of a hindrance. My understanding is that things now happen more like this:

I'm developing an application. I realise I could use a library. I run the language-specific package manager (pip, cpan, gem, npm, cargo, etc.), it downloads the library, installs it in my home directory or my application source tree, and in less than the time it takes to have a sip of tea, I can get back to developing my application.

This has a lot less friction than the Debian route. The attraction to application programmers is clear. For library authors, the process is also much more streamlined. Writing a library, especially in a high-level language, is fairly easy, and publishing it for others to use is quick and simple. This can lead to a virtuous cycle where I write a useful little library, you use it and tell me about a bug or a missing feature, I add it, publish the new version, you use it, and we're both happy as can be. Where this might have taken weeks or months in the old days, it can now happen in minutes.

The big question: why Debian?

In this brave new world, why would anyone bother with Debian anymore? Or any traditional Linux distribution, since this isn't particularly specific to Debian. (But I mention Debian specifically, since it's what I know best.)

A number of things have been mentioned or alluded to in the discussion mentioned above, but I think it's good for the discussion to be explicit about them. As a computer user, software developer, system administrator, and software freedom enthusiast, I see the following reasons to continue to use Debian:

  • The freeness of software included in Debian has been vetted. I have a strong guarantee that software included in Debian is free software. This goes beyond the licence of that particular piece of software: it includes practical considerations, such as whether the software can actually be built using free tooling, and whether I have access to that tooling, because the tooling, too, is included in Debian.

    • There was a time when Debian debated (with itself) whether it was OK to include a binary that needed to be built using a proprietary C compiler. We decided that it isn't, or not in the main package archive.

    • These days we have the question of whether "minimised Javascript" is OK to be included in Debian, if it can't be produced using tools packaged in Debian. My understanding is that we have already decided that it's not, but the discussion continues. To me, this seems equivalent to the above case.

  • I have a strong guarantee that software in a stable Debian release won't change underneath me in incompatible ways, except in special circumstances. This means that if I'm writing my application and targeting Debian stable, the library API won't change, at least not until the next Debian stable release. Likewise for every other bit of software I use. Having things continue to work without having to worry is a good thing.

    • Note that a side-effect of the low friction of library development in current ecosystems is that library APIs sometimes change. This would mean my application would need to change to adapt to the API change. That's friction for my work.
  • I have a strong guarantee that a dependency won't just disappear. Debian has a large mirror network of its package archive, and there are easy tools to run my own mirror, if I want to. While running my own mirror is possible for other package management systems, each one adds to the friction.

    • The nodejs NPM ecosystem seems to be especially vulnerable to this. More than once, packages have gone missing, causing other projects that depend on the missing packages to start failing.

    • The way the Debian project is organised, it is almost impossible for this to happen in Debian. Not only are package removals carefully co-ordinated, but packages that are depended on by other packages aren't removed.

  • I have a strong guarantee that a Debian package I get from a Debian mirror is the official package from Debian: either the actual package uploaded by a Debian developer or a binary package built by a trusted Debian build server. This is because Debian uses cryptographic signatures of the package lists and I have a trust path to the Debian signing key.

    • At least some of the language-specific package managers fail to have such a trust path. This means that I have no guarantee that the library package I download today is the same code the library author uploaded.

    • Note that https does not help here. It protects the transfer from the package manager's web server to me, but makes absolutely no guarantees about the validity of the package. There have been enough cases of package repositories being attacked that this matters to me. Debian's signatures protect against malicious changes on mirror hosts. (A sketch of the kind of signature check involved follows after this list.)

  • I have a reasonably strong guarantee that any problem I find can be fixed, by me or someone else. This is not a strong guarantee, because Debian can't do anything about insanely complicated code, for example, but at least I can rely on being able to rebuild the software. That's a basic requirement for fixing a bug.

  • I have a reasonably strong guarantee that, after upgrading to the next Debian stable release, my stuff continues to work. Upgrades may always break, but at least Debian tests them and treats it as a bug if an upgrade doesn't work, or loses user data.
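
As promised above, here is a sketch in Python of the kind of signature check involved: verifying a clearsigned InRelease file fetched from a mirror against the Debian archive keyring, which is roughly what apt does internally:

import subprocess

# gpgv exits non-zero if the signature is bad or the signing key is
# not in the given keyring; the keyring path is the Debian default.
subprocess.run(
    ["gpgv",
     "--keyring", "/usr/share/keyrings/debian-archive-keyring.gpg",
     "InRelease"],
    check=True,
)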

These are the reasons why I think Debian and the way it packages and distributes software is still important and relevant. (You may disagree. I'm OK with that.)

What about non-Linux free operating systems?

I don't have much personal experience with non-Linux systems, so I've only talked about Linux here. I don't think the BSD systems, for example, are actually all that different from Linux distributions. Feel free to substitute "free operating system" for "Linux" throughout.

What is it Debian tries to do, anyway?

The previous section is one level of abstraction too low. It's important, but it's beneficial to take a further step back and consider what it is Debian actually tries to achieve. Why does Debian exist?

The primary goal of Debian is to enable its users to use their computers using only free software. The freedom aspect is fundamentally important and a principle that Debian is not willing to compromise on.

The primary approach to achieve this goal is to produce a "distribution" of free software: to make installing a free software operating system and applications, and maintaining such a computer, feasible for our users.

This leads to secondary goals, such as:

  • Making it easy to install Debian on a computer. (For values of easy that should be compared to toggling boot sector bytes manually.)

    We've achieved this, though of course things can always be improved.

  • Making it easy to install applications on a computer with Debian. (Again, compared to the olden days, when that meant configuring and compiling everything from scratch, with no guidance.)

    We've achieved this, too.

  • A system with Debian installed is reasonably secure, and easy to keep reasonably secure.

    This means Debian will provide security support for software it distributes, and has ways in which to install security fixes. We've achieved this, though this, too, can always be improved.

  • A system with Debian installed should keep working for extended periods of time. This is important to make using Debian feasible. If it takes too much effort to keep a computer running Debian, it's not feasible for many people to do that, and then Debian fails its primary goal.

    This is why Debian has stable releases with years of security support. We've achieved this.

The disconnect

On the one hand, we have Debian, which pretty much has achieved what I declare to be its primary goal. On the other hand, a lot of developers now expect much less friction than what Debian offers. This disconnect is, I believe, the cause of the debian-devel discussion, and of the variants of that discussion all over the open source landscape.

These discussions often go one of two ways, depending on which community is talking.

  • In the distribution and more old-school communities, the low-friction approach of language-specific package managers is often considered to be a horror, and an abandonment of all the good things that the Linux world has achieved. "Young saplings, who do they think they are, all agile and bendy and with no principles at all, get off our carefully cultivated lawn."

  • In the low-friction communities, Linux distributions are something only old, stodgy, boring people care about. "Distributions are dead, they only get in the way, nobody bothers with them anymore."

This disconnect will require effort by both sides to close the gap.

On the one hand, so much new software is being written by people using the low-friction approach that Linux distributions may fail to attract new users and especially new developers, and this will hurt them and their users.

On the other hand, the low-friction people may be sawing off the tree branch they're sitting on. If distributions suffer, the base on which low-friction development relies will wither away, and we'll be left running low-friction free software on proprietary platforms.

Things for low-friction proponents to improve

Here are a few things I've noticed that go wrong in the various communities oriented towards the low-friction approach.

  • Not enough care is given to copyright licences. This is a boring topic, but it's the legal basis that all of free software and open source is built on. If copyright licences are violated, or copyrights are not respected, or copyrights or licences are not expressed well enough, or incompatible licences are mixed, the result can easily fail to be free software or open source at all.

    It's boring, but be sufficiently pedantic here. It's not even all that difficult.

  • Do provide actual source. It seems quite a number of Javascript projects only distribute "minimised" versions of code. That's not actually source code, any more than, say, Java bytecode is, even if a decompiler can make it kind of editable. If source isn't available, it's not free software or open source.

  • Please try to be careful with API changes. What used to work should still work with a new version of a library. If you need to make an API change that breaks compatibility, find a way to still support those who rely on the old API, using whatever mechanisms available to you. Ideally, support the old API for a long time, years. Two weeks is really not enough.

  • Do be careful with your dependencies. Locking dependencies down to a specific version makes things difficult for distributions, because they can often provide only one or a very small number of versions of any one package. Likewise, avoid embedding dependencies in your own source tree, because that explodes the amount of work distributions have to do to patch security holes. (No, distributions can't rely on tens of thousands of upstreams to each do the patching correctly and promptly.) An example of version locking follows after this list.
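
As an example of the version locking point, in a Python requirements.txt (the library name is just an example; the same applies to package.json, Gemfile, and friends):

# Exact pin: a distribution carrying any other version can't satisfy this.
requests==2.18.4

# Version range: a distribution's packaged version has a chance to qualify.
requests>=2.18,<3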

Things for Debian to improve

There are many sources of friction that come from Debian itself. Some of them are unavoidable: if upstream projects don't take care of copyright licence hygiene, for example, then Debian will impose that on them and that can't be helped. Other things are more avoidable, however. Here's a list off the top of my head:

  • A lot of stuff in Debian happens over email, which might happen using a web application instead, if it were not for historical reasons. For example, the Debian bug tracking system (bugs.debian.org) requires using email, and given the delays caused by spam filtering, interactions can take more than fifteen minutes. This is a source of friction that could be avoided.

  • Likewise, Debian voting happens over email, which can cause friction from delays.

  • Debian lets its package maintainers use any version control system, any packaging helper tooling, and any packaging workflow they want. This means that every package is, to some extent, new territory for anyone other than its primary maintainers. Even when the same tools are used, they can be used in a variety of different ways. Consistency should reduce friction.

  • There's too little infrastructure to do things like collecting copyright information into debian/copyright. This really shouldn't be a manual task.

  • Debian packaging uses arcane file formats, loosely based on email headers. More standard formats might make things easier, and reduce friction. (A hypothetical example follows after this list.)

  • There's not enough automated testing, or it's too hard to use, making it too hard to know if a new package will work, or a modified package doesn't break anything that used to work.

  • Overall, making a Debian package tends to require too much manual work. Packaging helpers like dh certainly help, but not enough. I don't have a concrete suggestion how to reduce it, but it seems like an area Debian should work on.

  • Maybe consider supporting installing multiple versions of a package, even if only for, say, Javascript libraries. Possibly with a caveat that only specific versions will be security supported, and a way to alert the sysadmin if vulnerable packages are installed. Dunno, this is a difficult one.

  • Maybe consider providing something where the source package gets automatically updated to every new upstream release (or commit), with binary packages built from that, and those automatically tested. This might be a separate section of the archive, and packages would be included into the normal part of the archive only by manual decision.

  • There's more, but mostly not relevant to this discussion, I think. For example, Debian is a big project, and the mere size is a cause of friction.
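
To illustrate the file format point above, here is a hypothetical debian/control file; the syntax is loosely that of email headers:

Source: perfectbackup
Section: utils
Priority: optional
Maintainer: Some Maintainer <maintainer@example.com>
Build-Depends: debhelper (>= 11)
Standards-Version: 4.1.3

Package: perfectbackup
Architecture: all
Depends: ${misc:Depends}
Description: backup tool used as an example in an earlier post
 Continuation lines in the long description are indented by one
 space, another convention inherited from email headers.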

Comments?

I don't allow comments on my blog, and I don't want to debate this in private. If you have comments on anything I've said above, please post to the debian-devel mailing list. Thanks.

Baits

To ensure I get some responses, I will leave these baits here:

Anyone who's been programming less than 12332 days is a young whipper-snapper and shouldn't be taken seriously.

Depending on the latest commit of a library is too slow. The proper thing to do for really fast development is to rely on the version in the unsaved editor buffer of the library developer.

You shouldn't have read any of this. I'm clearly a troll.

Posted Sat Feb 17 15:05:00 2018

My company, QvarnLabs Ab, has today released the first alpha version of our new product, Qvisqve. Below is the press release. I wrote pretty much all the code, and it's free software (AGPL+).


Helsinki, Finland - 2018-02-09. QvarnLabs Ab is happy to announce the first public release of Qvisqve, an authorisation server and identity provider for web and mobile applications. Qvisqve aims to be secure, lightweight, fast, and easy to manage. "We have big plans for Qvisqve, and for helping customers manage cloud identities", says Kaius Häggblom, CEO of QvarnLabs.

In this alpha release, Qvisqve supports the OAuth2 client credentials grant, which is useful for authenticating and authorising automated systems, including IoT devices. Qvisqve can be integrated with any web service that can use OAuth2 and JWT tokens for access control.

Future releases will provide support for end-user authentication by implementing the OpenID Connect protocol, with a variety of authentication methods, including username/password, U2F, TOTP, and TLS client certificates. Multi-factor authentication will also be supported. "We will make Qvisqve flexible enough for any serious use case", says Lars Wirzenius, software architect at QvarnLabs. "We hope Qvisqve will be useful to the software freedom ecosystem in general", Wirzenius adds.

Qvisqve is developed and supported by QvarnLabs Ab, and works together with the Qvarn software, which is award-winning free and open-source software for managing sensitive personal information. Qvarn is in production use in Finland and Sweden and manages over a million identities. Both Qvisqve and Qvarn are released under the Affero General Public Licence.
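
The client credentials grant mentioned in the press release is simple from the client's point of view. Here is a minimal sketch in Python, with placeholder URLs and credentials rather than Qvisqve's actual endpoints:

import requests

# Exchange client id and secret for an access token
# (OAuth2 client credentials grant, RFC 6749 section 4.4).
resp = requests.post(
    "https://auth.example.com/token",
    data={"grant_type": "client_credentials"},
    auth=("my-client-id", "my-client-secret"),
)
resp.raise_for_status()
token = resp.json()["access_token"]

# Present the token as a bearer token to the protected API.
api_resp = requests.get(
    "https://api.example.com/resource",
    headers={"Authorization": "Bearer " + token},
)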

Posted Fri Feb 9 16:30:00 2018

TL;DR: Ick is a continuous integration (CI) system. See http://ick.liw.fi/ for more information.

More verbose version follows.

First public version released

The world may not need yet another continuous integration (CI) system, but I do. I've been unsatisfied with the ones I've tried or looked at. More importantly, I am interested in a few things that are more powerful than what I've ever even heard of. So I've started writing my own.

My new personal hobby project is called ick. It is a CI system, which means it can run automated steps for building and testing software. The home page is at http://ick.liw.fi/, and the download page has links to the source code and .deb packages and an Ansible playbook for installing it.

I have now made the first publicly advertised release, dubbed ALPHA-1, version number 0.23. It is of alpha quality, and that means it doesn't have all the intended features and if any of the features it does have work, you should consider yourself lucky.

Invitation to contribute

Ick has so far been my personal project. I am hoping to make it more than that, and invite contributions. See the governance page for the constitution, the getting started page for tips on how to start contributing, and the contact page for how to get in touch.

Architecture

Ick has an architecture consisting of several components that communicate over HTTPS using RESTful APIs and JSON for structured data. See the architecture page for details.
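
In practice this means that talking to an ick component looks like talking to any other JSON-over-HTTPS API. The endpoint below is hypothetical, for flavour only; see the architecture page for the real APIs:

import requests

# Hypothetical endpoint and token, for illustration only.
resp = requests.get(
    "https://ick.example.com/projects",
    headers={"Authorization": "Bearer some-token"},
)
resp.raise_for_status()
projects = resp.json()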

Manifesto

Continuous integration (CI) is a powerful tool for software development. It should not be tedious, fragile, or annoying. It should be quick and simple to set up, and work quietly in the background unless there's a problem in the code being built and tested.

A CI system should be simple, easy, clear, clean, scalable, fast, comprehensible, transparent, reliable, and boost your productivity to get things done. It should not require a lot of effort to set up, nor a lot of hardware just for the CI, nor frequent attention to keep working, and developers should never have to wonder why something isn't working.

A CI system should be flexible to suit your build and test needs. It should support multiple types of workers, as far as CPU architecture and operating system version are concerned.

Also, like all software, CI should be fully and completely free software and your instance should be under your control.

(Ick is little of this yet, but it will try to become all of it. In the best possible taste.)

Dreams of the future

In the long run, I would like ick to have features like the ones described below. It may take a while to get them all implemented.

  • A build may be triggered by a variety of events. Time is an obvious event, as is the project's source code repository changing. More powerfully, any build dependency changing should trigger a build, regardless of whether the dependency comes from another project built by ick, or from a package from, say, Debian: ick should keep track of all the packages that get installed into the build environment of a project, and if any of their versions change, it should trigger the project build and tests again.

  • Ick should support building in (or against) any reasonable target, including any Linux distribution, any free operating system, and any non-free operating system that isn't brain-dead.

  • Ick should manage the build environment itself, and be able to do builds that are isolated from the build host or the network. This partially works: one can ask ick to build a container and run a build in the container. The container is implemented using systemd-nspawn (see the sketch after this list). This can be improved upon, however. (If you think Docker is the only way to go, please contribute support for that.)

  • Ick should support any workers that it can control over ssh or a serial port or another such neutral communication channel, without having to install an agent of any kind on them. Ick won't assume that it can have, say, a full Java runtime on the worker, so that the worker can be something as small as a microcontroller.

  • Ick should be able to effortlessly handle very large numbers of projects. I'm thinking here that it should be able to keep up with building everything in Debian, whenever a new Debian source package is uploaded. (Obviously whether that is feasible depends on whether there are enough resources to actually build things, but ick itself should not be the bottleneck.)

  • Ick should optionally provision workers as needed. If all workers of a certain type are busy, and ick's been configured to allow using more resources, it should do so. This seems like it would be easy to do with virtual machines, containers, cloud providers, etc.

  • Ick should be flexible in how it can notify interested parties, particularly about failures. It should allow an interested party to ask to be notified over IRC, Matrix, Mastodon, Twitter, email, SMS, or even by a phone call and speech synthesiser. "Hello, interested party. It is 04:00 and you wanted to be told when the hello package has been built for RISC-V."
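
To make the container point above concrete, here is roughly the kind of invocation involved, sketched in Python; the directory tree path and build command are made up for the example:

import subprocess

# Run a build command inside an unpacked system tree, isolated from
# the host; --private-network also cuts the build off from the network.
subprocess.run(
    ["systemd-nspawn",
     "-D", "/var/lib/ick/workspaces/debian-stretch",
     "--private-network",
     "sh", "-c", "cd /workspace && ./build.sh"],
    check=True,
)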

Please give feedback

If you try ick, or even if you've just read this far, please share your thoughts on it. See the contact page for where to send it. Public feedback is preferred over private, but if you prefer private, that's OK too.

Posted Mon Jan 22 20:11:00 2018

In mid-2017, I decided to experiment with using pull-requests (PRs) on Github. I've read that they make development using git much nicer. The end result of my experiment is that I'm not going to adopt a PR based workflow.

The project I chose for my experiment is vmdb2, a tool for generating disk images with Debian. I put it up on Github, and invited people to send pull requests or patches, as they wished. I got a bunch of PRs, mostly from two people. For a little while, there was a flurry of activity. It has now calmed down, I think primarily because the software has reached a state where the two contributors find it useful and don't need it to be fixed or have new features added.

This was my first experience with PRs. I decided to give it until the end of 2017 before drawing any conclusions. I've found good things about PRs and a workflow based on them:

  • they reduce some of the friction of contributing, making it easier for people to contribute; from a contributor's point of view, PRs certainly seem like a better way than sending patches over email or sending a message asking to pull from a remote branch
  • merging a PR in the web UI is very easy

I also found some bad things:

  • I really don't like the Github UI or UX, in general or for PRs in particular
  • especially the emails Github sends about PRs seemed useless beyond a basic "something happened" notification, prompting me to check the web UI
  • PRs are a centralised feature, which is something I prefer to avoid; further, they're tied to Github, which is something I object to on principle, since it's not free software
    • note that Gitlab provides support for PRs as well, but I've not tried it; it's an "open core" system, which is not fully free software in my opinion, and so I'm wary of Gitlab; it's also a centralised solution
    • a "distributed PR" system would be nice
  • merging a PR is perhaps too easy, and I worry that it leads me to merging without sufficient review (that is of course a personal flaw)

In summary, PRs seem to me to prioritise making life easier for contributors, especially occasional contributors or "drive-by" contributors. I think I prefer to care more about frequent contributors, and myself as the person who merges contributions. For now, I'm not going to adopt a PR based workflow.

(I expect people to mock me for this.)

Posted Tue Jan 9 17:25:00 2018

I wrote these when I woke up one night and had trouble getting back to sleep, and spent a while in a very philosophical mood thinking about life, success, and productivity as a programmer.

Imagine you're developing a piece of software.

  • You don't know it works, unless you've used it.

  • You don't know it's good, unless people tell you it is.

  • You don't know you can do it, unless you've already done it.

  • You don't know it can handle a given load, unless you've already tried it.

  • The real bottlenecks are always a surprise, the first time you measure.

  • It's not ready for production until it's been used in production.

  • Your automated tests always miss something, but with only manual tests, you always miss more.

Posted Sun Dec 17 11:09:00 2017