Over my career I’ve ended up with the following approach to planning software development, and estimating how long it will take. It is predicated on two realizations:

  • the longer into the future I’m making plans, the more likely they are to need to change, due to things I learn along the way, or things changing that are beyond my control

  • the bigger a piece of work is, the harder it is for me to estimate correctly how long it will take, even with very loose tolerances

I find that planning, in detail, beyond a week or two is likely to be useless. After that, too much changes that affects what I need to do, and how. Note the caveat of “in detail”: it’s fine to plan something like “over the next decade I will implement a backup program”, but planning to develop backups in September, restores in October, and adding encryption the first week in November is folly. What happens if in October I realize I need to learn and implement TCP/IP, HTTP, and TLS in my chosen language, because the existing implementations turn out not to work in the northern hemisphere? And what if I need to move to another country due to inheriting a castle in Spain?

I also find that estimating anything that’s going to take more than about half a work day is too likely to go wrong. There’s too much uncertainty in large chunks of work, and the variability becomes impossible to manage.

Thus, for detailed planning, I plan for one iteration at a time, and I keep my tasks for that iteration at four hours or less.

In fact, my estimates are divided into four buckets:

  • up to 0.25 hours
  • up to 1 hour
  • up to 4 hours
  • too long; split before committing to them
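The buckets above could be sketched, hypothetically, as a small classifier. The function name and return labels here are my own invention for illustration; only the thresholds come from the list itself:

```python
def estimate_bucket(hours: float) -> str:
    """Map a task estimate, in hours, to one of the four buckets.

    Hypothetical sketch; the bucket boundaries (0.25, 1, 4 hours)
    are the ones described in the text.
    """
    if hours <= 0.25:
        return "up to 0.25 hours"
    if hours <= 1:
        return "up to 1 hour"
    if hours <= 4:
        return "up to 4 hours"
    return "too long: split before committing"
```

Anything that falls into the last bucket is not estimated at all; it is broken into smaller tasks first, until every piece fits within four hours.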

Estimating anything between one and four hours seems futile to me. If it takes an hour, it can easily take two or three, because sometimes, inevitably, debugging happens, and that’s impossible to estimate.