We recently got some really high praise from a client:

Thank you for everything you have done so far. Our reception has been very strong from our customer base and we seem to be off to a great start. […] We have a pretty good idea that our next feature will be around X. Right now, I need to know how much money this feature may cost in order to get a better idea of time frames and what we need to do with respect to current customer load and additional funding. […] The original estimate of N hours was almost spot on, and so was most everything else, so an estimate/guesstimate is fine. I’m sure this will generate more questions.

(Emphasis mine)

While the kudos are awesome, what really stuck out to me was the client’s perception that our estimate was spot on. I totally understand that perception: from a happy client’s perspective, a good estimate is in the result, not the process. But if an outside observer were to judge our estimation process for the project by conventional standards, they’d probably conclude that the estimate we put together was pathetic. “What?” you say? Let me explain…

First of all, estimate quality typically correlates strongly with the granularity of the tasks being estimated: the more a feature is decomposed, the more likely the estimates for its sub-pieces are to be accurate. But on the push for this client's initial Minimum Viable Product (MVP), we didn't do any granular estimation – the N hours we estimated were based on looking at their MVP as a whole.

It gets even more pathetic, though: you see, one important way to make sure you give good estimates is to closely track the actual time spent vs. the estimate on each task, and then to adjust your upcoming estimates as you figure out the variance between those two. Yet on this project, we didn’t even have individual, per-task estimates, and we didn’t track actual time on a per-task basis. So there was no midstream adjustment to the estimates at all – the original estimate was N hours, and that’s what we hit within 10%.
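
To make that conventional practice concrete, here's a minimal sketch (in Python, with made-up task names and hours purely for illustration) of what tracking estimates vs. actuals and adjusting the remaining estimates midstream would look like – the approach we skipped entirely on this project:

```python
# Illustrative only: hypothetical tasks and hours, not real project data.

# (task, estimated hours, actual hours) for work completed so far
completed = [("signup", 8, 10), ("billing", 12, 15), ("admin", 6, 8)]

# Remaining tasks with their original estimates
remaining = {"reporting": 20, "notifications": 10}

# How far actuals have run over (or under) the estimates so far
estimated = sum(est for _, est, _ in completed)
actual = sum(act for _, _, act in completed)
variance_ratio = actual / estimated  # e.g. 1.27 means ~27% over estimate

# Adjust the upcoming estimates by the observed variance
adjusted = {task: round(hours * variance_ratio, 1)
            for task, hours in remaining.items()}

print(f"Running at {variance_ratio:.0%} of estimate; adjusted: {adjusted}")
```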

Now, there is one indicator of a good estimate that we did have: prior experience with the problem space. We've built a lot of web applications over the past four years, and we also had some pre-existing domain knowledge around the problem the client was solving with the initial MVP. But that just takes the estimate from really pathetic back to plain old pathetic: there has to be (and is) a very powerful offset to the apparent problems with this estimate, something that made it work where it shouldn't have.

Here it is:

Our estimate was spot on because the client was engaged, focused on a truly minimum viable product, and trusted us. In a nutshell, the estimate was awesome because the client was awesome.

We do our best work in a time box: give us an hourly budget, and we'll have an ongoing conversation with you as we develop, constantly tweaking, simplifying, removing, and even adding things to make sure the desired outcome is achieved when we hit the end of the runway. The reason it works is that in a startup, the desired outcome is not software built to spec. The desired outcome is an MVP that you can take to market and get concrete feedback on.

As a matter of fact, in a startup environment the "keys to effective estimation" are all liabilities. Breaking features down to a granular level is time-consuming, and half the time you're decomposing features that won't make it into the final product. Tracking estimates vs. actuals is useless when the strategy and approach for each task change multiple times between estimation and realization. And while domain knowledge is awesome, startups are often solving a unique problem, which makes domain knowledge hard to come by.

Good estimates happen to savvy founders, since they set a reasonable time box and then actively engage with the team building the product to simplify and fit the result into the estimate. If that sounds like you, we need to talk!

Posted by Nathaniel on Jan 13th, 2010
