(Or, How To Make Meaningful Estimates For Software Products, Part 2)
Last week I left you hanging. In How To Make Meaningful Estimates For Software Products I basically said that estimates don’t work for software projects. That’s still true.
But the fact that software estimates don’t work doesn’t mean that features don’t get completed and delivered. They do get finished; you just can’t predict with any accuracy when that will happen. So, how do you make software delivery predictions if you have terrible software estimates?
How can you make software delivery predictions without estimates?
So the question arises – can we get to any degree of predictability, despite that?
Here are ways you can approach predictability, even in a world where estimates are impossible, and you can use them in combination:
- Don’t ship until it’s finished – that is, make the prediction about the quality and the scope, but don’t predict the time
- Ship on a regular basis, including only what’s finished – that is, predict the time and the quality, but not the scope
- Ship partial features – predict the time and the quality, and accept partial scope
- Ship tiny features – only ship features you can estimate reliably, which means (remember this from last week’s post) they are not interesting
Mitigating the software estimates problem itself
And there are some mitigations for the fact that estimates don’t work. The most obvious is the one that Steve Johnson (@sjohnson717) mentioned in a comment on last week’s post:
- Estimate by comparison – “this feature seems about as big as that feature, which took us four weeks to implement”
The smaller the feature, the better this works, of course, because uncertainty grows with the value of the feature.
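For illustration only, here’s a minimal sketch of what estimate-by-comparison could look like if you kept a simple history of completed features and their actual durations. The feature names, tags, and the crude similarity score are assumptions made up for this example, not part of any real tooling.

```python
# Minimal sketch of estimate-by-comparison: find the most similar completed
# feature and reuse its actual duration as the naive estimate.
# All names, tags, and durations below are purely illustrative.

from dataclasses import dataclass


@dataclass
class CompletedFeature:
    name: str
    tags: set          # rough descriptors of the work involved
    actual_weeks: float # how long it really took


def similarity(new_tags: set, done: CompletedFeature) -> float:
    """Crude overlap score: shared tags divided by total distinct tags."""
    union = new_tags | done.tags
    return len(new_tags & done.tags) / len(union) if union else 0.0


def estimate_by_comparison(new_tags: set, history: list) -> tuple:
    """Return the closest past feature and its actual duration."""
    best = max(history, key=lambda f: similarity(new_tags, f))
    return best.name, best.actual_weeks


history = [
    CompletedFeature("CSV export", {"export", "reporting", "backend"}, 4.0),
    CompletedFeature("PDF invoices", {"export", "billing", "backend"}, 6.0),
    CompletedFeature("SSO login", {"auth", "integration"}, 8.0),
]

match, weeks = estimate_by_comparison({"export", "reporting", "frontend"}, history)
print(f"Closest comparison: {match}, which took about {weeks} weeks")
```

The smaller and more routine the new feature, the more the “closest comparison” actually tells you – which is exactly why this works better for small features than for big, novel ones.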
But that’s not all!
I have more thoughts on estimates and product planning predictability in my next post.
Estimating continues to be a hot topic. Execs want precision (how many hours or dollars?) while some developers want to avoid any precision at all. And alas, product managers weigh in with opinions on how long they think it should take.
I've found the key to be that the people who do the work are the only ones qualified to give the estimate.
More on the comparison approach at http://appliedframeworks.com/blog/2013/01/23/esti…
Yep, I agree with Steve – estimating is a hot topic (and definitely a sticky subject, because of the differing opinions!)
I generally recommend that, at the stage where you're grooming your backlog for useful ideas, a high-level estimate can be made. We've borrowed the scale from 'Planning Poker', which is essentially a modified Fibonacci sequence where, as the numbers get higher, they get further apart: 1, 2, 3, 5, 8, 13, 20, 40, 100.
This allows the product manager to weigh in with a guesstimate on that scale, which helps to compare the feature or idea against others in the backlog. If you score both Effort and Impact this way, you get a rough guide to where your 'quick wins' are, versus the 'time sinks' you want to avoid: http://www.prodpad.com/how-to-guide/measuring-eff…
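As a rough sketch of that Effort/Impact comparison – the feature names, scores, and 'quick win' cut-offs below are invented for illustration, not anything from ProdPad's actual tooling:

```python
# Rough sketch of the Effort/Impact comparison described above, scoring both
# on the modified Fibonacci scale (1, 2, 3, 5, 8, 13, 20, 40, 100).
# Feature names, scores, and the quadrant cut-offs are invented examples.

SCALE = [1, 2, 3, 5, 8, 13, 20, 40, 100]

backlog = [
    # (idea, effort, impact) -- both scored on the scale above
    ("One-click CSV export", 3, 20),
    ("Rebuild reporting engine", 100, 40),
    ("Tweak onboarding emails", 2, 8),
    ("Rewrite settings page in new framework", 40, 5),
]


def classify(effort: int, impact: int, low: int = 8, high: int = 20) -> str:
    """Very rough quadrant labels; the cut-offs are arbitrary."""
    if impact >= high and effort <= low:
        return "quick win"
    if impact <= low and effort >= high:
        return "time sink"
    return "needs discussion"


# Sort by impact-per-unit-of-effort so the best bets float to the top.
for idea, effort, impact in sorted(backlog, key=lambda x: x[2] / x[1], reverse=True):
    print(f"{idea}: effort={effort}, impact={impact} -> {classify(effort, impact)}")
```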
That said, estimates are just that: estimates. And the earlier they are made, the less likely they are to translate into exact hours or dollars in any measurable way. But that shouldn't stop teams from having a system to at least compare tasks in a relative, high-level sense!
As I sometimes do, I went pretty far out on a limb for the previous post, and with this one I'm trying to reel it back. I have another follow-up in draft form that I'll get to later this week. Obviously, there's a lot of value in getting rough sizing for big stories, and more detailed sizing for smaller stories. I would recommend the PM only use t-shirt sizing (S, M, L, XL, …) – at least for initial planning purposes – combined with understanding the approximate risks and similarities of the new items.
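To make the t-shirt idea a bit more concrete, here's a minimal sketch that maps each size to a deliberately wide range of weeks and rolls up a rough total for initial planning. The ranges and sample items are illustrative assumptions, not a prescribed method.

```python
# Minimal sketch of t-shirt sizing for initial planning: map each size to a
# deliberately wide range of weeks and roll up a rough total.
# The ranges and sample items are assumptions for illustration only.

TSHIRT_WEEKS = {
    "S": (1, 2),
    "M": (2, 4),
    "L": (4, 8),
    "XL": (8, 16),  # anything bigger probably needs to be split up first
}

plan = [
    ("Saved searches", "M"),
    ("Audit log", "L"),
    ("Permissions overhaul", "XL"),
]

low = sum(TSHIRT_WEEKS[size][0] for _, size in plan)
high = sum(TSHIRT_WEEKS[size][1] for _, size in plan)
print(f"Rough initial plan: somewhere between {low} and {high} weeks")
```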