The big problem with PM tools is that they are stuck in “list mode” – they mostly treat requirements as a list to be worked through. Agile has made this even worse, because it doesn’t even have a notion of dependencies and other relationships. But many-to-many relationships, and even squishier relationships than that, are the stock in trade of product managers.
A big part of the job of a PM tool (in addition to keeping useful lists) is to be an offboard memory for me, so I don’t have to make the same decision twice – and if I do need to revisit a decision, I can easily recall what led to the original one. I’d also like it to help me find the things I’ve already done some thinking about, so I can reuse that thinking in the same context or a new one.
So, some of the questions I regularly ask myself, and I’d rather ask my tool:
- Did I have this idea before? (It’s amazing how often this happens!)
- What did I push to the back of the line a while ago that I should check now?
- Why did I make this decision versus the other decision?
- Is there anything like this in the system already?
- Have I already been through this thought process for another feature, and can I reuse that thought process?
What I want is for the system to recognize patterns, even very simple ones at first, and surface them to me. It has to be non-stupid, but it can be simple. For example, each of the impact areas I discussed last week has three potential states: “not applicable,” “ask me later,” and other (“other” being the actual description of the impact). It would be hard to correlate the “other” answers – not impossible, there’s technology for it, but hard. The N/A and “ask me later” states, though, are easy to make use of.
Obviously, “ask me later” means the system should remind me later, at a reasonable time in the future, that I haven’t answered those questions. Just as it might remind me at various points that I haven’t written the value proposition yet, or other things. For the N/A answers – at least it could list out the other features that have a similar set of N/A answers, which might reflect an underlying similarity that can be reused.
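Neither of these behaviors exists in my tool today, but both are simple to express. Here is a minimal sketch in Python – the data model, feature names, and state strings are all invented for illustration – of the two easy cases: resurfacing “ask me later” answers, and finding features whose N/A pattern matches:

```python
# Hypothetical data model: each feature records, per impact area, either
# "n/a", "ask me later", or a free-text description of the impact.
features = {
    "export-to-csv": {"billing": "n/a", "search": "n/a", "mobile": "ask me later"},
    "bulk-delete":   {"billing": "n/a", "search": "n/a", "mobile": "adds a confirm step"},
    "sso-login":     {"billing": "changes invoicing", "search": "n/a", "mobile": "ask me later"},
}

def open_reminders(feature_impacts):
    """Areas still marked 'ask me later' -- the tool should resurface these."""
    return [area for area, answer in feature_impacts.items()
            if answer == "ask me later"]

def similar_by_na(name, all_features):
    """Other features whose set of N/A areas exactly matches this one's."""
    na_set = {a for a, ans in all_features[name].items() if ans == "n/a"}
    return [other for other, impacts in all_features.items()
            if other != name
            and {a for a, ans in impacts.items() if ans == "n/a"} == na_set]

print(open_reminders(features["export-to-csv"]))  # ['mobile']
print(similar_by_na("export-to-csv", features))   # ['bulk-delete']
```

The exact-match comparison is the crudest possible similarity measure; a real tool would probably want a looser overlap score, but even this catches the “these two features don’t touch any of the same areas I don’t touch” case.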
Taking that a step further, if I answer N/A for some impact area, that presumably means the feature doesn’t touch that area at all, which has implications for the cost of implementing the feature. Some areas are costly to make changes to, while others are less so. So if I get a low estimate for a feature that has an impact in a high-cost area, the tool should warn me.
Another analysis of the same situation: if I have a feature that has an impact in a high-cost area, I should double-check its rationale for existing – important customers want it, it’s important for aligning to market themes, or whatever it might be.
Simply going through a list of features is never going to give me these kinds of insights. And today I have to remember to do all that myself.