After the fact – a new role for function points?

Just the other day I was chatting with a friend about the place of function points in software development.

While they are traditionally used as an approach to estimate the effort required to build a system, from my point of view this role has changed with the current prevalence of lean and agile methods.

Using function points to estimate effort in new project work

… is (IMHO) a difficult feat: it would require a serious amount of functional decomposition, which in turn would require extensive analysis – in itself a serious step towards big design up front (BDUF). Furthermore, it would require so much effort that a separate project would be necessary just to get the funding for the work.

And this approach is neither very agile nor very lean. It does not address the knowledge gain – both about what the project is about and about how to go about the solution – that happens during the project.

Making work between projects comparable with function points

… on the other hand seems quite feasible to me. Usually, after we have finished the work (and of course in an agile environment we have finished – really finished – at least some work after the first iteration), we do have tangible building blocks that can easily be measured and counted in function points.

Using function points to plan big projects

… is not such a good idea from my point of view – even when it is considered viable because epics seem too hard to plan with planning poker.
In my opinion, using function point analysis for up-front planning is almost dangerous – for the aforementioned reasons of extensive up-front work (and the implicit commitment to solutions).
If estimating epics seems too hard, there might be other reasons involved that would still be valid if function point analysis were used. But with the kind of up-front analysis that often seems appropriate for function point analysis, these points might become hidden behind too much detail. The problem with planning poker is of course that the “consensus amongst experts” it inherited from wideband delphi depends on a certain level of detail and on a sufficient number of available experts from the different areas of expertise.

In the end, all that planning poker does is condense the formal approach of wideband delphi into a seemingly more informal approach based on verbal communication. Establishing a basis for estimation and installing a cross-functional group of experts is still necessary – even if the process that can take weeks in wideband delphi is condensed into a relatively short interactive meeting. In a software development setting such a group could consist of, for example, marketing, software architects, database engineers, UX specialists, testers, quality assurance, and technical writers.
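The mechanics of such a condensed round can be sketched in a few lines. This is a minimal illustration only – the card deck is the common modified-Fibonacci one, but the consensus rule (re-discuss when cards are more than two deck positions apart) is an assumed rule of thumb, not a fixed part of planning poker:

```python
# Minimal planning-poker round (illustrative sketch; the convergence
# rule below is an assumption, not part of any official method).
FIB_DECK = [1, 2, 3, 5, 8, 13, 20, 40, 100]

def needs_discussion(estimates):
    """Assumed heuristic: re-discuss if the revealed cards are more
    than two deck positions apart."""
    positions = [FIB_DECK.index(e) for e in estimates]
    return max(positions) - min(positions) > 2

# Round one: everyone reveals a card simultaneously.
round_one = {"architect": 5, "tester": 20, "ux": 5}
print(needs_discussion(list(round_one.values())))   # True -> outliers explain, then re-vote

# Round two, after the outliers have made their case:
round_two = {"architect": 8, "tester": 8, "ux": 5}
print(needs_discussion(list(round_two.values())))   # False -> close enough, take the estimate
```

The point of the sketch: the “formal” wideband delphi iteration (estimate, compare, discuss outliers, re-estimate) survives intact – only the paperwork between rounds is gone.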

If the requirements can't be estimated well enough, that problem is often rooted in too little experience in the domain, or in a missing decomposition into manageable – and understandable – units, for example stories on the next (more concrete) level of abstraction.
While function point analysis also enforces the decomposition of the requirements, it tends to drive the analysis towards a mindset of “What can be counted in function point analysis?” instead of a mindset of “What is a capability of the system that can actually be leveraged by an end-user and that I can estimate?” There is therefore a genuine risk of operating in the solution space before the problem space has even been explored well enough.

So, instead of opting for function point analysis when epics seem un-estimatable, I would rather suggest breaking the epics down into a form that allows a solid comparison with things that have been done before. One approach might be to at least name the stories on the next, less abstract, level – and additionally to walk through a couple of user journeys.

Using function points to plan small increments of existing software

… on the other hand is a surprisingly good idea in my book.

The questions that have to be answered to get to the function point count revolve around things like:

  • How many (already existing!) screens have to be modified, and how complex are they?
  • How many tables are involved?
    (With existing, running software, the data model and its physical representation usually exist as well.)
  • How many interfaces have to be touched? Are they inbound or outbound?
    Remember: the system is already running, so the interfaces are either already in place or an explicit part of the requirement.
  • How many functional blocks of what complexity are affected?

All of these issues are cleanly cut when adding small, well-defined requirements to an already existing system, and thus they can be counted quite easily. When implementing completely new epics, putting numbers on these issues requires at least the creation (a.k.a. design) of a conceptual data model and a functional decomposition of the requirements – things you would rather do during the discovery of the system, alongside the implementation.
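Once those questions are answered, the counting itself is almost mechanical. Here is a sketch using the classic IFPUG complexity weights for the five function types – the weights are standard, but the counted “small change” below is a made-up example:

```python
# Unadjusted function point count, IFPUG-style. The weight table is the
# standard one; the example change at the bottom is hypothetical.
WEIGHTS = {
    "EI":  {"low": 3, "avg": 4,  "high": 6},   # external inputs (e.g. screens accepting data)
    "EO":  {"low": 4, "avg": 5,  "high": 7},   # external outputs
    "EQ":  {"low": 3, "avg": 4,  "high": 6},   # external inquiries
    "ILF": {"low": 7, "avg": 10, "high": 15},  # internal logical files (tables/entities)
    "EIF": {"low": 5, "avg": 7,  "high": 10},  # external interface files
}

def unadjusted_fp(counts):
    """counts: {function_type: {complexity: number_of_items}}"""
    return sum(WEIGHTS[ftype][cplx] * n
               for ftype, by_complexity in counts.items()
               for cplx, n in by_complexity.items())

# Hypothetical small increment to a running system: two existing screens
# modified (one simple, one average), one table touched, one outbound interface.
change = {
    "EI":  {"low": 1, "avg": 1},
    "ILF": {"low": 1},
    "EIF": {"low": 1},
}
print(unadjusted_fp(change))  # 3 + 4 + 7 + 5 = 19
```

For a small change to a running system every input to this calculation can simply be looked at and counted – which is exactly why the approach works there and not for brand-new epics.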

My conclusion:
Function points can be really ugly critters – but used to the right ends they can be a tremendously efficient means.

’til next time
  Michael Mahlberg
