«Creating Feedback Loops» is not about having meetings

originally by Michael Mahlberg on agile-aspects

In many modern approaches to work, like The Kanban Method, Lean Startup, Agile Software Development, or DevOps, feedback is an essential part of the approach.

Sometimes the role of feedback is explicit, whereas in other cases it is an implicit assumption that only becomes visible upon deeper inspection.

The Kanban Method calls it out explicitly as (currently) the fifth practice (Establish feedback loops), while the DevOps movement dedicates one of its “three ways of DevOps” to it (the second way: the principles of feedback), which in itself comprises five principles.

“Let’s have more meetings” – a common misconception

Unfortunately, some of the currently popular approaches have introduced the notion that implementing feedback loops implies having some special meetings for feedback.

Such feedback could come from a daily meeting on the current status of the work – especially one focusing on problems or things to solve – or from an event-based meeting, like a post-deployment retrospective.

For example, if you look into The Kanban Method, you’ll find a whole slew of other meetings to be held at different cadences to foster more feedback in your work.

While these meetings can be very helpful, they are not at all the best way to get real feedback really quickly.

The problem with meetings as the primary source of feedback

The trouble with feedback that only comes periodically, and is dependent on human interaction, is that most of the time it comes too late.

Consider some feedback loops from outside the work organization world:

  • The speedometer of your car gives you feedback about your current speed – just waiting for the speeding tickets to come in would be way too slow as a feedback loop.
  • Or consider another thing in your car that you get information about: the oil in the engine, via the oil warning lamp and the oil dipstick. For certain kinds of information the dipstick, which we check from time to time, gives us enough feedback. For the important short-term feedback that the oil pressure is too low, we need something faster. That’s why your car comes with an oil pressure warning lamp.

How can we create feedback loops inherent in the ways we work?

What we actually want when we talk about feedback is usually a very prompt response from the system we are interacting with. This system can be anything from a technical system through a physical or mechanical system to a system consisting of people interacting with one another.

One of the best ways to get early feedback is to actually remove inventory.

You may have heard that removing inventory is a central tenet of all the lean approaches, but when thinking specifically about feedback, removing inventory has the added benefit of making sure that we get our feedback earlier.

So really, what we mean by “creating feedback loops” is finding ways to see the final impact of the things we just did as early as possible instead of waiting for the effects to happen somewhere very far downstream.

till next time
Michael Mahlberg

from agile aspects https://ift.tt/LZFHrsy

Three strategies to ease the meeting pain

originally by Michael Mahlberg on agile-aspects

“Since we started the new approach, I hardly ever get any work done, because we have so many meetings.” That is a sentiment I hear quite often when I’m visiting clients who have just started with some new approach. Surprisingly often that is the case if that new approach is some flavor of “Agile.”

This seems more frequent if the client is a large corporation, but it certainly also happens at startups and SMEs.

And yet, on the other hand, it seems to be increasingly hard to get any meetings scheduled. Let’s look at some approaches to make things a bit more manageable again.

Once we start to differentiate between meetings that generate work and meetings that get work done, it starts to get easier to handle the workload.

As described below, once we start making that distinction we can apply strategies like

  • planning the Work instead of the meetings (allocating time in my calendar for “getting stuff done” – especially helpful when applied –and negotiated– on a team or even multi-team level)
  • conscious capacity allocation (I will have 3.5 hours of working time and 3.5 hours of meeting time each day)
  • actively keeping buffers open for unexpected, short-term interactions (putting blockers in my calendar that I remove only shortly before they are due)

Now let’s look at these strategies in detail:

Two types of meetings

Some people (maybe many) tend to view all meetings as “a waste of time” and “not real work” – I beg to differ.
I would say that we need to differentiate between meetings that leave us with more work than before and meetings that leave us with less work than before.

Work generating meetings (coordination time)

Some meetings leave us with more work than we had before we attended the meeting.

  • Planning meetings, where the actual purpose of the meeting is to find or define work that needs to be done.
  • Status meetings, where the original intention is just to “get in sync” but where it often happens that someone realizes: “oh, and we have to do X”
  • Knowledge sharing meetings, where not everyone affected is invited and thus we need to share the knowledge again.
  • Knowledge building and gathering meetings, where the purpose is to better understand something we didn’t fully understand before – be it a user interview in a product development company, a design session for something we build ourselves, some kind of process improvement meeting, or something else in the same vein.

This list is of course by no means exhaustive, but it should give you an idea of the kind of meetings that could be put in that category.

Meetings that get work done (creation time)

On the other hand there are meetings that actually get work done. Especially for work that needs more than one person to complete it.

  • Design Sessions that end with decisions.
  • Pair-Writing an article or a piece of software
  • Co-creating an outline for an offer
  • Co-creating the calculations for next year’s budget (if your company still does budgeting the old way)

Try not to mix the two types of meetings. At least not too much. Especially try to make the second kind of meeting really a meeting that gets work done. As in done-done. Make sure that there is no “X will write this up, and we’ll review it in two days.”

If it’s good enough in the meeting, it’s probably good enough for work.

If we introduce some kind of follow-up work, especially follow-up work that has to be reviewed again, we actually prevent people from using the result of the work we just did in that meeting. Try to make it “good enough for now” and then get on with creating value in other places.

And if it takes too long to create those documents in the meeting with the tools you have available there, you probably have a great opportunity to re-think your choice of tools.

With this in mind, let’s look at the three strategies in a bit more detail.

And even though the strategies are presented in a specific order, there is no real ordering between them. Each of them works well on its own and you can combine them in any possible way.

Strategy one: Plan the work, not the meetings

Even if you apply only this one strategy, it can be a real game changer.
Instead of keeping your agenda open for meetings and then working during the few times when no meeting is scheduled, no meeting needs preparation, and no meeting needs post-processing, switch it around.

Start by filling your schedule with “creation time” – time slots where you intend to do the part of your work that directly creates stuff. When you’re a knowledge worker in the times of a pandemic, this might also include meetings, but those should be only meetings that create tangible results. (This could be a design session with colleagues if you’re in manufacturing, it could be an editing session on a paper if you’re in academia, or maybe a pair- or mob- (ensemble) programming session if you’re in software development. Any meeting that outputs work.)

Only after you have filled your schedule with a reasonable amount of time allocated to “creation time”, fit those other things, which I like to call “coordination time”, into some of the remaining spaces on your calendar.

This “coordination time” can include planning, status updates, learning and agreeing upon how you want to do things, understanding the challenge you’re currently working on, and so on. It is basically the coordination you need to efficiently get stuff done in the “creation time.”

Some people tend to call only “creation time” Work and the rest of the time Meetings. However, meetings that neither add value through creation nor through a better understanding of who is doing what, when, and how, should be eliminated altogether. And maybe replaced by an e-mail.

Especially when we work on process improvements or introduce new approaches we tend to start by planning when the related events (or ceremonies to use an older term 😉 ) should occur to include all the necessary participants.

I suggest first trying to agree upon the times when all the participants can do their “creation work” and then fitting the events and other necessary meetings around that.

Combining this approach with a conscious allocation of capacity makes it even more powerful.

Strategy two: Allocate capacity consciously

Don’t just look at the days of the week as a long stream of hours passing by. Make a conscious decision on how to invest the time beforehand.

If you’re involved with some kind of process framework you probably have some of the time allocation already done for you: “daily standups”, “plannings”, “reviews” and “retrospectives”, to name but a few.

But is the rest of the time really uniform? For most of us it isn’t. It consists of periods where I can just chip away at my work, periods where I need information from other people, and periods where other people need information from me.

Creating even an informal and rough plan of how you intend to allocate your time helps a lot in reasoning about the number of meetings and makes the gut feeling a lot more tangible and negotiable.

Such a rough and informal plan might just look like this:

Allocation per week (on average):

  • Process related: 4h (8h in total every two weeks)
  • Creating stuff: 20h (4h per day)
  • Helping others: 10h (2h per day)
  • Slack for surprises: 6h (a bit over an hour per day)

With this little list it is already much easier to argue for or against meetings. And if we start tracking how we actually use our time against this list, it usually gets even more helpful. You might want to give it a try.
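As a sketch only – the figures are just the example numbers from the list above, not a recommendation – such a rough plan can even be expressed in a few lines of code, which makes it trivial to check that the allocation actually adds up to a full week:

```python
# Weekly time allocation in hours - example figures from the text only
allocation = {
    "process related": 4,      # 8h in total every two weeks
    "creating stuff": 20,      # 4h per day
    "helping others": 10,      # 2h per day
    "slack for surprises": 6,  # a bit over an hour per day
}

week_hours = 40
planned = sum(allocation.values())
# a plan that over- or under-allocates the week is worth noticing early
assert planned == week_hours, f"plan is off by {planned - week_hours}h"

# show each category's share of the week
for category, hours in allocation.items():
    print(f"{category:20s} {hours:2d}h  ({hours / week_hours:.0%})")
```

Tracking how you actually spend your hours against the same dictionary later makes the comparison between plan and reality equally mechanical.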

Strategy three: Plan your slack ahead of time

Just put “Slack Spacers” in your agenda and remove them shortly before their time comes up. This way, if someone asks you whether you have time for them today, you might well be able to say “yes” without having to move any other appointments.

To be able to react to things that are happening, every system needs some slack. If there is not enough slack in the system, every little disruption or interference will wreak havoc on the system and might even result in a total system breakdown.

Back in the seventies it was “common knowledge” that in knowledge work one should never plan out more than 60% of one’s day. Simply because “things will happen.” How does that fit in with calendars that are filled up to the brim for the next two weeks?

If you allocate specific times for “creation work” and put them in your calendar you might already have one thing that absorbs some of the “things that happen”, but that’s not always quite what you intended to do with those allocated time slots.

A simple and effective strategy to deal with this is the usage of “Slack Spacers” – appointments with yourself that are just in your agenda to make sure you don’t plan too much of your time too far in advance.

Those could range from 30-minute slices, which you remove on the evening of the day before they come up, to 4-hour slots twice a week, which you remove on Sunday evenings. Or any other sizing and timing that works for you.

Depending on your environment you might either declare them for what they are or hide them behind inconspicuous titles like “Preparation for the XYZ project.”


So these are three strategies you could put into effect right now:

  • Foster collaboration by planning the time you work together
  • Get control of the amount of work you can do by allocating capacity deliberately
  • Create maneuverability by explicitly blocking time for work that shows up unannounced.

till next time
  Michael Mahlberg

from agile aspects https://ift.tt/EytLMP5

Unplanned work is killing us – really?

originally by Michael Mahlberg on agile-aspects

One of the things I often hear teams complain about is the amount of unplanned work they have to handle.

Drowning in irrefutable small requests

This unplanned work also frequently seems to be “irrefutable.” But is it? What does it mean to take up an unlimited amount of irrefutable work that has to be done right away?

Starting a new task immediately when it arrives means that you either have been idle when it arrived or –just as plausible– you had to put the stuff you were working on to the side. As long as you only have one item of irrefutable work at a time that might work. However, the problem begins as soon as the next piece of unplanned work arrives before you were able to complete the current one.

In this situation you’re most probably not idle (since you’re working on the previous irrefutable piece of work) and you can’t easily put away your current work (because, well, it is also irrefutable).

This dynamic usually leads to a cascade of interrupted work that has been labeled as “irrefutable” and that still gets tossed in the “waiting bin” at the back end.

Most of the time, letting work sit in some “waiting” state late in the process makes the “client” unhappy – the very person who insisted on the irrefutability of the work.

This problem gets worse because often there isn’t any time to inform the original client that their work has been paused. After all, the new piece of irrefutable work had to be started immediately!

Thus, even though people try to work on the requirements coming at them as fast as they can, it seems to be an uphill battle without much chance of ever getting a grip on the work.

But is that really the only way?

Accept reality

Once we face the fact that in these situations things will take longer to be completed than the mere net working time, we can employ other approaches to get on top of the situation.

There is this seemingly little trick that enables us to transform unplanned work into planned work. It’s called Planning. And the cool thing is that it doesn’t have to be big.

Once you know how many irrefutable small requests usually land in your lap each day, you can re-structure your day to handle them far more effectively.

You can get that number either from your gut feeling, or from some simple kind of low tech metric like tally marks on a sticky-note near your keyboard. Or maybe just start with an arbitrary guess and iterate towards better numbers later.

Planning to plan

So if you come to the conclusion that, if all that work came in structured, you could do it in 2 hours a day on average, there are two structural elements you could introduce to your daily routine to handle this:

  • Firstly, block out those two hours from your schedule. You will lose 2 hours per day anyway in which you will not be working on standard work. This is part of the “accept reality” thinking.
  • Secondly, set aside a couple of minutes for planning when you will work on these items and for giving feedback. Assuming you work 8 hours a day, I would take 5 minutes twice a day for “planning”, which leaves us with 2 planning events per day.

All you do in these 5 minutes is a quick check whether the requests actually fall into the category of “small” requests.

If they do, schedule them for later today or the next day, based on a rough guesstimate of the amount of work you already scheduled for the respective window and the perceived importance of the task. After scheduling the request you might want to let the client know that you scheduled the item, and for when.

If they are not of the category “small”, you have a different problem at hand – here you might still want to reserve a small amount of time in the 2-hour window to draft more detailed feedback on why this request has to be discussed on another level. Still, you do this answering as a planned activity.

By just accepting that the two hours you ‘lose’ per day are actually lost for standard work, and subtracting 10 more standard-work minutes from your working day, you can probably convert 90% of your unplanned work into planned work. Without adding to the actual customer lead time of the items that used to ruin your day in the form of unplanned work.
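To make the arithmetic from the paragraph above explicit – using the example figures from the text, which you would of course replace with your own measurements – here is a minimal sketch:

```python
# Example figures from the text - adjust to your own situation
work_day_minutes = 8 * 60    # an 8-hour working day
irrefutable_window = 2 * 60  # blocked for formerly unplanned work
planning_events = 2          # quick scheduling checks per day
planning_minutes = 5         # per planning event

overhead = planning_events * planning_minutes
standard_work = work_day_minutes - irrefutable_window - overhead

print(f"planning overhead per day: {overhead} minutes")
print(f"standard work per day: {standard_work} minutes")  # 350 minutes
```

The point is not the precision of the numbers but that the “lost” time becomes a visible, negotiable budget instead of a stream of interruptions.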

And as almost every situation is unique, you most probably will have to come up with different numbers, but the general principles stated here should be applicable to most situations.

till next time
  Michael Mahlberg

from agile aspects https://ift.tt/sq8UOyr

Is the user story overrated? Some story patterns and formats to learn from

originally by Michael Mahlberg on agile-aspects

The term “User Story” or simply “Story” as a shorthand for a requirement has become quite widespread these days. But what does it actually mean and how can we benefit best from it?

We all know what a story is, don’t we?

Let’s try this one on for size:

“Once upon a time, there was… here goes the story … ever after”

That’s the kind of story that most people in the real world think about when they hear the term “story.”

In the agile realm stories seem to be a different kind of beast

As I point out below, my personal recommendation is something quite different, but in the realm of Agile, stories seem to mean something other than they do in the rest of the world. There, the majority of people seem to believe that the requirement, packaged in the form of a story, is the central element that everything revolves around.

That extends so far that even the “speed” of development teams is (way too) often measured in something called story points – even though at least one of the potential inventors of the story point concept says “I […] may have invented story points, and if I did, I’m sorry now.”

And almost everyone in that realm, as well as in its adjacent territories, has – at one time or another – heard the stipulation that a well-crafted story

  • starts with “As a <role>…”,
  • has an important “…I want <System behavior>…” in the middle
  • and –in the better cases– ends with “…so that <desired business effect>.”

So – why is this incarnation of the concept “story” so prevalent in the realm of Agile? And is it really the best way to handle requirements in contemporary endeavors? To write better stories today, we need to have a look at how stories came to be such an important instrument in the realm of “Agile Software Development”1 in the first place.

How stories came to software development

Back in the day, before the “Manifesto for Agile Software Development” was written, there were several approaches whose champions called their movement “lightweight software development” and who would later come together and write down what unified their approaches under the moniker “Agile Software Development.” These approaches used all kinds of helpful ways to describe what the system should be able to do.

In Scrum they had the PBI (Product Backlog Item), in Crystal the use case was somewhat prominent, other approaches used comparable artifacts. Extreme Programming was the one that used something called a User Story.

This concept of the user story somehow had such an appeal, that many of the other approaches embraced the idea – more or less.

It was more about the telling, than about the story

A key component behind the idea to use “stories” has even made it into the Manifesto for Agile Software Development – to quote the sixth principle from that manifesto:

“The most efficient and effective method of
conveying information to and within a development
team is face-to-face conversation.”

Before the recommendation that requirements should be talked about was written down in that form, it was embodied in ideas like CCC (Card – Conversation – Confirmation) or the nice quote from the book XP Installed from the year 2000 that a card is a promise to have a “series of conversations about the topic.”

Unfortunately, in today’s world the concept of on-site customers has often been reduced to a person who is called Product Owner but doesn’t have any real business authority and spends about two hours with the team every two weeks. Under these circumstances it seems questionable whether this approach to product development is still viable for all cases.

But I am convinced that understanding why it was okay to write only one sentence to represent a complex requirement back in the early days of lightweight methods helps a lot with writing good stories today.

The fact that the way of working that led to the original user story is hardly feasible in today’s “corporate agile”, with all its compromises, has a direct impact here. It implies that we need something more than just the concept of a “User Story” if we want to capture and process requirements in an efficient manner.

Don’t put the story in the center, focus on the value and the work item

What most approaches propose is some container that represents “value for someone.” In the process framework Scrum this is called a Product Backlog Item; in more general approaches – like the Kanban Method – it is often simply called a Work Item.

Such a work item – to go with the broader term – can have many structures, and many such item types share a few common attributes.

Of course, one of the attributes needs to be the actual requirement. And that could be represented by a story. But does that have to be a user story? Actually, there are some pretty helpful alternatives out there.

If you use some kind of story, get to know several types of stories well

As is often the case, the habitat of the original user story provided many things that were no longer present once the concept was mimicked elsewhere. And as time went by, some people re-discovered what a story could mean for them. Some other people – many, actually – got confused by the story concept since they never really saw it in action and only knew about it through very indirect word of mouth.

Stakeholder Story

After the “As a «role» I want…” format for user stories had been around for quite a while, Liz Keogh pointed out that many of the so-called user stories out there are not actual user stories but instead Stakeholder Stories.

  • Format of the Stakeholder Story
    • Liz Keogh described her ideas and observations in the 2010 Article “They’re not User Stories.”

    • The generic form of this kind of story –the way I use it these days– is

      • In order to «the required business effect»

      • «some stakeholder or stakeholder persona»

      • «wants, needs, requires, …» «some kind of system behavior or future state»

  • Context for the Stakeholder Story
    • This is an extremely useful perspective if you have to describe requirements that are not actually wanted by the end user of the system, or that don’t actually have a direct user interaction.
    • Most of the requirements I encounter in enterprise contexts are more stakeholder-driven than user-driven. (Legal requirements, for example. Something like “To avoid being sued for GDPR violations our CISO requires that we have some GDPR-compliant deletion mechanisms that could be executed at least manually if ever a user actually should file a complaint that conforms to article 17 of the GDPR.”)
  • Caveats for the Stakeholder Story
    • The stakeholder should be as tangible and concrete as possible. Unlike with personas in user stories, it is extremely helpful to name a real person as the stakeholder in a stakeholder story.
  • What to avoid for the Stakeholder Story
    • The most common problem I see with stakeholder stories these days is that the required business effect gets mixed up with the system behavior or future state.

User Story

It was probably Mike Cohn who popularized the now so common form of user stories in his 2004 and 2005 books “User Stories Applied” and “Agile Estimating and Planning”, but to my knowledge Rachel Davies came up with it around 2002 at Connextra (which is also what Mike Cohn’s post about the three-part user story tells us).

  • Format of the User Story
    • The now prevalent way to capture user stories is the well known

    • “As a «role or persona» I want «system behavior» so that «desired business outcome».”

    • This is described (amongst other sources) in the often quoted Article Why the Three-Part User Story Template Works So Well by Mike Cohn.

  • Context for the User Story in this sense
    • Helpful if you really have a product (sometimes a project and seldom a service) that has actual interactions with actual users
  • Caveats for the User Story in this sense
    • It should describe an interaction between a user and a system that will be possible after the requirement has been implemented.
  • What to avoid for the User Story
    • A story like “As a team member, I want another team member to implement the database logic for the WhatNotField so that it will be available” is using the format alright, but misses almost the entire point of using user stories.

Job Story

To my knowledge the whole “Jobs to be Done” way of approaching product challenges became popularized through Alex Osterwalder’s work with Strategyzer around the value proposition canvas. [Please let me know if you know the whole back-story, I’d be really interested in learning about that.] Soon after that, the JTBD idea proved so powerful that it spawned its own community.

Thanks to my esteemed colleague Matthias I learned about the job story format and the whole idea of using job stories to work on product ideas.

  • Format of the Job Story
    • The article Replacing The User Story With The Job Story describes the idea of the job story as separating situation, motivation, and expected outcome by using the format
    • When ________, (the situation part)

    • I want to ________ (the motivation part)

    • so I can ________ (the expected outcome part)

  • Context for the Job Story
    • Good for very young stories, when you still try to figure out what you’re really talking about.
  • Caveats for the Job Story
    • Unlike Stakeholder Stories and User Stories, Job Stories don’t (yet) provide an easy way to fill out the ________ part, so you really need to dive into the ideas outlined in the above mentioned articles and there can be a lot of discussion about the “right” way to write such a story.
  • What to avoid for the Job Story
    • Don’t treat it like a piece of functionality that just needs to be executed. Job Stories make for good candidates for the narrative flow of Story Maps. There’s also a 2-page summary explanation of Story Maps if you want to know more about that concept.
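To make the differences between the three formats tangible, here is a purely illustrative sketch that renders a requirement in each template. The helper names and the example wording are made up for illustration; only the templates themselves come from the formats described above:

```python
# Illustrative templates for the three story formats discussed above.

def stakeholder_story(effect: str, stakeholder: str, behavior: str) -> str:
    """In order to «effect», «stakeholder» requires «behavior»."""
    return f"In order to {effect}, {stakeholder} requires {behavior}."

def user_story(role: str, behavior: str, outcome: str) -> str:
    """As a «role» I want «behavior» so that «outcome»."""
    return f"As a {role} I want {behavior} so that {outcome}."

def job_story(situation: str, motivation: str, outcome: str) -> str:
    """When «situation», I want to «motivation», so I can «outcome»."""
    return f"When {situation}, I want to {motivation}, so I can {outcome}."

print(stakeholder_story(
    "avoid being sued for GDPR violations",
    "our CISO",
    "a GDPR-compliant deletion mechanism"))
print(user_story(
    "traveller",
    "to see all unbooked rooms when I search by zip code",
    "I can quickly find a place to stay"))
print(job_story(
    "I arrive in an unfamiliar city late at night",
    "see all unbooked rooms nearby",
    "book one before reception closes"))
```

Note how each template forces a different first question: the business effect (stakeholder story), the role (user story), or the situation (job story).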

Of course this only covers some aspects of the usage of stories in today’s post-agile society. I would strongly encourage anyone to look (deeply) into the stuff about INVEST and SMART, and at User Story Mapping, to get even more background with regard to working effectively with stories to represent aspects of requirements. But I hope this article gives you some ideas on when and how to use some other kinds of stories to represent requirements that are really hard to fit into the “As a «Role» I want…” format.

till next time
  Michael Mahlberg

  1. (Remember: There is not really an Agile Manifesto)↩︎

from agile aspects https://ift.tt/dfZlL8I

There is no Agile Manifesto

originally by Michael Mahlberg on agile-aspects

Just a little reminder: what many people nowadays think is a way of living or even a way of designing whole organisations was originally something quite different…

What most people call “The Agile Manifesto” actually has a title.

It is called Manifesto for Agile Software Development.

And its authors propose the “Twelve Principles of Agile Software.”

  • It does not specify a defined approach to continuous improvement – TPS (Toyota Production System) does that, for example
  • It does not elaborate on good ways to optimize lead times – The ToC (Theory of Constraints) does that, for example
  • It does not express any opinion on how a company should be structured in the post-Taylor era – Sociocracy and its derivatives do that, for example. So does New Work.
  • It does not tell anyone how to handle finances without upfront budget plans – Beyond Budgeting does that, for example

And all of the approaches mentioned on the right-hand side came into existence long before 2001, the year the “Manifesto for Agile Software Development” was drafted.

If you look a bit further on the original web page that launched the term “Agile” into the world, you’ll find that in the section “About the Manifesto”, as well as in the headline above the twelve principles, it has been called “The Agile Manifesto” even by its authors. Maybe this helps explain some of the confusion.

Personally, I find it very helpful to remember the context where the whole idea of “Agile” came from – maybe it’s helpful for you, too.

till next time
  Michael Mahlberg

from agile aspects https://ift.tt/GgZY8dAEj

The difference between acceptance criteria and the definition of done

originally by Michael Mahlberg on agile-aspects

When it comes to building things, we often want to know when something is really done. Two terms have gained popularity over the last couple of years within the realms of software development and other areas that use spillover ideas from the agile movement: acceptance criteria and the definition of done. Unfortunately, those concepts are often mixed up, which leads to subpar results.

The distinction can be pretty short: the definition of done (DoD) is a property of a process-step, while acceptance criteria are properties of a request for a capability. However, the question remains: why does it matter?

Let’s clarify some terms

I intentionally used the uncommon way to refer to a requirement as “a request for a capability” to avoid notions such as story, requirement, feature, epic, etc. Sometimes just saying what we actually mean instead of using an overused metaphor can make things much clearer. For now I will call “requests for a capability” simply work items, since that term has – at least up until now – very few connotations.

Where does the definition of done come from, and what does it mean?

To be perfectly honest, I don’t exactly know where the phrase came from. (I’ll come back to Scrum in the postscriptum below.) I’ve heard jokes like “If it works on your machine, you’re about 80% done” and “do you mean done, or do you mean done done” since the 80s. So obviously it’s not a new phenomenon that it’s hard to tell when something really is done.

The term became more formalized, especially in the Scrum community between 2005 and 2011, when “Definition of Done” became a top-level topic with its own heading in the Scrum Guide. In this context the definition of done is the sum of all quality requirements a work item has to fulfill to be considered “done.”

If we look at it from a process perspective, this is a policy all work items have to comply with before they can move from “working on it” to “done.”

where the DoD applies

Who brought us acceptance criteria, and why?

Again, the origins are lost in the depth of time. At least to me. But the first experiences I had with them as a part of agile software development were back in my earlier XP-days, around the turn of the century.

At that time it was “common practice” (at the places I was around) to put requirements on cards. And when the time came to find the answer to “how would you know that this item was done” with the onsite customer, we just flipped over the card and jotted down their acceptance criteria on the back of the card.

Those acceptance criteria hardly ever included anything technical, let alone any requirements regarding the documentation or in which source code repository it should reside. Those things were captured by our working agreements. In a section that nowadays would be called definition of done.

The acceptance criteria usually were things the customer would be able to do with the system once the requirement had been implemented. Something like: “I can see the list of all unbooked rooms in an area when I search by zip code” as one acceptance criterion for a card called “find available rooms” in a booking system.

Remember that these were the days of real on-site customers in a high trust environment and stories were written according to the CCC idea of Card – Conversation – Confirmation. Therefore it was quite okay to have such a vague acceptance criterion, where there was no up-front definition of what a “search by zip-code” actually means or how the “unbooked rooms” state had to be determined.

Nowadays these acceptance criteria are sometimes formulated as BDD or ATDD style scenarios and examples, which allows for very concrete and specific acceptance criteria (but without enforcing them).
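As a sketch of that style, the room-search criterion from above could be written as an executable example – here in pytest-flavored Python. Note that `BookingSystem` and its methods are purely hypothetical names for illustration, not part of any real system:

```python
# A hedged sketch: the "find available rooms" acceptance criterion
# expressed as a given/when/then style test. All names are invented.
class BookingSystem:
    def __init__(self):
        self.rooms = []  # list of (room_id, zip_code, booked) tuples

    def add_room(self, room_id, zip_code, booked=False):
        self.rooms.append((room_id, zip_code, booked))

    def search_by_zip(self, zip_code):
        """Return the ids of unbooked rooms in the given zip code area."""
        return [r for r, z, booked in self.rooms if z == zip_code and not booked]


def test_find_available_rooms_by_zip():
    # Given a system with booked and unbooked rooms in an area
    system = BookingSystem()
    system.add_room("101", "50667", booked=False)
    system.add_room("102", "50667", booked=True)
    # When the customer searches by zip code
    result = system.search_by_zip("50667")
    # Then only the unbooked rooms are listed
    assert result == ["101"]
```

The point is not the implementation, but that the criterion stays phrased in terms of what the customer can do with the system.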

Now, what is the difference between acceptance criteria and the definition of done?

After we defined the terms, the terse explanation from above “the definition of done (DoD) is a property of a process-step while acceptance criteria are properties of a request for a capability” might actually make sense.

So, the «definition of done» is a rule that applies to all the work items in the system and is a policy on a very specific edge between two columns on a board, namely the edge separating the “done” column from the one right before it. In contrast, «acceptance criteria» give the answer to the question “what functionality does the system have to provide for this work item to conform to the customer’s expectations?”

And so, both are necessary and neither is a replacement for the other. Acceptance criteria go with work items, and the definition of done goes with the system.

till next time
  Michael Mahlberg

P.S. In most real life settings, processes tend to have more policies than just the definition of done.

And some of them describe the expectations at process boundaries. If you use the Kanban method to model these processes you would naturally make these policies explicit as well, like I described in an earlier post.

P.P.S.: Scrum didn't start off with the now prominent Definition of Done as a first class citizen.

In the original books, which used to be literally required reading for aspiring scrum masters in 2005 –Agile Software Development With Scrum [ASD] and Agile Project Management with Scrum [APM]– “Done” appears on the same level as “Burndown”, “Iteration”, “Chicken” and “Pig” [APM, p141], and there is no notion of “Definition of Done” in either of the books.

Even in the Scrum Guide from 2010 –one year before the DoD moved up and got its own headline– there are paragraphs like

If everyone doesn’t know what the definition of “done” is, the other two legs of empirical process control don’t work. When someone describes something as done, everyone must understand what done means.

But that is still not quite the seemingly well-established term “Definition of Done” that we see today.

from agile aspects https://ift.tt/2ST53rg

Options can be expensive — not only at the stock market

originally by Michael Mahlberg on agile-aspects
Options can be expensive — not only at the stock market

What do you actually get, when you buy a cinema ticket? (In those ancient times when cinemas were still a thing)

You buy yourself an option. The right –but not the obligation– to execute an operation at a later time. In this case the right to watch a certain movie at a certain time.

The cinema, on the other hand, sells a commitment. They are (to a degree) obliged to actually show that specific movie at the stipulated time. If we look at it like this, it is a considerable investment the cinema promises, in exchange for those couple of bucks your option costs.

And while it is often thought to be helpful to think in options, it is also almost always important to make sure that you're on the right side of that transaction.

Where's the problem with options?

What does that mean for our day-to-day actions? If we hand out options too freely, we quickly end up in a quagmire of "maybes" that is hard to get out of. As I mentioned in an earlier post, the whole thinking around real options, in the way Olav Maassen and Chris Matts describe it in their book "Commitment", is quite fascinating and well worth a read. But for today let's just look at one thing we don't do often enough, when we use options in our personal lives.

We tend to offer options without an expiry date. And that can leave us with a huge amount of commitments, and very few options left for ourselves. One of the prime offenders here is Doodle (or similar meeting planners) and the way they are often used these days. Just the other day I got a Doodle poll for 58 30-minute slots stretched over two weeks, scheduled about six months from now. And the closing date for these options was meant to be set three <sic> months in the future. So in the worst case, I would have committed to keeping 29 hours blocked for three months. Which would have left me unable to actually plan anything for those weeks in the next three months.

Of course Doodle only makes this visible – it happens all the time. Look at this scenario:

  • We could go on vacation either at the beginning of the summer break or at the end

  • I could renovate the shelter either in the beginning of the summer break or towards the end

  • Our kids could go on their "no parents camping weekend" either in the beginning of the summer break or at the end

For as long as you don't decide the first one of these, those options create a deadlock.

And the situation makes it almost impossible to actually decide anything related to the summer break, for that matter.

Set an expiration date to ease the pain

The solution is simple, really. But it takes some uncommon behavior to apply it. Let's look at the way the stock market handles options. Options at the stock market have a window of execution and an expiry date. Once that date has passed, the option can no longer be converted. Merely adding this expiry date already mitigates the risk of too many open-ended options, even for the side which holds the commitment end of it.

A lot of options that we encounter have this attribute of an expiration date in one way or another: When we get a quote for some repair work for our house, car or even bicycle, it usually says "valid until." The same is true for business offers, medical quotes, and almost everything we consider as "business."

Amending the options we hand out with expiration dates, even if it is not in a formal business setting, may feel a little strange at first. But it makes life so much easier. Whether it's toward a colleague, a significant other, friends or even yourself. Reducing the amount of open options also reduces the number of times you have to say "I don't know yet, I might have another engagement."

till next time
  Michael Mahlberg

from agile aspects https://ift.tt/3vi0c1z

How to do physical {Kanban|Task|Scrum} boards virtually

originally by Michael Mahlberg on agile-aspects
How to do physical {Kanban|Task|Scrum} boards virtually

As I’ve mentioned earlier, most of the time it is a good idea to start visualization with a physical board. And very often it is a good idea to stick with that – for all the reasons that make it worthwhile to start with it.

One of the biggest advantages of a physical board is the one thing that command and control organizations perceive as its biggest drawback: A physical board knows nothing about the process.

The fact that the physical board knows nothing about the process forces the people who use it to actually know about their working agreements. And to negotiate them. And to make them explicit in some way. Well, at least if they want to remember the gist of their long discussion from Friday afternoon on Wednesday morning.

As my esteemed colleague Simon Kühn put it all those years back: The intelligence is in front of the board.

But we’re all working remote now

Now that we’re not in a shared office space anymore, real physical boards are hard to do, aren’t they? Well – yes and no. If you look at the important advantages of physical boards, they are easy to replicate with today’s electronic whiteboard solutions.

Whether you use Google Drawings, Miro, or Conceptboard –to name just the ones I’m familiar with– is mostly a question of taste and, more importantly, legal and company policy considerations.

Using a simple collaborative whiteboard enables people to have most of the advantages of a physical board, while retaining the advantages of remote work.

What are the big advantages of a physical board?

A physical board can easily be changed by everyone. Just pick up a marker and draw something on it. The same is true for electronic whiteboards. In both cases it is a good idea to also have a talk with your colleagues to make sure they are okay with the addition (or removal) you made to the board.

One could say “individuals and interaction over workflows (process) embedded somewhere in a ticket system (tool)” – just to reiterate the first value pair from the Manifesto for Agile Software Development as an argument for “physical” boards.

Physical boards have extremely quick iterations. Trying out whether a new policy makes sense takes just a pen, a sticky note, a quick discussion and a couple of days (sometimes only hours) to see if it works. Conversely, with ticket systems even proposing a change to the admins often takes weeks and needs a predefined concept and sign-off. Not exactly agile. But with electronic whiteboards you can do just the same things you would do on a physical board. Which is why they provide tremendously quick feedback loops.

And as Boyd’s law of iteration says: speed of iteration beats quality of iteration.

If you decide to add a new status on a physical board or add new meta-information on a ticket, you don’t have to migrate all the old tickets. And you don’t have to coordinate that meta-information with the names of the meta-information of all other projects in the organization. Another huge advantage of physical boards over ticket systems. And you can achieve the exact same independence with electronic whiteboards.

But where do the details go?

When I have these discussions in real life, I usually get a couple of questions about the details. Let’s look at two of them.

Q: On a physical board I used to write my acceptance-criteria on the back of the card. I can’t do that with an electronic whiteboard.

A: True, but then again you can put a link on the card on the electronic whiteboard and that can point to almost any place you like. For example a wiki-page that contains that additional information.

Q: But if I use a dedicated bug tracker (the origin of Jira) or any other ticket system I have all those nifty fields for details.

A: But do you need them on the card? Wouldn’t they be better placed on a documentation page?

My general advice here: put only meta-data on the card and all the other information in appropriate systems like a wiki. This also gives you the opportunity to put the information where it belongs in the long run, instead of putting it on the perishable ticket. On the page related to the ticket you can just link to or include that central information.

But what about metrics?

One of the things that gets lost with the “physical” board is the automated capture of relevant data for statistics. And I have to admit that this is a real disadvantage. With an electronic whiteboard you could either write a little plugin that tracks the movement of the cards or do a very strict separation of concerns and use different tools for different topics.

A word of caution – writing that little tool for the electronic whiteboard might not be that easy, after all. And even if you were going to do that eventually, it would be a good idea to start by collecting the metrics manually.

Either way: if you start with the metrics that you really need now and create your own tools for those –based on spreadsheets or databases; after all, you’re in the software development business– you have a huge advantage over the metrics provided out of the box by many tools: you actually know what the data means.

And some of the most important metrics are actually easy to evaluate and some of them even easier to capture.
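To illustrate how little tooling such a home-grown metric needs, here is a minimal sketch in Python. The log format, the column names, and the card ids are all assumptions for illustration – the idea is simply: note down when a card enters a column, and the cycle time falls out of the data:

```python
from datetime import date

# A hand-kept movement log: (card, column, date the card entered it).
# Column names and entries are invented examples, not a prescription.
log = [
    ("A-17", "doing", date(2021, 3, 1)),
    ("A-17", "done",  date(2021, 3, 5)),
    ("A-23", "doing", date(2021, 3, 2)),
    ("A-23", "done",  date(2021, 3, 9)),
]

def cycle_times(log, start="doing", end="done"):
    """Days from entering the `start` column to entering `end`, per card."""
    entered = {}
    times = {}
    for card, column, day in log:
        if column == start:
            entered[card] = day
        elif column == end and card in entered:
            times[card] = (day - entered[card]).days
    return times

print(cycle_times(log))  # {'A-17': 4, 'A-23': 7}
```

A spreadsheet with three columns does the same job – the point is that you defined what "start" and "done" mean, so you know exactly what the resulting numbers say.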

Just give electronic whiteboards a try – if you adhere to the same ideas and first principles that guide your usage of a physical whiteboard, you should reap almost all of the same benefits, plus a couple of helpful extras like links on the cards and enough space for dozens of people to stand in front of the board.

till next time
  Michael Mahlberg

from agile aspects https://ift.tt/3b9uw52

The benefits of continuous blocker clustering

originally by Michael Mahlberg on agile-aspects
The benefits of continuous blocker clustering

If you manage your work by using some kind of visualization, the chances are high that you also visualize your blockages.

One of the most common visualizations is some kind of task board that represents the subsequent stages work items go through. Assuming you have such a board it can be quite helpful to visually mark which of those work items are currently blocked. This enables the whole team or organization (depending on the scope of your board) to see where work is stuck and to do something about it.

Usually (in the physical world) these markers have the form of post-it notes in a different color, denoting the reason for the blockage. If you add just a little additional information, these blockers can be utilized to identify typical hindrances in the flow. Information you might want to gather includes a reference to the original work item the blocker was attached to, the time(stamp) the blockage occurred, and the time(stamp) it was resolved.
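If you capture that information digitally, the record per blocker is tiny. A minimal sketch in Python – the field names are purely illustrative, not a prescribed schema:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class Blocker:
    """One blocker sticky note, with the extra information described above."""
    work_item: str                          # the work item it was attached to
    reason: str                             # short text from the sticky note
    blocked_at: datetime                    # when the blockage occurred
    resolved_at: Optional[datetime] = None  # when it was resolved (None = still blocked)

    @property
    def blocked_days(self) -> Optional[float]:
        """How long the work item was blocked, in days."""
        if self.resolved_at is None:
            return None
        return (self.resolved_at - self.blocked_at).total_seconds() / 86400
```

Even this much is enough to later sort resolved blockers by duration or by reason – which is exactly what the clustering below feeds on.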

In the Kanban community there is a practice known as “blocker clustering” where all involved parties come together at specific points in time and cluster these blockers according to things that stand out once you try to sort them and categorize them.

Blocker clusters can be either things like “Waiting for department «X»” or “Problems with system «Y»” or something completely different like “discovered missing information from phase «Z»” – that really depends very much on your individual environment. And usually these blocker clusters change over time. And so they should.

Now, here’s an idea: why only do this at certain intervals? Just as pair-programming in software development could also be called continuous code-review, the practice of blocker clustering could be done each time a blocker is resolved.

Granted, this wouldn’t make the big blocker clustering superfluous. After all, that is where all concerned parties can decide whether they want to treat the resulting blocker clusters as special cause variation –where one-off events caused the blockage– or common cause variation, where the blockage is caused by things that happen “all the time”.

The distinction between these two kinds of variation in the flow is important. One of them, special cause variation, has to be handled on a case-by-case basis, whereas the other one is a great opportunity for structural improvements in the way you work.

And this is where continuous blocker clustering really can make a difference. Instead of waiting for the big blocker clustering, people come together and decide which blocker cluster the blocker goes into as soon as it is resolved. This doesn’t have to happen in a separate meeting.

After all, the blocker (and the way it got solved) would be announced in the next board walk anyway. Which is also a good place to have this discussion.

And once you do continuous blocker clustering, you can have additional agreements, for example: if there are more than five new blockers in a category, you immediately (or at least very shortly afterwards) come together to discuss whether you want to treat this as a new common cause variation and whether you see a chance to improve your way of working together to address this new common cause. The number five is just an arbitrary number; depending on things like the number of people involved, throughput, etc., your numbers will differ.

You could also have an agreement to hold such a meeting whenever you have collected five blockers that couldn’t be sorted into a blocker cluster within two minutes and were therefore grouped under “uncategorized” (another working agreement). The opportunities for demand-driven improvements through this approach are vast.
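Working agreements like these can even be checked mechanically once the cluster assignments are recorded. A minimal sketch, assuming the threshold of five from above; the data shape (a list of blocker/cluster pairs) is hypothetical:

```python
from collections import Counter

CLUSTER_THRESHOLD = 5  # an arbitrary number, as in the post: tune it to your team

def clusters_needing_discussion(assignments, threshold=CLUSTER_THRESHOLD):
    """Given (blocker, cluster) pairs collected continuously since the last
    review, return the clusters that have accumulated enough new blockers
    to trigger an immediate get-together."""
    counts = Counter(cluster for _, cluster in assignments)
    return {cluster for cluster, n in counts.items() if n >= threshold}
```

The same check works for the second agreement: just treat “uncategorized” as one more cluster and watch its count.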

The same basic idea is behind the concept of signal-based Kaizen meetings, which happen whenever specific –agreed upon– circumstances trigger the need for improvement and invoke a spontaneous get-together of the involved parties. As opposed to having improvement meetings only at fixed intervals, this makes for much tighter feedback loops and thus enables quicker improvement.

till next time
  Michael Mahlberg

(Special note for people who rely solely on Jira: It is a bit hard to implement this in an easy way in Jira, but it is possible. And also helpful. But it does include some creative use of field values, some JQL-Fu and some dedicated Jira boards. Keep in mind that Jira boards are nothing more, and nothing less, than a visualized database query. There’s a lot of power in that, once you start moving beyond the pre-packaged solutions.)

from agile aspects https://ift.tt/3iZWd3M

Bringing Agile to non-IT work… those who don’t remember history are doomed to repeat it

originally by Michael Mahlberg on agile-aspects
Bringing Agile to non-IT work… those who don’t remember history are doomed to repeat it

People tend to forget that many of the agile approaches borrowed heavily from the Toyota Production System and its relatives, commonly known under the umbrella term "Lean."

These days we're experiencing an interesting development: people try to bring things they perceive as "typical agile practices" to non-IT work. For knowledge workers –a term coined by Peter Drucker in the 1960s– this might perhaps be a valid approach, even though I doubt it. For non-knowledge-worker work, on the other hand, I would like to point out what happened here.

Approaches taken from the Lean context of shop-floor management and vehicle design, related to continuous improvement and optimizing the flow of work, were translated into a very different environment – that of software development. And even the considerable body of knowledge from other fields of expertise that is at the foundation of agile was put into a very specific context here. Actually, the so-called "agile manifesto" is called the "Manifesto for Agile Software Development" and is thus very specific with regard to the domain it targeted.

So nowadays, when people try to "apply Agile to non-IT situations", they basically take the adaptations that were made to non-IT approaches to make them helpful in software development and try to re-apply what's left of them back to non-IT work. Of course the original non-IT approaches have also evolved since the days when –just to pick an example– Ken Schwaber and Jeff Sutherland read the 1986 paper "The New New Product Development Game" (sic!) and took parts of those ideas as a foundation of their own agile approach (Scrum). Hence it seems kind of silly to me to derive ideas for modern ways to organize non-IT work from the spin-offs of more than two decades ago instead of going directly to the source.

Of course sometimes re-applying the stuff we learned from agile software development actually works, but I think going directly to the source is a much better idea. Perhaps instead of trying to derive helpful approaches to non-knowledge-worker work from the shadows they cast onto the walls of the Agile world –to paraphrase Plato– it might be a good idea to look at the origins and try to understand the original approaches to non-knowledge-worker work. Of course, oftentimes non-knowledge-worker work was simply called "work" back in the day. Directly adopting approaches like Lean (from the 1950s) or New Work (which originated in the 1970s) might be a way of improving work that avoids the 'Chinese whispers' effect of the indirect approach via "Agile."

To end on a more positive note: the Kanban method is a great example of an approach that targets the challenges of the (non-IT) knowledge worker and brings ideas from Lean and similar fields into a new context. And even though many people use the Kanban method in the realm of IT, it has many equally –if not more– effective applications outside of IT. Maybe that's because the Kanban method avoids the triangulation via the older agile approaches and builds directly upon the common ancestors. I guess that is one of the reasons why David Anderson called the Kanban method "post-agile" even back in 2010.

till next time

  Michael Mahlberg

from agile aspects https://ift.tt/3ac2THz