Tuesday 1 December 2015

Guest author: more on agile practices from Brian Sadler

G'day:
Once again I'm promoting a comment from another article (Guest author: a response to "Why good developers write bad code") to be an article in its own right, cos I reckon people ought to read it. Firstly I'll repost that to which Brian is replying (for context), then on with Brian's reply.

Estragon says:

@Brian - yes to smaller chunks of the elephant at a time, tasks that are too large are a major contributor to this behaviour ... but then what's too large for one dev is fine for another, and it can be hard to judge it right, especially when someone doesn't give early feedback on the requirements and then proves to be "quite reluctant to provide honest progress reports if things aren't going well". I guess getting the task size right for individual developers, and distinguishing legitimate concerns/cries for help from the usual background griping is one of the skills that makes a good team lead. Not easy though

I've actually never had the luxury of having a QA person on a dev team, so while what you say is no doubt true in those kinds of environments, my experience has been that there's no surer way to get a developer to lose sight of the big picture than presenting them with a bug list. At that stage it becomes all about fixing the bugs, not the code design

Brian says:

@Estragon Thanks for the comment! I'm nowhere near as engaged or as prolific as Adam, so apologies for the tardy response!

"what's too large for one dev is fine for another"

Agreed, there's always going to be some variation in developer productivity, but I don't think that's hugely significant in most organisations. Rock-star developers tend to clump together, as do mere mortals.

I'm going to assume that the developer(s) doing the work provide the estimates - and if that's not the case then run for the hills! :-) If we then impose time-boxing (estimate 1 day max per task) and mandatory progress reports (daily stand-ups), then we have a mechanism for checking whether the work is progressing as planned. The time-boxing addresses the potential issue of differences in individuals' capabilities. A slower developer might have to break a large requirement down into more one-day and sub-one-day tasks than a faster developer, but whatever speed developers work at, we have a daily report on progress.

"quite reluctant to provide honest progress reports if things aren't going well"

We then have to tackle the thorny question of getting honest progress reports. Again, all sorts of things come into play, but in essence we have to:
1. encourage and reward honest reporting (especially when there's bad news to be delivered)
2. penalise dishonest reporting, and
3. enforce transparency.

The first two points are all about the culture you work in - and management must set the tone here. Raising your hand because you're falling behind schedule should result in assistance, not admonition. If delays and problems are flagged up as soon as developers know about them, it gives management the largest possible amount of time to take corrective action. Hiding schedule problems and hoping to catch up time on other tasks is a recipe for disaster: management's opportunity to take corrective action starts to recede, and inevitably quality takes a nosedive on the remaining tasks as we attempt to make up for lost time.

Clearly we don't want a never-ending stream of schedule problems, but that's where sprints / iterations and retrospectives come in. If we honestly, diligently and faithfully log the effort required to complete each task and, as part of our retrospective activities, compare expended effort against our original estimates, we'll start to generate a feedback loop that should help us get better at producing reliable estimates over time.
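
To make that feedback loop concrete, here's a minimal sketch in Python of the sort of estimate-versus-actual comparison a team might run in a retrospective. The task names and hours are invented for illustration; the point is simply that logging both figures gives the team a concrete number to reflect on each sprint.

    # A sketch of the estimate-vs-actual comparison described above.
    # Task names and hours are invented for illustration.
    tasks = [
        # (task, estimated hours, actual hours)
        ("login form validation", 6, 9),
        ("password reset email", 4, 4),
        ("audit log endpoint", 8, 14),
    ]

    for name, estimated, actual in tasks:
        ratio = actual / estimated
        flag = "  <- discuss in retrospective" if ratio > 1.25 else ""
        print(f"{name}: estimated {estimated}h, actual {actual}h ({ratio:.0%}){flag}")

    # An aggregate figure the team can track from sprint to sprint:
    total_estimated = sum(e for _, e, _ in tasks)
    total_actual = sum(a for _, _, a in tasks)
    print(f"overall, work took {total_actual / total_estimated:.2f}x our estimates")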

On the third point, Continuous Integration can play a major role here, but we can also do things like define what "done" means, to ensure that everyone has a common understanding of what it takes to complete a task. Many devs are quite happy to declare that they're finished when they press the save button in their IDE, while more "enterprise" organisations might have a definition of done that includes all sorts of checks: unit testing, code review, functional testing, CI regression tests, performance tests and so on. The important point is that if a developer claims to have finished a task, it must meet the organisation's definition of done, whatever that definition is (in agile circles it usually means deployable to production).

On top of that, I'd mandate that any developer claiming to have finished a task should be able to demonstrate it. The conversation I've heard many times goes:

Dev: "Yeh I finished Feature A"
Colleague: "Great! Let's have a look!"
Dev: "Oh, actually you can't see it just yet, QA haven't signed it off"
Colleague: "OK... So, has your code been reviewed?"
Dev: "Err... no, not yet"
Colleague: "Have you even merged your code?"
Dev: "Erm... well, you see, I haven't committed my code yet"
Colleague: "So QA hasn't signed it off, the code hasn't been merged or reviewed, indeed you haven't even committed your code, but Feature A is done right?"
Dev: "Yeh!".

At this point I call BS :-) For a task to be finished, it has to be in a state where it can at least be demo'ed. We're back to the agile principle of working software being the primary measure of progress. Add in the concept of binary milestones (there is no 90% done; it's either done or it's not) and we start to concentrate minds.
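
To illustrate those last two points, here's a minimal sketch in Python of a definition of done made explicit, with binary milestones built in. The gates listed are invented examples; a real team would substitute its own definition.

    # The organisation's definition of done, made explicit rather than left
    # to individual interpretation. These gates are example checks only.
    DEFINITION_OF_DONE = [
        "code committed and merged to mainline",
        "code reviewed by a peer",
        "unit tests written and passing",
        "CI regression suite green",
        "functional testing signed off by QA",
    ]

    def is_done(status):
        # Binary milestone: a task is done only when *every* gate has been
        # passed. There is no 90% done.
        return all(status.get(gate, False) for gate in DEFINITION_OF_DONE)

    # The developer in the conversation above has passed none of the gates,
    # so "Feature A is done" simply doesn't hold up:
    feature_a = {gate: False for gate in DEFINITION_OF_DONE}
    print(is_done(feature_a))  # False

Expressed this way, "done" stops being a matter of opinion: it's a checklist that either passes in full or doesn't.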

"I've actually never had the luxury of having a QA person on a dev team"

It's not a luxury! ;-) I think the key here is shortening the feedback cycle. Some people liken unreleased code to stock inventory in a traditional business: it's just filling up space in the warehouse. Code that is not only unreleased but untested is even worse. It means we've got to manage code that we're not even sure is fit for purpose, and depending on your branching model it's either hanging around going stale on a feature branch or cluttering up your mainline.

Having testers and devs work together enables us to be much leaner, and get close to a kind-of JIT stock control system for coding. We write very small amounts of code which gets reviewed, tested and fixed very quickly, and as soon as possible we release it to production so that the business owners can make a return on their investment. If organisations deliberately choose to lengthen the feedback cycle they're just wasting time, money and effort. In my opinion that sort of waste is a luxury.

"especially when someone doesn't give early feedback on the requirements"

QA can play a key role here as well. If a requirement isn't testable, then there are almost certainly problems with it, so get QA involved before we start to code, maybe even before the devs look at the requirement. If QA has no idea how to test a requirement, how can a developer hope to know when they have finished the task? The only answer is to down tools until the customer / business analyst / project manager can provide clarity.

"there's no surer way to get a developer to lose sight of the big picture than presenting them with a bug list"

To my mind that's a symptom of the tasks being too big. If you keep to one day max per task, how much damage can a half-decent developer do in one day's coding? Another question this raises is, if working at a micro level produces a mountain of bugs, how can our development at a macro level be expected to be any better?

"At that stage it becomes all about fixing the bugs, not the code design"

Whether we're fixing bugs or delivering new features we should always be fixing the design - evolutionary architecture mandates constant refactoring, even when we're fixing bugs.

"Not easy though"

Absolutely - but (to mix my metaphors) Rome wasn't built in a day and there is no Silver Bullet! :-) Sadly, I think many people take a very short-term, static view of how agile methodologies might apply to them or their team. They look at a bunch of agile-like practices and think: meh, that won't make us better, faster, stronger. I have some sympathy for such viewpoints, because taken in isolation, and in the very short term, those practices won't produce dramatic results - but they're wrong!

The last principle listed in the Agile Manifesto states, "At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behaviour accordingly", and to my mind this is the key to producing the dramatic improvements and eliminating the problems described in the OP.

If we take a look at how individuals and groups learn and grow over time, I think it's self-evident that regular, frequent opportunities to measure, reflect, take on board feedback and make changes are crucial for improvement to occur. The agile principle above demonstrates that such opportunities are hard-wired into agile, but without such opportunities I'd argue that improvement is virtually impossible.

Yeah, I've highlighted some bits as if I'm a student reading a textbook. This is great stuff, and I reckon Mr Sadler should be writing more of it, more often. Anyway, I'm glad to have the content for this blog, so cheers fella.

Time to address my day job...

--
Adam