
Tuesday 1 December 2015

Guest author: more on agile practices from Brian Sadler

G'day:
Once again I'm promoting a comment from another article (Guest author: a response to "Why good developers write bad code") to be an article in its own right, cos I reckon people ought to read it. Firstly I'll repost that to which Brian is replying (for context), then on with Brian's reply.

Estragon says:

@Brian - yes to smaller chunks of the elephant at a time, tasks that are too large are a major contributor to this behaviour ... but then what's too large for one dev is fine for another, and it can be hard to judge it right, especially when someone doesn't give early feedback on the requirements and then proves to be "quite reluctant to provide honest progress reports if things aren't going well". I guess getting the task size right for individual developers, and distinguishing legitimate concerns/cries for help from the usual background griping is one of the skills that makes a good team lead. Not easy though

I've actually never had the luxury of having a QA person on a dev team, so while what you say is no doubt true in those kinds of environments, my experience has been that there's no surer way to get a developer to lose sight of the big picture than presenting them with a bug list. At that stage it becomes all about fixing the bugs, not the code design

Brian says:

@Estragon Thanks for the comment! I'm nowhere near as engaged or as prolific as Adam, so apologies for the tardy response!

"what's too large for one dev is fine for another"

Agreed there's always going to be some variation in developer productivity, but I don't think that's hugely significant in most organisations. Rock star developers tend to clump together, as do mere mortals.

I'm going to assume that the developer(s) doing the work provide the estimates - and if that's not the case then run for the hills! :-) If we then impose time-boxing (estimate 1 day max per task) and mandatory progress reports (daily stand-ups), then we have a mechanism for checking that the work is progressing as planned. The time-boxing addresses the potential issue of differences in individuals' capabilities. A slower developer might have to break a large requirement down into more one-day and sub-one-day tasks than a faster developer would, but whatever speed developers work at, we have a daily report on progress.

"quite reluctant to provide honest progress reports if things aren't going well"

We then have to tackle the thorny question of getting honest progress reports. Again all sorts of things come into play, but in essence we have to:
1. encourage and reward honest reporting (especially when there's bad news to be delivered)
2. penalise dishonest reporting, and
3. enforce transparency.

The first two points are all about the culture you work in - and management must set the tone here. Raising your hand because you're falling behind schedule should result in assistance, not admonition. If delays/problems are flagged up as soon as developers know about them, it gives management the largest possible amount of time to take corrective action. Hiding schedule problems and hoping to catch up time on other tasks is a recipe for disaster: management's opportunity to take corrective action recedes, and inevitably quality takes a nose dive on the remaining tasks as we attempt to make up for lost time.

Clearly we don't want a never-ending stream of schedule problems, but that's where sprints/iterations and retrospectives come in. If we honestly, diligently and faithfully log the effort required to complete each task and, as part of our retrospective activities, compare expended effort against our original estimates, we'll start to generate a feedback loop that should help us get better at producing reliable estimates over time.
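To make that feedback loop concrete, here's a minimal sketch in Python of the sort of estimate-versus-actual comparison a retrospective might use - the task names and figures are invented purely for illustration:

```python
# Toy retrospective helper: compare estimated vs actual effort per task.
# All task names and numbers here are hypothetical.
tasks = [
    # (task, estimated days, actual days)
    ("login form validation", 1.0, 1.5),
    ("password reset email", 0.5, 0.5),
    ("session timeout fix", 1.0, 2.5),
]

for name, estimated, actual in tasks:
    ratio = actual / estimated
    flag = "  <-- discuss in the retrospective" if ratio > 1.5 else ""
    print(f"{name}: estimated {estimated}d, actual {actual}d (x{ratio:.1f}){flag}")

# The overall factor feeds back into the next sprint's estimates.
overall = sum(a for _, _, a in tasks) / sum(e for _, e, _ in tasks)
print(f"Overall: actuals ran at {overall:.1f}x estimates")
```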

On the third point, Continuous Integration can play a major role here but we can also do things like define what "done" means to ensure that everyone has a common understanding of what it takes to complete a task. Many devs are quite happy to declare that they're finished when they press the save button on their IDE, more "enterprise" organisations might have a definition of done that includes all sorts of checks - unit testing, code review, functional testing, CI regression tests, performance tests etc. etc. The important point is that if a developer claims to have finished a task, then it must meet the organisation's definition of done, whatever that definition is (in agile circles that usually means, deployable to production).

On top of that, I'd mandate that any developer claiming to have finished a task should be able to demonstrate it. The conversation I've heard many times goes:

Dev: "Yeh I finished Feature A"
Colleague: "Great! Let's have a look!"
Dev: "Oh, actually you can't see it just yet, QA haven't signed it off"
Colleague: "OK... So, has your code been reviewed?"
Dev: "Err... no, not yet"
Colleague: "Have you even merged your code?"
Dev: "Erm... well, you see, I haven't committed my code yet"
Colleague: "So QA hasn't signed it off, the code hasn't been merged or reviewed, indeed you haven't even committed your code, but Feature A is done right?"
Dev: "Yeh!".

At this point I call BS :-) For a task to be finished it has to be in a state where it can at least be demo'ed. We're back to the agile principle of working software being the primary measure of progress. Add in the concept of binary milestones (there is no 90% done, it's either done or it's not done) and we start to concentrate minds.
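As an illustration of the binary-milestone idea (the check names below are invented, not any particular organisation's definition of done), "done" is simply the conjunction of every gate passing:

```python
# Sketch: "done" as the conjunction of an organisation's checks.
# The checks are placeholders; real ones would query CI, the code-review
# tool, the test suite, and so on.
definition_of_done = {
    "code committed and merged": True,
    "code reviewed": True,
    "unit tests passing": True,
    "functional tests passing": False,  # QA still working through these
    "deployable to production": False,
}

def is_done(checks: dict) -> bool:
    # Binary milestone: every gate passes, or the task is not done.
    return all(checks.values())

print(is_done(definition_of_done))  # False - "almost done" is not done
```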

"I've actually never had the luxury of having a QA person on a dev team"

It's not a luxury! ;-) I think the key here is shortening the feedback cycle. Some people use the analogy that unreleased code is akin to stock inventory in a traditional business: it's just filling up space in the warehouse. Code that is not only unreleased but untested is even worse. It means we've got to manage code that we're not even sure is fit for purpose, and depending on your branching model it's either hanging around going stale on a feature branch or cluttering up your mainline.

Having testers and devs work together enables us to be much leaner, and get close to a kind-of JIT stock control system for coding. We write very small amounts of code which gets reviewed, tested and fixed very quickly, and as soon as possible we release it to production so that the business owners can make a return on their investment. If organisations deliberately choose to lengthen the feedback cycle they're just wasting time, money and effort. In my opinion that sort of waste is a luxury.

"especially when someone doesn't give early feedback on the requirements"

QA can play a key role here as well. If a requirement isn't testable, then there are almost certainly problems with it, so get QA involved before we start to code, maybe even before the devs look at the requirement. If QA has no idea how to test a requirement, how can a developer hope to know when they have finished the task? The only answer is to down tools until the customer / business analyst / project manager can provide clarity.

"there's no surer way to get a developer to lose sight of the big picture than presenting them with a bug list"

To my mind that's a symptom of the tasks being too big. If you keep to one day max per task, how much damage can a half-decent developer do in one day's coding? Another question this raises is, if working at a micro level produces a mountain of bugs, how can our development at a macro level be expected to be any better?

"At that stage it becomes all about fixing the bugs, not the code design"

Whether we're fixing bugs or delivering new features we should always be fixing the design - evolutionary architecture mandates constant refactoring, even when we're fixing bugs.

"Not easy though"

Absolutely - but (to mix my metaphors) Rome wasn't built in a day and there is no Silver Bullet! :-) Sadly, I think many people choose to take a very short-term, static view of how agile methodologies might apply to them or their team. They look at a bunch of agile-like practices and think, meh, that won't make us better, faster, stronger etc. I have some sympathy for such viewpoints, because taken in isolation, and in the very short term, they won't produce dramatic results, but they're wrong!

The last principle listed on the agile manifesto states, "At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behaviour accordingly", and to my mind this is the key to producing the dramatic improvements and eliminating the problems described in the OP.

If we take a look at how individuals and groups learn and grow over time, I think it's self-evident that regular, frequent opportunities to measure, reflect, take on board feedback and make changes are crucial for improvement to occur. The agile principle above demonstrates that such opportunities are hard-wired into agile, but without such opportunities I'd argue that improvement is virtually impossible.
Yeah, I've highlighted some bits as if I'm a student reading a text book. This is great stuff; Mr Sadler should be writing more stuff, more often. Anyway, I'm glad to have the content for this blog, so cheers fella.

Time to address my day job...

--
Adam

Thursday 26 November 2015

Guest author: a response to "Why good developers write bad code"

G'day:
I could get used to this. Another one of my readers has provided my copy for today, in response to my earlier guest-authored article "Why good developers write bad code". This originally was a comment on that article (well: it still is) but it stands on its own merit, and Brian's cleared me to reproduce it as an article. Brian's a mate, one of my colleagues at Hostelworld.com. The quote below is unabridged. I have corrected one typo, and the links are my own.

Brian says:

Really thought-provoking post, Adam, and my immediate impression is that your mate seems to be describing an awful lot of dysfunctionality here. I'll get my declaration of interest out in the open straight away and own up to being a massive Agile/XP fanboy; consequently I think much of what your mate describes could be cured by the adoption of Agile/XP practices.

The "...but then he gets a chunky piece of work and, more often than you'd expect, it goes astray" quote is very revealing, because it would appear that there's no effort made to break large requirements down into smaller ones and we're on a slippery slope from here on in.

Developers often receive very coarse-grained requirements, and irrespective of our natural coding ability, we're not naturally adept at managing large pieces of work. That's not to say we can't manage large requirements, but without training, mentoring and support, developers who might be first-rate coders can often struggle. In this case, it sounds like the developer in question is trying to eat an elephant in one sitting, and then being left alone to go dark and "fail". Reading between the lines, it seems that this is accompanied by a seat-of-the-pants estimate, which becomes a commitment as soon as he's uttered it, and before long we're into death-march territory.

Also I wouldn't want to blame the developer in question, because as your mate points out, the same thing keeps happening time and again but there's no intervention or help from the guy's colleagues or his manager. Clearly they're at a loss as to how to fix the problem but unless someone can show him a better way, how is he going to improve?...

Anyway... if I was working with him I'd strongly encourage him to forget about eating the whole elephant, and just start by trying to nibble its trunk. Use TDD religiously from the outset, stick to the red-green-refactor cycle and You Ain't Gonna Need It (even more religiously), avoid Big Design Up Front, and let the application architecture evolve as he uncovers smells in the code that he's writing.
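For anyone who hasn't seen the cycle in action, here's a deliberately tiny red-green-refactor sketch in Python - the discount feature is invented purely for illustration:

```python
import unittest

# RED: write a failing test first for the next small bite of the feature.
class TestDiscount(unittest.TestCase):
    def test_discount_applies_at_threshold(self):
        self.assertAlmostEqual(apply_discount(100.0), 90.0)

    def test_no_discount_under_threshold(self):
        self.assertAlmostEqual(apply_discount(40.0), 40.0)

# GREEN: write just enough code to pass - no speculative extras (YAGNI).
def apply_discount(amount: float) -> float:
    return amount * 0.9 if amount >= 50.0 else amount

# REFACTOR: with the tests green, tidy names and structure safely, then
# pick the next one-day-or-less bite of the elephant and repeat.

if __name__ == "__main__":
    unittest.main()
```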

See how far he gets nibbling the elephant's trunk in one day, and if all goes well we'll reconvene, decide on the next bit of the elephant to consume, and carry on in that fashion. If he takes longer than a day - and we'll know that because we have daily stand-ups, right? - then we'll pair up and see what the problem is. Invariably, if the guy's a first-rate coder, it will only be because he's bitten off more than he can chew, so get him to stop, re-plan the work, and decrease the size of the immediate task in front of him so he can actually finish it. In short: KISS, keep tasks small, and try to conquer complexity. Writing code is intrinsically hard; we should actively avoid anything that makes it harder.

I think it's also worth bearing in mind that many, if not most, developers are naturally quite reluctant to provide honest progress reports if things aren't going well. Combine that with our tendency to give hopelessly optimistic estimates and we're already in a very bad place. I've found it's almost endemic in our industry that developers are scared of giving bad news to anyone, let alone their line manager or project manager. Developers are usually clever people who spent much of their academic careers being praised for their intellectual prowess, and to be put in a position where they are suddenly the bad guys because they didn't hit an arbitrary deadline can be very distressing, and something to avoid at all costs.

If there was just one thing I could wish for in the industry, it would be to reform the attitude that project managers, team leads and developers have to estimation. Estimates should be treated by all stakeholders as just that: estimates, not cast-iron commitments written in blood. They should be based on evidence (dude, we go through six stages of peer review on every task; why did you give an estimate of 10 minutes?), and when the evidence demonstrates that our estimates are wrong, the remaining estimates should be re-calibrated. All this is only possible if managers and developers are prepared to create a culture where realistic, responsible estimation combined with predictable delivery is championed over masochistic estimation and constant failures to hit deadlines.
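A sketch of what re-calibrating from evidence could look like in practice - the ratio and the task list are invented numbers, not a prescription:

```python
# Sketch: recalibrate remaining estimates from observed evidence.
# If completed work has consistently taken ~1.8x its estimate, assuming
# the remaining tasks will suddenly come in at 1.0x is wishful thinking.
observed_ratio = 1.8  # actual/estimated on work done so far (hypothetical)

remaining_estimates = {"task A": 2.0, "task B": 1.0, "task C": 3.0}  # days

recalibrated = {task: round(days * observed_ratio, 1)
                for task, days in remaining_estimates.items()}
print(recalibrated)  # {'task A': 3.6, 'task B': 1.8, 'task C': 5.4}
```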

One final point: when your mate writes "it's a good idea to keep new features *out* of QA for as long as possible", I totally disagree! :-) I'd want feedback from the QA team as soon as possible. The shorter the feedback cycle the better, and in an ideal world the QA team would be writing automated acceptance tests in parallel with my development efforts. I'm a firm believer in the agile principle that "working software is the primary measure of progress", and without feedback from QA I'm just guessing that the software I'm writing actually works.


Thursday 19 November 2015

Guest author: Why good developers write bad code

G'day:
A while back one of my mates said there was a topic they had some thoughts on, and figured they should write a blog article about it. Their problem being they didn't have a blog. This seemed like a good way for me to get someone else to do my work for me, so I raised my hand to host it for them here. I've been waiting for... god... oh, about three months for this. But 'ere 'tis:

So there's a guy on your team, and he's a good developer. He knows the language inside out, he's up-to-date with all the latest technologies, he spots design patterns and code smells, and when you send him your code to review he cuts you a new asshole (and it turns out he's bloody right about everything)

... but then he gets a chunky piece of work and, more often than you'd expect, it goes astray. He's swearing at his screen, he's been "almost done" for ages. Because it's late, the code reviews are lighter than they should be, and QA raise a ton of bugs. He says they're all small, but each one takes longer than the last to fix. When you try to spread the bug-fixing out among the team, it takes everyone else even longer, so he ends up slogging through everything on his own. Eventually it's done and it works, but it's brittle and everyone is afraid to touch it. He blames it on over-complicated requirements and over-ambitious timelines, and we all move on.

Maybe it was The Business's fault for coming up with stupid requirements, or just the fact that smart people can make mistakes in ingenious ways. Everyone gets unlucky now and again, but if a competent developer gets bogged down regularly then there's something else going on. What is it?

Good devs are often tinkerers, and are very good at figuring out ways to work around obstacles. The dark side of this is that obstacles can be seen as things-to-be-worked-around rather than problems to be solved.

At the code level this manifests as a reluctance to refactor. Tests make refactoring safer and easier, but even with good tests developers often see existing code as fixed, something to be built upon rather than restructured. The upshot is that you get layers and layers of code, each layer adding a missing piece or working around a bad design decision in the one below, and the complexity quickly gets out of hand. This gets especially bad when time is tight, and guys don't want to go back to the start, and so treat their own almost-but-not-quite-working code from a few days ago as immutable (especially after their feature has been through QA) - essentially they get into trial-and-error mode, nothing gets re-thought, and underlying issues with their code design don't get fixed.
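The layering-instead-of-refactoring pattern is easier to see in code. A deliberately contrived Python sketch - every function here is invented, none of it is from the original article:

```python
# The anti-pattern: each layer works around the one below instead of
# restructuring it.

def fetch_price(sku):
    # layer 0: returns the price in cents... but sometimes as a string
    return "1999" if sku.startswith("X") else 1999

def fetch_price_safe(sku):
    # layer 1: works around the string/number ambiguity
    return int(fetch_price(sku))

def fetch_price_in_euros(sku):
    # layer 2: works around the cents convention
    return fetch_price_safe(sku) / 100

# The alternative: refactor layer 0 (protected by tests) so it returns one
# consistent type in one sensible unit, then delete the wrapper layers.
def fetch_price_eur(sku) -> float:
    cents = 1999  # one source of truth, one type, no workaround layers
    return cents / 100
```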

You also see it one level up from the code - when confronted with an over-complicated set of requirements, a capable developer will often just implement them as they are, logical contradictions and all. He'll bitch and moan, but will very seldom chase down a bad requirement and get it (or his interpretation of it) fixed so that it makes sense. We all bang on about elegance, but the requirements are the soil the code grows in, and without taking some responsibility for them your code cannot ever be consistently good.

There's no easy way to fix this. It's hard enough to find programmers capable of writing good code, harder still to find ones who consistently do so. If this is you: write tests you can trust, then refactor, refactor, refactor. If for some reason you're not doing TDD, refactor anyway. If you've coded yourself into a corner, create a new branch and work out a new approach in that. Be bold. It's just code; it's not set in stone, and neither are your requirements, so stop your moaning and take responsibility for them. And stay out of trial-and-error mode as much as you can.

If this is someone on a team you lead, TDD and code reviews (especially if they're done early enough to prevent a dev going too far down the wrong path) will help. Also, because bugs are often fixed by trial and error, it's a good idea to keep new features *out* of QA for as long as possible. Really, though, there's no magic bullet - you'll just have to know who's susceptible to this kind of thing, and keep an eye out for it.

Yep, very true. I encounter this challenge all the time, and indeed sometimes I self-identify as having this problem too. I suppose we all do/did at some point.