G'day
I'm just building on some thoughts here. Some of the thoughts are from learning from people who know a lot more than me, and from watching my teams through their successes and… "less than successes".
When we work, whether we realise it or not, the intent is to deliver value to some client. The client could be an external user who we hope will hand money over to us as a result of our efforts. Or the client might be the boss and their latest revenue-making idea. Or the client might be our own application that is in need of some love because the codebase is not exactly as good as we'd like it to be. But there is always a client, and the end result of our work needs to benefit the client. Sometimes it's hard to get a handle on what the client wants, and what exactly we need to do to solve their problems and to add value.
But all our work needs to add value. And that value needs to be able to be both measured and realised.
To do this, when we set out to do & deliver some work, we need a coupla benchmark points along the way. Most notably, before we even agree to start: a "Definition of Ready"; and before we agree it's finished: a "Definition of Done".
My notes below are from an engineer's perspective, and there's no doubt some nuance I'm missing, or points that are specific to situations I've been in. It's not really a generic template, but a theoretical example of what might go into these documents. But to get our work done, I figure we ought to be considering some super- or sub-set of these kinds of things. These points represent a statement of the obvious, and an aide-mémoire for stuff that might be less obvious.
Terminology
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC 2119. I like including this, because it makes sure these words - which might otherwise be read as fairly loaded - are just used as a reference vocab. I'm not looking at you when I say this.
Definition of Ready
- A story SHOULD represent effort that fits in one sprint (ie: from "Ready" to "Done" in one sprint).
- Within the above constraint, some estimation of complexity, risk and effort SHOULD be indicated.
- A Behaviour Driven Development (BDD) phraseology SHOULD be used for User Stories. This is so we actively and deliberately make clear each case that needs to be addressed to fulfil the requirement, and to reduce ambiguity (there's a sketch of what this can look like after this list).
- Inbound and outbound parameters (eg: for a new web page or API endpoint: route slug, query string params, request method, POST body, headers, cookies, anything else) MUST be clearly defined, and the definition MUST include any validation constraints (eg: "[some date] must be in the past"). Again, there's a sketch of this after the list.
- Where relevant, outbound status codes SHOULD also be detailed (200-OK, 201-CREATED, 400-BAD-REQUEST etc). For non-2xx responses, error information (or lack thereof when there are TMI concerns) SHOULD be defined if appropriate.
- Logging requirements of non-2xx statuses SHOULD be defined. EG: 404s probably nothing. Some 400s: maybe? 401 & 403: probably? 5xx: definitely.
- Interactions with external systems - eg DBs - SHOULD be considered.
- If the story represents partial work, consideration MUST be made as to how it will be deployed in an "unavailable state" to external users (eg: what feature toggles are in place, and the criteria for toggling on/off).
- Similarly, consideration SHOULD be made as to how the partial work can be internally QAed (eg: feature toggle rules include the environment, so that features behave "live" in a QA environment). There's a toggle sketch after this list too.
- For stories that are investigative rather than development-oriented, BDD phraseology SHOULD still be used.
- For investigative stories, there MUST still be a deliverable ("we have a clear idea how to integrate Varnish on the front end to remove load from the application servers on seldom-changing content"). The deliverable MUST be tangible (eg: written down and filed), not just "oh just google some stuff and get an idea how it works".
- For stories related to bugs: steps to reproduce, expected behaviour and actual behaviour, any error messages and log entries SHOULD be included in the ticket detail, if possible.
- For stories relating to UI changes, an example of how it will look SHOULD be included on the ticket. This MAY be just indicative rather than pixel-perfect.
- Engineering MUST have an idea of how any legacy code changes will be covered by automated testing. This is because we accept legacy code has not been written with testing in mind, but we still MUST test our work.
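To make the BDD point concrete: here's what the Given / When / Then phrasing might look like, written as a Jest-style TypeScript test. The password-reset story and the requestPasswordReset function are entirely made up for illustration; the point is that every case gets actively and deliberately spelled out.

```typescript
// Stub of the hypothetical system under test; in a real story this
// would be the actual implementation.
async function requestPasswordReset(email: string) {
  return { emailSent: true, linkExpiryHours: 24 };
}

describe("Given a registered user who has forgotten their password", () => {
  describe("When they request a reset with their email address", () => {
    it("Then they receive a reset link that expires in 24 hours", async () => {
      const result = await requestPasswordReset("user@example.com");
      expect(result.emailSent).toBe(true);
      expect(result.linkExpiryHours).toBe(24);
    });
  });

  describe("When they request a reset with an unregistered email address", () => {
    it("Then the response is outwardly identical, so we don't leak which emails exist", async () => {
      const result = await requestPasswordReset("nobody@example.com");
      expect(result.emailSent).toBe(true); // same outward behaviour as the happy path
    });
  });
});
```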
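Similarly, here's a sketch of the sort of parameter / status-code / logging detail a ticket could pin down, for a hypothetical "create booking" endpoint. All the names, routes and constraints are invented; your ticket would have the real ones.

```typescript
// Hypothetical contract for POST /bookings/:venueSlug, showing the
// sort of detail a ticket might pin down before work starts.

// Inbound parameters, with their validation constraints.
interface CreateBookingRequest {
  venueSlug: string; // route slug: lowercase, hyphenated, 1-64 chars
  date: string;      // ISO-8601 date in the request body; MUST be in the future
  partySize: number; // integer, 1-12 inclusive
  notes?: string;    // optional, max 500 chars
}

// Outbound statuses the ticket enumerates:
// 201 CREATED     - booking made; body contains the new booking
// 400 BAD REQUEST - validation failure; body lists the field errors
// 401 / 403       - auth failures; body deliberately sparse (TMI concerns)
// 409 CONFLICT    - venue fully booked on that date
type CreateBookingResponse =
  | { status: 201; booking: { id: string; venueSlug: string; date: string } }
  | { status: 400; errors: { field: string; message: string }[] }
  | { status: 401 | 403 } // no detail, on purpose
  | { status: 409; message: string };

// Logging rules per the guideline above: nothing for 404s, info for
// some 400s, warn on auth failures, always log 5xx.
function logLevelFor(status: number): "none" | "info" | "warn" | "error" {
  if (status >= 500) return "error";
  if (status === 401 || status === 403) return "warn";
  if (status === 400) return "info";
  return "none";
}
```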
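And for the feature-toggle points: a minimal sketch of an environment-aware toggle, so partial work can ship "off" for external users in production whilst behaving "live" in QA. The toggle name, the environment list and the APP_ENV variable are all assumptions for the sake of the example.

```typescript
// Environment-aware feature toggles: a hypothetical, minimal version.
type Environment = "production" | "qa" | "development";

// newCheckoutFlow stands in for some partially-delivered feature:
// visible in QA and development, dark for external users in production.
const toggles: Record<string, Environment[]> = {
  newCheckoutFlow: ["qa", "development"],
};

function isFeatureOn(feature: string, env: Environment): boolean {
  return (toggles[feature] ?? []).includes(env);
}

// eg: in request handling (APP_ENV is an assumed config mechanism):
const env = (process.env.APP_ENV ?? "production") as Environment;
if (isFeatureOn("newCheckoutFlow", env)) {
  // serve the new flow
} else {
  // serve the existing flow
}
```

Keying the toggle rules on environment is what lets the very same build be QAed "live" whilst staying dark in production: the criteria are data, not separate code paths per environment.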
One thing to note here is that all this stuff requires collaboration from the client and the squad. Some stuff is business need, some stuff is technical fulfilment. For example one would not expect the client to know about DB rollback minutiae, but it's our job to know that it might need to be done, and how to do it. We are their technical advocates and advisors. And gatekeepers. And backstops. And there might be a toner cartridge needing to be changed. Shrug. You know the deal. But it's an important deal.
But anyway… if some work meets a "Definition of Ready", then a squad MAY consider adding it to the workflow for an upcoming release.
And for some work to be considered completed, we need to consider whether it's been "done".
Definition of Done
- There SHOULD be development guidelines (whatever they are) and they MUST be followed.
- There SHOULD be automated code quality checks, and they MUST pass.
- All new code SHOULD have automated tests, and test coverage checks MUST pass (there's an example coverage gate after this list).
- Code relating to other processes such as DB updates and rollbacks SHOULD be in source control and/or readily to hand.
- In feature-toggled situations, both toggle-on and toggle-off states MUST be tested (a test sketch follows this list).
- Work MUST have been QAed and actively signed-off as good-to-go.
- Work MUST be actively accepted by the client (whoever the client is, and that could well be a colleague).
- Work that is under feature toggle MAY be considered "done" even if toggled off, provided it's passed QA in the "off" state.
- Work MUST be in production to be considered "done".
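On the coverage point above: here's what a hard coverage gate can look like if your stack happens to be Jest (other test runners have equivalents). The 80% figures are purely illustrative, not a recommendation.

```typescript
// jest.config.ts: making "coverage MUST pass" a CI gate rather than a
// judgement call. The thresholds are example figures only.
import type { Config } from "jest";

const config: Config = {
  collectCoverage: true,
  coverageThreshold: {
    global: {
      branches: 80,
      functions: 80,
      lines: 80,
      statements: 80,
    },
  },
};

export default config;
```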
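And on testing both toggle states: a sketch, again as a Jest-style TypeScript test. renderCheckout is invented for illustration; the point is that the "off" path gets the same first-class treatment as the "on" path.

```typescript
// Hypothetical page renderer whose behaviour hangs off a feature toggle.
function renderCheckout(featureOn: boolean): string {
  return featureOn ? "new-checkout" : "legacy-checkout";
}

describe("checkout page under the newCheckoutFlow toggle", () => {
  it("serves the new flow when the toggle is on", () => {
    expect(renderCheckout(true)).toBe("new-checkout");
  });

  it("still serves the legacy flow, unbroken, when the toggle is off", () => {
    expect(renderCheckout(false)).toBe("legacy-checkout");
  });
});
```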
The bottom line here is particularly relevant for devs: you've not finished your work until it's live. It's not done when you've finished typing. It's not done when it's in code review. It's not done when it's in QA. It's not done when yer bored of it and would rather be doing something else. It's only done when the client sees the work and accepts it. You are on-the-hook until then, and it's your responsibility until then. You should be actively chasing whoever is necessary to get your work into production and earning value, because if it's not: there's no point in having done it.
For the squad as a whole, every person has a part in getting the squad's work to see the light of day. Make sure you're doing what you can to expedite work getting in front of the client and to help them either sign it off, or kick it back for revision.
There will be more factors on both sides of this table, measuring whether work ought to be undertaken, or is done. My point here is more that it needs to be a series of active checks along the way. We don't undertake work if we can't see the light at the end of the tunnel, or if we don't actually know it will benefit our client. And we need to think about this all the way.
Let me know what other considerations there might be in this. I'm very web-app-centric in my programming / business / leadership (such as it is) exposure, and there's probably more - or fewer! - things to think about.
Righto.
--
Adam