
Tuesday, 1 December 2015

Guest author: more on agile practices from Brian Sadler

G'day:
Once again I'm promoting a comment from another article (Guest author: a response to "Why good developers write bad code") to be an article in its own right, cos I reckon people ought to read it. Firstly I'll repost that to which Brian is replying (for context), then on with Brian's reply.

Estragon says:

@Brian - yes to smaller chunks of the elephant at a time, tasks that are too large are a major contributor to this behaviour ... but then what's too large for one dev is fine for another, and it can be hard to judge it right, especially when someone doesn't give early feedback on the requirements and then proves to be "quite reluctant to provide honest progress reports if things aren't going well". I guess getting the task size right for individual developers, and distinguishing legitimate concerns/cries for help from the usual background griping is one of the skills that makes a good team lead. Not easy though

I've actually never had the luxury of having a QA person on a dev team, so while what you say is no doubt true in those kinds of environments, my experience has been that there's no surer way to get a developer to lose sight of the big picture than presenting them with a bug list. At that stage it becomes all about fixing the bugs, not the code design

Brian says:

@Estragon Thanks for the comment! I'm nowhere near as engaged or as prolific as Adam, so apologies for the tardy response!

"what's too large for one dev is fine for another"

Agreed there's always going to be some variation in developer productivity, but I don't think that's hugely significant in most organisations. Rock star developers tend to clump together, as do mere mortals.

I'm going to assume that the developer(s) doing the work provides the estimates - and if that's not the case then run for the hills! :-) If we then impose time-boxing (estimate 1 day max per task) and mandatory progress reports (daily stand ups), then we have a mechanism for checking if the work is progressing as planned. The time-boxing addresses the potential issue of differences in individuals' capabilities. A slower developer might have to break down a large requirement into more one-day and sub-one-day tasks than a faster developer, but whatever speed developers work at, we have a daily report on progress.

"quite reluctant to provide honest progress reports if things aren't going well"

We then have to tackle the thorny question of getting honest progress reports. Again all sorts of things come into play, but in essence we have to:
1. encourage and reward honest reporting (especially when there's bad news to be delivered)
2. penalise dishonest reporting, and
3. enforce transparency.

The first two points are all about the culture you work in - and management must set the tone here. Raising your hand because you're falling behind schedule should result in assistance not admonition. If delays/problems are flagged up as soon as they are known by developers it gives management the largest possible amount of time to take corrective action. Hiding schedule problems and hoping to catch up time on other tasks is a recipe for disaster. The management's opportunity to take corrective action starts to recede and inevitably quality takes a nose dive on the other remaining tasks as we attempt to make up for lost time.

Clearly we don't want a never-ending stream of schedule problems, but that's where sprints / iterations and retrospectives come in. If we honestly, diligently and faithfully log the effort required to complete each task and, as part of our retrospective activities, compare expended effort against our original estimates, we'll start to generate a feedback loop that should help us to get better at producing reliable estimates over time.
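To make that feedback loop concrete, here's a rough sketch in Python (the task names and numbers are invented, purely for illustration): log the estimate and the actual effort per task, then look at the overall ratio at the retrospective.

    # Illustrative only: effort logged per task over an iteration, in days.
    tasks = [
        {"name": "export CSV",  "estimate": 1.0, "actual": 1.5},
        {"name": "fix pricing", "estimate": 0.5, "actual": 1.0},
        {"name": "add logging", "estimate": 1.0, "actual": 1.0},
    ]

    total_estimate = sum(t["estimate"] for t in tasks)
    total_actual = sum(t["actual"] for t in tasks)

    # If this ratio sits above 1.0 iteration after iteration, future estimates
    # need recalibrating by roughly that factor.
    overrun = total_actual / total_estimate
    print(f"estimated {total_estimate}d, spent {total_actual}d, overrun factor {overrun:.2f}")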

On the third point, Continuous Integration can play a major role here, but we can also do things like define what "done" means to ensure that everyone has a common understanding of what it takes to complete a task. Many devs are quite happy to declare that they're finished when they press the save button on their IDE, whereas more "enterprise" organisations might have a definition of done that includes all sorts of checks - unit testing, code review, functional testing, CI regression tests, performance tests etc. The important point is that if a developer claims to have finished a task, then it must meet the organisation's definition of done, whatever that definition is (in agile circles that usually means "deployable to production").

On top of that, I'd mandate that any developer claiming to have finished a task should be able to demonstrate it. The conversation I've heard many times is:

Dev: "Yeh I finished Feature A"
Colleague: "Great! Let's have a look!"
Dev: "Oh, actually you can't see it just yet, QA haven't signed it off"
Colleague: "OK... So, has your code been reviewed?"
Dev: "Err... no, not yet"
Colleague: "Have you even merged your code?"
Dev: "Erm... well, you see, I haven't committed my code yet"
Colleague: "So QA hasn't signed it off, the code hasn't been merged or reviewed, indeed you haven't even committed your code, but Feature A is done right?"
Dev: "Yeh!".

At this point I call BS :-) For a task to be finished it has to be in a state where it can at least be demo'ed. We're back to the agile principle of working software being the primary measure of progress. Add in the concept of binary milestones (there is no 90% done, it's either done or it's not done) and we start to concentrate minds.

"I've actually never had the luxury of having a QA person on a dev team"

It's not a luxury! ;-) I think the key here is shortening the feedback cycle. Some people use the analogy that unreleased code is akin to stock inventory in a traditional business: it's just filling up space in the warehouse. Having code that is not only unreleased but untested is even worse. What it means is that we've got to manage code that we're not even sure is fit for purpose, and depending on your branching model it's either hanging around going stale on a feature branch or cluttering up your mainline.

Having testers and devs work together enables us to be much leaner, and get close to a kind-of JIT stock control system for coding. We write very small amounts of code which gets reviewed, tested and fixed very quickly, and as soon as possible we release it to production so that the business owners can make a return on their investment. If organisations deliberately choose to lengthen the feedback cycle they're just wasting time, money and effort. In my opinion that sort of waste is a luxury.

"especially when someone doesn't give early feedback on the requirements"

QA can play a key role here as well. If a requirement isn't testable, then there are almost certainly problems with it, so get QA involved before we start to code, maybe even before the devs look at the requirement. If QA has no idea how to test a requirement, how can a developer hope to know when they have finished the task? The only answer is to down tools until the customer / business analyst / project manager can provide clarity.

"there's no surer way to get a developer to lose sight of the big picture than presenting them with a bug list"

To my mind that's a symptom of the tasks being too big. If you keep to one day max per task, how much damage can a half-decent developer do in one day's coding? Another question this raises is, if working at a micro level produces a mountain of bugs, how can our development at a macro level be expected to be any better?

"At that stage it becomes all about fixing the bugs, not the code design"

Whether we're fixing bugs or delivering new features we should always be fixing the design - evolutionary architecture mandates constant refactoring, even when we're fixing bugs.

"Not easy though"

Absolutely - but (to mix my metaphors) Rome wasn't built in a day and there is no Silver Bullet! :-) Sadly, I think many people choose to take a very short-term, static view of how agile methodologies might apply to them or their team. They look at a bunch of agile-like practices and think, meh, that won't make us better, faster, stronger etc. I have some sympathy for such viewpoints, because taken in isolation, and in the very short term, they won't produce dramatic results, but they're wrong!

The last principle listed on the agile manifesto states, "At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behaviour accordingly", and to my mind this is the key to producing the dramatic improvements and eliminating the problems described in the OP.

If we take a look at how individuals and groups learn and grow over time, I think it's self-evident that regular, frequent opportunities to measure, reflect, take on board feedback and make changes are crucial for improvement to occur. The agile principle above demonstrates that such opportunities are hard-wired into agile, but without such opportunities I'd argue that improvement is virtually impossible.
Yeah, I've highlighted some bits as if I'm a student reading a textbook. This is great stuff, and Mr Sadler really ought to be writing more of it, more often. Anyway, I'm glad to have the content for this blog, so cheers fella.

Time to address my day job...

--
Adam

Thursday, 26 November 2015

Guest author: a response to "Why good developers write bad code"

G'day:
I could get used to this. Another one of my readers has provided my copy for today, in response to my earlier guest-authored article "Why good developers write bad code". This originally was a comment on that article (well: it still is) but it stands on its own merit, and Brian's cleared me to reproduce it as an article. Brian's a mate, one of my colleagues at Hostelworld.com. The quote below is unabridged. I have corrected one typo, and the links are my own.

Brian says:

Really thought provoking post, Adam, and my immediate impression is that your mate seems to be describing an awful lot of dysfunctionality here. I'll get my declaration of interest out in the open straightaway and own up to being a massive Agile/XP fanboy, and consequently I think much of what your mate describes could be cured by the adoption of Agile/XP practices.

The "...but then he gets a chunky piece of work and, more often than you'd expect, it goes astray" quote is very revealing, because it would appear that there's no effort made to break large requirements down into smaller ones and we're on a slippery slope from here on in.

Developers often receive very coarse-grained requirements and, irrespective of our natural coding ability, we're not naturally adept at managing large pieces of work. That's not to say we can't manage large requirements, but without training, mentoring and support, developers who might be first-rate coders can often struggle. In this case, it sounds like the developer in question is trying to eat an elephant in one sitting, and then being left alone to go dark and "fail". Reading between the lines it seems that this is accompanied by a seat-of-the-pants estimate, which becomes a commitment as soon as he's uttered it, and before long we're into death march territory.

Also I wouldn't want to blame the developer in question, because as your mate points out, the same thing keeps happening time and again but there's no intervention or help from the guy's colleagues or his manager. Clearly they're at a loss as to how to fix the problem but unless someone can show him a better way, how is he going to improve?...

Anyway... if I was working with him I'd strongly encourage him to forget about eating the whole elephant, and just start by trying to nibble its trunk. Use TDD religiously from the outset, stick to the red-green-refactor cycle and You Ain't Gonna Need It (even more religiously), avoid Big Design Up Front, and let the application architecture evolve as he uncovers smells in the code that he's writing.
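As a toy illustration of that red-green-refactor rhythm, here's a sketch in Python with pytest; the feature and the names are made up, not anything from the actual project:

    import pytest

    # Red: write a failing test for the next small slice of behaviour.
    def test_basket_total_applies_discount():
        assert basket_total([10.0, 20.0], discount=0.1) == pytest.approx(27.0)

    # Green: write just enough code to make it pass - no speculative
    # extension points, because You Ain't Gonna Need It.
    def basket_total(prices, discount=0.0):
        return sum(prices) * (1 - discount)

    # Refactor: with the test green, rename and restructure freely,
    # re-running the test after every change to keep the safety net intact.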

See how far he gets nibbling the elephant's trunk in one day, and if all goes well we'll reconvene and decide on the next bit of the elephant to consume, and carry on in that fashion. If he takes longer than a day - and we'll know that because we have daily stand-ups, right? - then we'll pair up and see what the problem is. Invariably, if the guy's a first-rate coder, it will only be because he's bitten off more than he can chew, so get him to stop, re-plan the work, and decrease the size of the immediate task in front of him so he can actually finish it. In short: KISS, keep tasks small and try to conquer complexity. Writing code is intrinsically hard; we should actively avoid anything that makes it harder.

I think it's also worth bearing in mind that many, if not most, developers are naturally quite reluctant to provide honest progress reports if things aren't going well. Combine that with our tendency to give hopelessly optimistic estimates and we're already in a very bad place. I've found that it's almost endemic in our industry that developers are scared of giving bad news to anyone, let alone their line manager or project manager. Developers are usually clever people who spent much of their academic careers being praised for their intellectual prowess, and to be put in a position where they are suddenly the bad guys because they didn't hit an arbitrary deadline can be very distressing and something to avoid at all costs.

If there was just one thing I would wish for in the industry it would be to reform the attitude that project managers, team leads and developers have to estimation. Estimates should be treated by all stakeholders as just that, they are not cast iron commitments written in blood. They should be based on evidence (dude we go through six stages of peer review on every task, why did you give an estimate of 10 minutes?) and when the evidence is demonstrating that our estimates are wrong, then remaining estimates should be re-calibrated. All this is only possible if managers and developers are prepared to create a culture where realistic and responsible estimation, combined with predictable delivery is championed over masochistic estimation and constant failures to hit deadlines.

One final point: when your mate writes "it's a good idea to keep new features *out* of QA for as long as possible", I totally disagree! :-) I'd want feedback from the QA team as soon as possible. The shorter the feedback cycle the better, and in an ideal world the QA team would be writing automated acceptance tests in parallel with my development efforts. I'm a firm believer in the agile principle that "Working software is the primary measure of progress", and without feedback from QA I'm just guessing that the software I'm writing is actually working.


Thursday, 19 November 2015

Guest author: Why good developers write bad code

G'day:
A while back one of my mates said there was a topic they had some thoughts on, and figured they should write a blog article about it. Their problem being they didn't have a blog. This seemed like a good way for me to get someone else to do my work for me, so I raised my hand to host it for them here. I've been waiting for... god... oh about three months for this. But 'ere 'tis:

So there's a guy on your team, and he's a good developer. He knows the language inside out, he's up-to-date with all the latest technologies, he spots design patterns and code smells, and when you send him your code to review he cuts you a new asshole (and it turns out he's bloody right about everything)

... but then he gets a chunky piece of work and, more often than you'd expect, it goes astray. He's swearing at his screen, he's "almost done" for ages. Because it's late the code reviews are lighter than they should be, and QA raise a ton of bugs. He says they're all small but each one takes longer than the last to fix. When you try and spread out the bugfixing among the team it takes everyone else even longer, so he ends up slogging through everything on his own. Eventually it's done and it works, but it's brittle and everyone is afraid to touch it. He blames it on over-complicated requirements and over-ambitious timelines and we all move on.

Maybe it was The Business's fault for coming up with stupid requirements, or just the fact that smart people can make mistakes in ingenious ways. Everyone gets unlucky now and again, but if a competent developer gets bogged down regularly then there's something else going on. What is it?

Good devs are often tinkerers, and are very good at figuring out ways to work around obstacles. The dark side of this is that obstacles can be seen as things-to-be-worked-around rather than problems to be solved.

At the code level this manifests as a reluctance to refactor. Tests make refactoring safer and easier, but even with good tests developers often see existing code as fixed, something to be built upon rather than restructured. The upshot of this is you get layers and layers of code, each layer adding a missing piece or working around a bad design decision in the one below, and the complexity quickly gets out of hand. This gets especially bad when time is tight, and guys don't want to go back to the start, and so treat their own almost-but-not-quite working code from a few days ago as immutable (especially after their feature has been through QA) - essentially they get into trial-and-error mode, and nothing gets re-thought, so underlying issues with their code design don't get fixed.
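A tiny, invented Python example of that layering: the workaround wraps the flaw instead of removing it, so every later caller has to know which of the two functions to use.

    # Layer 1: an early design decision - the total comes back as a string of pence.
    def get_order_total(order_id):
        return "1050"  # stands in for a legacy lookup

    # Layer 2: rather than refactoring layer 1 to return a number, a workaround
    # is stacked on top, and the original oddity never gets fixed.
    def get_order_total_in_pounds(order_id):
        return int(get_order_total(order_id)) / 100

    # The refactoring alternative: change get_order_total itself to return a
    # number, lean on the tests covering its callers, and delete the wrapper.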

You also see it one level up from the code - when confronted with an over-complicated set of requirements, a capable developer will often just implement them as they are, logical contradictions and all. He'll bitch and moan, but will very seldom chase down a bad requirement and get it (or his interpretation of it) fixed so that it makes sense. We all bang on about elegance, but the requirements are the soil the code grows in, and without taking some responsibility for them your code cannot ever be consistently good.

There's no easy way to fix this. It's hard enough to find programmers capable of writing good code, harder still to find ones that consistently do so. If this is you, write tests you can trust then refactor refactor refactor. If for some reason you're not doing TDD then refactor anyway. If you've coded yourself into a corner create a new branch and work out a new approach in that. Be bold. It's just code, it's not set in stone, and neither are your requirements, so stop your moaning and take responsibility for them. And stay out of trial-and-error mode as much as you can.

If this is someone on a team you lead, TDD and code reviews (especially if they're done early enough to prevent a dev going too far down the wrong path) will help, also because bugs are often fixed by trial and error it's a good idea to keep new features *out* of QA for as long as possible. Really though there's no magic bullet - you'll just have to know who's susceptible to this kind of thing, and keep an eye out for it.

Yep, very true. I encounter this challenge all the time, and indeed sometimes I self-identify as having this problem too. I suppose we all do/did at some point.

Monday, 5 October 2015

Survey results: Jared's CFML text editor usage survey

G'day:

I've got Jared doing the donkey work for me today. Here's his article on the results of the CFML IDE survey he ran last week:

First of all, may I say thank you to Adam Cameron for hosting this post. I've been a ColdFusion developer for too long a time now, and it's only because of people like Adam Cameron and others that I feel there is a community behind this language.

I have to thank everyone who participated in the survey, 132 of you in about 2 days. That's pretty good going in my opinion. Sadly, those 32 extra responses won't be counted as I have not paid for more than 100 responses. Thank you for your input though.

I wanted to get a sense of what the community were using for their main development purposes. Myself? I started off on Dreamweaver, moved to Eclipse with CFEclipse and about 2 years ago switched to Sublime with community-made CF support. My theory was that many have gone down this path: they started with a heavy IDE and either found their environments were never properly set up to use it, or that it was far too beastly for what they were doing.

As much as I like IDEs, the environment really has to be set up for one to be useful. Mine never were (our servers didn't support step debugging), so switching to a lightweight Advanced Text Editor that allows the extensibility to be used in much the same way as an IDE made complete sense.

I should say that an Advanced Text Editor is normally a very lightweight program that will contain different types of language support for highlighting and closing of brackets and such, but with a plugin extensibility that allows for things like linters. They don't normally allow for debugging unless the community have gone completely out of their way to add it.

With my theory, I wanted to find out if the community was still using IDEs. Adobe and the CF team have invested a lot into ColdFusion Builder. Personally I think we've moved on past these heavyweight tools… Adobe have been investing in their open source community project Brackets. This already has a lot of support for many languages, but none for ColdFusion. I've argued in the Slack channels that I would prefer Adobe to focus development on, or at least put some first-class support into, projects like Brackets over ColdFusion Builder. Many have argued against me, claiming there is still a want and need for ColdFusion Builder.

This survey was far from perfect, as some (Adam…) of you pointed out. Yes it was slightly Yoda-ish, and maybe I should have used checkboxes instead of radio buttons… But I think it's a first attempt at finding out what the community is using day to day. So let's take a look at the results from the first question.

For the first question, I asked: For general purpose development, including ColdFusion, I use as my IDE?

With this, I was trying to get a sense of what people are using for general purpose development. As the world has moved on, platform-specific developers have found themselves becoming more general-purpose developers - not to say Jack of all, Master of none… but most ColdFusion developers will now be Web Developers, expected to write at least JavaScript and HTML, if not some other random language, as people find themselves in DevOps roles. Keep in mind that 32 results are missing from these responses.



From these responses, we can see that overwhelmingly, Advanced Text Editors are being used for general purpose development (with a ColdFusion edge). Sublime Text appears to be the community favourite, which I can understand as it has a well supported and developed ecosystem of plugins. 21 of you are using IDEs (Dreamweaver, ColdFusion Builder, Eclipse and Eclipse with CFEclipse), which isn't surprising: ColdFusion is still an Enterprise Language, with Enterprise Pricing, so the features offered by these tools make sense.

There was one IDE that I had forgotten about: IntelliJ IDEA - you made it well known by telling me in the "Other" choice. Even with those responses added, IDEs are still behind their more lightweight counterparts.

Tuesday, 5 May 2015

This Railo v Lucee thing: a well-balanced reaction (from someone else)

G'day:
I had a response to make to a comment this evening, but Ron Stewart beat me to it, saying pretty much what I'd've said, except better. And more politely. I wasn't going to bother being polite.

'ere 'tis.

It's in response to Dom's comment, which was a slightly befuddling reaction to my article this morning, "Questions for Railo". Dom's post seemed to take offence at the fact I delivered inconvenient information, and I have an opinion which is contrary to the zeitgeist. And that I have derided some of the less coherent reaction I've been seeing. Which is odd, cos he's read this blog before.

Anyway, I was gonna reply after work, but I've been trumped by Ron who's written what I would have said (like I said above).


@Dominic: I'm not one for putting words in Adam's mouth (seems dangerous to me) but I too was a little surprised by how some people responded either on Twitter or in the comments on the Google Groups posting in a manner that either

(a) seemed (to me) to lose sight of what I thought was the primary basis for the disagreement between 4FTI and the people behind the Lucee project: some sort of contractual agreement regarding IP rights, or

(b) automatically assumed that the entity behind the post was automatically wrong, without grounds for their position, some sort of bad guy, or out specifically to harm the Lucee project (they may be any or all of those things... or they may not be).

There's a great deal about all of this that we on the outside simply don't know at this point in time, and to automatically assume that the entity behind the post on the Railo blog has no basis for their position and/or for addressing what they perceive as a grievance or breach of contract through litigation is probably unwise. The claims made by the entity behind the blog post on the Railo site may be without merit... or they may not be. We just don't know.

As I posted on Twitter, the underlying disagreement between the two sides in this case is troubling to me (I used the word "disturbing" in my tweet) because it, at best, will likely be a distraction for continued development of the Lucee project until it is resolved. I also noted that the cynical part of me was sort of expecting this given the protracted silence from the Railo side since the Lucee project began... given what we had publicly heard about financial backing for and commitment to the long-term future of CFML through Railo, it seemed quite likely to me that the entities backing Railo would be quiet that long if they were trying to figure out what this meant for their investment and their future... maybe I was reading too much into the silence. I think we now have some idea of where they stand.

I saw some of the reaction as a surprising rush to judgment, given how little any of us know about the circumstances behind the principals leaving the Railo side, any contractual agreements they may or may not have had with the Railo side, or conversations that may have occurred between the two sides since the departure, as they may or may not have attempted to come to some sort of agreement on points of dispute. I don't think we know enough at this point to start talking about who's right, wrong, good, bad, or anything else in this case.

Does all of this help the CFML community? Or Railo? Or Lucee? No, almost certainly not... but neither does our jumping to conclusions about "right" and "wrong" or "good" and "bad" in situations where we aren't currently--and may never be--privy to all of the relevant facts.

I think Adam is/was doing the same thing to the CFML community that he often does with the vendors: taking them to task for what he sees as questionable reasoning. I don't see his position as "pandering" or even an extreme case of playing the devil's advocate. But that's just my perspective...

I'm curious about the answers to some of the questions he's posed here (particularly the forward-looking ones), and to see how this plays out.


Saturday, 11 October 2014

The received wisdom of TDD [etc]: Sean's feedback

G'day:
During the week I solicited feedback on my assessment of "The received wisdom of TDD and private methods". I got a small amount of good, useful feedback, but not as much as I was hoping for.

However Sean came to the fore with a very long response, and I figure it's worth posting here so other people spot it and read it.


'ere 'tis, unabridged:

There are broadly two schools of thought on the issue of "private" methods and TDD:

1. private methods are purely an implementation detail that arise as part of the "refactor" portion of the cycle - they're completely irrelevant to the tests and they never need testing (because they only happen as part of refactoring other methods when your tests are already passing).

2. encapsulation is not particularly helpful and it's fine to just make things public if you want to add tests for new behavior within previously private methods.

The former is the classical position: classes are a black box except for their public API, and it's that public API that you test-drive.

The latter is an increasingly popular position that has gained traction as people rethink OOP, start to use languages that don't have "private", or start working in a more FP style. Python doesn't really have private methods (sure, you can use a double underscore prefix to "hide" a function but it's still accessible via the munged name which is '_TheClass__theFunction' for '__theFunction' inside 'TheClass'). Groovy has a 'private' keyword (for compatibility with Java) but completely ignores it. Both languages operate on trust and assume developers aren't idiots and aren't malicious. In FP, there's a tendency toward making everything public because there are fewer side-effects and some helper function you've created to help implement an API function might be useful to users of your code - and it's safe when it has no side-effects!
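The Python name-mangling Sean mentions is easy to see in a couple of lines; this is a throwaway example, not anything from his code:

    class TheClass:
        def __theFunction(self):           # "private" by convention only
            return "still reachable"

    obj = TheClass()
    # obj.__theFunction()                  # AttributeError: the name is mangled
    print(obj._TheClass__theFunction())    # prints "still reachable"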

When I started writing Clojure, coming from a background of C++, Java, and CFML, I was quite meticulous about private vs public... and in Clojure you can still easily access a "private" function by using its fully-qualified name, e.g., `#'some.namespace/private-function` rather than just `private-function` or `some.namespace/private-function`. Using `#'` bypasses the access check. And the idiom in Clojure is generally to just make everything public anyway, possibly dividing code into a "public API" namespace and one or more "implementation" namespaces. The latter contain public functions that are only intended to be used by the former - but, again, the culture of trust means that users _can_ call the implementation functions if they want, on the understanding that an implementation namespace might change (and is likely to be undocumented).

My current position tends to be that if I'm TDD-ing code and want to refactor a function, I'll usually create a new test for the specifics of the helper I want to introduce, and then refactor into the helper to make all the tests pass (the original tests for the existing public function and the new test(s) for the helper function). Only if there's a specific reason for a helper to be private would I go that route (for example, it isn't a "complete" function on its own, or it manages side-effects that I don't want messed with outside of the calling function, etc). And, to be honest, in those cases, I'd probably just make it a local function inside the original calling function if it was that critical to hide it.
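Sean's examples are Clojure, but in Python terms (names invented for illustration) that refactoring step might look like this: the existing public function keeps its test, and the helper gets a test of its own before it's extracted.

    # Existing behaviour, already covered by a test.
    def test_format_report():
        assert format_report({"clicks": 3}) == "clicks: 3"

    # New test written for the helper we're about to extract.
    def test_format_line():
        assert format_line("clicks", 3) == "clicks: 3"

    def format_line(key, value):
        # Extracted helper: public here, in the "just make it public" spirit,
        # but it could be a local function inside format_report if it needed hiding.
        return f"{key}: {value}"

    def format_report(stats):
        return "\n".join(format_line(k, v) for k, v in stats.items())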

Google for `encapsulation harmful` and you'll see there's quite a body of opinion that "private by default" - long held to be a worthy goal in OOP - is an impediment to good software design these days (getters and setters considered harmful is another opinion you'll find out there).

That was longer than I intended!

Yeah Sean but it was bloody good. It all makes sense, and also goes a way to make me think I'm not a lunatic (at least not in this specific context).

And I have a bunch of reading to do on this whole "encapsulation harmful" idea. I'd heard it mentioned, screwed my nose up a bit, but didn't follow up. Now I will. Well: when I have a moment.

Anyway, there you go. Cheers for the effort put in to writing this, Sean. I owe you a beer or two.

Righto.

--
Adam

Wednesday, 8 May 2013

An Architect's View: Sean's feedback on my recent article about ColdFusion interfaces

G'day:
It's a good day for this blog when someone like Sean Corfield reacts to something I write with an article-sized response in the comments. I've asked him if it's OK to reproduce this as a "guest-author" article, which he's agreed to. I did this because I freely admit I'm making up my opinion regarding ColdFusion's interface implementation as I go along, whereas Sean knows an awful lot about the subject - and had a hand in their genesis - as far as CFML goes. So if I'm discussing this topic, his opinions are important ones to reflect upon.

Below is a reproduction of his comment posted against my article entitled "Interfaces in CFML. What are they for?". I have some opinions on this - actually most are "yeah, good point" - but I'll cover them in a separate article.

Thanks, Sean, for taking the time to write this:

Monday, 21 January 2013

Railo website on Raspberry Pi with a public facing domain hosted at home

G'day:

As promised on Twitter last week, here's Chris's article on getting Railo running on Raspberry Pi.

Over to Chris...



So you are a lucky owner of a sweet Raspberry Pi, you have a typical broadband connection, some ColdFusion language skills and you would like to take advantage of all these things and roll out a little Web site hosted at home, independently from any hosting provider, and make it available to your family, friends and the rest of the world? That's exactly what I've done and I thought I'd share a step-by-step guide on how my stack has been set up so you can give it a try, amend it and/or improve it.


The guide is inspired by an excellent article by Glyn Jackson on running Railo on Raspberry Pi that I found very useful when I started this exciting journey. However, the set-up described below differs slightly, as the current version of Raspbian does not allow you to install the official Oracle Java JDK, and for that reason I'm going to opt for OpenJDK instead. Also, we don't need to worry about the SSH configuration as it's enabled by default in more recent versions of Raspbian.
Another deviation from Glyn's config is using a wired Internet connection rather than WiFi.

Prerequisites

  1. Standard broadband connection with a dynamically allocated IP address
    If you are in the UK, it can be BT, Virgin or any other popular async broadband option (and if you are lucky enough to have an Internet connection with a static IP address, you might want to skip the section covering Dynamic DNS).
  2. A router configurable to forward traffic on port 80 to a machine on the local network
    I tested it with an Apple TimeCapsule and a BT HomeHub but other routers should provide that functionality as well.
  3. Raspberry Pi (Model B) with 512MB of RAM
    I haven't tested it on the 256MB model but you might like to try it at your own risk.
  4. An SD card with the Raspbian GNU/Linux distribution installed on it
    A step-by-step guide covering this topic can be found on the eLinux site. You will need direct or SSH access to the box with root permissions.

Setup summary

Hardware: Raspberry Pi 512MB RAM
Operating System: GNU/Linux Raspbian
JRE: OpenJDK 1.6 (possible alternative: Oracle Java, provided you are running a soft-float ABI distro (e.g. Debian) instead of Raspbian)
Web server: Lighttpd (possible alternative: Nginx)
Application server: Tomcat 7 (possible alternatives: Jetty, Resin etc.)
CFML server: Railo 4 jars (possible alternatives: an earlier version of Railo, or Open BlueDragon (not confirmed); Adobe ColdFusion is unlikely to run on such limited hardware)
Internet domain provider: DNS Dynamic (possible alternatives: a number of free and commercial services)

Power

One of the great things about Raspberry Pi is its low power consumption, which means having it always on should not ruin your home budget. I'm no expert on the electricity stuff, but I'm powering my Pi through a USB cable hooked to the Apple TimeCapsule, which acts as our home router and is always on anyway, so that means there are two devices plugged into one wall socket. Does this configuration mean decreased power consumption? I honestly don't know, but it's certainly nice to have one extra power socket available.
In my case, the Pi is connected to the TimeCapsule with two cables, as the wired Internet connection is also coming from it (see the picture below).