G'day:
A question cropped up on the CFML Slack channel the other day. My answer was fairly long-winded so I decided to post it here as well. I asked the original questioner, and they are OK with me reproducing their question.
Here's my answer. I've tidied up the English in some places, but have not changed any detail of what I said.
Eventually there will be a meaningful overhead cost of creating a lot of objects.
Note that your comment about "behave like include files that would come with a certain overhead because of physical file request" isn't really accurate, because the files are only read once; after that they're in memory. The same applies to includes, for that matter. The process is (sort of):
- code calls for an object to be created
- if the implementation for the object is not found in memory, its code is loaded from disk (and compiled)
- object is created
- object is used
- object is at some point cleaned up once it's not being referenced any more (at the discretion of the garbage collector)
That second step is only performed if needed, and, all things being equal, will only be needed once in the lifetime of yer app in the JVM.
So don't worry about file system overhead; it's not going to be significant here.
Creating objects does come at a cost, and neither CFML engine has traditionally been particularly efficient at doing so (Lucee is better, I believe; and CF is not as slow as it used to be). This could be a consideration at some point.
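If it ever does become a consideration, the usual first move in CFML is to create stateless objects once and cache them - eg in the application scope, or via a DI container like WireBox. A minimal sketch (MyService and its path are made-up for illustration, not from the original question):

```cfc
// Application.cfc: create the service once per application, not once per use.
// MyService is a hypothetical stateless component.
component {

    function onApplicationStart() {
        application.myService = new services.MyService();
        return true;
    }

}

// Elsewhere in the app, reuse the cached instance rather than new-ing one up:
// result = application.myService.doTheThing();
```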
However, performance considerations like this shouldn't be worried about until they start becoming an issue.
Design your application in a way that best represents the data and behaviour of your business domain. Make it modular, employing a sense of reusability and following the Single Responsibility Principle.
Keep an eye on your JVM. Use FusionReactor or something similar. I suspect FR is the only game in town for CFML code, but there are other general JVM profiling tools out there which will do as good a job, albeit being Java-centric. If you see performance spikes: sort them out.
Load test your application with real-world load. This doesn't mean looping over object-creation one million times and announcing "tada! It took x seconds": that means close to nothing and is not a meaningful test. Use a load testing tool to load test your application, not your code. Back when I used to do such things, there was tooling that could re-run a web server log, so one could easily test with real-world traffic. This is important because real-world concurrency is what exposes locking bottlenecks and application slow-downs.
[I forgot to say this bit in my original answer]. Irrespective of the overhead of creating objects, it will be trivial (by orders of magnitude) compared to the overhead of a poorly-written DB query, bad indexing, bad locking of code, heavy (and possibly badly-designed) string processing, etc. There's stacks of things I'd be worrying about before I wondered if I was calling new Thing() too often.
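To give a sense of scale on the DB point: one chatty data-access pattern will cost more than any plausible amount of object creation. A hedged illustration (the table and column names are invented, and a default datasource is assumed):

```cfc
// The N+1 anti-pattern: one query per car. The DB round-trips dominate here.
cars = queryExecute("SELECT id FROM car");
for (car in cars) {
    wheels = queryExecute(
        "SELECT * FROM wheel WHERE carId = :id",
        { id: car.id }
    );
    // ... do something with wheels ...
}

// Better: one round-trip with a join
carWheels = queryExecute("
    SELECT c.id AS carId, w.*
    FROM car c
    INNER JOIN wheel w ON w.carId = c.id
");
```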
[continued…]
That said, don't go crazy with decomposing your domain models. A Car doesn't intrinsically need to have a collection of Wheel objects. It might just need to know the number "4" (number of wheels). Wait until there is behaviour or data needed for the wheels, and make sure to keep those elements separate in your Car class. At some point if there's a non-trivial proportion of data and/or behaviour around the Wheel implementation, or you need another sort of Car that has Wheels with different (but similar) properties: then extract the Wheels into a separate class.
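To make that concrete, here's a sketch of both stages (the component layout, accessors, and the tyre-pressure example are mine, purely illustrative):

```cfc
// Stage 1 - Car.cfc: the wheels are just a number the Car knows about
component accessors="true" {
    property name="wheelCount" type="numeric" default="4";
}

// Stage 2 - Wheel.cfc: extracted once wheels acquire data / behaviour of their own
component accessors="true" {
    property name="pressure" type="numeric";

    boolean function needsInflating(required numeric minPressure) {
        return variables.pressure < arguments.minPressure;
    }
}

// Stage 2 - Car.cfc: now holds actual Wheel objects
component accessors="true" {
    property name="wheels" type="array";

    function init(numeric wheelCount = 4) {
        variables.wheels = [];
        for (var i = 1; i <= arguments.wheelCount; i++) {
            variables.wheels.append(new Wheel());
        }
        return this;
    }
}
```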
Making sure you have all your domain requirements tested makes this sort of refactoring much safer, so that one can continually engineer and improve one's domain design without worrying too much about breaking one's client's requirements.
Andreas followed up with an observation about falling into the trap of using bad practices when writing one's code, due to not knowing what the good practices are (my wording, not his). To this, I've just responded:
I think doing stuff that is "bad practice" when you don't know any better is fine; it's when you do know better and still follow bad practice for invented reasons (fictitious deadlines, mythical complexity, general "CBA") that it becomes an issue.
That said, one can mitigate some of this by actively improving one's knowledge of what is good and bad practice. This is why I advocate all devs should - at a minimum - have read these books:
- Clean Code - Martin
- Head First Design Patterns (second ed) - various
- Test-Driven Development by Example - Beck
- Refactoring - Fowler
There are others, like Code Complete (I couldn't wade through it, I'm afraid) and The Pragmatic Programmer (I'm about 20% of the way through it, and it's only slightly engaging me so far), which other people will recommend.
One should make sure one is comfortable with the testing framework of choice for one's environment, and with testing one's code. Doing so (either beforehand via TDD, or even just afterwards) is essential to writing good, stable, scalable, maintainable code.
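For CFML, that framework is almost certainly TestBox. A minimal spec for the hypothetical Car sketched earlier might look like this (the models.Car path is, again, illustrative):

```cfc
component extends="testbox.system.BaseSpec" {

    function run() {
        describe("Car", function() {
            it("defaults to having four wheels", function() {
                var car = new models.Car();
                expect(car.getWheels().len()).toBe(4);
            });
        });
    }

}
```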
I'm pretty sure there's not an original thought here, but hey: most of my writing is like that, eh? Anyway, there you go.
Righto.
--
Adam