Over the past few months I’ve been teaching a friend to program. He has no experience and didn’t know where to start. After asking him some soul-searching questions, we decided to see where things would go. We committed that I would mentor him every few weeks, for a few hours, and give him assignments to do between sessions. We decided to focus his exploration on JavaScript, due to its flexibility and general-purpose nature.
In our first few sessions we went over the basic operations, data types, and structure of JavaScript. As we explored these ideas I built a fairly rudimentary game of blackjack and he followed along.
In our latest session my goal was to begin transitioning him from procedural programming to object orientation. Somewhere between his blank stares and my ramblings I got an idea: focus on what he already knows and challenge him to write an app that’s simple yet interesting to him.
Before he left I explained that his application, like the ones before it, should be a simple console-based JavaScript app. I can’t remember if he asked it or if I just recognized the elephant in the room: when do we make the GUI?
Do we make the GUI first? Can we make it later? Do we make it last?
I generally answer these questions by saying that a good application architecture allows you to defer any infrastructure decision for as long as necessary. We build software to solve problems, and we should build our apps to focus on the problem being solved, not on the devices they run on. By building our application logic separately and making all of our presentation-specific technology plug in to our application (or invert the dependency), we gain a huge advantage later on if and whenever we want to expand our app to different platforms or different presentation frameworks and technologies. This immediately increases our code’s longevity and reuse tenfold. An added bonus is that the practice of inverting our dependencies inward, toward our application’s logic, seems to always produce obvious component boundaries.
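To make that concrete, here’s a rough sketch of what I mean, using a stripped-down version of the blackjack idea (the names here, like `playRound` and `consolePresenter`, are illustrative, not from the actual app). The game logic depends only on an abstract presenter; it never mentions the console, the DOM, or any other display technology. The console version plugs in from the outside:

```js
// Stand-in for a real deck: a random card value from 2 to 11.
function drawCard() {
  return Math.floor(Math.random() * 10) + 2;
}

// Application logic. Note that it knows nothing about *how* the
// hand is shown -- it only talks to whatever presenter it's given.
function playRound(presenter) {
  const hand = [drawCard(), drawCard()];
  const total = hand.reduce((sum, card) => sum + card, 0);
  presenter.showHand(hand, total);
  presenter.showResult(total > 21 ? "Bust!" : "You're still in.");
}

// One presenter among many possible ones: the console implementation.
// A GUI version would implement the same two methods.
const consolePresenter = {
  showHand(hand, total) {
    console.log(`Your hand: ${hand.join(", ")} (total ${total})`);
  },
  showResult(message) {
    console.log(message);
  },
};

playRound(consolePresenter);
```

When the day comes to add a GUI, we write a new presenter and pass it in; `playRound` doesn’t change at all. That’s the dependency inversion doing its job.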