I’ve been thinking a lot over the last year or so about the differences between beginner developers and experienced developers - what it means to be a professional developer, and what differences you will see in professionally written code.

A lot of people have talked about the importance of writing maintainable code - with extensive commentary about the importance of coding standards, naming conventions and the like. These things are important - but I think there is something more fundamental to consider.

My own (still evolving) opinion is that the key differences lie less in what the code is, and more in how the code can be changed.

When someone comes along to make changes to an existing piece of code, their goal is to make the required changes and then go on and do something else. They won’t want to spend six months learning the structure of the system - they probably don’t want to spend six minutes learning the system - but they will be much more likely to make the smaller investment than the larger one.

Perhaps the best way to see what I’m getting at is through some examples.

Example #1 - File Exchanging

You have a program that implements an interface between two systems - exchanging files in some kind of standard format.

Now you need to make a change - a new exchange point needs to be created for an entirely new flow of information.

How easy is it to identify where the change(s) need to be made?

In the first case, assume the system is composed of extremely large, monolithic methods that each fully support a separate kind of transfer. While there are some apparently generic methods available, it turns out that these are peppered with special case code specific to particular situations.

In the second case, assume the system is composed of a number of small methods that make extensive use of a well designed generic toolbox.

Which of these two systems is going to be easier to comprehend and modify?
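To make the second style concrete, here is a minimal sketch in Python. All the names (the fixed-width format, the helpers, the “invoice” transfer) are hypothetical illustrations, not taken from any real system - the point is only that the transfer-specific method stays tiny because the generic toolbox does the heavy lifting, so a new exchange point is just one more small function:

```python
def parse_fixed_width(line, widths):
    """Generic toolbox helper: split a line into fields of the given widths."""
    fields, pos = [], 0
    for width in widths:
        fields.append(line[pos:pos + width].strip())
        pos += width
    return fields


def rename_fields(records, mapping):
    """Generic toolbox helper: rename record fields according to a mapping."""
    return [{mapping.get(key, key): value for key, value in record.items()}
            for record in records]


def invoice_transfer(lines):
    """One small, transfer-specific method composed from the toolbox.

    A new flow of information would be another short function like this one,
    not another monolithic method.
    """
    records = [dict(zip(["id", "amount"], parse_fixed_width(line, [4, 9])))
               for line in lines]
    return rename_fields(records, {"amount": "total"})


print(invoice_transfer(["0001     9.99"]))
```

Because the special cases live in the small top-level methods rather than being buried inside the shared helpers, it is obvious where a new exchange point needs to go.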

Example #2 - Database Design

You have a database containing information used by a core-business system. By core-business, I mean a critical system that has to run all day, every day - if the system goes down, your business stops dead.

You need to add new information to the database - a new attribute on one of your core objects.

Can you simply add the attribute directly to an existing database table, or might that have flow-on effects that you can’t predict, or even find? How many different pieces of code need to be changed to support the new attribute?

In the first case, you find that your system is making assumptions about the number, nature and order of fields in your database tables. With SQL statements like INSERT INTO X SELECT * FROM Y, the effect of any database change could be catastrophic.

In the second case, you find that all the SQL in the system is quite explicit - listing all the required fields in the appropriate order, rather than relying on any accident of implementation.

Which of these two systems is going to be easier to modify and test?
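The contrast is easy to demonstrate with a small experiment using Python’s built-in sqlite3 module. The table and column names here are purely illustrative - the point is that the wildcard copy silently depends on both tables having identical columns, so adding the new attribute breaks it, while the explicit version carries on untouched:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
conn.execute("CREATE TABLE customers_archive (id INTEGER, name TEXT)")
conn.execute("INSERT INTO customers (id, name) VALUES (1, 'Acme')")

# Fragile style: relies on an accident of implementation - that both
# tables happen to have the same fields in the same order.
conn.execute("INSERT INTO customers_archive SELECT * FROM customers")

# Now add the new attribute to the live table only.
conn.execute("ALTER TABLE customers ADD COLUMN region TEXT")

# The wildcard copy now fails: the column counts no longer match.
broke = False
try:
    conn.execute("INSERT INTO customers_archive SELECT * FROM customers")
except sqlite3.OperationalError as err:
    broke = True
    print("fragile copy broke:", err)

# The explicit style keeps working, unaffected by the new column.
conn.execute(
    "INSERT INTO customers_archive (id, name) SELECT id, name FROM customers"
)
```

The explicit statement is slightly more typing up front, but it turns a catastrophic, hard-to-find failure into a non-event.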

What’s my Point?

My key point with these examples is this:

If you forget about the future, if you forget about the person who is going to maintain the code, it becomes easy to litter any system with structures that act as landmines for future developers. They may be little things, but if they’re not predictable - if a future developer doesn’t expect them - they’re landmines.

It is far better to keep the future of the code in mind and develop the system in a way that will be predictable and comfortable for any future developer. After all, that developer might be you!

Some people refer to this as the Principle of Least Surprise - but I think that it is much more about being professional.

[Image Credit: illustir @ Flickr]

