Know the problem.

“A problem well stated is a problem half solved.” –Charles Kettering

Charles Kettering was the head of research at General Motors from 1920 to 1947. He was an inventor, engineer, businessman, and holder of over 100 patents. And when it came to stating problems well, he had an excellent point.

Sometimes, the biggest challenge in software engineering is understanding what your problem truly is. Over the past several years, I have found this to be true more often than in the early years of my career. This is probably because software applications are a lot more complicated now, with many more features.

When the people requesting new features state their requirements poorly, the impact on development schedules is dramatic. A weak problem definition can cause ten times the delay of a bug found in testing. This is why business analysis is critical and why everyone needs to spend more energy in this direction.

Unfortunately, many business managers assume their vague requests are sufficient and refuse to do the legwork of working out the details. That leaves the developers to guess at the best course of action, which almost always ends in a suboptimal outcome.

Be independent.

Watch out for the weak link.

An application typically depends on many libraries and packages to get its job done. Unfortunately, your software will only be as strong as the weakest link in the chain of dependencies that you inherit. And the more of these that you depend on, the more likely it will be that one of them will cause you trouble in the future.

So you should endeavor to reduce the number of libraries and packages that your code needs. Some of these may not be necessary; perhaps they were only used by a section of code that has since been removed. Others may do trivial things that could just as easily be implemented in your own code.

Be aware that every package you bring in will be maintained at a different level of effort. It is not wise to include a package that is not properly maintained; it may become an attack vector for hackers or cause other problems. Each has to be tracked and checked for known vulnerabilities regularly.

Dependency chains in your code increase complexity and may force significant upgrades at inconvenient times. For example, it can be very frustrating when you want to use A, but it requires B, which requires C, which includes the minor package D, and then D is upgraded. As a result, the only way to keep using A is to do a major upgrade of A, B, C, and D. And the more dependencies you have, the more of these chains will exist.

Watch your signatures.

Try to avoid methods that have arguments of the same type.

It is easy to introduce a bug when someone passes the arguments in the wrong order.

For example, suppose you have a calculator method that takes the arguments running total, amount, discount, and tax. When called with (100.00, 12.00, 0.20, 0.05), one might expect it to take 20% off the $12.00 amount, add 5% tax to that, add the result to the current running total of $100.00, and return the new running total: in this case, 110.08. But if running_total and amount are both of type money, another developer might mistakenly call the method with the arguments in a different order, such as (12.00, 100.00, 0.20, 0.05). That set of arguments computes a very different result: 96.00.

If each argument had its own type, then passing variables in the wrong order would produce a type error, avoiding this problem. For example, instead of a generic money type, the running total could be of type SubtotalMoney and the amount of type AmountMoney.

Another solution is to avoid passing in the running total at all and instead make it its own object, one that already knows its current value. Then you could call running_total.add(amount, discount, tax).

Alternatively, one could use named arguments, which is another good practice.
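The ideas above can be combined in a short sketch. This is Python (the original names no language), and the `Subtotal` and `Amount` wrapper types, the `add_purchase` function, and its formula are all illustrative assumptions, not an API from the text:

```python
from dataclasses import dataclass

# Distinct wrapper types: a static checker (e.g. mypy) will reject a call
# that swaps the subtotal and the amount, even though both hold money.
@dataclass(frozen=True)
class Subtotal:
    value: float

@dataclass(frozen=True)
class Amount:
    value: float

def add_purchase(running_total: Subtotal, amount: Amount, *,
                 discount: float, tax: float) -> Subtotal:
    """Apply the discount to the amount, then tax, then add to the total."""
    discounted = amount.value * (1 - discount)
    return Subtotal(running_total.value + discounted * (1 + tax))

# The * in the signature makes discount and tax keyword-only, so the two
# rates cannot be transposed silently either.
total = add_purchase(Subtotal(100.00), Amount(12.00), discount=0.20, tax=0.05)
print(round(total.value, 2))  # 110.08
```

A swapped call like `add_purchase(Amount(12.00), Subtotal(100.00), ...)` now fails type checking instead of quietly returning the wrong total.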

The main point is to make it hard for other developers to make a mistake, so you reduce the chance of bugs occurring down the line.

Control your test data.

Don’t let it control you.

When you’re using data in your tests, you should avoid using data that is a copy of production. Sure, it may seem like a good idea, as it will more closely simulate the production environment. The problem, however, is that the production data may change. At that point, the tests will no longer be simulating production. They may, in fact, represent things that are no longer true.

Instead, you should use test data that you can completely control. The test data should support the business rules and examples. It should cover all of the edge cases and special circumstances dictated by the business rules. Most important, it should not depend on what any particular customer has for data, as that can always change.

Also, data in test cases should be fully defined and not generated randomly. Random data will cause tests to fail randomly, which will make the developer’s job very difficult and frustrating.
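A hand-built fixture makes this concrete. In this Python sketch, the customer records and the business rule they exercise are invented for illustration; the point is that every record exists on purpose:

```python
# Hand-written fixture: each record exercises a specific business rule,
# rather than being copied from production or generated randomly.
CUSTOMERS = [
    {"name": "standard", "orders": 3, "balance": 120.00},  # typical case
    {"name": "new",      "orders": 0, "balance": 0.00},    # edge: no history
    {"name": "negative", "orders": 5, "balance": -40.00},  # edge: refund owed
]

def total_balance(customers):
    return sum(c["balance"] for c in customers)

def test_total_balance_covers_edge_cases():
    # Fully defined inputs mean a fully defined expected value --
    # nothing breaks when some customer's production data drifts.
    assert total_balance(CUSTOMERS) == 80.00
```

Because the data is fixed, the expected result is fixed too, and the test fails only when the code changes.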

Don’t cheat.

Take your code coverage seriously.

Every line of code should serve a legitimate purpose. Code should either implement the business logic or demonstrate it through examples in tests.

In the quest to achieve 100% code coverage or to satisfy a code coverage tool, you need to ensure that all the tests serve a purpose. Never use dummy code to increase code coverage.

If code doesn’t belong, delete it. Any unnecessary code will only cause problems later.

Avoid any temptation to cheat the tools by adding fake tests. If you feel the need to cheat, you’re doing it wrong and should just go home.

Check your coverage.

Test all conditions.

It’s not hard to understand that a codebase covered by only a few tests is much weaker than one with far more coverage. And, of course, having no tests at all is the worst. From there, one can extrapolate that 100% code coverage would be best of all.

Getting there can be a bit of a game. Practicing TDD should lead to 100% coverage. There are tools available to measure the coverage and locate sections of code that don’t have tests.

The game doesn’t stop there. Having 100% coverage does not mean you have handled all conditions. Sometimes the outputs of a routine may exhibit different behavior in various situations.

When writing your examples, try to look for nulls, exceptions, and boundary conditions. Be sure to borrow from real-world examples. Try to imagine how the system may perform when users do strange things. And indeed, if any defects are reported, make sure everything around their cases gets covered as well.
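Here is a small Python illustration of the gap between line coverage and condition coverage; the shipping-fee rule and its numbers are hypothetical:

```python
def shipping_fee(weight_kg):
    """Flat fee under 10 kg, per-kg fee at or above it."""
    if weight_kg < 10:
        return 5.00
    return weight_kg * 0.75

# One test at 2 kg and one at 20 kg would reach 100% line coverage,
# yet never probe the boundary itself or a degenerate input.
def test_boundary_and_edges():
    assert shipping_fee(9.99) == 5.00  # just under the boundary
    assert shipping_fee(10) == 7.50    # exactly at the boundary
    assert shipping_fee(0) == 5.00     # degenerate input
```

Every line was already "covered" before the boundary tests existed, which is exactly why 100% coverage alone proves less than it seems to.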

Do a little extra.

Exceed expectations.

“The difference between ordinary and extraordinary is that little extra.” — Jimmy Johnson.

Don’t stop at the minimum. No one is going to remember your performance if all you did was the bare minimum. The rewards and notice go to the people who deliver well above expectations. And the funny thing is, it usually doesn’t require much more effort to do this.

Be your own point of difference. It’s easy to stand out from the crowd when you’re the one who is truly passionate about their job. While others are just skating by, those who excel are driven to do more. This extra effort results in additional productivity that is easily noticed by others.

Don’t confuse this with YAGNI. YAGNI is a principle that keeps you from delivering features that were not requested. Exceeding expectations is about delivering more of the stuff that people want.

You have to deliver.

Perfect is the enemy of good enough.

Popularized by Voltaire, the concept of the perfect being the enemy of the good enough serves us well in software engineering. Too many projects run late or fail to deliver at all because the developers are constantly trying to make things “perfect”.

The problem is, they have no way of knowing what “perfect” is. Until you deliver a product and get it in front of real users, you don’t really know all of the ways that it can be improved. The user base may want the product to do totally unexpected things or be used in novel ways. Features you thought were important may be trivial to them, while other aspects you considered unimportant may come to have a high value.

But most importantly, you have to deliver. Without a release, your business value is a big, fat ZERO.

Working software is the measure of our business value. Unreleased software has no business value. Stop futzing around and get stuff out there. Once it’s out, you can iterate on it and make it better.

Practice TDD.

Write your tests first.

Begin with the end in mind. It’s one of The 7 Habits of Highly Effective People. When you are working toward a definite goal, you can formulate a plan for how to get there. And you should certainly design a solution before programming the implementation.

Start with the expected behavior. This should come straight from the story: as a role, given some conditions, when I do something, I should see this result. If you write the story as a behavior test, you are already a long way along. Tools like Cucumber stress this idea by translating human-readable Gherkin scenarios into executable test steps.

Moving from the test of behavior to the test of units is the tricky part. That is where you are forced to make some decisions about the technical design of the application. You need to determine what objects you are going to have in play and how they will interact with each other. This, in turn, leads to the definition of interfaces.

As you develop your collection of objects, interactions, interfaces, and methods, you can start to create tests for how these methods should perform. The first step is to simply identify all of these tests, by giving them good names. With that, you can tell at a glance if you’re covering all the necessary scenarios.

Only after the test is fully fleshed out will you create any code that is part of the actual implementation. This may seem like a lot of prep work to get code written, but when you do it this way, the time spent in implementation is a lot shorter and usually requires much less rework.
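The "name the tests first" step might look like the following Python sketch. The `Cart` object and its scenarios are invented for illustration; in real TDD the implementation would appear only after each named test is fleshed out and failing:

```python
# Step 1: name every scenario up front; a stubbed body marks work to do.
# Step 2: flesh out one test, watch it fail, then write just enough code.

class Cart:
    def __init__(self):
        self._items = []

    def add(self, price):
        self._items.append(price)

    def total(self):
        return sum(self._items)

def test_new_cart_totals_zero():
    assert Cart().total() == 0

def test_total_sums_added_items():
    cart = Cart()
    cart.add(3.50)
    cart.add(1.25)
    assert cart.total() == 4.75

def test_removing_last_item_empties_cart():
    ...  # named but not yet written: the remaining to-do is visible at a glance
```

The list of test names doubles as the design checklist: a glance shows which scenarios are covered and which are still stubs.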

Make the robots talk.

Your unit tests should convey more information.

It’s one thing to have a healthy suite of unit tests covering your codebase. But with hundreds if not thousands of tests in play, tracking down errors can be a little frustrating at times. To assist with debugging, your unit tests should have meaningful names that describe in plain English not only what they are testing, but which module is being tested. This way, it will be much easier to track down issues.

Unit tests should also give verbose error messages that make it immediately obvious what is wrong. The error messages can include details, context, and information about the example that failed.
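In many frameworks the assertion message is where that context lives. A Python sketch, with an invented `apply_discount` function standing in for real business logic:

```python
def apply_discount(price, rate):
    return price * (1 - rate)

def test_apply_discount_reports_context_on_failure():
    price, rate = 80.00, 0.25
    result = apply_discount(price, rate)
    expected = 60.00
    # On failure, the message names the routine, the inputs, and the
    # expectation -- no digging through the suite to reconstruct them.
    assert result == expected, (
        f"apply_discount({price}, {rate}) returned {result}, "
        f"expected {expected}"
    )
```

When this assertion fails, the report reads like a sentence, and the developer knows where to look before opening a single file.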

Also, don’t be afraid to add comments in the test cases. It can be extremely helpful to know why certain tests and examples exist.