... and the software architect says: "I found two legacy projects that I think we could use as the foundation for our new project. The first has a million lines of code but no test cases. The second has a really nice API and a well-documented set of test cases, but no source code; all we can find is the object code. Which direction should we take?"
Which approach do you think would most likely result in a good outcome? Which one would be cheaper? Faster?
This is not a strictly hypothetical question. Every day, software managers face the challenge of building applications faster, and leveraging existing code, whether open source or proprietary, is critically important. When code bases were smaller, our software manager might have had a third choice: build everything custom so that we know exactly what it does. Given the size and complexity of modern applications, that is seldom an option any longer.

So what would you do in this situation? Option 1 is attractive: we have something that works, and if our changes are minimal, there is no question that we can be "done" more quickly. But what will our maintenance costs be? How likely is it that existing functionality will break as we add new features? And how long will it take to fix bugs when there are no tests documenting the existing behavior?
One of the critical things to understand when adopting legacy code is the life-cycle cost of maintaining it over time. A code base with inadequate tests and documentation has far less value than a well-tested, well-documented one. In other words, the real value is in the API and the tests, not so much in the code itself: with complete tests, I can refactor and enhance a code base with confidence. If you have questions about how best to leverage a legacy code base, talk to us about how we can help.
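The point about tests being the real value can be illustrated with a small sketch. The function name and behavior below are hypothetical, and Python is assumed purely for illustration: the idea is that even if the original source is lost, tests capture the API's contract as executable documentation, so any reimplementation that passes them can stand in for the original.

```python
# Hypothetical legacy API: the original source may be gone, but its
# documented behavior survives as tests. Any reimplementation must pass them.

def normalize_price(cents: float) -> int:
    """Round a price in cents to the nearest whole cent, never negative."""
    # A candidate reimplementation -- the tests below, not this body,
    # define what "correct" means. It can be rewritten freely.
    return max(0, round(cents))

# Characterization tests: executable documentation of the API contract.
def test_rounds_to_nearest_cent():
    assert normalize_price(99.6) == 100

def test_never_negative():
    assert normalize_price(-5) == 0

test_rounds_to_nearest_cent()
test_never_negative()
print("all characterization tests pass")
```

With option 2, these tests are exactly what we have; with option 1, they are exactly what we lack, which is why every change to the million-line code base carries the risk of silent regressions.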