I'm working on a project to replace the functionality of an existing system that is shared by two business units, so that each unit can continue development in differing directions. The customer asked me how to decide whether to build a change or new feature into the existing system and then later into the new system, or to build it only in the new system and force the business to wait until the next release.
It reminded me of the Decision Tree problem I faced before, so I set to work on answering the question. But I ran into a stumbling block. The two solutions are as follows:
a) build now in existing system, and then again in the new system
b) force the business to wait, and build it later in the new system
While the former option might not cost double, because the learning curve from the first implementation makes the second one cheaper, the latter option might take a very long time if the prerequisite features for the change have not already been implemented in the new system. So both options have their own complexities. But in theory we can put costs on them and calculate an answer:
a) = 50k + 52.5k = 102.5k (assuming 5% inflation over the year)
b) = X + 52.5k = ?
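The back-of-the-envelope sums above can be sketched in a few lines of Python. The 50k and 5% figures are the ones from the post; X stays a parameter, because pricing the cost of waiting is exactly the open question:

```python
# Hypothetical figures from the post: 50k to build now, 5% inflation over a year.
BUILD_NOW = 50_000        # cost to build in the existing system today
INFLATION = 0.05          # assumed yearly inflation
rebuild_next_year = BUILD_NOW * (1 + INFLATION)   # 52,500 to build in the new system

def option_a():
    """Build now in the existing system, then again in the new one."""
    return BUILD_NOW + rebuild_next_year

def option_b(cost_of_waiting):
    """Build only in the new system; cost_of_waiting is X, the
    (unknown) price of forcing the business to wait."""
    return cost_of_waiting + rebuild_next_year

print(option_a())                     # 102500.0
print(option_b(cost_of_waiting=0))    # 52500.0 -- a lower bound until X is known
```

Option (a) comes out at 102.5k as above; option (b) only beats it once X exceeds 50k, which is where the judgement call about the cost of waiting comes in.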
Hmmm... how do you put a price on the cost of forcing the business to wait? Well, I struggled with that, because it's probably very specific to the features of the software and the particular problem at hand. But I asked my friend Mark, who has an MBA, and he said that he has faced this very decision from the customer's point of view. His advice was to always force the business to wait. The reason is twofold. Firstly, it intrinsically makes the business put more pressure on the development, causing it to occur faster - not necessarily with the usual cutting of features or quality, but often with the injection of more cash. Secondly, if you develop two systems, the chances of them becoming misaligned and growing apart increase, which can cost a lot to fix in the long run.
So, he suggests always forging ahead with the new system and forgetting the old one. That said, I'm not sure I would simply follow that rule without at least doing some minor financial analysis - after all, business decisions should not be made from the gut, they should be calculated.
Anyway, this problem reminded me of where I originally read about this type of problem and its solution. I think the book was by Joel Spolsky (see www.joelonsoftware.com) but I'm not too sure. The author was discussing whether to build functionality into software if the requirement is not actually a real one, just one that may become a reality in the future. Like if you are building a Zoo object, should you build the option to bathe all the animals, if it's not a solid requirement at this time? His options were:
a) build now, but 50% chance its not needed
b) build later, at increased cost due to inflation and having to understand problem and code again
He used values like:
a) = 50k for building now
b) = 55k (5% inflation, plus 5% ramp up) * 50% = 27.5k
So he calculated that it's always better to wait until the requirements become a reality, unless the chances of them becoming a reality are very high (91% in this case). Sorry, the animals will have to stay dirty for the time being!
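Those sums, including the 91% break-even point, can be checked in a couple of lines (figures as quoted above; the break-even probability is just 50k / 55k):

```python
# Figures quoted from the book's example.
BUILD_NOW = 50_000     # build the feature today
BUILD_LATER = 55_000   # 5% inflation plus 5% ramp-up to re-understand the problem
P_NEEDED = 0.5         # 50% chance the feature is ever actually needed

# Expected cost of waiting: you only pay BUILD_LATER if the need materialises.
expected_later = BUILD_LATER * P_NEEDED   # 27,500

# Building now beats waiting only when p * BUILD_LATER > BUILD_NOW:
break_even = BUILD_NOW / BUILD_LATER      # ~0.909, i.e. the 91% in the text

print(expected_later)          # 27500.0
print(round(break_even, 2))    # 0.91
```

So unless you believe the probability of the requirement becoming real is above roughly 91%, waiting is the cheaper bet in expectation.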
It's for this same reason that we choose to worry about performance when it becomes an issue, rather than during the design and implementation stages. This is especially true with modern architectures like JEE, because they are designed for scalability and performance from the outset.
Finally, the promised picture to go with it:
As part of the Architecture definition for a project I'm working on, I have been preparing a cost estimate. The idea is that the client wants to know if they need to tender the project out to third parties, or whether it can be developed in house by the shared central IT department, used by several business units.
In a meeting with the client today I presented my initial estimates.
Me: "So as you can see, the upper cost will be 820 thousand CHF."
Business Unit Project Manager: "Hold it there. You do realise there is a cost threshold at 600 thousand right?! Above that, we have to tender the project out, and that will add a delay of two months!"
Me: "Sure. In the pre study, there is a project plan showing that an initial estimate places the work at around a million CHF. And in that study, it allows time for the tendering process. But good news, it shouldn't cost as much as a million..."
Business Unit Project Manager: "Can't you somehow reduce the costs? You need to bring it down a little more...We are already running late."
At this point, my unprofessional answer (which I managed to hold in) was along the lines of "OK. Instead of me wasting my time doing a real estimate, just tell me how much you want it to cost, or rather how much less you want it to cost than your boss already thinks it will, based on the pre study. That way, I save time, and you look good when you tell your boss how little it's going to cost... I mean, what's the real point in doing an estimate anyway? All projects run over, right?"
Instead, my colleague interrupted: "But if we give a low estimate, and the project runs over, then we will be to blame... And who will pay for the extra development?"
Business Unit Project Manager: "Not at all. Once they've spent that money, it will be easier to get more funds. Then we can also do all this extra stuff I want to do because it would look really cool (even though the customer has spent the last three meetings telling me it will not benefit the business at all). The important thing is to keep the cost below 600 thousand."
Colleague: "Erm... no."
OK, I admit, the project manager isn't doing anything any other business representative wouldn't also do. But that's the problem. Have you ever had work done on your house? Bet you asked for a fixed-price quote? And if the work wasn't done, you threatened to start legal proceedings, and magically the work got completed? The reason is that individual people don't want to spend all their hard-earned cash in an unpredictable manner. So contracts ensure they don't get screwed. In fact, fixed-price contracts between companies for software development always include clauses stating that overruns are at the cost of the developer. Erm, that's why they are fixed price... So why does this guy think it's different when he is getting an internal shared service department to do the development? Is he magically somehow not spending real money?
So the point is, like I've stated before: if you want to do an honest job, and one that will benefit the company paying your bills, spend a little time thinking about the real costs involved in software. If you were paying the bill, would you tolerate what's going on? That's how to do a good estimate. Imagine first that you are doing the development fixed price. Would the estimate cover your costs? Then look at it from the business point of view. If it were your money paying for it, would you be happy paying for it? What's the payback period? Is there a return on the investment?
While working on a previous project, before I had a blog, I remember an incident with a particular boss at a company which thrived on a blame culture. No one cared if software was late or crap. All they cared about was whether another department was to blame.
So once we screwed up on a project and his response was "Can't you find an email showing it's the customer's fault?". In this case, the answer was "No, it was our fault. I screwed up and misread the requirements." Being a good boss, he didn't want to shit on me. "OK, but surely you can find an email that shows it's the customer's fault? That's all I need to copy to his boss." He didn't care about the impact on the business. Didn't want me working overtime to fix the problem. Didn't want me to accept responsibility for the problem. He just wanted to blame some other department for the problem. And this was the way he worked, project after project. Find evidence. Copy the head of the other department. Threaten to escalate...
He wasn't alone either. Every department worked in the same way. No one had accountability. It led to people being scared to make mistakes. Perhaps that was why the company was a market leader? But it wasn't a very nice place to work.
SLOC stands for "source lines of code" and is an old measure of productivity when taken as lines of code written per day. There is lots of debate about its use, whether it's a good measure, etc. But when comparing like for like, it should give valid results.
I once worked out that there were something like 20 lines of code a day developed in EAI at my last client (with say 300,000 lines of code in total). We said it wasn't too bad, compared to the old days of OS development quoted as around 5 LOC per day...
I'm reviewing a project here with around 160,000 LOC (that's on the assumption that the GUI is the same size as the back end, which is around 82,000 LOC). It was developed in 5 months over 820 man days. That makes around 200 LOC a day, so ten times the rate of the EAI project. It includes generated code, but probably no more than the EAI tool gives you, showing that if you want to build an SOA, that old EAI tool is slow to develop with.
I then took a look at the maxant demo I did. It has around 36,000 LOC (again an assumption that the GUI has the same amount of code as the back end which is around 18,000 lines), and I did that in around 70 man days. So that's around 500 LOC a day.
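The LOC-per-day figures for the two projects with known effort can be reproduced like this (the GUI sizes are my assumption, as noted above, and the rounding to 200 and 500 is deliberately rough):

```python
def loc_per_day(total_loc, man_days):
    """Crude productivity measure: total source lines divided by effort."""
    return total_loc / man_days

# Figures from the post (GUI assumed equal in size to the back end):
reviewed_project = loc_per_day(160_000, 820)   # project under review
maxant_demo      = loc_per_day(36_000, 70)     # maxant demo, single developer

print(round(reviewed_project))   # 195, i.e. roughly 200 LOC/day
print(round(maxant_demo))        # 514, i.e. roughly 500 LOC/day
```

The EAI project's 20 LOC/day was quoted directly rather than derived from man days, which is why it isn't computed here.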
Both the project I'm now reviewing and the maxant demo average around 150 lines per class, helping to show that they are comparable (as well as them both being SOA, J2EE and three tiered).
What is also noteworthy here is that the EAI project did not have much budget for testing, but ended up with a large budget (forced on the customer!) for bug fixes, changes and production support (due to poor error handling that was never fixed). The other two projects had large amounts of testing before release, and hence little production support and few changes or bug fixes. To me, this indicates that doing lots of testing increases productivity when measured as LOC per day, compared to a little testing, which is known to cause more effort at the end for bug fixes and unplanned changes.
Finally, the reason for the maxant demo being more productive has less to do with me being clever than with it being developed by a single developer. It is well known that the fewer lines of communication there are, the more efficiently projects run, and it can't get more efficient than with one developer who is the architect, project manager and customer as well! So the lesson here is to design your systems as components which are as independent as possible. This is exactly the reason OO was invented and "design by contract" became popular.