Your Tax Dollars At Work

It seems that the FBI is going to throw out a $170 million software system and start over.

A new FBI computer program designed to help agents share information to ward off terrorist attacks may have to be scrapped, the agency has concluded, forcing a further delay in a four-year, half-billion-dollar overhaul of its antiquated computer system.

The bureau is so convinced that the software, known as Virtual Case File, will not work as planned that it has taken steps to begin soliciting proposals from outside contractors for new software, officials said.

A prototype of the Virtual Case File was delivered to the FBI last month by Science Applications International Corp. of San Diego. But bureau officials consider it inadequate and already outdated, and are using it mainly on a trial basis to glean information from users that will be incorporated in a new design.

Science Applications has received about $170 million from the FBI for its work on the project. Sources said about $100 million of that would be essentially lost if the FBI were to scrap the software.

As someone who works in the field of building software systems, I can speculate a little about the possible reasons for this failure.  One common failing in the computer industry is that the software, as delivered, is not accepted by the customer.  Usually this comes down to a fundamental misunderstanding of the requirements, either by the designers or sometimes by the customer (that’s not as weird as it sounds; more on this later), combined with a failure to properly involve stakeholders in design and development.  Further, the fact that the FBI had to spend $170 million to get the system and only then discover it isn’t suitable suggests a waterfall software development model, which, while fairly standard practice (especially in government), is especially prone to this type of failure: the customer doesn’t really get to see anything until very late in the process (e.g. at system test or even at the user acceptance test stage).

I mentioned above that systems sometimes fail because the customer didn’t understand the requirements.  Some people are probably wondering how this is possible.  What often happens is that a customer sees a “pain point” in their current way of doing things and wants to “fix” it, and you’ll often get conflicting viewpoints on this from different elements within the customer’s organization.  The job of a good designer is to ask questions, ferret out the real core requirements, and get buy-in from all of the stakeholders.  The other side of this coin is the gung-ho, get-it-done-now type of customer who sets requirements at too low a level (i.e. dictating which hardware or software to use, even when it doesn’t really fit the solution), while the designers never go back up the chain to find the real system requirements.  It can be difficult to get this kind of customer to reveal their true requirements, since asking questions is seen as either a challenge or a waste of time (“Why aren’t you doing something productive?”).  There is also a third problem in some organizations, where requirements are funneled through another group, so the primary stakeholders are not directly involved in requirements reviews between the designers and the customer.  These third parties frequently “filter” the requirements through their own organizational biases, so the real requirements may or may not make it to the designers.

This is the area where I work: I have to understand the customer’s requirements and turn them into a system design.  It’s a huge area of research in academia and is of great concern to all IT organizations.  I’ve had some training in the Systems Engineering and Architecture methodology through the SDOE Program offered by the Stevens Institute of Technology (specifically, the 625 and 650 courses).  My employer paid for this training because they’re trying to bring some order to the process of software development, and I’ve found that, while a bit tedious at first, it offers real advantages.  Chief among them: proper management of requirements takes longer up front but has been shown to reduce the overall cost of a project, along with defects and their associated costs.  A defect in the requirements that is caught during a requirements review may cost only a few hours to fix, whereas the same defect caught at the system test stage could cost hundreds or thousands of hours (and programming time isn’t cheap), with corresponding schedule delays and reduced customer satisfaction.  The goal of SE&A is to design the system the customer actually wants and to be able to prove it when done, i.e. to show objectively that the requirements were met, rather than arguing at the end about what the system was supposed to do.
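To put rough numbers on that escalation (these figures are entirely invented, just to illustrate the argument; they’re not data from any real project):

```python
# Back-of-the-envelope cost of the same defect caught at different
# stages. All numbers here are invented for illustration.

hourly_rate = 100          # assumed loaded cost of an engineer-hour, in dollars
fix_in_review = 4          # hours to reword a requirement during review
fix_in_system_test = 500   # hours to redesign, recode, retest, and redocument

print(f"Caught in requirements review: ${fix_in_review * hourly_rate:,}")
print(f"Caught in system test:         ${fix_in_system_test * hourly_rate:,}")
# Caught in requirements review: $400
# Caught in system test:         $50,000
```

Even with generous assumptions, the ratio is what matters: the same misunderstanding costs two orders of magnitude more once code has been built on top of it.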

While I don’t know what methodology the FBI and SAIC used, for this level of failure to occur I would expect that the requirements were either not properly documented or understood, or that they were wrong to begin with and no one questioned them.  If the requirements changed during development (9/11, perhaps?), this kind of process would have let them evaluate the change and determine fairly quickly how it would ripple through the system, and at what cost: SE&A promotes traceability of requirements from business/stakeholder needs to system requirements down to individual components, so a change to a business requirement immediately tells you which components are likely to be affected.
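To make that traceability idea concrete, here’s a quick sketch of my own; the requirement IDs and component names are invented and have nothing to do with the actual FBI system.  Each requirement maps to whatever satisfies it one level down, and a transitive walk answers the question “what does this change touch?”:

```python
# A toy requirements traceability map. Each key traces to the system
# requirements or components that satisfy it one level down.
# All IDs and names are made up for illustration.

traces = {
    "BIZ-01: agents can share case data": ["SYS-10", "SYS-11"],
    "SYS-10": ["search-service", "case-database"],
    "SYS-11": ["access-control", "audit-log"],
}

def impacted(requirement):
    """Return everything downstream of a changed requirement."""
    hits = []
    for child in traces.get(requirement, []):
        hits.append(child)
        hits.extend(impacted(child))
    return hits

# A change to the business requirement immediately shows which system
# requirements and components need to be re-examined and re-costed.
print(impacted("BIZ-01: agents can share case data"))
# ['SYS-10', 'search-service', 'case-database', 'SYS-11', 'access-control', 'audit-log']
```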

The waterfall method I mentioned above is simply one where you do requirements analysis, design, coding, testing, and delivery in that order, all preplanned and laid out at the beginning.  Each step has to finish before the next one starts, and the customer doesn’t see anything until testing or delivery.  If there is any doubt about the requirements, this kind of method is likely doomed to failure, or will require heroic efforts from the development team to save, since any misunderstandings can’t be fixed until after the fact, and then only at great effort and cost.

Since we saw such a spectacular failure with this project, perhaps the FBI should consider an iterative approach instead.  Here, once the requirements are analyzed and agreed to by the stakeholders, design and coding begin on some core part of the system, which is then presented to the customer as a prototype.  The customer evaluates the prototype, and any feedback is incorporated into the next iteration, which also adds some new features.  This process repeats as needed until the system is complete.
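In rough outline, the loop looks something like this; the feature names and the stand-in feedback function are placeholders of my own, since the point is only the shape of the process:

```python
# A toy model of the iterative cycle: build a slice, show it to real
# users, fold their feedback into the next round. Everything here is
# a placeholder for illustration.

backlog = ["core case file", "full-text search", "case sharing", "audit trail"]
delivered = []

def demo_to_stakeholders(features):
    # Stand-in for the real step: field agents and other stakeholders
    # exercise the prototype and report what does and doesn't work.
    return f"feedback on {features[-1]}"

iteration = 1
while backlog:
    feature = backlog.pop(0)        # the next agreed-on slice of the system
    delivered.append(feature)       # design, code, and integrate it
    feedback = demo_to_stakeholders(delivered)
    print(f"Iteration {iteration}: built {feature!r}, got {feedback!r}")
    # In a real project, feedback would reshape the backlog here,
    # adding, dropping, or reordering features for the next pass.
    iteration += 1
```

The crucial difference from waterfall is that the customer sees something working after every pass, so a misunderstanding costs one iteration instead of $100 million.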

In addition to an iterative approach, they should also consider involving the real stakeholders in requirements gathering and review.  That means real, live field agents, in addition to their chain of command and the IT staff.  Everyone who touches the system in their job must be considered, and there is no better way to do that than to actually talk to them.  This may be difficult in the government procurement arena, where these things are typically sent out as requests for bids with the requirements predetermined by some government functionary.  But if the FBI can break through that mentality, they may have a chance of delivering a system that actually works.

Link via Slashdot.