How We Reduce Risk in Our Software Projects

Publish Date: 10 November 2022

By Rob Lewis, CEO, Radical Imaging

“How much will it cost?” and “When will it be done?” Sometimes potential clients ask us these questions even before we have a basic understanding of what their software project is supposed to accomplish. How could they expect us to estimate the effort required without first knowing what we need to build?

This would never happen in construction. Consider the following hypothetical exchange between a homeowner and a builder:

Homeowner: I want you to make an addition to my house. It should go here and be 20’x40’. How much will it cost and when will it be done?

Builder: What kind of addition did you have in mind? What will you be using it for?

Homeowner: I already told you how big it should be. Why can’t you just tell me approximately how much it will cost?

Builder: Because I don’t know yet. First, we’ll need to design your addition; otherwise, we can’t know what materials and labor will be involved. Let’s spend some time talking about what you want.

Homeowner: Haven’t you done this before? I’ll find someone else who knows what they are doing.

As absurd as that sounds, it dawned on me that something important was going on. Decision makers weren’t being difficult; they were just really concerned that their project would be late and over budget. And they had good reason to be, since software projects are notorious for exactly that. It happens so often that it’s almost expected. I’d bet most of them had lived through it themselves, probably more than once.

Software’s flexibility is its most useful attribute. It can shift and evolve to respond to the changing needs of a business in a fast-moving environment. However, this ability to change anything at any time exists in tension with the need to minimize risk by planning carefully and constructing with discipline. I struggled to find a good balance between the competing concerns of business agility and conservative engineering, and popular software methodologies didn’t help much.

Waterfall-like processes rely too heavily on up-front documentation of detailed requirements. That’s usually the wrong balance because it makes the whole project one deliverable, pushing the final result out too far, which is risky. As requirements unavoidably drift, you end up in endless discovery / design cycles that take too long, cost too much, and may mean missing the market opportunity, killing the project with nothing to show for it.

Agile processes prioritize frequent stakeholder involvement and the delivery of working software over thorough discovery and “ceremony.” That feels great at the beginning: stakeholders see quick results while system complexity is still low, so development progresses unimpeded. But as the system grows, the problem becomes apparent. Because the system design is “discovered” and evolves throughout construction (emergent design), technical debt is unavoidable, requiring endless refactoring or, worse, leading to unmanageable complexity. Either way, you lose control over the time and cost parameters of your project because it’s nearly impossible to predict and quantify what it will take to deal with technical debt. That said, at least a killed agile project leaves the stakeholders with some working software that is hopefully of some value to the business. Agile can be a good option when the best possible product must be had at all costs, but few businesses operate in such a time / cost vacuum.

Being the optimizer that I am, I wanted a methodology that could strike a better balance for real-world projects: one that supported on-time, on-budget delivery of maintainable software solutions while simultaneously enabling the business to respond to its competitive environment with the required lightness of foot. To make a long story short, I wound up attending trainings with a company called IDesign, whose founder, a recognized leader in modern software architecture and engineering, had spent a lot of time thinking about this problem and had developed his own methodology to solve it. I have since sent many of our staff to these trainings, and we have incorporated the ideas into our software development process with great results. The remainder of this post describes a few highlights of what’s different about the process we use when planning and delivering software projects.

Discovery and System Design

We begin every software project with a discovery, design, and planning phase. While that is similar to what happens in a waterfall process, there are some key differences in how we approach it that allow us to avoid the common pitfalls.

Core Use Cases Instead of Detailed Requirements

During the design and planning phase, we focus on learning about the business, its future, and the project’s overall goals. This allows us to zero in on the system’s core use cases, i.e., the main things the system is used for.

We use core use cases instead of requirements to express desired system functionality because they tie individual requirements and features together into user-centric workflows. This helps us focus on what really matters to the business and keeps stakeholders engaged so we can get meaningful input from them.

Just-In-Time Detailed Design

After some initial discovery, we begin designing the system as a set of components that communicate with each other to deliver end-to-end system functionality. But we don’t yet decide on all of the behavior inside each component in detail, nor do we specify all of the interfaces and data payloads at this time. Just as with detailed requirements, it would be wasteful to do detailed design at this stage.

Working at this level of granularity keeps our design process iterative and nimble while avoiding wasted effort on detailing low-level requirements and design elements that are likely to change before construction. Iterative design is faster and cheaper because it reduces the need for expensive, disruptive design changes during construction.

Encapsulate Change

During system design, we seek to understand how the system will need to evolve over time. No one can accurately predict the future, but if we can proactively identify areas that are likely to undergo relatively rapid change, we should do whatever we can to isolate them from the rest of the system. While not always straightforward, this sort of volatility and dependency analysis is highly valuable because it protects the investment in the system from future large-scale rework. Put another way, it should be easy to modify the system to meet future needs, without small changes taking weeks or months or, worse, forcing us to scrap the whole system and start over.
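
As a concrete illustration, here is a minimal Python sketch of what that isolation can look like; the report-delivery scenario and all the names in it are hypothetical, not taken from any real design:

```python
# A volatile capability (how reports get delivered) hidden behind a
# stable interface, so callers never depend on the changing part.
from abc import ABC, abstractmethod

class ReportDelivery(ABC):
    """Stable seam around an area we expect to change rapidly."""
    @abstractmethod
    def deliver(self, report: bytes, recipient: str) -> None: ...

class EmailDelivery(ReportDelivery):
    def deliver(self, report: bytes, recipient: str) -> None:
        print(f"emailing {len(report)} bytes to {recipient}")

class PortalDelivery(ReportDelivery):
    def deliver(self, report: bytes, recipient: str) -> None:
        print(f"posting {len(report)} bytes to {recipient}'s portal")

def close_out_study(delivery: ReportDelivery) -> None:
    # The business workflow is written once, against the abstraction.
    delivery.deliver(b"findings...", "referring physician")

close_out_study(EmailDelivery())   # today's mechanism
close_out_study(PortalDelivery())  # tomorrow's, with no caller changes
```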

Validate the Design With Core Use Cases

Once we have a candidate design, we refer back to our core use cases, diagramming selected execution paths to illustrate how the system’s components will interact to meet the needs of each use case. At this stage, it’s useful to explore variants of the core use cases to see whether they can be expressed as user interactions, service calls, data exchanges, and ultimately system behavior that produces the expected result. If the design is valid, every use case can be shown this way, with a specific path through the system’s components that we know will work before we have to construct the system to prove it. This is the time to discover design flaws and make adjustments, as it’s far more costly to do so later.
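
The same validation idea can be sketched in a few lines: describe the candidate design as the component interactions it allows, describe a use case as a call sequence, and check that every step is supported. The component names and the “view study” use case here are hypothetical:

```python
# Candidate design: which component may call which.
allowed = {
    "client": {"study manager"},
    "study manager": {"storage access", "notification engine"},
    "storage access": set(),
    "notification engine": set(),
}

# A core use case expressed as an execution path through the components.
view_study = [("client", "study manager"),
              ("study manager", "storage access")]

for caller, callee in view_study:
    assert callee in allowed[caller], f"design gap: {caller} -> {callee}"
print("use case 'view study' is supported by the design")
```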

Estimate Thoughtfully

Careful planning depends on accurate estimation, which is sort of an oxymoron. Rather than pretend we can “nail it”, we respond to the inherent uncertainty by using various techniques to minimize its effect on our overall planning accuracy.

For one thing, we use multiple estimation methods in combination, comparing the results against each other to spot and question obvious outliers. Senior developers generally have a good idea of how long things take based on their experience. Broadband estimation (soliciting estimates from many team members) is a totally different way to approach the problem and often reveals new information as well as assumptions that should be challenged. Estimation tools like COCOMO help us make sure nothing is off by an order of magnitude.
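
To make the outlier-spotting step concrete, here is a minimal sketch; the estimators, their numbers, and the 1.5x threshold are all hypothetical:

```python
from statistics import median

# Broadband estimates for one task, in weeks, from four team members.
estimates = {"alice": 4, "bob": 5, "carol": 4, "dave": 12}

mid = median(estimates.values())
for who, weeks in estimates.items():
    if weeks > 1.5 * mid or weeks < mid / 1.5:
        # An outlier usually means a hidden assumption, not a bad estimator.
        print(f"{who}: {weeks}w vs median {mid}w -- ask what they know")
```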

We estimate in weeks instead of days or hours. At first glance, that can sound like plain guessing. In fact, the opposite is true. For one thing, it’s not possible to know how long a complex task will take to the resolution of an hour without getting into detailed design. We also benefit from the law of large numbers: some tasks will be underestimated this way, others will be overestimated by just as much, evening things out. Besides, we’re not just looking to understand cost, but also to maximize efficiency. Work has a natural week-long cadence. Planning this way minimizes the need for ad-hoc meetings and provides a sense of order and rhythm, making the best use of the entire week.
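
A quick simulation illustrates the evening-out effect; the task durations and the ±50% noise model below are assumptions chosen purely for illustration:

```python
import random

random.seed(1)
true_weeks = [3, 2, 5, 4, 1, 6, 2, 3]   # hypothetical task durations
# Each per-task estimate is off by up to 50% in either direction.
noisy = [w * random.uniform(0.5, 1.5) for w in true_weeks]

worst = max(abs(n - w) / w for n, w in zip(noisy, true_weeks))
total = abs(sum(noisy) - sum(true_weeks)) / sum(true_weeks)
print(f"worst single-task error: {worst:.0%}")
print(f"whole-project error:     {total:.0%}")  # typically much smaller
```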

Project Planning

Now that we have a valid system design and a clear sense of how much effort each component will take to create, we need to start thinking about the best options for organizing a team to construct the system.

Assign Tasks According to Dependencies

By starting at the end result and working our way backwards, we identify the antecedents required for each step, breaking the development tasks into independent work streams so that each path can be assigned to one developer. Grouping dependent tasks this way maximizes development efficiency because each developer can work independently. We also explore the effect of splitting sequential work streams into parallel ones, weighing faster delivery against added cost and risk.
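
Here is a minimal sketch of that grouping step, peeling a hypothetical dependency graph into sequential streams that a single developer can own end to end:

```python
deps = {  # task -> tasks it depends on (all names hypothetical)
    "data model": [],
    "storage service": ["data model"],
    "api layer": ["storage service"],
    "auth service": [],
    "web client": ["api layer", "auth service"],
}
successors = {t: [s for s, ds in deps.items() if t in ds] for t in deps}

assigned, streams = set(), []
while len(assigned) < len(deps):
    # Start a stream at any task whose prerequisites are already assigned.
    task = next(t for t in deps if t not in assigned
                and all(d in assigned for d in deps[t]))
    stream = []
    while task is not None:
        stream.append(task)
        assigned.add(task)
        # Follow a successor that this assignment just unblocked.
        task = next((s for s in successors[task] if s not in assigned
                     and all(d in assigned for d in deps[s])), None)
    streams.append(stream)

for n, s in enumerate(streams, 1):
    print(f"developer {n}: {' -> '.join(s)}")
```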

Plan Against the Critical Path

Once we have diagrammed the network of tasks and their dependencies, we add in our estimates, which reveals the critical path: the longest work stream. Adding up the estimates along the critical path tells us how long the project will take. We make sure to assign the strongest developer to the critical path, prioritizing any support that individual needs over any other activity, since anything that delays the critical path delays the whole project.
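
In code, the critical path falls out of a single forward pass over the task network; the tasks, dependencies, and week estimates below are hypothetical placeholders, not a real plan:

```python
tasks = {  # task: (estimate in weeks, prerequisites), in dependency order
    "data model": (2, []),
    "storage service": (3, ["data model"]),
    "api layer": (4, ["storage service"]),
    "auth service": (3, []),
    "web client": (5, ["api layer", "auth service"]),
}

finish, back = {}, {}  # earliest finish per task; longest predecessor
for task, (weeks, prereqs) in tasks.items():
    start = max((finish[p] for p in prereqs), default=0)
    finish[task] = start + weeks
    back[task] = max(prereqs, key=lambda p: finish[p], default=None)

# Walk back from the task that finishes last to recover the critical path.
step = max(finish, key=finish.get)
path = [step]
while back[path[-1]]:
    path.append(back[path[-1]])
print(" -> ".join(reversed(path)), f"= {finish[step]} weeks")
```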

Quantify Risk

While we try to minimize risk, there is no way to eliminate it completely. During planning, we look at factors that could cause the project to be delivered late or over budget, quantifying the various risks so that any change to the plan can be evaluated through this lens. For example, by looking at how many parallel development activities there are and how many dependencies exist between them, we can see how much risk is tied to activities potentially being delayed. By looking at the slack available in individual tasks and along development paths, we can see how much delay can be tolerated before the project as a whole slips. By converting these into numbers, we can evaluate proposed plan optimizations, making sure they don’t introduce unacceptable risk while attempting to reduce time or cost.
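
Here is a sketch of the slack computation, reusing the same hypothetical network as the critical-path example; the single risk number at the end is just one illustrative way of folding the floats together, not a claim about any standard metric:

```python
tasks = {  # task: (estimate in weeks, prerequisites), in dependency order
    "data model": (2, []),
    "storage service": (3, ["data model"]),
    "api layer": (4, ["storage service"]),
    "auth service": (3, []),
    "web client": (5, ["api layer", "auth service"]),
}

# Forward pass: earliest finish per task.
ef = {}
for t, (weeks, pre) in tasks.items():
    ef[t] = max((ef[p] for p in pre), default=0) + weeks
end = max(ef.values())

# Backward pass: latest finish that still avoids delaying the project.
lf = {t: end for t in tasks}
for t in reversed(list(tasks)):
    for p in tasks[t][1]:
        lf[p] = min(lf[p], lf[t] - tasks[t][0])

slack = {t: lf[t] - ef[t] for t in tasks}
for t, s in slack.items():
    print(f"{t}: {s} week(s) of slack" + ("  <- critical" if s == 0 else ""))

# One way to collapse the floats into a single number (higher = riskier).
vals = list(slack.values())
risk = 1.0 if max(vals) == 0 else 1 - sum(vals) / (len(vals) * max(vals))
print(f"project risk: {risk:.2f}")
```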

Provide Multiple Viable Execution Options

The whole process, from initial discovery to plan delivery, typically only takes us a few weeks. At the end of it, we provide our clients with several plan options that balance time, cost and risk. Having done our analysis thoughtfully, we are confident in our ability to successfully execute each of them. Is time to market more important or is budget the main limitation? Which plan to choose becomes strictly a business decision, and one that can be made without guessing or gambling.

Project Execution

“No plan survives first contact with the enemy.” While we have set the project up for success through thoughtful design and planning, it’s equally important to monitor progress against the plan on a weekly basis. By keeping close tabs on how well the team is tracking to the plan, we can identify, diagnose, and correct problems while they are still easy to fix.

Track Progress Based on Earned Value

How can we track progress accurately when developers are in the thick of it? How often do you get the answer “it’s almost done”? Attempts to estimate completeness (“it’s 80% done,” “it’ll be done in 2 weeks”) fail because they are often wrong, pushing diagnosis and corrective action so far down the road that they become project-level risks. By making task completeness a binary condition (it’s done or it’s not), we get a true picture of project progress: a snapshot of how the project’s current state compares against where the plan says we should be at this time.
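
As a sketch, earned-value tracking with binary completion reduces to a few lines; the tasks, estimates, completion flags, and planned value below are hypothetical:

```python
plan = {  # task: (estimated weeks, done?)
    "data model": (2, True),
    "storage service": (3, True),
    "auth service": (3, True),
    "api layer": (4, False),   # in progress still counts as not done
    "web client": (5, False),
}

total = sum(weeks for weeks, _ in plan.values())
earned = sum(weeks for weeks, done in plan.values() if done)
planned_by_now = 9  # weeks of work the plan says should be done today

print(f"earned value: {earned}/{total} weeks ({100 * earned / total:.0f}%)")
print(f"schedule variance: {earned - planned_by_now} week(s)")  # negative = behind
```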

Embrace Change, But Adjust the Plan

With all of the effort put into designing and planning for a low-risk software project, how do we accommodate a client who wants to change the project’s scope midway through execution? Rather than dread what seems like a fly in the ointment, we treat these requests as opportunities to demonstrate the value of our approach. If we’ve done a good job designing the system, the requested changes won’t require modifications to the architecture. But what about the plan? Can we keep to the original agreed-upon time and cost? Probably not.

The only way to properly understand the impact of a proposed scope change is to redo the plan and see what changes. From there, the revised plan must be approved by the project stakeholders before it replaces the original. This way, we can stand behind the promises we make, 100%.

Radically Engineered Custom Viewers and Workflows

If you find our process appealing and have an upcoming medical imaging software project, let us know how we can help! Learn more about our Managed Outcome Service here or schedule a Discovery Call with us here.
