BPM users are familiar with the process lifecycle. It starts with process modeling. Business analysts document the as-is process, and analyze its shortcomings – handoff inefficiencies, exception handling, inconsistent application of policies and rules. From the model they can define KPIs for the process and estimate their value using simulation analysis. Then they model proposed process improvements, and project the expected benefit using simulation, before committing resources to the implementation.
The next step in the standard lifecycle is process design. Unlike traditional software development, the model is not just a business requirements document; it generates implementation artifacts that are completed by developers in the process design environment. The completed design is then deployed to a process engine, which automates the process implementation – routes tasks to human participants, executes business rules, and invokes services on remote business systems. And as each activity completes, the engine generates events representing snapshots of instance data.
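To make those event snapshots concrete, here is a minimal sketch of what an engine might emit as each activity completes. The field names are illustrative assumptions, not any vendor's actual schema:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ActivityEvent:
    """Snapshot emitted when a process activity completes (hypothetical schema)."""
    process_id: str        # which process definition this belongs to
    instance_id: str       # the running process instance
    activity: str          # the step that just completed
    started_at: datetime
    completed_at: datetime

    @property
    def duration_seconds(self) -> float:
        """Instance-level timing BAM can compute from the snapshot."""
        return (self.completed_at - self.started_at).total_seconds()

# one event for one completed activity in one instance
event = ActivityEvent(
    process_id="order-to-cash",
    instance_id="OC-1001",
    activity="credit-check",
    started_at=datetime(2009, 5, 1, 9, 0, tzinfo=timezone.utc),
    completed_at=datetime(2009, 5, 1, 9, 45, tzinfo=timezone.utc),
)
print(event.duration_seconds)  # 2700.0 seconds, i.e. 45 minutes
```

A stream of these per-activity events, one per completed step per instance, is all the downstream monitoring component needs to work with.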
Next, the BPM suite’s BAM component uses that data to compute instance-level performance in real time, then aggregates those metrics into process-level KPIs – counts and means, volume and velocity – including user-defined business measures. The KPIs are displayed graphically in management dashboards, and BAM continuously monitors them with rules – again, user-defined – that trigger alerts or even automated escalation actions if the KPIs drift out of their target range.
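A rough sketch of that aggregation-and-monitoring step, assuming the simplest possible KPIs (a count and a mean) and one user-defined threshold rule – all of the names and numbers here are hypothetical:

```python
from statistics import mean

# instance-level cycle times in hours, as BAM might derive them from engine events
cycle_times = {"OC-1001": 6.5, "OC-1002": 4.0, "OC-1003": 30.0, "OC-1004": 5.5}

# roll instance metrics up into process-level KPIs: a count and a mean
kpis = {
    "completed_instances": len(cycle_times),
    "mean_cycle_time_hours": mean(cycle_times.values()),
}

# a user-defined rule: alert if the mean cycle time drifts above its target
TARGET_MEAN_HOURS = 8.0
alerts = []
if kpis["mean_cycle_time_hours"] > TARGET_MEAN_HOURS:
    alerts.append(
        f"mean cycle time {kpis['mean_cycle_time_hours']:.1f}h "
        f"exceeds target {TARGET_MEAN_HOURS}h"
    )

print(kpis)    # mean is 11.5h across the four instances
print(alerts)  # the rule fires, since 11.5 > 8.0
```

In a real BAM component the rules would be declarative and user-editable rather than hard-coded, and a firing rule might kick off an escalation action instead of just appending to a list – but the shape of the computation is the same.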
BAM lets users take corrective action quickly, and provides actual data that can be used to iteratively improve process performance with another round of modeling and design.
At least that’s the way it’s supposed to work.
But for many shops, particularly in large enterprises, the prospect of modeling their processes end-to-end is just too high a mountain to climb. Which processes? There are hundreds. Which one offers the biggest potential return from BPM? User-defined metrics and rules? How would you even know where to start? If you need to model everything first to get going… well, as they say in Maine, you just can’t get there from here.
Recently I’ve begun hearing from BPMS vendors about an alternative BPM lifecycle that provides an easier path for such companies: Measure the existing process first, and then model after that. When I first heard about this, it made no sense. How can you generate and analyze process metrics if you don’t have an executable process design? And with most BPMS offerings, you probably can’t. But others, notably webMethods and Lombardi, let you instrument process activities in your current as-is process – without executing anything on the BPMS process engine – and capture process metrics end-to-end.
For some companies that have taken this path to BPM, the results of that initial measurement are a revelation. They’ve never before had the ability to track cycle times, costs, or errors at the end-to-end process level, since each backend system involved sees only its own little piece of the process. Once they have the performance data, they suddenly have a basis for discussion. Is the baseline performance good enough? What is the cause of the outlying data points? Are these problems that need to be addressed? From that measurement, these organizations are able to focus on specific processes and specific performance improvement goals. From there, the mountain of modeling seems much more climbable.
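One way to picture that analysis: given the end-to-end cycle times captured by instrumentation, establish a baseline and flag the outlying instances for discussion. The instance IDs, numbers, and the simple median-based threshold below are all arbitrary illustrations:

```python
from statistics import median

# end-to-end cycle times in hours, captured by instrumenting the as-is process
cycle_times = {"A-01": 5.0, "A-02": 6.0, "A-03": 5.5, "A-04": 48.0, "A-05": 6.5}

# the median gives a baseline that is robust to the extreme instances
baseline = median(cycle_times.values())

# flag instances taking more than three times the baseline as outliers
outliers = [iid for iid, t in cycle_times.items() if t > 3 * baseline]

print(f"baseline {baseline:.1f}h, outliers: {outliers}")
# baseline is 6.0h; A-04 at 48h is the case worth investigating
```

The flagged instances are exactly the conversation starters the measurement provides: is A-04 a data error, an exception path, or the policy bottleneck worth modeling first?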
If your existing processes are largely automated but still out of control, measure-then-model might be the best way for your company to break into the BPM lifecycle.