October 12, 2021 By Harley Davis 13 min read

Process mining gives you the tools and methodologies that you need to unlock the data that shows how processes work, how people do their job and where problems are coming from. 

Covered in this chapter

  • What is process mining?
  • Data collection
  • Analytics
  • Taking action


Imagine this:

  • You are the back-office manager in a bank, responsible for the teams that open, manage and close customers’ accounts. Your SLAs have been sliding for several months, with times for opening and closing accounts outside regulatory boundaries and customer complaints piling up. You need to figure out what is happening and do something about it quickly.
  • You are CIO in a major corporation, with a large IT help desk team responsible for ensuring that tickets are dispatched and solved expeditiously. The number of tickets keeps growing as your company uses an ever-growing number of complex applications — but your budget is stable. The NPS score for the help desk is going down. How do you make sure you can keep up with the growth with a team that stays the same size? Where do the issues come from that are causing delays?
  • You are responsible for a purchasing and accounts payable department. Despite all your efforts, you keep hearing reports of maverick buying outside company guidelines. And many of your suppliers are being paid late, and the penalties are starting to take a toll. Where do the problems come from? How can you solve them systematically?

For all these problems, it seems like increased automation could help. After all, you can’t ask your teams to keep doing more in the same amount of time, and more streamlined automated processing could solve some of the issues. Automation can handle the routine issues and thus help free up people’s time to work on harder issues and improve customer service.

Maybe you have brought in external consultants who have told you about the wonders of robotic process automation (RPA), AI-powered decision management, and business process management – and those things seem to work elsewhere. Maybe you have used RPA on a specific task, but it did not improve the overall process. Your business analysts have drawn up process maps and interviewed employees and have some ideas on where the issues stem from. But the investment in automation needs a clear business case for your management to agree to spend money ahead of seeing results — and so far, the business case is a bit wishy-washy. You are not sure the ROI will really be there.

What is missing in this scenario?  

In all the examples above, management has a good intuition that something needs to be fixed, and the KPIs used to measure the business are showing there is a problem. But discovering the true roots of the problem and ensuring that proposed solutions will have the right impact requires something else — actual data about how business processes are being executed today, root cause analysis of this data and simulation of the impact of automations on an accurate model of how the business works.  

Process mining brings that missing link to the table.

Process mining gives you the tools and methodology that you need to unlock the data that shows how processes work, how people do their job and where problems are coming from. It gives you analytics to dig deeper into the business and uncover where automation and other process changes can have the biggest impact. It can then simulate how the business would operate with the automations in place, letting you focus on the solutions that maximize expected results, radically increasing your confidence that your investment in automation will really pay off. Then you can use the same tools to measure the improvement against the estimates and help you continue your journey to automation.

In this chapter, we will describe the various steps in process mining and some of the analyses available to help you create your own data-driven roadmap to automation. We will be using IBM Process Mining, the IBM-enhanced version of myInvenio, as the tool of choice. IBM Process Mining has an especially rich set of analytics and simulation capabilities, with links to the rest of the IBM Automation portfolio. It includes capabilities like business rule mining, task mining, multilevel process mining, reference model comparisons and the ability to create simulation data for a process model to get insights independently of historical data.

What is process mining?

The overall paradigm of process mining is straightforward. Look at this diagram:

The idea is to go from process execution data (found in either system logs or recorded from people’s desktops), to analyses of that data to help understand how the process works, to discovering where there are meaningful opportunities for automation, to simulating the impact of the proposed changes using the model created during the analysis. Then you can create the automations that will have the most impact using RPA, decision management and the other technologies in the automation toolbox. To close the loop, you can then measure the impact of the changes by gathering new data from the updated process and repeating the cycle.

Let’s look at each of these areas in a little more detail, then dive into some of the analyses and tools you can use to dig into your business and IT processes.

Data collection

The first step in doing process mining is to gather the data that will be needed for the analysis. This is usually the most time-consuming part of a process-mining project. You will need to figure out where the data sits, how to access it and how to format it in a way that the process mining tool can use.

We are looking for data that shows how people execute processes. There are two primary sources of this data – system logs from systems that people interact with and records of the actions that people do on their desktops when engaged in executing a process.  

System logs

Systems that people use include ERP systems, CRM systems, IT ticketing systems, accounting systems and so on. You first need to do an inventory of the systems people use in the process you are trying to analyze and the kinds of information these systems store. Often, these systems put an entry in a log or database whenever someone executes a transaction or change with them. We are looking for these transaction or event logs. The data should include the action executed (which task was being done), an ID of the process instance being executed (typically a contract number, client number, ticket number or similar), a timestamp for when the event occurred, who executed the action (the user ID), and possibly other fields useful for the analysis, such as how long the action took or its outcome.
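To make the required shape concrete, here is a minimal sketch of such an event log in Python. The column names (`case_id`, `activity`, `timestamp`, `user`) and the data are illustrative assumptions, not any specific tool’s schema:

```python
import csv, io

# A minimal event log in the shape most process-mining tools expect:
# one row per event, with a case ID, activity name, timestamp and user.
# Column names and rows here are illustrative, not a real system export.
raw = """case_id,activity,timestamp,user
C-1001,Authorization Requested,2021-03-01T09:15:00,alice
C-1001,BO Service Closure,2021-03-03T11:02:00,bob
C-1002,Authorization Requested,2021-03-01T10:40:00,alice
"""

events = list(csv.DictReader(io.StringIO(raw)))

# Group events by case ID to reconstruct each process instance's trace.
cases = {}
for e in events:
    cases.setdefault(e["case_id"], []).append(e["activity"])

print(cases["C-1001"])  # ['Authorization Requested', 'BO Service Closure']
```

Once each case’s events are grouped and ordered by timestamp, everything else in process mining (maps, variants, KPIs) is derived from these traces.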

This phase of the project needs the involvement of the folks in IT who understand how these systems work, where the data can be accessed and what its format is so that it can be read and transformed into the right format for the process-mining tool. For some systems, such as SAP and others, the process-mining tool helps by providing predefined connectors that do a lot of this work.

Task mining

The other main source of useful data is watching what people do on their desktops when executing processes. We can install recorders on the desktops and configure the recorder to store events whenever a user does something related to their process execution job (and ignore everything else they do to ensure privacy for unrelated activities). Then, we can send these event logs to a central server where they are consolidated with all the other people’s records; when enough data has been gathered, it can then be fed to the process-mining tool for analysis.
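The privacy filter described above can be sketched as a simple allowlist: the recorder keeps only events from applications known to be part of the process. The application names and event shape below are hypothetical:

```python
# Hypothetical desktop-event filter: keep only events from allowlisted
# applications, so unrelated activity is never recorded (privacy).
ALLOWED_APPS = {"SAP GUI", "ServiceNow"}

def keep(event):
    """Return True if the event comes from a process-relevant application."""
    return event["app"] in ALLOWED_APPS

recorded = [
    {"app": "SAP GUI", "action": "submit_form"},
    {"app": "Web Browser", "action": "personal_email"},  # dropped
    {"app": "ServiceNow", "action": "close_ticket"},
]
kept = [e for e in recorded if keep(e)]
print([e["action"] for e in kept])  # ['submit_form', 'close_ticket']
```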

System logs and task mining are complementary ways of getting historical process execution data and are often used for different purposes. System logs are good for doing an overall process analysis and seeing the big picture, especially for processes that are centered on modifying data in one system or a small number of systems (e.g., ERP systems for accounting, CRM systems for sales and marketing processes or IT ticketing systems for help desks). Task mining is good for getting down to the details – exactly what actions does a person take to execute a task, under what circumstances, with what variations and so on. This is very helpful when you are considering automating these actions using RPA tools that focus on that fine-grained detail.

Analytics

Once the event data has been prepared and fed into the process-mining tool, it can analyze the data to produce a set of visualizations that can be used to pinpoint problems.

One basic analytics visualization is the process map, which shows the set of tasks executed during the process and how they are connected — which ones follow which other ones, the order in which they are executed, etc. Because the process may follow a different sequence depending on the different types of cases, the process map shows the different “process variants” that have been used. In addition to showing the process map, the analytics can show which tasks and which variants are executed most often, which ones take the most time or which ones cost the most. This is your first clue to finding issues — the tasks and sequence variants that are seen most often, that take the most time or that cost the most are good candidates for further investigation.

The image below shows how this works. Each task in the process is a labeled box. The darker the color of the box, the more often it is executed in the data set provided. The number in the box shows the number of times the task was executed. For example, the task “Authorization Requested” was executed 46,415 times. The arrows indicate which tasks follow which others. In our case, the “Authorization Requested” task led to the “BO Service Closure” task 44,560 times (presumably, the remaining 1,855 cases were either rejected or still pending when the data was analyzed):
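Under the hood, these frequencies come from counting tasks and “directly-follows” pairs across all case traces. A minimal sketch, using toy traces rather than the article’s dataset:

```python
from collections import Counter

# Derive process-map statistics from case traces: how often each task
# runs, and how often one task directly follows another (the map's edges).
# Traces are toy data, not the 46,415-case dataset in the article.
traces = {
    "C-1": ["Authorization Requested", "BO Service Closure"],
    "C-2": ["Authorization Requested", "BO Service Closure"],
    "C-3": ["Authorization Requested"],  # still pending: no closure event
}

task_counts = Counter()
edge_counts = Counter()
for trace in traces.values():
    task_counts.update(trace)
    edge_counts.update(zip(trace, trace[1:]))  # consecutive task pairs

print(task_counts["Authorization Requested"])                          # 3
print(edge_counts[("Authorization Requested", "BO Service Closure")])  # 2
```

The gap between a task’s count and its outgoing edge counts is exactly the “remaining cases” the article mentions: instances that took another path or were still pending.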

This starts to become even more interesting when we look at the time spent between various tasks:

The darker colors indicate that more time is spent on that task. We can see a couple that pop out as problematic — “Pending Request for Network Information” and “Network Adjustment Requested” seem to take a lot of time. Perhaps there is something we can do to automate the network information and network adjustment requests that would speed them up? You can see how powerful this kind of information is.

You can also see which paths through the tasks are happening with what frequency, which is another clue to finding issues:

In this example, the highlighted path is the most frequent way this process is executed – used 27.8% of the time and taking 19 days and 13 hours, on average. If we can improve this variant first, it will probably have the biggest impact on the overall process. In this variant, there is no Network Information Requested or Network Adjustment Requested, so maybe our previous idea was not, in fact, the right place to look. If those tasks are not executed very often, improving them won’t help the overall KPIs, even if we improve them by a lot.
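Computing variants is straightforward: a variant is the exact sequence of tasks in a case, so grouping identical sequences gives each variant’s share of the traffic. A sketch on toy traces:

```python
from collections import Counter

# A process variant is the exact task sequence of a case; counting
# identical sequences gives each variant's frequency. Toy data only.
traces = [
    ("Request", "Approve", "Close"),
    ("Request", "Approve", "Close"),
    ("Request", "Reject"),
    ("Request", "Approve", "Rework", "Approve", "Close"),
]

variant_counts = Counter(traces)
total = len(traces)
for variant, n in variant_counts.most_common():
    print(f"{n / total:.0%}  {' -> '.join(variant)}")
```

Sorting by frequency immediately surfaces the variant whose improvement would touch the most cases, which is the prioritization logic described above.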

But how can we see these KPIs? The tool lets us compute KPIs from the available data and see how those KPIs vary across the different areas we are examining. See this image:

Here, we can see how a KPI (namely, maverick buying) can be displayed in the context of a process variant. In this example, we can see both the amount of maverick buying over time and a breakdown of maverick buying by vendor. This could point us to some products and vendors to focus on to reduce maverick buying, which might lead us to introduce automatic alerts or process redirects when purchase orders for those vendors are detected.

Another clue to finding process irregularities comes from conformance checking — finding process variants that do not correspond to a predefined process map that shows how the process should be working. In IBM Process Mining, you can upload a “reference model” that indicates the prescribed way a business process should be executed; for instance, by using a process map made with BlueWorks Live. The analytics can then compare that reference model to the actual data and point out inconsistencies. These are typically good places to start looking for errors and problems.
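The simplest form of conformance checking can be sketched as a transition test: a trace conforms if every directly-follows pair in it is allowed by the reference model. Real tools use richer alignment techniques; this toy version only illustrates the idea:

```python
# Sketch of transition-based conformance checking: a trace conforms if
# every consecutive task pair appears among the reference model's edges.
# (Production tools compute full alignments; this is the simplest check.)
reference_edges = {
    ("Request", "Approve"),
    ("Approve", "Close"),
    ("Request", "Reject"),
}

def conformant(trace):
    return all(pair in reference_edges for pair in zip(trace, trace[1:]))

print(conformant(["Request", "Approve", "Close"]))            # True
print(conformant(["Request", "Approve", "Rework", "Close"]))  # False
```

Flagging the non-conformant traces and totaling their cost per case is then a matter of filtering and aggregation over the event log.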

The image below shows how the conformance checking is displayed in the tool. In this example, we can see that 39,300 cases are non-conformant and the red tasks indicate which ones are unexpected for particular process variants. In this case, we can also see that the tool has calculated that $3,439 is spent per non-conformant case, based on time spent and the cost of people’s time:

The tool has many other ways to display information and drill down into the data. A couple of these include the following:

  • Business rule induction to indicate how the process moves from task to task: The tool can figure out the business rules that determine how the process moves along. These insights are valuable for finding the set of circumstances that determine how your business takes process decisions and can be the basis for automated process decisions using IBM Automation Decision Services (and improved decision-making overall).
  • Role-based analysis: The tool can slice the data according to roles, helping you understand how different people in different roles execute their parts of the process differently and where there are bottlenecks. This can indicate where particular departments might have issues and help you adjust staffing and expertise levels in your organization.
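To make the business rule induction idea concrete, here is a deliberately naive sketch: given toy cases with an amount and the task the process routed to next, find the amount threshold that best separates the two routes. Real tools mine far richer, multi-attribute rules; the data and rule form here are assumptions for illustration:

```python
# Naive rule induction: find the amount threshold that best predicts
# which task the process routes to next. Toy data; real mining tools
# induce multi-condition rules over many case attributes.
cases = [
    (120, "Auto Approve"), (300, "Auto Approve"), (450, "Auto Approve"),
    (900, "Manual Review"), (1500, "Manual Review"), (2200, "Manual Review"),
]

def accuracy(threshold):
    # Candidate rule: amount < threshold -> Auto Approve, else Manual Review
    hits = sum((amt < threshold) == (task == "Auto Approve")
               for amt, task in cases)
    return hits / len(cases)

candidates = sorted({amt for amt, _ in cases})
best = max(candidates, key=accuracy)
print(best, accuracy(best))  # 900 1.0
```

A rule like “amount < 900 routes to Auto Approve” is exactly the kind of insight that can seed an automated decision service.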

There are many more. As you gain expertise with the tool you will discover many new ways of gaining insight into your business.

Taking action

Once you have understood your process better, it is time to figure out what to do to improve the process. There are many things you can do, but let’s focus on one area that is particularly useful with process mining — how to use RPA bots to automate bottleneck tasks to improve the overall process flow.

RPA bots, for the most part, replicate repetitive human actions on the desktop so they can be done more easily, thus freeing up people to spend their time on activities that require deeper thinking. Good candidates for RPA bots are tasks that are done often, that are repetitive, where you can save time by automating them and that have a positive overall impact on the business KPIs (for instance, overall process resolution time that can lead to better customer service or alignment with regulatory deadlines).
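The candidate-selection logic above can be sketched as a simple scoring: expected saving per task is executions times human minutes saved times labor cost per minute. All numbers below, including the cost rate, are illustrative assumptions:

```python
# Hypothetical scoring of RPA candidates: expected saving per task is
# executions x human minutes saved per execution x cost per minute.
COST_PER_MINUTE = 0.80  # assumed fully loaded labor cost, for illustration

tasks = [
    {"name": "BO Service Closure", "executions": 44_560, "minutes_saved": 3.0},
    {"name": "Manual Review",      "executions": 1_200,  "minutes_saved": 15.0},
]

for t in tasks:
    t["saving"] = t["executions"] * t["minutes_saved"] * COST_PER_MINUTE

ranked = sorted(tasks, key=lambda t: t["saving"], reverse=True)
print(ranked[0]["name"], round(ranked[0]["saving"], 2))
# BO Service Closure 106944.0
```

Note how a frequent, short task can dominate a rare, long one: frequency is usually the biggest lever, which is why process mining’s execution counts matter so much when picking RPA targets.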

Once you create an RPA bot, you can replicate it and execute it on different virtual desktops, and you can mix human activity with automated activity, depending on how much you want to scale the RPA solution.

When you use task mining with IBM Process Mining, the tool does a lot of the work for you in helping determine the best task candidates for RPA. Take a look at this screenshot:

Here, the process-mining tool has pointed out two activities whose automation with a bot might have a big positive impact on the overall process: Network Service Closure and BO Service Closure. Furthermore, there are parameters that can be adjusted to estimate the overall impact depending on what percentage of these tasks are automated with a bot, and how many variant versions of these tasks are included in the bot. With the current parameters, the estimated savings per process instance is $369.36, and the overall savings in terms of human labor available for other activities is $43,054.76 for the 116,566 cases in this data set. These are definitely viable candidates for effective automation.

Now, we can also simulate the overall process execution in this RPA scenario. See the image below:

Here, we have run a simulation with a certain percentage of human tasks in the original data replaced with the RPA bots that execute the tasks much more quickly. We can see that in this simulation run, the overall average process execution time for this account closure example went down from 18 days and 16 hours to 16 days and 6 hours — a 10% overall time savings that has direct impact on customer satisfaction and regulatory compliance. The ROI for automation is clear and well-defined in terms of both financial savings and KPI improvement.
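A what-if simulation of this kind can be sketched as a small Monte Carlo experiment: each closure task is handled by a bot with some probability, and we compare mean cycle times with and without automation. The durations and the 80% automation share below are assumptions, not the article’s dataset:

```python
import random

# Toy what-if simulation: the closure task normally takes `human_hours`;
# with probability `bot_share` a bot handles it in `bot_hours` instead.
# All durations and shares are illustrative assumptions.
random.seed(42)

def simulate_case(bot_share, human_hours=48.0, bot_hours=0.5,
                  other_hours=400.0):
    closure = bot_hours if random.random() < bot_share else human_hours
    return other_hours + closure

def mean_cycle_time(bot_share, runs=10_000):
    return sum(simulate_case(bot_share) for _ in range(runs)) / runs

print(round(mean_cycle_time(0.0), 1))  # baseline: 448.0 hours
print(round(mean_cycle_time(0.8), 1))  # with 80% of closures automated
```

Comparing the two means gives the expected time saving before any bot is built, which is precisely what makes the automation ROI defensible.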

The next steps are to create and deploy the RPA bot, and then start investigating the real impact of the change using the same process-mining tool and setup. This will let us both determine if the ROI was actually met and point out the next set of automations to improve the business process. This repeated process monitoring, measurement and automation is the motor that will drive ongoing business improvement — and that is one of the main drivers behind the excitement around process mining as the next technology advance in automation.

Conclusion

We have seen how we can take business process execution data, create detailed analytics for drill-down to understand how the business really operates and then use those analytics to create automations that drive significant improvements to our business, both in terms of cost savings and customer satisfaction improvements.

But we have really just scratched the surface of what we can do with process mining — here are some other areas where process mining can help:

  • We looked at an example from banking operations with account management, but the potential of process mining is much vaster. It can be used for all processes with human activities. IT management is a typical example — managing the processes around help desk tickets, finding categories of tickets that can be better automated, finding manual steps in IT systems management that will make a real impact on operations. Process mining can be used to identify points of potential integration between different systems by analyzing where data gets held up today. It can be used in software development to identify inefficiencies in development, test and deployment of software.
  • We focused on RPA bots as an automation — process mining can also identify other types of automation. A few examples:
    • Decisions: Often, processes are inefficient because we rely on people to make decisions that could be automated using a combination of machine learning and business rules. We can see in the process-mining analytics where there are loops in a process — tasks that are redone multiple times due to human error. We can spot approvals and other decisions that are taking a long time. These are all signs that automated decisions may improve the overall process.
    • Business process management: Many processes would benefit from a better-coordinated and choreographed business process, with clear tracking of progress, task inboxes for employees and flexible but coordinated case management. Process mining can create the outlines for implementing BPM — the process maps generated by the analytics engine can be exported as BPMN diagrams that serve as the basis for a business process management project.
    • System-level integration: RPA is great for integrating systems that have only a UI (and no underlying APIs) and for simulating human interaction on a desktop. But a deeper integration of systems using their APIs and exchanging data directly with no human intervention at all is even better, when it is possible. Fortunately, process mining can also pinpoint where such system-level integration would make a difference and can lead to more successful projects.
  • We started from available data to understand process behavior, but it is also possible to use IBM Process Mining’s powerful simulation engine, along with the ability to import process models to do full process simulation in the absence of any actual data. By importing a model from, for example, IBM BlueWorks Live and providing some basic information about anticipated task execution times, IBM Process Mining can estimate how processes are likely to behave and where bottlenecks will occur, leading to opportunities for process optimization prior to putting the processes into production in your organization.

You can now understand why process mining is generating such excitement in the market and how it is really the best first step in your journey to automating your business.

Learn more

Make sure you check out The Art of Automation podcast, especially Episode 17, from which this chapter came.

 

