UiPath is a very flexible software suite, so there are many ways to reorganize and reuse your code. In this guide, we will look at a fictitious business process for creating and changing suppliers in an example web app in order to learn about the different ways in which it could be improved.
To make it more digestible, the guide itself will be split up into several different patterns that you can read at your leisure. This is the top-level document that serves as an introduction to the use case and contains a general discussion. (Also, I couldn’t resist the pun.)
If you’re reading this for the first time, I would recommend starting here and then going through the different patterns in order, as they build upon one another. Feel free to skim any that feel trivial to you.
You may be asking, “Why should I split things up? My process runs just fine with 2,000 activities in Main.xaml.” We will tackle that question a little later; let’s delve into the use case first.
So for our use case, which I will call STU for short (figure it out), Maggie from procurement, who has hitherto performed the process manually, asked us to remove a few tedious clicks from her life and automate her daily tasks of creating and modifying suppliers. (I am unsure why Maggie would perform such a menial task on a demo app that will soon forget all suppliers again, but, hey, we’re not here to judge!)
Maggie told us that she receives an Excel workbook to tell her which changes need to be performed (an export from another system). This comes in quite handy as an interface for the robot. The sheet contains a table with some rows containing new supplier data and others containing changes to old suppliers.
To differentiate the two different options, Maggie told us that she searches for the External Name of the supplier in the system. If found, we are dealing with a supplier change, otherwise with a new supplier creation.
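In code terms, that decision boils down to a simple dispatch per Excel row. Here is a minimal Python sketch of the logic (the function and field names are illustrative stand-ins for the actual workflow activities, not a real API):

```python
def process_row(row, search_supplier, create_supplier, change_supplier):
    """Route one Excel row to the right action, based on Maggie's rule:
    search by External Name; not found means create, found means change."""
    existing = search_supplier(row["External Name"])
    if existing is None:
        create_supplier(row)            # not found: new supplier creation
    else:
        change_supplier(existing, row)  # found: supplier change
```

In the real workflow the search, create, and change steps would each be their own sequence of UI activities; the point is only that the routing rule itself is a single, testable decision.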
We can conclude that the first line (Magic Industries) represents a supplier creation, while the second (ENEL) must be a supplier change.
We had some time during the lunch break so we’ve already finished developing this simple workflow. Maggie marvels at our… er… marvelous! results. Below is a video of the workflow in action, with Studio’s highlight feature so you can see what’s going on. (The video is not embedded here due to GDPR shenanigans from YouTube.)
But there’s a problem here. Did you spot it? Perhaps having a look at the implementation will help. It would make sense to download the workflow and investigate yourself. But if you’re just here for passive reading (for shame), have a look at the exported image for now:
The problem can be seen in the video at about the 1 minute mark when the robot performs the ENEL update: it changes the industry, even though there is no entry in the Excel sheet. Blasphemy!
This points us to a bigger problem, though: you see, we developed the new supplier logic first and then furiously copy&pasted the relevant parts to the change supplier part of the workflow. But if the entry in Excel is empty, the robot should skip the field — which we forgot. (You may be wondering why this works for the other fields but not for industry: TypeInto doesn’t overwrite the value in the field if it receives an empty/null string (and EmptyField is false or we are using SimulateType), so the other fields are safe.)
We need to fix this or Maggie will be disappointed with us. And we wouldn’t want to risk disappointing such a lovely lady, now, would we?
At this point, I would strongly recommend cloning or downloading the repository from github to follow along with what’s happening. Trust me, you’ll have much more fun that way and it will make the lesson stick better, too. Win-win.
To fix our self-inflicted mess, we have to modify the workflow so that it does not interact with the industry combo box if the column has no value in our Excel table. This is very close to a typical maintenance task — such as when a selector for a field changes.
Expand the following section to see my solution, but give it a try yourself first.
The change is quite simple: add an If check to determine if the row actually contains data.
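Sketched in Python (with `type_into` and the field selectors standing in for the actual UiPath activities — the names are assumptions for illustration), the guard looks like this:

```python
def fill_field(type_into, selector, value):
    """Only interact with the field if the Excel cell actually has a value."""
    if value is None or str(value).strip() == "":
        return  # skip: leave the existing value in the application untouched
    type_into(selector, str(value))

def update_supplier(row, type_into):
    # Guard every optional column the same way, including Industry —
    # the one we forgot in the original workflow.
    for column, selector in [("City", "city_field"),
                             ("Industry", "industry_combo")]:
        fill_field(type_into, selector, row.get(column))
```

The same `If` guard, applied uniformly to every optional field, is what the fixed workflow does.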
If you tried to make the change yourself, you will probably have found that every time you want to test a change, no matter how small, you have to run the whole workflow again from the start. While that is merely an annoyance for such a small workflow, once you have to fix problems near the end of a 2,000-activity, 20-minutes-per-run monster, this becomes prohibitively time-consuming. It’s also completely unnecessary — which is what this guide is all about.
Listen, dear, mommy and daddy still love each other, and we especially still love you very, very much. But we decided that we each need our own space, and… wait, wrong type of splitting up you say? Oh, sorry. The writer must be asleep at the keyboard. Hey! Wake up, you!
That’s better. Now can we look into why splitting things into small pieces is a good idea?
As our little toy example shows (and this is orders of magnitude worse in real workflows), testing things in one big scary workflow does not make anyone happy — especially when loops are involved. Thus, making things testable in isolation is perhaps the biggest advantage of dividing your workflow.
The main pattern discussing this is Sub-Workflows, so be sure to check it out if you want to learn more about this.
I would be remiss if I did not mention StudioPro at this point, even though we will not go into detail. Test Suite allows creating test cases for workflows that can be run automatically on an unattended-like testing robot. It can also be used to perform application GUI testing via the RPA driver, but that is a separate topic.
That is particularly handy for regression testing, e.g. in the context of a CI/CD pipeline, or if you have no control over when a target application changes (such as in cloud web apps) and want to run some tests daily to make sure everything still works.
StudioPro also offers features for defining certain test scenarios apart from end-to-end process tests, such as unit tests, and provides a code coverage view. The benefits of automated testing are far greater if you split your workflows up because it makes seeing what went wrong in case a test fails much, much easier.
This almost seems obvious, but when you spin off a certain functionality into its own workflow — or even its own process or library — you can reuse it in different RPA processes. This is a big advantage for two reasons: first, if you have to change anything, doing it in just one place is very much preferable to looking for the same issue in every process that might conceivably contain it.
Second, creating reusable components lets your best developers create well-tested solutions for certain problems, and share them with the rest of the company (or the world through the marketplace). In particular, a center of excellence can share libraries of what I like to call atomic business tasks — for example, creating a new vendor.
This, in turn, allows citizen developers to quickly implement the business process at hand without worrying too much about how it could be implemented. As a bonus, this also gives the CoE the option to replace the implementation with a different one — such as switching from a GUI-based automation to an API-based one — if it judges this to be useful (with at most minor changes on the citizen dev side).
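The key idea is a stable signature with a swappable implementation behind it. A minimal sketch (the function names are illustrative; the bodies are stubs standing in for a GUI- or API-driven implementation):

```python
def create_vendor_gui(vendor):
    # drive the target application's UI (stubbed here for illustration)
    return f"GUI: created {vendor}"

def create_vendor_api(vendor):
    # call the backend's API instead (stubbed here for illustration)
    return f"API: created {vendor}"

# Citizen developers only ever call create_vendor(); the CoE can repoint it
# to a different implementation without touching any calling process.
create_vendor = create_vendor_gui
```

In UiPath terms, the published library workflow plays the role of `create_vendor`: its arguments are the contract, and the CoE is free to change what happens inside.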
A tried and true approach to solving problems that seem too big to take on is to separate them into smaller chunks. The individual parts often seem less insurmountable, and it may even be possible to dispatch them relatively quickly once they are disentangled from their complex web of interdependencies.
Thinking about your business process in this manner also helps in many other ways: it encourages standardization, provides well-defined process steps (and interfaces between them) that can be optimized even if not automated, makes the whole process easier to understand, simplifies mapping the process to process diagrams, etc.
Note, however, that while BPMN and similar approaches can give you some of these splits for free, for technical reasons, they very rarely map 1:1 onto a composite RPA process.
Spinning off parts of a process into independent packages provides more fine-grained control over where and how these parts run.
For example, an unattended robot might start an invoice entry process by reading an e-mail attachment and digitizing it; a human operator then provides input or validation via Action Center, and the data is returned to the unattended robot to be entered into a backend system.
Similarly, a user might start by preparing some files with the help of an attended robot, which uploads them into a storage bucket and creates a queue item for an unattended robot to pick them up at a later point in time.
Log entries in Orchestrator contain information about which workflow file emitted them. When used together with unique display names, this makes tracking bugs down much easier, especially if all you have available are the logs — which is unfortunately a fairly common situation.
Similarly, having queues or job calls between different process steps is not only useful for load balancing, but also makes the input data for a job more transparent.
This is not a free lunch, however, as splitting your workflow into multiple sub-processes will complicate logging of complex composite processes. To help alleviate this issue, I strongly recommend to look into custom log fields to provide a common correlation ID across sub-processes.
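Conceptually, the pattern is simple: generate one ID at the start, pass it to every sub-process (e.g. as a queue item field), and stamp it onto every log entry. A minimal Python sketch of the idea (in UiPath itself you would use the Add Log Fields activity rather than code like this):

```python
import json
import uuid

def make_logger(correlation_id):
    """Return a log function that stamps every entry with the shared ID."""
    def log(message, **fields):
        entry = {"message": message, "correlationId": correlation_id, **fields}
        return json.dumps(entry, sort_keys=True)
    return log

# The dispatcher generates the ID once; every sub-process receives it and
# logs with the same value, so the whole run can be filtered in one query.
run_id = str(uuid.uuid4())
log = make_logger(run_id)
```

With this in place, filtering all logs across sub-processes down to a single end-to-end run is one query on `correlationId`.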
Too many cooks ruin the porridge, as we say in Germany. (The original sounds better: “Zu viele Köche verderben den Brei.”) If you find yourself in a situation where coordinating the team effort on an RPA project starts to severely impact agility, assigning each team or individual a partial process — with well-specified interfaces — can significantly improve your speed of implementation.
This is particularly helpful where geographical and time zone separation is involved, for example when some members of your team are in India, others in the US. It is far easier to achieve a common understanding of an interface between sub-processes than negotiating every possible hand-off situation within a single project.
Last, merging conflicting changes by different team members can be a gruesome experience, and a special hell-on-earth scenario unfolds if you don’t use source control. Separating responsibilities minimizes the chances of this happening.
If your process contains computation-intensive actions, such as OCR, it may be preferable to offload these to a different machine dedicated to this purpose. This is particularly true in connection with AI Center, but can also be a good idea to avoid interfering with other responsibilities of someone using an attended robot.
Another consideration is design-time performance. Studio can become sluggish when dealing with very large workflows or projects, and it also becomes harder and harder to find what you’re looking for, so splitting things up helps save some development time in that way, as well.
Be aware, though, that to reap the benefits here, you need good enough documentation to find out which sub-process or file things are in, or the navigation overhead will negate that advantage and then some.
Let’s briefly go through the different approaches you can take to splitting up your business process. These aren’t mutually exclusive, and many of them will be described in more detail in their own Patterns. They are sorted roughly by how frequently you will likely be able to apply them.
The easiest, and by far most often-encountered, choice is to extract sub-workflows while still staying inside the same project. This is something you should do for every project; it is discussed in excruciating detail in Sub-Workflows.
Next, we have the option to slice our process into multiple parts that can then also be re-composed into other processes. You can call sub-processes locally (Invoke Process or inter-process communication), by starting new jobs directly, or by interfacing via queues. These approaches are discussed in Sub-Processes.
Once you split your process into parts, the question is: how can they communicate with each other? The most common solution for this are Queues.
In addition to keeping things tidy and helping you monitor processes more easily, queues automatically provide some load-balancing between the workflow that fills the queue (often called producer or dispatcher) and the workflow that consumes the queue items.
In a more advanced scenario, it’s also possible to design systems to spin up more robots on-demand if you have a high transaction load and shut them down when it is low. Queues are an excellent piece of the puzzle for such a solution, providing a natural mechanism to communicate demand and back pressure. (To avoid confusion, “it’s possible” means you can create scripts, etc. to do so, not that there is a product for it.)
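The producer/consumer decoupling that an Orchestrator queue provides between jobs can be sketched in miniature with an in-process queue (this is an analogy, not UiPath code — three performer threads drain whatever one dispatcher enqueues, at their own pace):

```python
import queue
import threading

work = queue.Queue()      # plays the role of the Orchestrator queue
results = []
lock = threading.Lock()

def performer():
    """Consume items until the sentinel arrives (like a performer job
    stopping when the queue runs dry)."""
    while True:
        item = work.get()
        if item is None:          # sentinel: no more work
            work.task_done()
            return
        with lock:
            results.append(f"processed {item}")
        work.task_done()

workers = [threading.Thread(target=performer) for _ in range(3)]
for w in workers:
    w.start()

for i in range(10):               # the dispatcher side fills the queue
    work.put(i)
for _ in workers:                 # one sentinel per performer
    work.put(None)
work.join()                       # wait until everything is processed
```

Because the dispatcher and the performers only share the queue, you can add or remove performers without touching the dispatcher — the same property that lets you scale robots against an Orchestrator queue.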
Libraries are the go-to approach if you want to outsource often-recurring combinations of activities. For example, you could have a library that packages solutions for interacting with the elements of a difficult target application, or one that provides very common cross-cutting concerns such as custom logging via an API call, or asking an attended robot user if it’s ok to take a screenshot when an error happens.
A pattern on Libraries will follow.
The newest addition to the list, UI Libraries let you define a collection of reusable target elements and screens for an application (version). This is quite useful if your target application forces you to edit selectors often or is updated very frequently.
This is, perhaps, UiPath’s answer to the approach BluePrism has taken from the start: first describe the elements that belong to your application, then build an automation on top of that abstract description. This approach is advantageous if you can reuse the UI elements, but sacrifices development speed for one-off automations and ease of experimentation.
A pattern on this will follow.
Long-running workflows, also called Orchestration Processes, are a special case of Sub-Processes that allow you to store their execution data in Orchestrator at a specific point and resume process execution later at the same point (on the same or another robot).
The most common application of this is in so-called human-in-the-loop scenarios, i.e. when the robot needs some input that can only be given by a person (think approvals, validation, etc.). The robot runtime can perform other jobs in the meantime and will resume after the staff member has finished the required action at a time of their choosing.
Long-running workflows are rather complicated and therefore have their own pattern, Long-running workflows.
If you can find (or write) an API or a small script to do the job you want, but it is not integrated out-of-the-box, you can still use them in your RPA workflows. This works by using generic activities to perform HTTP calls, run PowerShell scripts, and so on.
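For instance, if the supplier system in our STU example exposed a REST API, the robot could create a supplier with a single HTTP call instead of driving the GUI. A sketch using only the Python standard library — the endpoint, payload shape, and bearer-token auth are all made-up assumptions for illustration:

```python
import json
import urllib.request

def build_create_supplier_request(base_url, supplier, token):
    """Build (but don't send) a POST request for a hypothetical suppliers API."""
    payload = json.dumps(supplier).encode("utf-8")
    return urllib.request.Request(
        url=f"{base_url}/api/suppliers",          # made-up endpoint
        data=payload,
        method="POST",
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",   # auth scheme is an assumption
        },
    )

# Sending it would be urllib.request.urlopen(req), wrapped in try/except in a
# real workflow, with the response status checked for success.
```

In UiPath, the equivalent would be an HTTP Request activity (or a small script invoked from the workflow); the point is that one well-understood call can replace a whole sequence of fragile UI interactions.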
Compared to the other options we’ve seen so far, this one is definitely more difficult to pull off, especially when it means writing your own programs to handle things. But if you have that capability, it’s an excellent option that allows you to use RPA only when you really need it.
If you know how to write .NET code, developing custom activities might also be a good option. This allows you to extend UiPath’s capabilities in any way you choose, as long as you can write a .NET wrapper around it.
As this requires some understanding of Windows Workflow Foundation, it is one of the most challenging, and least-used, capabilities. Insofar as companies do it at all, creating activities will usually be limited to a Center of Excellence that builds and packages them for the rest of the company.
And, of course, there’s the Marketplace where you can download activities provided by other developers, or share your own.
AI Center is another fairly new product. It allows you to run machine-learning (ML) models on a separate server or in the cloud and can enhance your RPA workflows by providing easy-to-use AI support.
The most common use of this is for Document Understanding with the so-called Machine Learning Extractor, which allows you to extract fields from semi-structured documents with a large number of variants, such as invoices. But if you know how to retrain an out-of-the-box model or even create your own models, you can support far more scenarios, such as e-mail classification, churn prediction, fraud detection, and many more.
The difficulty level of using AI Center ranges from easy (just using a finished out-of-the-box model) to the most challenging thing on this list (creating, training and deploying your own ML models). Document Understanding is a bit of a special case that sits somewhere in between — easy to start, hard to master.
After all of the above, you might be wondering how I would recommend approaching designing an automation solution. In general, there are two ways to start a project: top-down and bottom-up, differentiated by the size of the business process you wish to automate.
In either case, but especially in the top-down case, you should start by understanding the business process and its connection to other processes. Automation doesn’t work for tasks that include any “gut decisions” or things employees “just know”. It may be worth spending the time to design a new, more standardized, to-be process along with the automation. (Process Mining can be gainfully applied here.)
The first approach is to create an end-to-end automation from the get-go in a top-down manner. In my opinion, this is only feasible for small- to medium-sized processes that you understand very well. Start with a single project first, making use of any previously existing reusable components you have. Inside that project, split things into sub-workflows rather granularly.
If you later find that another project needs something from this first project, copy&paste it at first. Only when you need the same functionality a third time, start taking this as evidence that it might be a useful reusable component and spin the common parts off into either sub-processes or libraries, as appropriate.
While it is rarely a good idea to presuppose what you’ll reuse later, sometimes you might make a partial exception for utility libraries if you can clearly envision multiple business processes that will need the functionality, or if solving a problem is hard enough that you want to provide the solution to others.
The second approach is iteratively automating parts of a business process. This bottom-up approach is most appropriate for large processes but can be used for smaller ones. In contrast to the top-down approach, you can start automating parts of a complex business process before you fully understand the whole, to get something on the road quickly. (Approach this with the understanding that some rework will be required later.)
Still, a good process understanding will help you define logical sub-processes and give you an idea of which of these are bottlenecks and, therefore, good automation candidates. It also helps you decide whether your aim is to create an end-to-end automation, or only a partial automation, now rather than just letting the decision happen along the way.
In an ideal world, the first process you automate would address the most binding constraint. Most business processes are “chain-like” systems where the strength (speed) of the whole is determined by its weakest (lowest-throughput) member. In practice, unfortunately, the first process is often chosen quite arbitrarily, especially if no-one with a good process understanding is around. There may be overriding considerations, however, that make speeding up certain parts of a process more important than others, in which case you should probably start there (payment terms for invoices or first response speed in customer service come to mind).
After that first process is done, start iterating. Tackle the next most binding constraint, with a slight bias towards sub-processes that are adjacent to already automated ones (as human-robot hand-off typically adds overhead). Rinse and repeat.
Once you’ve finished automating enough parts, you should start thinking about integrating them into a cohesive whole with long-running workflows and the other patterns mentioned here, especially if your final aim is an end-to-end automation.
While sub-workflows, at least, are almost always a good idea, there is such a thing as splitting up prematurely, and there are also some counter-indications that might push you to stick with just a single process. Let’s have a brief tour:
StudioX is a simplified and streamlined version of Studio aimed at less technical citizen developers. It does not allow splitting a process into sub-workflows. Personally, I think that is a bad design decision and the concept of sub-workflows isn’t that difficult to understand, but things are as they are.
These have a limitation that only allows you to use the Wait for… activities in the main workflow. See Long-running workflows for more details on this.
Unfortunately, development features for sub-processes and libraries are very under-developed. See the different patterns for more detailed discussions, but the gist of it is: before committing to sub-processes or libraries, make sure to carefully consider what you’re getting into.
The main trade-offs revolve around development time vs code cleanliness. Splitting things means you spend more time navigating between different workflows and projects, publishing and updating packages, creating documentation and similar sources of overhead.
A special case of the above is that complex composite processes are hard to visualize and monitor. While custom log fields can be used to provide a correlation id, this can only be leveraged in custom dashboards based on either ElasticSearch or the Orchestrator database, as neither Orchestrator nor Insights allow you to view logs for a process hierarchy, only for single processes.
Process Mining could theoretically alleviate this to some extent and provide an end-to-end view, but no out-of-the-box support for ingesting robot logs is available.
If you’re working alone on a process, the additional overhead of creating sub-processes, etc. may not be worth it. You know your project and where to find stuff.
That said, personally I’ve found that business processes with crisp sub-process delineations are much easier to understand when you suddenly have to do time-critical maintenance a couple of months later.
Of course, there are also some approaches that are rarely a good idea. Let’s delve into them, shall we?
There used to be a time when this was the only way to re-use different activities. Basically, it’s a library of workflows you can copy&paste into your current workflow. Unfortunately, copy&paste is not a good approach to code reuse, because, whenever you need to change anything, you need to remember all the places you pasted it and change it in all of them one at a time.
As such, I can’t really recommend snippets beyond providing examples, or very simple things that aren’t really reusable anyways (e.g. specific preconfigured Delays or Retry Scopes).
Templates are great for facilitating a common style or giving people examples to build upon, and therefore definitely have their place in any organization that uses RPA on a larger scale. However, you should resist the urge to treat them as a vehicle for reusability. The same reasoning that applies to snippets also applies to templates as they are essentially another variant of copy&paste.
Don’t do it. While putting a file on a shared drive and then invoking it all over the place has some allure (because you only need to change it once and all your processes will “update” immediately), it is a siren’s call that will lead you to ruin. This is one of the few things I would be willing to call an anti-pattern with a straight face, as I have seen it cause huge headaches for customers several times.
The problem is that, while this seems to work great initially, it often fails a few months down the line. It will only work correctly if your Robot and Studio installations have the same version on all machines and all your RPA processes have the same package dependency versions. (Primarily because UiPath sometimes makes non-backward-compatible changes to the XAML structure.)
It is extremely rare to be able to consistently fulfill these conditions, as it basically requires updating and testing all unrelated processes whenever you update dependencies for any of them (because you cannot see which process invokes which XAML files easily).
Due to bugs in different dependency versions affecting processes differently, you might not be able to make it work at all without running multiple copies combining different sets of “invokes”.
Libraries or Sub-Processes are always a better choice.
You might conceive of the idea that, if sharing files is a bad idea, perhaps downloading a specific version for each robot, controlled by using a config file or a database, might be ok? Congratulations, you’ve just re-invented the wheel by replicating the existing library functionality in what is probably an inferior way.