The Law of Unintended Consequences

Or: what to do when you don't reap what you sow.

February 21, 2021
998 words (5 min read)
RPA Program

Unintended Consequences

Anything we do in this world has effects. Sometimes we even get what we are looking for, but perhaps even more often, our environment reacts to our actions in different ways than we expect. Luckily, it’s possible to get better at predicting these, turning unintended into intended consequences — allowing us to prepare better for them.

While none of this is exclusive to RPA, and many generic pitfalls go beyond the scope of a blog post like this, I thought it would be interesting to look at some typical examples of how RPA programs can get sidetracked or rub people the wrong way.

Unfortunately, most of them aren’t easily solved, or solutions must take the specific situation on the ground into account to work, so I won’t be able to give you cookie-cutter recipes that make them magically go away. But perhaps at least knowing about them will allow you to come up with some good options for your company.


Homeostasis

Organisms try to maintain their internal environment within a very narrow range of expected parameters (temperature, pH, hormone levels, and so on). This effect is called homeostasis, and it has analogues in organizations, too. The unintended consequence is a general resistance to change.

In RPA projects, this can show up as all kinds of friction, including:

  • Stakeholders (especially executives) demanding all sorts of process exceptions for corner cases that happen once in a blue moon
  • What feels like obstructionism from IT (for example tight security policies that are meant for employees being applied to unattended robots, or trying to use the same slow development methodology they’ve always used)
  • Unions and other employee representatives starting a smear campaign against the job-destroying robots (sometimes justified, often not)
  • Unreasonably high regulatory standards that force your legal or privacy department to keep you from doing useful work (e.g. GDPR)

I’m sure you can think of many more examples. Unfortunately, these aren’t easy to solve. One of the main tools you should definitely try to have in your toolbox, however, is a powerful executive sponsor who can help convince other departments that this automation thing is worthwhile.

Higher-order effects

Sometimes you will achieve the effect you wish for, but that effect can cause knock-on effects itself. If you don’t analyze these before you act, it can have dire runaway consequences.

To give you a real-world example: imagine you introduce a foreign species into your country because it is good eating or makes a good pet (or both). Next thing you know, these animals are breeding like crazy, have taken over the niches that used to be filled by native species, and have become a general nuisance and threat to the native wildlife. There are many examples of this, so let’s pick rabbits in Australia, because they are a fluffy kind of danger.

In RPA, higher-order effects can cause your program to be quite successful in some ways yet still miss KPIs, following the maxim that work expands to fill the available space. For example, let’s say a junior employee uses 1 hour to prepare a report for management every week.

You automate the process so that it now takes them only 10 minutes instead of an hour: a great time saving! But because creating the report is now so much faster, management asks for it every day of a six-day workweek instead of just once a week, so overall it still takes an hour per week!

This is good news for management but bad news for you if you are measured by how much time you saved (unless you game the statistics, which is quite possible in this scenario).
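The arithmetic of the scenario above can be sketched as a quick back-of-the-envelope calculation (all the numbers are the hypothetical ones from the example, not real measurements):

```python
# Back-of-the-envelope check of the report example above.
# All numbers come from the hypothetical scenario in the text.
manual_minutes_per_run = 60       # one hour to prepare the report manually
automated_minutes_per_run = 10    # with the robot doing the heavy lifting

runs_per_week_before = 1          # weekly report
runs_per_week_after = 6           # daily report, six-day workweek

time_before = manual_minutes_per_run * runs_per_week_before
time_after = automated_minutes_per_run * runs_per_week_after

print(f"Before automation: {time_before} min/week")               # 60 min/week
print(f"After automation:  {time_after} min/week")                # 60 min/week
print(f"Net saving:        {time_before - time_after} min/week")  # 0 min/week
```

The per-run saving is real, but the measured weekly saving is zero — exactly the gap between what the business gains (six reports for the price of one) and what a naive time-saved KPI records.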

Another higher-order effect is that automations can become single points of failure. Once a certain threshold of reliability is reached (anecdotally somewhere around 95%), people start trusting the automation and get mad if it fails. The employees who used to run the process manually may also have been reassigned to other responsibilities, so if your automation breaks and they have to go back to their old work, this can cause some nasty operational problems.

Demanding demand

One unintended consequence that speaks to the attractiveness of RPA, and to how it can spur people’s imagination, is that centers of excellence (CoEs) can quickly be overwhelmed by demand for automation. Seeing robots do the tedious parts of your job has something magical to it, and it often creates huge demand for more.

While this is generally what you want, an overwhelmed CoE can lead to disappointment with the RPA program, which can mean that the pipeline dries up after the initial burst. This problem can be countered in several ways: encouraging citizen developers, starting with automation only in certain teams or departments, increasing manpower, or simply giving employees visibility into what is being worked on and how fast it is progressing, to reduce frustration.


Happy accidents

Not all unintended consequences are bad! Having a reputation for getting rid of the more boring parts of people’s jobs can lead to being perceived as a more attractive employer and entice high-performing employees who don’t like doing bullshit jobs to apply to your company, all the while making existing employees happier, too.

If you participate in things like Meetups or UiPath’s “coffee chats” (although I know that’s not nearly as effective in 2020/2021 as it used to be), this can also lead to meeting interesting people from other companies and opportunities to partner up with them or share insights.


That’s it for today. I’m sure there’s much more to write about the topic, and maybe I will at some point, but this is yet another one of the “weekly” blog posts while I get the mobile layout, etc. in order.

© 2021, Stefan Reutter