In statistics, we love to calculate things such as “means” and “standard deviations” — for good reason, as they allow us to capture the essence of a complicated reality in just one or two numbers. While this works in a surprisingly large number of situations due to the law of large numbers, reality is, unfortunately, not always in a mood to comply with our desire for simplification.
In most cases where this blows up in our collective faces, the reason is a very small probability of very high-impact effects. Disruptive innovations, wealth distribution, asteroid strikes, financial crises and nuclear chain reactions are just a few examples of this sort of unobliging behavior. It goes by many names. I’ll use “fat tails”, but “long tails”, “Pareto principle” and “80:20 rule” have gained some mainstream recognition as well.
Of course, project management is no exception to this. Most of the time spent on any given project goes into a small number of problems that are very hard to solve, while the rest is smooth sailing. Predictability varies, but as a general rule of thumb, the more creative problem-solving a project requires, the fatter its tails seem to be.
The area where this is probably most pronounced is research & development. More relevantly for us, software development is a notorious repeat offender, often fooling us into a false sense of security. Things go fine at the start, the PoC looks very promising, but you only find out later that all the “small” issues you initially glossed over are actually really hard to solve reliably.
In RPA, the effect is exacerbated further as you not only have to deal with the issues in your own code, but also with all the ugly hacks and shortcuts the GUI developers of your target applications used. Target applications that are meant for humans, not robots, and therefore often do things such as “removing” inputs by making them invisible or even just pushing them off screen.
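To make the “removed by pushing it off screen” trick concrete, here is a minimal sketch of a defensive check an automation might run before interacting with an element: it treats an element as actionable only if it is both flagged visible and actually within screen bounds. The `Element` class and the screen dimensions are hypothetical stand-ins; real RPA frameworks expose similar geometry and visibility properties under their own names.

```python
from dataclasses import dataclass

# Hypothetical stand-in for an element handle from a UI automation framework.
@dataclass
class Element:
    x: int
    y: int
    width: int
    height: int
    visible: bool  # what the application *claims*

SCREEN_W, SCREEN_H = 1920, 1080  # assumed resolution for the sketch

def is_actionable(el: Element) -> bool:
    """True only if a human could plausibly see and click this element."""
    if not el.visible or el.width <= 0 or el.height <= 0:
        return False
    # The "pushed off screen" trick: visible=True, but the geometry is a lie.
    return (0 <= el.x and el.x + el.width <= SCREEN_W
            and 0 <= el.y and el.y + el.height <= SCREEN_H)

# A field "removed" by moving it to x=-5000 still reports visible=True:
ghost = Element(x=-5000, y=40, width=200, height=30, visible=True)
print(is_actionable(ghost))  # False
```

The point of the sketch is that trusting the application’s own visibility flag is not enough; the geometry has to be sanity-checked too.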
You might be wondering what the most common sources of fat-tail effects are in robotic process automation, and whether it’s possible to mitigate or at least foresee them. While there will always be unknown unknowns, there may yet be hope. Let’s delve into what I’ve seen so far; perhaps I’ll revisit the list later in a more detailed guide. (“Unknown unknowns” is one of the very few good ideas that came out of the mouth of Donald Rumsfeld, former US Secretary of Defense.)
One general mitigation strategy for fat tails (but by no means a silver bullet) is aiming for a good-enough solution: rather than trying to automate the full business process, select only the parts of the process that seem most promising and make sure you have a way to hand off the more complicated cases to human operators.
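The hand-off idea can be sketched as a simple triage rule: automate only the cases you understand well, and queue everything else for a human. The invoice fields and thresholds below are hypothetical placeholders, not part of any real process.

```python
def handle_invoice(invoice: dict, human_queue: list) -> str:
    """Triage sketch: automate the well-understood happy path, escalate the rest.

    The field names ("currency", "total", "line_items_ok") are illustrative
    assumptions about what a discovery phase might identify as "simple".
    """
    simple = (invoice.get("currency") == "EUR"
              and invoice.get("total", 0) < 10_000
              and invoice.get("line_items_ok", False))
    if simple:
        # ... run the automated processing here ...
        return "automated"
    # Everything else goes to a human operator instead of a brittle edge case.
    human_queue.append(invoice)
    return "handed off"

queue = []
print(handle_invoice({"currency": "EUR", "total": 500, "line_items_ok": True}, queue))
print(handle_invoice({"currency": "USD", "total": 500}, queue))
```

The design choice worth noting: the default branch is the hand-off, so any case the rules don’t positively recognize lands with a human rather than in an unhandled exception.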
During discovery, our subject-matter experts often show us only a small part of the actual process, either because they only deal with that small part, because they want to take the opportunity to improve the process while you’re at it, or because they’re not used to thinking in rules and exceptions and simply forgot.
No matter the reason, scope creep is perhaps the most egregious source of project delays, especially when you are hired as implementation support from the outside but dealing with internal SMEs with little experience in process design.
It is also hard to counteract, particularly if you don’t have the business-process know-how yourself. Warning signs to look out for are long discussions about what to do in certain situations while the SME is supposed to just show you the process, and frequent new ideas about what else needs to be implemented.
Sadly, this often occurs when development is already well under way, and you will have to decide whether to power through, reduce scope, or cut your sunk-cost losses. A hard call to make, especially when relying on external implementation support.
Already mentioned, but it bears repeating: sometimes (more often than not?) your target applications are simply stinking piles of shoddily-built refuse. This can lead to all kinds of issues, ranging from weird and unpredictable behavior such as occasionally not registering a click, to extreme cases such as an infinite loop with a memory leak when you try to access certain elements via the accessibility interface. (The latter happened to me with DATEV, a German accounting software.)
I get it, writing software that even kinda, sorta works is hard. Writing software that doesn’t crash for no reason and has reasonable code quality is even harder. Budding RPA developers had best get used to it, as cooperation from vendors in fixing bugs is often low to non-existent. So you bow your head in resignation and try to find an ugly, hacky workaround, all the while hoping that the junior devs will actually read your comments and not remove your seemingly unnecessary retry loops everywhere.
To guard against wonky targets throwing you a curveball as a program manager, the only somewhat promising mitigation strategy is to gain experience with your target applications. If you are dealing with a new application, run an exploratory PoC and try to read the writing on the wall: if you find more than one or two ugly problems, expect dragons to lurk in the dark corners of the dungeon. Hard to get funding for, but necessary.
This was last week’s topic in my post about stakeholders, so check that out. The more people affected by your automation, the more likely you are to face interference (well-meaning or malicious) that slows down your progress.
Unfortunately, there’s really no way around this one. You have to bite the bullet and ideally get all the stakeholders involved as early as possible. On the positive side, if you’re lucky, this may reveal so many process variants that it will ruin the business case. The best code is no code.
You just wanna get it right. There must be an elegant way to solve this conundrum. You have to automate the full process or no meaningful savings will accrue.
It’s a trap! Don’t listen to your inner Plato. RPA is not a technology where “perfect” should be in your vocabulary. (Look up Platonic Idealism if you don’t get the reference.) The simple fact that you’re automating a GUI automatically makes your solution imperfect. Embrace it! If the choice is between one additional day of development and two days of work a year for your users, it’s usually not worth automating in the first place.
But don’t become complacent either — an intrinsic lack of perfection doesn’t excuse bad code quality and terrible reliability. It should also be mentioned that if you ship a 95% solution, users will start expecting it to always work and get frustrated when it doesn’t, so in those cases overinvesting in the last 5% might be worth it.
I’m sure you have your own tales of delay to tell and learning experiences to share. Let us know in the comments… oh, wait, we don’t have comments! Never mind, then. One day, perhaps.