Digital Transformation demands a new automation framework. Forrester came up with this approach:
In order to implement RPA “the smart way” and take the most advantage of it, you should be aware of the robotic process automation pitfalls from the very beginning.
We must all acknowledge and take seriously the fallibility of our endeavours, and, consequently, we should not allow ourselves to indulge in unrealistic expectations. The simple truth is that sometimes projects fail, for a very large number of reasons. According to IBM Systems Magazine, up to 25% of technological projects fail outright, while up to half of them require extensive revisions by the time they are set to go.
Bernard Marr, writing in Forbes, adds that more than half of technological failures are in fact due to poor management, and only 3% are caused by technical problems. Difficulties may also arise from not choosing the right processes to automate. Here are 8 questions to ask that should ease decision making in this regard.
So let us now delve a little deeper into potential robotic process automation pitfalls and corresponding means to avoid them in the course of implementing RPA.
Forethought is definitely needed for good results and successful RPA implementation. Of course you should first spell out what ‘successful’ means; but for now, let us tackle the question about what could go wrong during the implementation of your software robots. Here is a list of 7 aspects that ought to be considered and/or avoided if you want to stay safe from robotic process automation pitfalls.
1. Not choosing the right processes to automate in the beginning
This means picking the processes best suited to an effective start with automation in your business. By no means should you neglect a thorough, exhaustive and, of course, realistic evaluation of the tasks that may be passed on to robots. You do not want to start automating the wrong things, only to end up with difficult-to-manage workflows.
A piece of advice courtesy of Cem Dilmegani, CEO at appliedAI, is to consider features such as process complexity and business impact. Briefly put, you should perform a cost–benefit analysis of automating the candidate processes, based on what you consider to be your top goals.
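To make the cost–benefit idea concrete, here is a minimal sketch of a weighted scoring model for candidate processes. The criteria, weights and example processes are illustrative assumptions, not a prescribed methodology:

```python
# A toy weighted cost-benefit score for RPA candidate processes.
# Criteria, weights, and the example processes are made-up assumptions.

CRITERIA_WEIGHTS = {
    "volume": 0.3,           # how often the process runs
    "rule_based": 0.3,       # how fully fixed rules capture it
    "stability": 0.2,        # how rarely the underlying systems change
    "business_impact": 0.2,  # how much the outcome matters
}

def automation_score(process):
    """Score a candidate on a 0-1 scale; higher means a better RPA fit."""
    return sum(weight * process[criterion]
               for criterion, weight in CRITERIA_WEIGHTS.items())

candidates = {
    "invoice_matching":     {"volume": 0.9, "rule_based": 0.9,
                             "stability": 0.8, "business_impact": 0.7},
    "contract_negotiation": {"volume": 0.2, "rule_based": 0.1,
                             "stability": 0.4, "business_impact": 0.9},
}

ranked = sorted(candidates,
                key=lambda name: automation_score(candidates[name]),
                reverse=True)
print(ranked)  # invoice_matching ranks first: high volume, highly rule-based
```

Even a rough model like this forces the conversation the article recommends: you have to state your top goals explicitly as weights before you can rank anything.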
2. Trying to implement robotic process automation on your own
You probably know this by now, otherwise you wouldn’t be reading this: RPA provides highly technical ways to carry out faster and more efficiently the dull jobs that would cause your employees unnecessary distress, boredom and fatigue. Precisely because of the high level of technicality, it is not at all advisable that you attempt to carry out the implementation process on your own.
Division of labour is with us for good reasons, so you must not forget to delegate the responsibility of implementation to the specialists who can best handle it. Tony Warren, executive vice president, head of strategy and solutions management at FIS, mentions things like “technical maintenance, operational monitoring and the appropriate change management procedures” among the RPA features that call for the right level of expertise, which specialist implementation navigators possess.
3. Not setting clear objectives for your automation strategy
This is a more general rule of thumb: it is vital that your business objectives, as well as the role that you expect RPA to play in getting there, are crystal clear.
What do you need RPA for?
Relatedly, which software provider is likely to do the best job for what you need?
While uncertainties in these respects are likely to be burdensome, definite answers to such questions will facilitate a smooth transition to delegating the tedious, repetitive tasks in your business to software robots.
4. Not having a “bird’s eye view” over the implementation process
As you probably know by now, RPA implementation is a complex enterprise. In fact, this comes as no surprise for an activity meant to take such deep effect on your business. So in order to achieve your goals, you need to ensure proper executive control.
This requires a group or an individual who can watch over the whole process from the top, so to speak. Some call this essential aspect “operational oversight”, others “governance of accretion” or simply “governance”, while others emphasize how important it is to include in the responsible team not only domain-specific specialists but also someone to take on the executive role of a “central process unit”. In the long run, this can take the form of a robotic process automation centre of excellence that warrants strategic maintenance of the system.
5. Not ensuring the scalability potential of your software robots
Scalability is a hidden gem that is largely responsible for the wider adoption of RPA, which means you really should not allow anything to stand in the way of scalable bots that can ensure consistent, across-the-board use of RPA in the individual departments of your business.
6. Relying solely on the IT department
You certainly do not want to make the smooth running of your automated processes dependent on the IT department alone. Of course, it goes without saying that IT assistance is necessary for automation, but the idea is that you should not overdo it.
The bottom line is something along the lines of the phrase ‘render unto IT the things that are intrinsically IT-related (e.g. automation codes), and unto other departments the things that are better dealt with by other departments’. As Schultz puts it, “finance cannot depend on IT for RPA; it needs to be owned by the business side.”
7. Not testing your software robots thoroughly
Even if you may not like the phrase ‘haste makes waste’ after having heard it one million times, you have to admit there is some truth to it. And since you do not want to waste the effort, time, money and hope that you invested in RPA, you also do not want to stumble at the threshold.
As our own Daniel Pullen puts it, you need to test processes in production prior to full go-live to ensure there is a like-for-like behaviour between Dev and Production. This includes ensuring the applications are the same version, testing applications under normal and peak loads throughout the day, servers & applications in a server farm all behave identically (both operation and speed), etc.
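Pullen’s like-for-like point can be made concrete with a toy environment parity check run before go-live. The environment snapshots and field names below are illustrative assumptions, not any real tooling:

```python
# A minimal sketch of a Dev-vs-Production parity check for an RPA rollout.
# The fields compared (and their values) are illustrative assumptions.

def parity_report(dev, prod, fields=("app_version", "os_version", "locale")):
    """Return the fields on which the two environments differ."""
    return {f: (dev.get(f), prod.get(f))
            for f in fields if dev.get(f) != prod.get(f)}

dev  = {"app_version": "10.2.1", "os_version": "Win10-1909", "locale": "en-GB"}
prod = {"app_version": "10.2.3", "os_version": "Win10-1909", "locale": "en-GB"}

mismatches = parity_report(dev, prod)
print(mismatches)  # {'app_version': ('10.2.1', '10.2.3')}
```

A report like this would only catch configuration drift; behaviour under normal and peak load, as the article notes, still has to be exercised with real test runs throughout the day.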
We believe that you are now better prepared to embark on a successful RPA journey. Failure anticipation is not meant to alarm you, but rather to motivate you to take a realistic view of what might happen so that you can prevent the pitfalls.
Anticipating and planning proactively should take you a step closer to crossing the finish line, although the word ‘finish’ is not a perfect fit here, since what you aim for with robotic process automation is the long-term, sustainable development of your enterprise. As UiPath puts it, with “a comprehensive understanding of your company’s automation needs and the value proposition RPA provides, you can ensure a successful RPA implementation scheme that is both cost-effective and timely”.
Such extensive understanding can lead you to make use of the best practices for robotic process automation implementation. Wisely selecting the processes, a plain understanding of the required human resources or reliance on an ‘RPA sponsor’ are some of those practices, on which you can read more here.
As retailers install self-checkout systems, proximity beacons that flash offers to shoppers’ phones, and invest in robots that replenish shelves, they’re likely to need fewer and fewer workers in the coming decade. A new analysis finds that up to 7.5 million jobs are at risk in U.S. retail, with women and rural areas particularly affected.
The last two years have seen a string of retail bankruptcies and store closures, with once-storied names like J.C. Penney, RadioShack, Macy’s, and Sears under pressure as never before. Now analysts say retailers are likely to turn to automation as they try to end the so-called “Great Retail Apocalypse.”
“Labor productivity has been stagnant in the retail industry for a long time and now we’re seeing minimum wage increases around the country and a tight labor market that’s forcing up wages,” says John Wilson, head of research at Cornerstone Capital, a financial services firm that focuses on sustainable investing. “That’s putting pressure on companies to solve these problems at a time when a lot of these technologies are coming into play.”
The research was commissioned by Investor Responsibility Research Center Institute, a nonprofit group, and prepared by Cornerstone Capital. The job loss estimates are based on well-known research from Oxford University and figures from the U.S. Bureau of Labor Statistics. The U.S. retail industry employs about 10% of the total workforce.
Wilson says cashiers, 74% of whom are women, are likely to be the first overtaken by the automation wave. Also likely to be affected are retail salespeople, who may not be needed as shoppers increasingly consult their phones for information about sizes, colors, and availability. “Smartphones have all kinds of information about the products you want to buy, so the need for salespeople is considerably less,” he says.
For example, Bloomingdale’s has tested smart fitting rooms with wall-mounted tablets allowing customers to scan items and view other colors and sizes and receive recommendations to “complete the look.” Home Depot says four self-checkout systems occupy the space of three normal aisles and obviate the need for two human cashiers. Amazon’s Go concept stores have no cashiers at all, enabling shoppers to pay for everything through their phones.
Worryingly, the report says automation could affect areas where unemployment is already higher than the national average. “WalMart and other large retailers have greater market share in communities with less than 500,000 people,” it says. “If employment trends correlate to market share location, retail automation by retailers could disproportionately impact these smaller communities.”
Wilson cautions retailers against going all in for convenience at the expense of retail experience, lest they simply become higher cost versions of online stores. “If the technology simply allows you to reduce costs by reducing the number of employees, that may not be a winning strategy,” he says. “They [may need] to create an experience. You go into the store and it’s fun. You have a relationship with the people who work there and you’re discovering new products. Most companies are headed in that direction and that requires an investment in both labor and technology.”
Source: Fast Company
On December 2nd, 1942, a team of scientists led by Enrico Fermi came back from lunch and watched as humanity created the first self-sustaining nuclear reaction inside a pile of bricks and wood underneath a football field at the University of Chicago. Known to history as Chicago Pile-1, it was celebrated in silence with a single bottle of Chianti, for those who were there understood exactly what it meant for humankind, without any need for words.
Now, something new has occurred that, again, quietly changed the world forever. Like a whispered word in a foreign language, it was quiet in that you may have heard it, but its full meaning may not have been comprehended. However, it’s vital we understand this new language, and what it’s increasingly telling us, for the ramifications are set to alter everything we take for granted about the way our globalized economy functions, and the ways in which we as humans exist within it.
The language is a new class of machine learning known as deep learning, and the “whispered word” was a computer’s use of it to defeat, seemingly out of nowhere, three-time European Go champion Fan Hui, not once but five times in a row. Many who read this news considered it impressive, but in no way comparable to a match against Lee Se-dol, who many consider to be one of the world’s best living Go players, if not the best. Imagining such a grand duel of man versus machine, China’s top Go player predicted that Lee would not lose a single game, and Lee himself confidently expected to lose one at the most.
What actually ended up happening when they faced off? Lee went on to lose all but one of their match’s five games. An AI named AlphaGo is now a better Go player than any human and has been granted the “divine” rank of 9 dan. In other words, its level of play borders on godlike. Go has officially fallen to the machines, just as Jeopardy did before it to Watson, and chess before that to Deep Blue.
So, what is Go? Very simply, think of Go as Super Ultra Mega Chess. This may still sound like a small accomplishment, another feather in the cap of machines as they continue to prove themselves superior in the fun games we play, but it is no small accomplishment, and what’s happening is no game.
AlphaGo’s historic victory is a clear signal that we’ve gone from linear to parabolic. Advances in technology are now so visibly exponential in nature that we can expect to see a lot more milestones being crossed long before we would otherwise expect. These exponential advances, most notably in forms of artificial intelligence limited to specific tasks, we are entirely unprepared for as long as we continue to insist upon employment as our primary source of income.
This may all sound like exaggeration, so let’s take a few decade steps back, and look at what computer technology has been actively doing to human employment so far:
Let the above chart sink in. Do not be fooled into thinking this conversation about the automation of labor is set in the future. It’s already here. Computer technology is already eating jobs and has been since 1990.
All work can be divided into four types: routine and nonroutine, cognitive and manual. Routine work is the same stuff day in and day out, while nonroutine work varies. Within these two varieties, is the work that requires mostly our brains (cognitive) and the work that requires mostly our bodies (manual). Where once all four types saw growth, the stuff that is routine stagnated back in 1990. This happened because routine labor is easiest for technology to shoulder. Rules can be written for work that doesn’t change, and that work can be better handled by machines.
Distressingly, it’s exactly routine work that once formed the basis of the American middle class. It’s routine manual work that Henry Ford transformed by paying people middle class wages to perform, and it’s routine cognitive work that once filled US office spaces. Such jobs are now increasingly unavailable, leaving only two kinds of jobs with rosy outlooks: jobs that require so little thought, we pay people little to do them, and jobs that require so much thought, we pay people well to do them.
If we can now imagine our economy as a plane with four engines, where it can still fly on only two of them as long as they both keep roaring, we can avoid concerning ourselves with crashing. But what happens when our two remaining engines also fail? That’s what the advancing fields of robotics and AI represent to those final two engines, because for the first time, we are successfully teaching machines to learn.
I’m a writer at heart, but my educational background happens to be in psychology and physics. I’m fascinated by both of them so my undergraduate focus ended up being in the physics of the human brain, otherwise known as cognitive neuroscience. I think once you start to look into how the human brain works, how our mass of interconnected neurons somehow results in what we describe as the mind, everything changes. At least it did for me.
As a quick primer in the way our brains function, they’re a giant network of interconnected cells. Some of these connections are short, and some are long. Some cells are only connected to one other, and some are connected to many. Electrical signals then pass through these connections, at various rates, and subsequent neural firings happen in turn. It’s all kind of like falling dominoes, but far faster, larger, and more complex. The result amazingly is us, and what we’ve been learning about how we work, we’ve now begun applying to the way machines work.
One of these applications is the creation of deep neural networks – kind of like pared-down virtual brains. They provide an avenue to machine learning that’s made incredible leaps that were previously thought to be much further down the road, if even possible at all. How? It’s not just the obvious growing capability of our computers and our expanding knowledge in the neurosciences, but the vastly growing expanse of our collective data, aka big data.
Big data isn’t just some buzzword. It’s information, and when it comes to information, we’re creating more and more of it every day. In fact we’re creating so much that a 2013 report by SINTEF estimated that 90% of all information in the world had been created in the prior two years. This incredible rate of data creation is even doubling every 1.5 years thanks to the Internet, where in 2015 every minute we were liking 4.2 million things on Facebook, uploading 300 hours of video to YouTube, and sending 350,000 tweets. Everything we do is generating data like never before, and lots of data is exactly what machines need in order to learn to learn. Why?
Imagine programming a computer to recognize a chair. You’d need to enter a ton of instructions, and the result would still be a program detecting chairs that aren’t, and not detecting chairs that are. So how did we learn to detect chairs? Our parents pointed at a chair and said, “chair.” Then we thought we had that whole chair thing all figured out, so we pointed at a table and said “chair”, which is when our parents told us that was “table.” This is called reinforcement learning. The label “chair” gets connected to every chair we see, such that certain neural pathways are weighted and others aren’t. For “chair” to fire in our brains, what we perceive has to be close enough to our previous chair encounters. Essentially, our lives are big data filtered through our brains.
The power of deep learning is that it’s a way of using massive amounts of data to get machines to operate more like we do without giving them explicit instructions. Instead of describing “chairness” to a computer, we instead just plug it into the Internet and feed it millions of pictures of chairs. It can then have a general idea of “chairness.” Next we test it with even more images. Where it’s wrong, we correct it, which further improves its “chairness” detection. Repetition of this process results in a computer that knows what a chair is when it sees it, for the most part as well as we can. The important difference though is that unlike us, it can then sort through millions of images within a matter of seconds.
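The correct-and-repeat loop described above can be sketched with a toy perceptron, the simplest trainable model. The hand-picked “chair” features and the tiny dataset are illustrative assumptions; real systems learn their own features from millions of images with deep neural networks:

```python
# A toy illustration of learning "chairness" from labeled examples rather
# than hand-written rules. Features and training data are made up.

def predict(weights, features, bias):
    """Fire (1) if the weighted evidence crosses the threshold, else 0."""
    return 1 if sum(w * x for w, x in zip(weights, features)) + bias > 0 else 0

def train(examples, epochs=20, lr=0.1):
    """Perceptron rule: every wrong answer nudges the weights (the 'correction')."""
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for features, label in examples:
            error = label - predict(weights, features, bias)
            weights = [w + lr * error * x for w, x in zip(weights, features)]
            bias += lr * error
    return weights, bias

# features: (seat at sitting height, has backrest); label: 1 = chair
examples = [((1, 1), 1),   # chair
            ((1, 0), 0),   # stool
            ((0, 1), 0),   # headboard
            ((0, 0), 0)]   # table
weights, bias = train(examples)
print([predict(weights, f, bias) for f, _ in examples])  # [1, 0, 0, 0]
```

The model was never told what a chair is; it only saw examples and corrections, and the repeated weight adjustments are exactly the “weighted neural pathways” the brain analogy describes.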
This combination of deep learning and big data has resulted in astounding accomplishments just in the past year. Aside from the incredible accomplishment of AlphaGo, Google’s DeepMind AI learned how to read and comprehend what it read through hundreds of thousands of annotated news articles. DeepMind also taught itself to play dozens of Atari 2600 video games better than humans, just by looking at the screen and its score, and playing games repeatedly. An AI named Giraffe taught itself how to play chess in a similar manner using a dataset of 175 million chess positions, attaining International Master level status in just 72 hours by repeatedly playing itself. In 2015, an AI even passed a visual Turing test by learning to learn in a way that enabled it to be shown an unknown character in a fictional alphabet, then instantly reproduce that letter in a way that was entirely indistinguishable from a human given the same task. These are all major milestones in AI.
However, despite all these milestones, when experts were asked to estimate when a computer would defeat a prominent Go player, the answer, even just months prior to Google’s announcement of AlphaGo’s victory, was essentially, “Maybe in another ten years.” A decade was considered a fair guess because Go is a game so complex that I’ll just let Ken Jennings of Jeopardy fame, another former champion defeated by AI, describe it:
Go is famously a more complex game than chess, with its larger board, longer games, and many more pieces. Google’s DeepMind artificial intelligence team likes to say that there are more possible Go boards than atoms in the known universe, but that vastly understates the computational problem. There are about 10¹⁷⁰ board positions in Go, and only 10⁸⁰ atoms in the universe. That means that if there were as many parallel universes as there are atoms in our universe (!), then the total number of atoms in all those universes combined would be close to the possibilities on a single Go board.
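A quick back-of-envelope check of the figures in the quote, working in powers of ten (both counts are rough, commonly cited approximations):

```python
# Back-of-envelope check of the Go numbers, tracking only the exponents.
go_positions_exp = 170  # ~10^170 possible Go board positions
atoms_exp = 80          # ~10^80 atoms in the observable universe

# One whole universe of atoms for every atom in ours:
universes_exp = atoms_exp
# Multiplying powers of ten adds the exponents:
atoms_in_all_universes_exp = universes_exp + atoms_exp

print(atoms_in_all_universes_exp)                     # 160
print(go_positions_exp - atoms_in_all_universes_exp)  # 10
```

So even atom-for-atom across 10⁸⁰ parallel universes, the combined count of 10¹⁶⁰ atoms still falls ten orders of magnitude short of the 10¹⁷⁰ board positions, which is what “close to” means on a scale like this.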
Such confounding complexity makes impossible any brute-force approach to scan every possible move to determine the next best move. But deep neural networks get around that barrier in the same way our own minds do, by learning to estimate what feels like the best move. We do this through observation and practice, and so did AlphaGo, by analyzing millions of professional games and playing itself millions of times. So the answer to when the game of Go would fall to machines wasn’t even close to ten years. The correct answer ended up being, “Any time now.”
Any time now. That’s the new go-to response in the 21st century for any question involving something new machines can do better than humans, and we need to try to wrap our heads around it.
We need to recognize what it means for exponential technological change to be entering the labor market space for nonroutine jobs for the first time ever. Machines that can learn mean nothing humans do as a job is uniquely safe anymore. From hamburgers to healthcare, machines can be created to successfully perform such tasks with no need or less need for humans, and at lower costs than humans.
Amelia is just one AI out there currently being beta-tested in companies right now. Created by IPsoft over the past 16 years, she’s learned how to perform the work of call center employees. She can learn in seconds what takes us months, and she can do it in 20 languages. Because she’s able to learn, she’s able to do more over time. In one company putting her through the paces, she successfully handled one of every ten calls in the first week, and by the end of the second month, she could resolve six of ten calls. Because of this, it’s been estimated that she can put 250 million people out of a job, worldwide.
Viv is an AI coming soon from the creators of Siri who’ll be our own personal assistant. She’ll perform tasks online for us, and even function as a Facebook News Feed on steroids by suggesting we consume the media she’ll know we’ll like best. In doing all of this for us, we’ll see far fewer ads, and that means the entire advertising industry — that industry the entire Internet is built upon — stands to be hugely disrupted.
A world with Amelia and Viv — and the countless other AI counterparts coming online soon — in combination with robots like Boston Dynamics’ next generation Atlas portends, is a world where machines can do all four types of jobs and that means serious societal reconsiderations. If a machine can do a job instead of a human, should any human be forced at the threat of destitution to perform that job? Should income itself remain coupled to employment, such that having a job is the only way to obtain income, when jobs for many are entirely unobtainable? If machines are performing an increasing percentage of our jobs for us, and not getting paid to do them, where does that money go instead? And what does it no longer buy? Is it even possible that many of the jobs we’re creating don’t need to exist at all, and only do because of the incomes they provide? These are questions we need to start asking, and fast.
Decoupling Income From Work
Fortunately, people are beginning to ask these questions, and there’s an answer that’s building up momentum. The idea is to put machines to work for us, but empower ourselves to seek out the forms of remaining work we as humans find most valuable, by simply providing everyone a monthly paycheck independent of work. This paycheck would be granted to all citizens unconditionally, and its name is universal basic income. By adopting UBI, aside from immunizing against the negative effects of automation, we’d also be decreasing the risks inherent in entrepreneurship, and the sizes of bureaucracies necessary to boost incomes. It’s for these reasons, it has cross-partisan support, and is even now in the beginning stages of possible implementation in countries like Switzerland, Finland, the Netherlands, and Canada.
The future is a place of accelerating change. It seems unwise to continue looking at the future as if it were the past, where just because new jobs have historically appeared, they always will. The WEF started 2016 off by estimating the creation by 2020 of 2 million new jobs alongside the elimination of 7 million. That’s a net loss of 5 million jobs, not a net gain. In a frequently cited paper, an Oxford study estimated the automation of about half of all existing jobs by 2033. Meanwhile self-driving vehicles, again thanks to machine learning, have the capability of drastically impacting all economies — especially the US economy, as I wrote last year about automating truck driving — by eliminating millions of jobs within a short span of time.
And now even the White House, in a stunning report to Congress, has put the probability at 83 percent that a worker making less than $20 an hour in 2010 will eventually lose their job to a machine. Even workers making as much as $40 an hour face odds of 31 percent. To ignore odds like these is tantamount to our now laughable “duck and cover” strategies for avoiding nuclear blasts during the Cold War.
All of this is why it’s those most knowledgeable in the AI field who are now actively sounding the alarm for basic income. During a panel discussion at the end of 2015 at Singularity University, prominent data scientist Jeremy Howard asked “Do you want half of people to starve because they literally can’t add economic value, or not?” before going on to suggest, ”If the answer is not, then the smartest way to distribute the wealth is by implementing a universal basic income.”
AI pioneer Chris Eliasmith, director of the Centre for Theoretical Neuroscience, warned about the immediate impacts of AI on society in an interview with Futurism, “AI is already having a big impact on our economies… My suspicion is that more countries will have to follow Finland’s lead in exploring basic income guarantees for people.”
Moshe Vardi expressed the same sentiment after speaking at the 2016 annual meeting of the American Association for the Advancement of Science about the emergence of intelligent machines, “we need to rethink the very basic structure of our economic system… we may have to consider instituting a basic income guarantee.”
Even Baidu’s chief scientist and founder of Google’s “Google Brain” deep learning project, Andrew Ng, during an onstage interview at this year’s Deep Learning Summit, expressed the shared notion that basic income must be “seriously considered” by governments, citing “a high chance that AI will create massive labor displacement.”
When those building the tools begin warning about the implications of their use, shouldn’t those wishing to use those tools listen with the utmost attention, especially when it’s the very livelihoods of millions of people at stake? If not then, what about when Nobel prize winning economists begin agreeing with them in increasing numbers?
No nation is yet ready for the changes ahead. High labor force non-participation leads to social instability, and a lack of consumers within consumer economies leads to economic instability. So let’s ask ourselves, what’s the purpose of the technologies we’re creating? What’s the purpose of a car that can drive for us, or artificial intelligence that can shoulder 60% of our workload? Is it to allow us to work more hours for even less pay? Or is it to enable us to choose how we work, and to decline any pay/hours we deem insufficient because we’re already earning the incomes that machines aren’t?
What’s the big lesson to learn, in a century when machines can learn?
I offer it’s that jobs are for machines, and life is for people.
“Looking to the future, the next big step will be for the very concept of the “device” to fade away. Over time, the computer itself—whatever its form factor—will be an intelligent assistant helping you through your day. We will move from mobile first to an AI first world.” — Sundar Pichai, CEO Google
•A global oil and gas company has trained software robots to help provide a prompt and more efficient way of answering invoicing queries from its suppliers.
•A large US-based media services organization taught software robots how to support first line agents in order to raise the bar for customer service.
Software agents, or robotic process automation (RPA), are becoming a mainstream topic at leading corporations. I have seen a massive uptick in corporate strategy work in this area as C-suite execs look at new ways to do more with less.
Software robots – conversational-AI products like Apple Siri, Microsoft Cortana, IBM Watson, Google Home, Alexa, drones and driverless cars – are now mainstream. What most people are not aware of is the rapidly advancing area of enterprise robots that create a “virtual FTE workforce” and transform business processes by enabling automation of manual, rules-based, back-office administrative processes.
This emerging process re-engineering area is called Robotic process automation (RPA).
Machine learning (ML) and graph processing are becoming foundations for the next wave of advanced analytics use cases. Speech recognition, image processing and language translation have gone from demo tech to everyday use in part because of machine learning. A machine learning model, e.g. in a driverless car, teaches itself to discover relevant things like a stop sign partially obscured by snow.
The market opportunity of artificial intelligence has been expanding rapidly, with analyst firm IDC predicting that the worldwide content analytics, discovery and cognitive systems software market will grow from US$4.5 billion in 2014 to US$9.2 billion in 2019, with others citing these systems as catalyst to have a US$5 trillion – US$7 trillion potential economic impact by 2025.
RPA – What?
“Robotic automation refers to a style of automation where a machine, or computer, mimics a human’s action in completing rules based tasks.” – Blue Prism
RPA is the application of analytics, machine learning and rules based software to capture and interpret existing data input streams for processing a transaction, manipulating data, triggering responses and driving business process automation around enterprise applications (ERP, HRMS, SCM, SFA, CRM etc.).
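As a concrete illustration of the “rules-based” part of that definition, here is a minimal sketch of an RPA-style decision core for invoice handling: capture an input record, interpret it against fixed rules, and trigger a response. The field names, thresholds and actions are illustrative assumptions, not any vendor’s API:

```python
# A toy rules engine in the spirit of RPA: interpret an input transaction
# against ordered rules and trigger the first matching response.
# Field names, thresholds, and action names are made-up assumptions.

RULES = [
    # (condition, action) pairs, checked in order; first match wins
    (lambda inv: inv["amount"] != inv["po_amount"], "route_to_human_review"),
    (lambda inv: inv["amount"] > 10_000,            "require_manager_approval"),
    (lambda inv: True,                              "auto_approve_and_post"),
]

def process_invoice(invoice):
    """Return the action triggered by the first rule the invoice matches."""
    for condition, action in RULES:
        if condition(invoice):
            return action

print(process_invoice({"amount": 500,    "po_amount": 500}))     # auto_approve_and_post
print(process_invoice({"amount": 500,    "po_amount": 480}))     # route_to_human_review
print(process_invoice({"amount": 25_000, "po_amount": 25_000}))  # require_manager_approval
```

Note that the bot does the high-volume matching and only the exceptions reach a person, which is exactly the division of labour the surrounding text describes.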
RPA is not a question of “if” anymore but a question of “when.” This is truly the next frontier of business process automation, enterprise cognitive computing, predictive analytics and machine learning. To make a prediction, you need an equation and parameters that might be involved.
Industrial robots are remaking blue-collar factory and warehouse automation by delivering higher production rates and improved quality. RPA, spanning simple robots and complex learning robots, is revolutionizing white-collar business processes (e.g., customer service), workflow processes (e.g., order to cash), IT support processes (e.g., auditing and monitoring), and back-office work (e.g., data entry).
I strongly believe that as cognitive computing slowly but surely takes off, RPA is going to impact process outsourcers (e.g., call center agents) and labor intensive white collar jobs (e.g., compliance monitoring) in a big way over the next decade. Any company that uses labor on a large scale for general knowledge process work, where workers are performing high-volume, highly transactional process functions, will save money and time with robotic process automation software.