7 Robotic Process Automation Pitfalls & How to Avoid Them


To implement RPA “the smart way” and get the most out of it, you should be aware of the robotic process automation pitfalls from the very beginning.

We must all acknowledge and take seriously the fallibility of our endeavours, and, consequently, we should not allow ourselves to indulge in unrealistic expectations. The simple truth is that sometimes projects fail, for many reasons. According to IBM Systems Magazine, up to 25% of technological projects fail outright, while up to half of them require extensive revisions by the time they are set to go.

Bernard Marr, writing in Forbes, adds that more than half of technology failures are in fact due to poor management, and only 3% are caused by technical problems. Difficulties may also arise from not choosing the right processes to automate. Here are 8 questions to ask that should ease decision making in this regard.

So let us now delve a little deeper into potential robotic process automation pitfalls and corresponding means to avoid them in the course of implementing RPA.

Forethought is definitely needed for good results and a successful RPA implementation. Of course, you should first spell out what ‘successful’ means; but for now, let us tackle the question of what could go wrong during the implementation of your software robots. Here is a list of 7 aspects that ought to be considered and/or avoided if you want to stay safe from robotic process automation pitfalls.

1. Not choosing the right processes to automate in the beginning

This refers to picking the process that is most appropriate for an effective start to automation in your business. By no means should you neglect a thorough, exhaustive and, of course, realistic evaluation of the tasks that may be passed on to robots. You do not want to start by automating the wrong things, which results in difficult-to-manage workflows.

A piece of advice courtesy of Cem Dilmegani, CEO at appliedAI, is that you should consider features such as process complexity and business impact. Briefly put, you should perform a cost-benefit analysis of automating the candidate processes, based on what you consider to be your top goals.
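As a rough, purely illustrative sketch of what such a cost-benefit scoring exercise might look like in practice (the criteria, weights and candidate processes below are hypothetical, not taken from Dilmegani or any particular RPA methodology):

```python
# Hypothetical weighted scoring of candidate processes for automation.
# Criteria, weights and scores are illustrative only; adapt them to your own top goals.
WEIGHTS = {"volume": 0.3, "rule_based": 0.3, "stability": 0.2, "error_cost": 0.2}

candidates = {
    "invoice processing":  {"volume": 9, "rule_based": 8, "stability": 7, "error_cost": 8},
    "customer onboarding": {"volume": 6, "rule_based": 5, "stability": 4, "error_cost": 7},
    "report generation":   {"volume": 7, "rule_based": 9, "stability": 8, "error_cost": 4},
}

def score(process):
    """Weighted sum of 0-10 scores: higher means a better automation candidate."""
    return sum(WEIGHTS[criterion] * value for criterion, value in process.items())

# Rank the candidates, most automation-friendly first.
for name, criteria in sorted(candidates.items(), key=lambda kv: score(kv[1]), reverse=True):
    print(f"{name}: {score(criteria):.1f}")
```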

2. Trying to implement robotic process automation on your own

You probably know this by now, otherwise you wouldn’t be reading this: RPA provides highly technical ways to carry out faster and more efficiently the dull jobs that would cause your employees unnecessary distress, boredom and fatigue. Precisely because of the high level of technicality, it is not at all advisable that you attempt to carry out the implementation process on your own.

Division of labour is with us for good reasons, so you must not forget to delegate the responsibility of implementation to the specialists who can best handle it. Tony Warren, executive vice president, head of strategy and solutions management at FIS, mentions things like “technical maintenance, operational monitoring and the appropriate change management procedures” among the RPA features that call for the right level of expertise, which specialist implementation navigators possess.

3. Not setting clear objectives for your automation strategy

This is a more general rule of thumb: it is vital that your business objectives, as well as the role that you expect RPA to play in getting there, are crystal clear.

What do you need RPA for?

Relatedly, which software provider is likely to do the best job for what you need?

While uncertainties in these respects are likely to be burdensome, definite answers to such questions will facilitate a smooth transition to delegating the tedious, repetitive tasks in your business to software robots.

4. Not having a “bird’s eye view” over the implementation process

As you probably know by now, RPA implementation is a complex enterprise. In fact, this comes as no surprise for an activity meant to take such deep effect on your business. So in order to achieve your goals, you need to ensure proper executive control.

This requires a group or an individual who can watch over the whole process from the top, so to speak. Some call this essential aspect “operational oversight”, others “governance of accretion” or simply “governance”, while others emphasize how important it is to include in the responsible team not only domain-specific specialists but also someone to take on the executive role of a “central process unit”. In the long run, this can take the form of a robotic process automation centre of excellence that ensures strategic maintenance of the system.

5. Not ensuring the scalability potential of your software robots

Scalability is a hidden gem that has been largely responsible for the wider adoption of RPA. This means that you really should not allow anything to stand in the way of scalable bots that can ensure consistent, across-the-board use of RPA in the individual departments of your business.

6. Relying solely on the IT department

You certainly do not want to make the smooth running of your automated processes dependent on the IT department alone. Of course, it goes without saying that IT assistance is necessary for automation, but the idea is that you should not overdo it.

The bottom line is something along the lines of the phrase ‘render unto IT the things that are intrinsically IT-related (e.g. automation codes), and unto other departments the things that are better dealt with by other departments’. As Schultz puts it, “finance cannot depend on IT for RPA; it needs to be owned by the business side.”

7. Not testing your software robots thoroughly

Even if you may not like the phrase ‘haste makes waste’ after having heard it one million times, you have to admit there is some truth to it. And since you do not want to waste the effort, time, money and hope that you invested in RPA, you also do not want to stumble at the threshold.

As our own Daniel Pullen puts it, you need to test processes in production prior to full go-live to ensure like-for-like behaviour between Dev and Production. This includes ensuring the applications are the same version, testing applications under normal and peak loads throughout the day, and checking that servers and applications in a server farm all behave identically (in both operation and speed).
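As a minimal sketch of the kind of Dev-versus-Production parity check this implies (the health-check endpoints and the "version" field below are hypothetical placeholders, not part of any specific RPA product):

```python
# Hypothetical like-for-like check between Dev and Production before go-live.
# The URLs and the "version" field are placeholders for your own environments.
import time
import requests

ENVIRONMENTS = {
    "dev": "https://dev.example.com/api/health",
    "prod": "https://prod.example.com/api/health",
}

def probe(url):
    """Return (application version, response time in seconds) for one environment."""
    start = time.time()
    response = requests.get(url, timeout=10)
    return response.json().get("version"), time.time() - start

results = {env: probe(url) for env, url in ENVIRONMENTS.items()}
dev_version, dev_latency = results["dev"]
prod_version, prod_latency = results["prod"]

# The same application version should be deployed in both environments.
assert dev_version == prod_version, "application versions differ between Dev and Production"

# Large speed differences can break timing-sensitive robot steps, so flag them.
if abs(dev_latency - prod_latency) > 2.0:
    print("Warning: response times differ significantly between Dev and Production")
```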

Conclusion

We believe that you are now better prepared to embark on a successful RPA journey. Anticipating failure is not meant to alarm you, but rather to motivate you to take a realistic view of what might happen so that you can prevent the pitfalls.

Anticipating and planning proactively should take you a step closer to crossing the finish line in style, although the word ‘finish’ is not a perfect fit here, since what you aim for with robotic process automation is the long-term, sustainable development of your enterprise. As UiPath puts it, with “a comprehensive understanding of your company’s automation needs and the value proposition RPA provides, you can ensure a successful RPA implementation scheme that is both cost-effective and timely”.

Such extensive understanding can lead you to adopt the best practices for robotic process automation implementation. Selecting processes wisely, having a clear understanding of the required human resources, and relying on an ‘RPA sponsor’ are some of those practices, on which you can read more here.

Source: cigen.com.au


International Chart Day

Congressman Mark Takano, from California, has announced the first International Chart Day in collaboration with Tumblr and the Society for News Design.

Takano has also introduced a resolution in the House officially declaring April 26 as International Chart Day. According to his press release, he will “deliver a speech on the House floor about the importance and history of charts. Other members of Congress on both sides of the aisle will be encouraged to participate.”

 

Source: thefunctionalart.com

Stanford is Using Machine Learning on Satellite Images to Predict Poverty

•Stanford’s machine learning model can predict poverty
•It uses satellite imagery to gather data and runs it through the algorithm
•Night-time images are cross-checked with daytime images to predict the economic status of the region
•It’s open source; code is available on GitHub for both R and Python


Eliminating poverty is the number one goal of most countries around the world. However, the process of going around rural areas and manually tracking census data is time-consuming, labor-intensive and expensive.

Considering that, a group of researchers at Stanford have pioneered an approach that combines machine learning with satellite images to make predicting poverty quicker, easier and less expensive.

How does the algorithm work?

Using this machine learning algorithm, the model is able to predict the per capita consumption expenditure of a particular location when provided with its satellite images. The algorithm runs through millions of images of rural regions throughout the world. It then compares the presence of light in a region during the day and at night to predict its economic activity. This approach is called transfer learning.

The algorithm cross-references the images captured during the night with the daytime images to gauge the infrastructure there. In general, a brightly lit area is powered by electricity and is likely to be better off than the alternative.

Before making its predictions, the algorithm cross-checks its results with actual survey data in order to improve its accuracy.
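To make the final step concrete, here is a minimal, purely illustrative sketch of that last regression stage (this is not the actual Stanford code, which is linked further down; the image features and consumption values below are random placeholders):

```python
# Sketch of the final stage only: image-derived features -> per capita consumption.
# In the real pipeline the features come from a CNN trained to predict night-time
# light intensity from daytime imagery (the transfer-learning step); here random
# placeholders stand in for both the features and the survey responses.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_clusters, n_features = 500, 4096                        # survey clusters x CNN features
X = rng.normal(size=(n_clusters, n_features))             # placeholder image features
y = rng.lognormal(mean=0.5, sigma=0.4, size=n_clusters)   # placeholder consumption data

model = Ridge(alpha=10.0)
# Cross-validated R^2 against the (placeholder) survey data, evaluated fold by fold.
print(cross_val_score(model, X, np.log(y), cv=5, scoring="r2").mean())
```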

So far, this study was performed for regions in 5 countries – Nigeria, Uganda, Tanzania, Rwanda and Malawi. Check out a small video on this study below:

Our take on this

Anything that helps eliminate poverty is good in our books, and when it comes to machine learning doing the work, even better. Stanford claims that its model predicts poverty almost as well as the manually collected data, which makes it a feasible option for survey administrators.

It’s also an open-source project and they have made their code available on GitHub here. It’s available in both R and Python, so anyone with an interest in the subject can try it on their own systems.

Apart from Stanford, researchers at the University of Buffalo are also using machine learning and satellite images to predict poverty. Their approach differs from Stanford’s as they have added cell phone data to their model. The Pentagon is also offering $100,000 to anyone who can read the data from satellite images in the same way that Stanford’s model does.

Source: analyticsvidhya.com

We Need To Invite More Disruption and Messiness Into Our Lives — Here’s Why:

In 1993, advertising legend Jay Chiat announced his radical plans for the office of the future. His agency, Chiat/Day, was already a paragon of creativity — its legendary campaigns included Apple’s “1984” and “Think Different” — and its new LA office, designed by Frank Gehry, was to be its monument.

The space was engineered to be playful, with decorations that included pieces from fairground rides and a four-story set of binoculars. Chiat also banished the traditional office cubicles and desks in favor of public spaces where executives could meet impromptu and brainstorm ideas.

It was a disaster. As Tim Harford explains in his book Messy, our desire for engineered spaces — even creative ones — can kill productivity and innovation. At the same time, disorder and disruption can help us to do our very best work. While this defies conventional wisdom, decades of research suggests that a messy desk may very well be a mark of genius.


The Tidiness Temptation

Kyocera, the Japanese technology giant, strictly adheres to the 5S workplace philosophy (Sort, Set in order, Shine, Standardize and Sustain). Employees are discouraged from cluttering up their desks or hanging personal items on the walls. Inspectors routinely patrol to enforce compliance.

This type of uniformity may be great for the factory floor — some believe 5S was originally derived from Henry Ford’s CANDO system (Cleaning up, Arranging, Neatness, Discipline and Ongoing improvement) — where efficiency is the primary goal, but there is ample evidence that it may seriously harm productivity when creativity and problem solving are required.

In 2010, Alexander Haslam and Craig Knight, both researchers at the University of Exeter, set out to understand how office environments affect productivity. They set up four office layouts and asked subjects to perform simple tasks. They found that when workers were able to clutter up the space with personal knickknacks they got 30% more done than in the 5S environment.

Yet the issue goes far beyond a bit of clutter. Harford points to a number of examples — from musicians to software engineers to daily commuters — that suggest we often produce our best work amid some kind of disruption. As it turns out, being thrown off our game can actually bring it to a whole new level.

Why Messy Works

To illustrate why disorder can lead to better outcomes, Harford offers a simple hill-climbing analogy. Imagine you had to design an algorithm to find the highest point on earth. The simplest way to do it would be to pick a point at random and simply move to the next highest point. With each move, you would go higher and higher until you reached a peak.

Your performance on the task, however, would greatly depend on where you started. You might do better selecting a number of different points, but here again, you would basically be relying on luck. You’d be just as likely to end up in the lowlands of Holland as you would to find yourself in the Himalayas or the Andes.

The best approach would be to combine the two strategies by picking a limited set of random points and then hill climbing. That would allow you to avoid getting stuck in lowlands and still benefit from steady improvement. It wouldn’t guarantee that you would end up on the top of Mount Everest, but it would outperform either strategy alone.
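A toy sketch of the two strategies makes the point concrete (the one-dimensional “landscape” below is made up purely for illustration):

```python
# Toy illustration of the hill-climbing analogy on a bumpy one-dimensional landscape.
import math
import random

def height(x):
    """A landscape with several local peaks and one global one."""
    return math.sin(x) + 0.3 * math.sin(5 * x) - 0.01 * (x - 5) ** 2

def hill_climb(x, step=0.05, iterations=1000):
    """Pure hill climbing: keep moving to the higher neighbouring point."""
    for _ in range(iterations):
        best = max((x - step, x + step), key=height)
        if height(best) <= height(x):
            return x                      # stuck on a (possibly local) peak
        x = best
    return x

random.seed(1)

# Strategy 1: a single random start; the result depends entirely on luck.
single = hill_climb(random.uniform(0, 10))

# Strategy 2 (the hybrid): several random starts, climb from each, keep the best.
best = max((hill_climb(random.uniform(0, 10)) for _ in range(10)), key=height)

print(f"single start: {height(single):.3f}   best of 10 restarts: {height(best):.3f}")
```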

There is evidence that the hybrid strategy produces better results in the real world. In fact, a team of researchers analyzing 17.9 million scientific papers found that the most highly cited work is far more likely to come from a team of experts in one field that borrowed a small piece of insight from another. Injecting a little bit of randomness can work wonders.

The Two Sides Of Diversity

Steve Jobs is renowned for his attention to order and detail. A micromanager of the highest order, he even insisted that the insides of his computers look elegant and streamlined. It was, in part, this meticulous approach that allowed him to make some of the most successful products ever.

Yet when designing workspaces, he did just the opposite. Both Pixar’s office and Apple’s new “spaceship” building feature central atriums where employees are bound to run into people they ordinarily wouldn’t. The legendary Bell Labs was set up with the same idea in mind, almost forcing researchers with widely divergent expertise to cross in the halls.

Once again, there is ample empirical evidence that backs up this idea. A variety of studies going back decades suggest that diverse teams perform better, even when compared with ones that objectively have more ability. Giving yourself more hills to climb increases the chances that you’ll land on a high peak.

However, research also shows that being exposed to diverse perspectives is challenging and often uncomfortable, giving rise to tension and uncertainty. That’s why the best teams often function as part of a larger small world network, with tight-knit groups connected to and interacting with other tight-knit groups, combining stability with diversity.

Sharing Purpose

Clearly, the most effective work environments have a healthy mix of order and disorder. The strict conformity of 5S workplaces can feel oppressive, but so can the imposed craziness of the Chiat/Day offices. In both cases, our own personal sense of autonomy is violated. More subtle prodding, such as the run-ins catalyzed by Pixar’s atrium, seems to get better results.

Still, every workplace has its own tribes and cliques. Marketing teams clash with engineering and sales teams, while everyone chafes under the watchful gaze of finance and admin. We all have an instinctive need to form our own cohesive groups and to protect them from the incursions of outsiders.

However, those tensions can be overcome if diverse and competing tribes share a greater purpose. In a classic study done back in the 1950s with boys at a summer camp, it was shown that intense conflict would break out when teams were given competing goals, but that tension gave way to cooperation when they were given a common objective.

Many managers today go to great efforts to design innovative workplaces and they take a variety of different approaches. Yet what seems most important isn’t the actual specifics of the architecture, but whether it’s designed to empower or to dictate. If we feel we have power over our environment, we tend to be much more productive and collaborative.

Of course, when everyone gets to make their own decisions things can get a little messy, but that’s what often produces better results.

Source: DigitalTonto.com

Top 10 Technology Trends for 2018: IEEE Computer Society Predicts the Future of Tech

Tech experts at the IEEE Computer Society (IEEE-CS) annually predict the “Future of Tech” and have revealed what they believe will be the biggest trends in technology for 2018. The forecast by the world’s premier organization of computing professionals is among its most anticipated announcements.

“The Computer Society’s predictions, based on a deep-dive analysis by a team of leading technology experts, identify top-trending technologies that hold extensive disruptive potential for 2018,” said Jean-Luc Gaudiot, IEEE Computer Society President. “The vast computing community depends on the Computer Society as the provider for relevant technology news and information, and our predictions directly align with our commitment to keeping our community well-informed and prepared for the changing technological landscape of the future.”

Dejan Milojicic, Hewlett Packard Enterprise Distinguished Technologist and IEEE Computer Society past president, said “The following year we will witness some of the most intriguing dilemmas in the future of technology. Will deep learning and AI indeed expand deployment domains or remain within the realms of neural networks? Will cryptocurrency technologies keep their extraordinary evolution or experience a bubble burst? Will new computing and memory technologies finally disrupt the extended life of Moore’s law? We’ve made our bets on our 2018 predictions.”

The top 10 technology trends predicted to reach adoption in 2018 are:

1. Deep learning (DL). Machine learning (ML) and more specifically DL are already on the cusp of revolution. They are widely adopted in datacenters (Amazon making graphical processing units [GPUs] available for DL, Google running DL on tensor processing units [TPUs], Microsoft using field programmable gate arrays [FPGAs], etc.), and DL is being explored at the edge of the network to reduce the amount of data propagated back to datacenters. Applications such as image, video, and audio recognition are already being deployed for a variety of verticals. DL heavily depends on accelerators (see #9 below) and is used for a variety of assistive functions (#s 6, 7, and 10).

2. Digital currencies. Bitcoin, Ethereum, and newcomers Litecoin, Dash, and Ripple have become commonly traded currencies. They will continue to become a more widely adopted means of trading. This will trigger improved cybersecurity (see #10) because the stakes will be ever higher as their values rise. In addition, digital currencies will continue to enable and be enabled by other technologies, such as storage (see #3), cloud computing (see B in the list of already adopted technologies), the Internet of Things (IoT), edge computing, and more.

3. Blockchain. The use of Bitcoin and the revitalization of peer-to-peer computing have been essential for the adoption of blockchain technology in a broader sense. We predict increased expansion of companies delivering blockchain products and even IT heavyweights entering the market and consolidating the products.

4. Industrial IoT. Empowered by DL at the edge, industrial IoT continues to be the most widely adopted use case for edge computing. It is driven by real needs and requirements. We anticipate that it will continue to be adopted with a broader set of technical offerings enabled by DL, as well as other uses of IoT (see C and E).

5. Robotics. Even though robotics research has been performed for many decades, robotics adoption has not flourished. However, the past few years have seen increased market availability of consumer robots, as well as more sophisticated military and industrial robots. We predict that this will trigger wider adoption of robotics in the medical space for caregiving and other healthcare uses. Combined with DL (#1) and AI (#10), robotics will further advance in 2018. Robotics will also motivate further evolution of ethics (see #8).

6. Assisted transportation. While the promise of fully autonomous vehicles has slowed down due to numerous obstacles, a limited use of automated assistance has continued to grow, such as parking assistance, video recognition, and alerts for leaving the lane or identifying sudden obstacles. We anticipate that vehicle assistance will develop further as automation and ML/DL are deployed in the automotive industry.

7. Assisted reality and virtual reality (AR/VR). Gaming and AR/VR gadgets have grown in adoption in the past year. We anticipate that this trend will grow with modern user interfaces such as 3D projections and movement detection. This will allow for associating individuals with metadata that can be viewed subject to privacy configurations, which will continue to drive international policies for cybersecurity and privacy (see #10).

8. Ethics, laws, and policies for privacy, security, and liability. With the increasing advancement of DL (#1), robotics (#5), technological assistance (#s 6 and 7), and applications of AI (#10), technology has moved beyond society’s ability to control it easily. Mandatory guidance has already been deeply analyzed and rolled out in various aspects of design (see the IEEE standards association document), and it is further being applied to autonomous and intelligent systems and in cybersecurity. But adoption of ethical considerations will speed up in many vertical industries and horizontal technologies.

9. Accelerators and 3D. With the end of power scaling and Moore’s law and the shift to 3D, accelerators are emerging as a way to continue improving hardware performance and energy efficiency and to reduce costs. There are a number of existing technologies (FPGAs and ASICs) and new ones (such as memristor-based DPE) that hold a lot of promise for accelerating application domains (such as matrix multiplication for the use of DL algorithms). We predict wider diversity and broader applicability of accelerators, leading to more widespread use in 2018.

10. Cybersecurity and AI. Cybersecurity is becoming essential to everyday life and business, yet it is increasingly hard to manage. Exploits have become extremely sophisticated and it is hard for IT to keep up. Pure automation no longer suffices and AI is required to enhance data analytics and automated scripts. It is expected that humans will still be in the loop of taking actions; hence, the relationship to ethics (#8). But AI itself is not immune to cyberattacks. We will need to make AI/DL techniques more robust in the presence of adversarial traffic in any application area.

Existing Technologies: We did not include the following technologies in our top 10 list as we assume that they have already experienced broad adoption:
A. Data science
B. “Cloudification”
C. Smart cities
D. Sustainability
E. IoT/edge computing

Source: computer.org

Machine learning and the five vectors of progress

What’s keeping leaders from adopting machine learning? Well, tools are still evolving, practitioners are scarce, and the technology is a bit inscrutable for comfort. But five vectors of progress are making it easier, faster, and cheaper to deploy machine learning and could bring it into the mainstream.


Though nearly every industry is finding applications for machine learning—the artificial intelligence technology that feeds on data to automatically discover patterns and anomalies and make predictions—most companies are not yet taking advantage. However, five vectors of progress are making it easier, faster, and cheaper to deploy machine learning and could eventually help to bring the technology into the mainstream. With barriers to use beginning to fall, every enterprise can begin exploring applications of this transformative technology.

Signals
•Tech vendors claim they can reduce the need for training data by several orders of magnitude, using a technique called transfer learning.
•Specialized chips dramatically accelerate the training of machine learning models; at Microsoft, they cut the time to develop a speech recognition system by 80 percent.
•Researchers at MIT have demonstrated a method of training a neural network that delivered both accurate predictions and the rationales for those predictions.
•Major technology vendors are finding ways to cram powerful machine learning models onto mobile devices.
•New tools aim to automate tasks that occupy up to 80 percent of data scientists’ time.

Use of machine learning faces obstacles

Machine learning is one of the most powerful and versatile information technologies available today. But most companies have not begun to put it to use. One recent survey of 3,100 executives in small, medium, and large companies across 17 countries found that fewer than 10 percent were investing in machine learning.

A number of factors are restraining the adoption of machine learning. Qualified practitioners are in short supply. Tools and frameworks for doing machine learning work are immature and still evolving. It can be difficult, time-consuming, and costly to obtain the large datasets that some machine learning model-development techniques require.

Then there is the black-box problem. Even when machine learning models appear to generate valuable information, many executives seem reluctant to deploy them in production. Why? In part, because their inner workings are inscrutable, and some people are uncomfortable with the idea of running their operations on logic they don’t understand and can’t clearly describe. Others may be constrained by regulations that require businesses to offer explanations for their decisions or to prove that decisions do not discriminate against protected classes of people. In such situations, it’s hard to deploy black-box models, no matter how accurate or useful their outputs.

Progress in five areas can help overcome barriers to adoption

These barriers are beginning to fall. Deloitte has identified five key vectors of progress that should help foster significantly greater adoption of machine learning in the enterprise. Three of these advancements—automation, data reduction, and training acceleration—make machine learning easier, cheaper, and/or faster. The others—model interpretability and local machine learning—open up applications in new areas.

The five vectors of progress, ordered by breadth of application, with the widest first:

Automating data science. Developing machine learning solutions requires skills from the discipline of data science, an often-misunderstood field practiced by specialists in high demand but short supply. Data science is a mix of art and science—and digital grunt work. The reality is that as much as 80 percent of the work on which data scientists spend their time can be fully or partially automated. This work might include:
•Data wrangling—preprocessing and normalizing data, filling in missing values, for instance, or determining whether to interpret the data in a column as a number or a date
•Exploratory data analysis—seeking to understand the broad characteristics of the data to help formulate hypotheses about it
•Feature engineering and selection—selecting the variables in the data that are most likely correlated with what the model is supposed to predict
•Algorithm selection and evaluation—testing potentially thousands of algorithms in order to choose those that produce the most accurate results
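As a small, purely illustrative example of the last of these tasks, an automation layer essentially runs a loop like the following, only across vastly more algorithms, hyperparameters, and feature sets (the toy dataset and the three candidate models below are assumptions for illustration):

```python
# Sketch of automated algorithm selection: cross-validate several candidate models
# on the same data and keep the most accurate one.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

candidates = {
    "logistic regression": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "gradient boosting": GradientBoostingClassifier(random_state=0),
}

# An AutoML tool would sweep thousands of such candidates; the principle is the same.
scores = {name: cross_val_score(model, X, y, cv=5).mean() for name, model in candidates.items()}
best = max(scores, key=scores.get)
print(f"best model: {best} ({scores[best]:.3f} cross-validated accuracy)")
```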

Automating these tasks can make data scientists not only more productive but more effective. For instance, while building customer lifetime value models for guests and hosts, data scientists at Airbnb used an automation platform to test multiple algorithms and design approaches, which they would not have otherwise had the time to do. This enabled them to discover changes they could make to their algorithm that increased its accuracy by more than 5 percent, resulting in a material impact.

A growing number of tools and techniques for data science automation, some offered by established companies and others by venture-backed start-ups, can help reduce the time required to execute a machine learning proof of concept from months to days. And automating data science means augmenting data scientists’ productivity, so even in the face of severe talent shortages, enterprises that employ data science automation technologies should be able to significantly expand their machine learning activities.

Reducing need for training data. Training a machine learning model might require up to millions of data elements. This can be a major barrier: Acquiring and labeling data can be time-consuming and costly. Consider, as an example, a medical diagnosis project that requires MRI images labeled with a diagnosis. It might cost over $30,000 to hire a radiologist to review and label 1,000 images at six images an hour. Privacy and confidentiality concerns can also make it difficult to obtain data to work with.

A number of promising techniques for reducing the amount of training data required for machine learning are emerging. One involves the use of synthetic data, generated algorithmically to mimic the characteristics of the real data. This can work surprisingly well. A Deloitte LLP team tested a tool that made it possible to build an accurate model with only a fifth of the training data previously required, by synthesizing the remaining 80 percent.

Synthetic data can not only make it easier to get training data—it may make it easier for organizations to tap into outside data science talent. A number of organizations have successfully engaged third parties, or used crowdsourcing, to devise machine learning models, posting their datasets online for outside data scientists to work with. But this may not be an option if the datasets are proprietary. Researchers at MIT demonstrated a workaround to this conundrum, using synthetic data: They used a real dataset to create a synthetic alternative that they shared with an external data science community. Data scientists within the community created machine learning models using this synthetic data. In 11 out of 15 tests, the models developed from the synthetic data performed as well as those trained on real data.
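A minimal sketch of the workflow looks like the following; the per-class Gaussian generator is a deliberately crude stand-in, since real synthetic-data tools use far more sophisticated generative models:

```python
# Sketch of the synthetic-data idea: keep only 20% of the real training data,
# fit a simple per-class Gaussian to it, sample synthetic rows to fill the gap,
# and train on the mix. The generator here is a toy stand-in for illustration.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Pretend only 20% of the real training data is available.
X_small, _, y_small, _ = train_test_split(X_train, y_train, train_size=0.2, random_state=0)

rng = np.random.default_rng(0)
synth_X, synth_y = [], []
for label in np.unique(y_small):
    real = X_small[y_small == label]
    mean, std = real.mean(axis=0), real.std(axis=0) + 1e-9
    n_synthetic = 4 * len(real)                     # synthesize the "missing" 80%
    synth_X.append(rng.normal(mean, std, size=(n_synthetic, X.shape[1])))
    synth_y.append(np.full(n_synthetic, label))

X_mix = np.vstack([X_small] + synth_X)
y_mix = np.concatenate([y_small] + synth_y)

model = LogisticRegression(max_iter=5000).fit(X_mix, y_mix)
print(f"accuracy with 20% real + synthetic data: {model.score(X_test, y_test):.3f}")
```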

Another technique that could reduce the need for training data is transfer learning. With this approach, a machine learning model is pre-trained on one dataset as a shortcut to learning a new dataset in a similar domain such as language translation or image recognition. Some vendors offering machine learning tools claim their use of transfer learning can cut the number of training examples that customers need to provide by several orders of magnitude.
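In practice, transfer learning usually means freezing a network pre-trained on a large generic dataset and retraining only a small task-specific head, so far fewer labeled examples are needed. A minimal sketch, assuming PyTorch and torchvision (the article itself names no framework):

```python
# Sketch of transfer learning: reuse an ImageNet-pretrained backbone, freeze it,
# and train only a new classification head on a small custom dataset.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(pretrained=True)   # newer torchvision versions use weights= instead

for param in model.parameters():           # freeze the pre-trained feature extractor
    param.requires_grad = False

num_classes = 5                             # e.g. a small custom image dataset
model.fc = nn.Linear(model.fc.in_features, num_classes)  # new trainable head

# Only the new head's parameters are updated during training.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch (replace with a real DataLoader).
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, num_classes, (8,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```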

Accelerating training. Because of the large volumes of data and complex algorithms involved, the computational process of training a machine learning model can take a long time: hours, days, even weeks to run. Only then can the model be tested and refined. But now, semiconductor and computer manufacturers—both established companies and start-ups—are developing specialized processors such as graphics processing units (GPUs), field-programmable gate arrays (FPGAs), and application-specific integrated circuits (ASICs) to slash the time required to train machine learning models by accelerating the calculations and by speeding the transfer of data within the chip.

These dedicated processors help companies speed up machine learning training and execution multifold, which in turn brings down the associated costs. For instance, a Microsoft research team—in one year, using GPUs—completed a system to recognize conversational speech as capably as humans. Had the team used only CPUs instead, according to one of the researchers, it would have taken five years. Google stated that its own AI chip, the Tensor Processing Unit (TPU), incorporated into a computing system that also includes CPUs and GPUs, provided such a performance boost that it helped avoid the cost of building a dozen extra data centers.
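From the practitioner's point of view, benefiting from such accelerators often requires only moving the model and data onto the device; a minimal PyTorch-style sketch (PyTorch is an assumption here, not something the article specifies):

```python
# Sketch of how training code typically takes advantage of an accelerator:
# the same model and batch are simply moved onto a GPU when one is available.
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Sequential(nn.Linear(1024, 256), nn.ReLU(), nn.Linear(256, 10)).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()

inputs = torch.randn(512, 1024, device=device)        # the batch lives on the accelerator too
targets = torch.randint(0, 10, (512,), device=device)

loss = criterion(model(inputs), targets)               # the heavy matrix math runs on the GPU
loss.backward()
optimizer.step()
```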

Early adopters of these specialized AI chips include major technology vendors and research institutions in data science and machine learning, but adoption is spreading to sectors such as retail, financial services, and telecom. With every major cloud provider—including IBM, Microsoft, Google, and Amazon Web Services—offering GPU cloud computing, accelerated training will become available to data science teams in any organization, making it possible to increase their productivity and multiplying the number of applications enterprises choose to undertake.

Explaining results. Machine learning models often suffer from a critical weakness: Many are black boxes, meaning it is impossible to explain with confidence how they made their decisions. This can make them unsuitable or unpalatable for many applications. Physicians and business leaders, for instance, may not accept a medical diagnosis or investment decision without a credible explanation for the decision. In some cases, regulations mandate such explanations. For example, the US banking industry adheres to SR 11-7, guidance published by the Federal Reserve, which among other things requires that model behavior be explained.

But techniques are emerging that help shine light inside the black box of certain machine learning models, making them more interpretable and accurate. MIT researchers, for instance, have demonstrated a method of training a neural network that delivers both accurate predictions and the rationales for those predictions. Some of these techniques are already appearing in commercial data science products.
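As one simple, widely used interpretability technique, here is permutation importance with scikit-learn, shown as a generic example rather than the specific MIT rationale-extraction method described above:

```python
# Permutation importance: after training, shuffle each feature in turn and measure
# how much accuracy drops, revealing which features the model relies on most.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(data.data, data.target, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Report the five features the model depends on most.
ranked = sorted(zip(data.feature_names, result.importances_mean), key=lambda p: -p[1])
for name, importance in ranked[:5]:
    print(f"{name}: {importance:.3f}")
```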

As it becomes possible to build interpretable machine learning models, companies in highly regulated industries such as financial services, life sciences, and health care will find attractive opportunities to use machine learning. Some of the potential application areas include credit scoring, recommendation engines, customer churn management, fraud detection, and disease diagnosis and treatment.

Deploying locally. The adoption of machine learning will grow along with the ability to deploy the technology where it can improve efficiency and outcomes. Advances in both software and hardware are making it increasingly viable to use the technology on mobile devices and smart sensors. On the software side, technology vendors such as Apple Inc., Facebook, Google, and Microsoft are creating compact machine learning models that require relatively little memory but can still handle tasks such as image recognition and language translation on mobile devices. Microsoft Research Lab’s compression efforts resulted in models that were 10 to 100 times smaller.
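One common software-side step toward such compact on-device models is post-training quantization; the sketch below uses PyTorch's dynamic quantization purely as an illustration, not as the specific compression method any of the vendors above employs:

```python
# Post-training dynamic quantization: store linear-layer weights as 8-bit integers
# instead of 32-bit floats, shrinking the model roughly 4x for on-device use.
import io
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 10))

quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

def size_mb(m):
    """Approximate serialized size of a model's weights in megabytes."""
    buffer = io.BytesIO()
    torch.save(m.state_dict(), buffer)
    return buffer.getbuffer().nbytes / 1e6

print(f"original: {size_mb(model):.2f} MB, quantized: {size_mb(quantized):.2f} MB")
```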

On the hardware end, semiconductor vendors such as Intel, Nvidia, and Qualcomm, as well as Google and Microsoft, have developed or are developing their own power-efficient AI chips to bring machine learning to mobile devices.

The emergence of mobile devices as a machine learning platform is expanding the number of potential applications of the technology and inducing companies to develop applications in areas such as smart homes and cities, autonomous vehicles, wearable technology, and the industrial Internet of Things.

Prepare for the mainstreaming of machine learning

Collectively, the five vectors of machine learning progress can help reduce the friction that is preventing some companies from investing in machine learning. And they can help those already using the technology to intensify their use of it. These advancements can also enable new applications across industries and help overcome the constraints of limited resources including talent, infrastructure, or data to train the models.

Companies should look for opportunities to automate some of the work of their oversubscribed data scientists—and ask consultants how they use data science automation. They should keep an eye on emerging techniques such as data synthesis and transfer learning that could ease the challenge of acquiring training data. They should learn what computing resources optimized for machine learning their cloud providers offer. If they are running workloads in their own data centers, they may want to investigate adding specialized hardware into the mix.

Though interpretability of machine learning is still in its early days, companies contemplating high-value applications may want to explore state-of-the-art techniques for improving interpretability. Finally, organizations considering mobile- or device-based machine learning applications should track the performance benchmarks being reported by makers of next-generation chips so they are ready when on-device deployment becomes feasible.

Machine learning has already shown itself to be a valuable technology in many applications. Progress along the five vectors can help overcome some of the obstacles to mainstream adoption.

Source: Deloitte