Top 10 Technology Trends for 2018: IEEE Computer Society Predicts the Future of Tech

Tech experts at the IEEE Computer Society (IEEE-CS) annually predict the “Future of Tech” and have revealed what they believe will be the biggest trends in technology for 2018. The forecast by the world’s premier organization of computing professionals is among its most anticipated announcements.

“The Computer Society’s predictions, based on a deep-dive analysis by a team of leading technology experts, identify top-trending technologies that hold extensive disruptive potential for 2018,” said Jean-Luc Gaudiot, IEEE Computer Society President. “The vast computing community depends on the Computer Society as the provider for relevant technology news and information, and our predictions directly align with our commitment to keeping our community well-informed and prepared for the changing technological landscape of the future.”

Dejan Milojicic, Hewlett Packard Enterprise Distinguished Technologist and IEEE Computer Society past president, said, “In the coming year we will witness some of the most intriguing dilemmas in the future of technology. Will deep learning and AI indeed expand deployment domains or remain within the realms of neural networks? Will cryptocurrency technologies continue their extraordinary evolution or experience a bubble burst? Will new computing and memory technologies finally disrupt the extended life of Moore’s law? We’ve made our bets on our 2018 predictions.”

The top 10 technology trends predicted to reach adoption in 2018 are:

1. Deep learning (DL). Machine learning (ML), and more specifically DL, is already on the cusp of a revolution. It is widely adopted in datacenters (Amazon making graphics processing units [GPUs] available for DL, Google running DL on tensor processing units [TPUs], Microsoft using field programmable gate arrays [FPGAs], etc.), and DL is being explored at the edge of the network to reduce the amount of data propagated back to datacenters. Applications such as image, video, and audio recognition are already being deployed for a variety of verticals. DL heavily depends on accelerators (see #9 below) and is used for a variety of assistive functions (#s 6, 7, and 10).

2. Digital currencies. Bitcoin, Ethereum, and newcomers Litecoin, Dash, and Ripple have become commonly traded currencies. They will continue to become a more widely adopted means of trading. This will trigger improved cybersecurity (see #10) because the stakes will be ever higher as their values rise. In addition, digital currencies will continue to enable and be enabled by other technologies, such as storage (see #3), cloud computing (see B in the list of already adopted technologies), the Internet of Things (IoT), edge computing, and more.

3. Blockchain. The use of Bitcoin and the revitalization of peer-to-peer computing have been essential for the adoption of blockchain technology in a broader sense. We predict increased expansion of companies delivering blockchain products and even IT heavyweights entering the market and consolidating the products.

4. Industrial IoT. Empowered by DL at the edge, industrial IoT continues to be the most widely adopted use case for edge computing. It is driven by real needs and requirements. We anticipate that it will continue to be adopted with a broader set of technical offerings enabled by DL, as well as other uses of IoT (see C and E).

5. Robotics. Even though robotics research has been performed for many decades, robotics adoption has not flourished. However, the past few years have seen increased market availability of consumer robots, as well as more sophisticated military and industrial robots. We predict that this will trigger wider adoption of robotics in the medical space for caregiving and other healthcare uses. Combined with DL (#1) and AI (#10), robotics will further advance in 2018. Robotics will also motivate further evolution of ethics (see #8).

6. Assisted transportation. While progress toward fully autonomous vehicles has slowed due to numerous obstacles, limited forms of automated assistance have continued to grow, such as parking assistance, video recognition, lane-departure alerts, and warnings about sudden obstacles. We anticipate that vehicle assistance will develop further as automation and ML/DL are deployed in the automotive industry.

7. Assisted reality and virtual reality (AR/VR). Gaming and AR/VR gadgets have grown in adoption in the past year. We anticipate that this trend will grow with modern user interfaces such as 3D projections and movement detection. This will allow for associating individuals with metadata that can be viewed subject to privacy configurations, which will continue to drive international policies for cybersecurity and privacy (see #10).

8. Ethics, laws, and policies for privacy, security, and liability. With the increasing advancement of DL (#1), robotics (#5), technological assistance (#s 6 and 7), and applications of AI (#10), technology has moved beyond society’s ability to control it easily. Mandatory guidance has already been deeply analyzed and rolled out in various aspects of design (see the IEEE Standards Association document), and it is further being applied to autonomous and intelligent systems and in cybersecurity. Adoption of ethical considerations will speed up in many vertical industries and horizontal technologies.

9. Accelerators and 3D. With the end of power scaling and Moore’s law and the shift to 3D, accelerators are emerging as a way to continue improving hardware performance and energy efficiency and to reduce costs. There are a number of existing technologies (FPGAs and ASICs) and new ones (such as memristor-based dot product engines) that hold a lot of promise for accelerating specific application domains, such as the matrix multiplications at the heart of DL algorithms (a minimal sketch of this operation follows the list below). We predict wider diversity and broader applicability of accelerators, leading to more widespread use in 2018.

10. Cybersecurity and AI. Cybersecurity is becoming essential to everyday life and business, yet it is increasingly hard to manage. Exploits have become extremely sophisticated and it is hard for IT to keep up. Pure automation no longer suffices and AI is required to enhance data analytics and automated scripts. It is expected that humans will still be in the loop of taking actions; hence, the relationship to ethics (#8). But AI itself is not immune to cyberattacks. We will need to make AI/DL techniques more robust in the presence of adversarial traffic in any application area.
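
To make the accelerator discussion in #9 concrete, here is a minimal NumPy sketch (our own illustration, not part of the IEEE forecast) of a single dense layer: its cost is dominated by one matrix multiplication, which is exactly the operation GPUs, TPUs, FPGAs, and dot product engines are built to speed up. Shapes and values are arbitrary.

```python
# Minimal sketch: a dense (fully connected) layer is dominated by one matrix multiply.
# Illustrative only; shapes and values are arbitrary.
import numpy as np

def dense_layer(x, weights, bias):
    """Forward pass of one dense layer: relu(x @ W + b)."""
    z = x @ weights + bias          # the matrix multiplication that accelerators target
    return np.maximum(z, 0.0)       # ReLU nonlinearity

rng = np.random.default_rng(0)
batch = rng.normal(size=(32, 784))          # e.g. 32 flattened 28x28 images
w = rng.normal(size=(784, 128)) * 0.01      # layer weights
b = np.zeros(128)

activations = dense_layer(batch, w, b)
print(activations.shape)  # (32, 128)
```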

Existing Technologies: We did not include the following technologies in our top 10 list as we assume that they have already experienced broad adoption:
A. Data science
B. “Cloudification”
C. Smart cities
D. Sustainability
E. IoT/edge computing

Source: computer.org

Machine learning and the five vectors of progress

What’s keeping leaders from adopting machine learning? Tools are still evolving, practitioners are scarce, and the technology is still a bit too inscrutable for comfort. But five vectors of progress are making it easier, faster, and cheaper to deploy machine learning and could bring it into the mainstream.

Though nearly every industry is finding applications for machine learning—the artificial intelligence technology that feeds on data to automatically discover patterns and anomalies and make predictions—most companies are not yet taking advantage. However, five vectors of progress are making it easier, faster, and cheaper to deploy machine learning and could eventually help to bring the technology into the mainstream. With barriers to use beginning to fall, every enterprise can begin exploring applications of this transformative technology.

Signals
•Tech vendors claim they can reduce the need for training data by several orders of magnitude, using a technique called transfer learning.
•Specialized chips dramatically accelerate the training of machine learning models; at Microsoft, they cut the time to develop a speech recognition system by 80 percent.
•Researchers at MIT have demonstrated a method of training a neural network that delivered both accurate predictions and the rationales for those predictions.
•Major technology vendors are finding ways to cram powerful machine learning models onto mobile devices.
•New tools aim to automate tasks that occupy up to 80 percent of data scientists’ time.

Use of machine learning faces obstacles

Machine learning is one of the most powerful and versatile information technologies available today. But most companies have not begun to put it to use. One recent survey of 3,100 executives in small, medium, and large companies across 17 countries found that fewer than 10 percent were investing in machine learning.

A number of factors are restraining the adoption of machine learning. Qualified practitioners are in short supply. Tools and frameworks for doing machine learning work are immature and still evolving. It can be difficult, time-consuming, and costly to obtain the large datasets that some machine learning model-development techniques require.

Then there is the black-box problem. Even when machine learning models appear to generate valuable information, many executives seem reluctant to deploy them in production. Why? In part, because their inner workings are inscrutable, and some people are uncomfortable with the idea of running their operations on logic they don’t understand and can’t clearly describe. Others may be constrained by regulations that require businesses to offer explanations for their decisions or to prove that decisions do not discriminate against protected classes of people. In such situations, it’s hard to deploy black-box models, no matter how accurate or useful their outputs.

Progress in five areas can help overcome barriers to adoption

These barriers are beginning to fall. Deloitte has identified five key vectors of progress that should help foster significantly greater adoption of machine learning in the enterprise. Three of these advancements—automation, data reduction, and training acceleration—make machine learning easier, cheaper, and/or faster. The others—model interpretability and local machine learning—open up applications in new areas.

The five vectors of progress, ordered by breadth of application, with the widest first:

Automating data science. Developing machine learning solutions requires skills from the discipline of data science, an often-misunderstood field practiced by specialists in high demand but short supply. Data science is a mix of art, science, and digital grunt work. The reality is that as much as 80 percent of the work on which data scientists spend their time can be fully or partially automated. This work might include:
•Data wrangling: preprocessing and normalizing data, filling in missing values, or determining whether to interpret the data in a column as a number or a date.
•Exploratory data analysis: seeking to understand the broad characteristics of the data to help formulate hypotheses about it.
•Feature engineering and selection: selecting the variables in the data that are most likely correlated with what the model is supposed to predict.
•Algorithm selection and evaluation: testing potentially thousands of algorithms in order to choose those that produce the most accurate results (a minimal sketch of this last step follows below).
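
As an illustration of the last bullet, here is a minimal sketch of automated algorithm and hyperparameter selection using scikit-learn’s GridSearchCV. It is only a toy stand-in for the commercial automation platforms described above, and the dataset and search space are made up for the example.

```python
# Minimal sketch of automated algorithm and hyperparameter selection.
# Illustrative only; real automation tools also cover wrangling and feature engineering.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# One pipeline whose "model" step is swapped out by the search itself.
pipe = Pipeline([("model", LogisticRegression())])
search_space = [
    {"model": [LogisticRegression(max_iter=1000)], "model__C": [0.1, 1.0, 10.0]},
    {"model": [RandomForestClassifier(random_state=0)], "model__n_estimators": [100, 300]},
]

search = GridSearchCV(pipe, search_space, cv=5, scoring="accuracy")
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```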

Automating these tasks can make data scientists not only more productive but more effective. For instance, while building customer lifetime value models for guests and hosts, data scientists at Airbnb used an automation platform to test multiple algorithms and design approaches, which they would not have otherwise had the time to do. This enabled them to discover changes they could make to their algorithm that increased its accuracy by more than 5 percent, resulting in a material impact.

A growing number of tools and techniques for data science automation, some offered by established companies and others by venture-backed start-ups, can help reduce the time required to execute a machine learning proof of concept from months to days. And automating data science means augmenting data scientists’ productivity, so even in the face of severe talent shortages, enterprises that employ data science automation technologies should be able to significantly expand their machine learning activities.

Reducing need for training data. Training a machine learning model might require millions of labeled data elements. This can be a major barrier: Acquiring and labeling data can be time-consuming and costly. Consider, as an example, a medical diagnosis project that requires MRI images labeled with a diagnosis. At six images an hour, reviewing and labeling 1,000 images takes a radiologist roughly 167 hours, which could cost over $30,000. Privacy and confidentiality concerns can also make it difficult to obtain data to work with.

A number of promising techniques for reducing the amount of training data required for machine learning are emerging. One involves the use of synthetic data, generated algorithmically to mimic the characteristics of the real data. This can work surprisingly well. A Deloitte LLP team tested a tool that made it possible to build an accurate model with only a fifth of the training data previously required, by synthesizing the remaining 80 percent.

Synthetic data can not only make it easier to get training data—it may make it easier for organizations to tap into outside data science talent. A number of organizations have successfully engaged third parties, or used crowdsourcing, to devise machine learning models, posting their datasets online for outside data scientists to work with. But this may not be an option if the datasets are proprietary. Researchers at MIT demonstrated a workaround to this conundrum, using synthetic data: They used a real dataset to create a synthetic alternative that they shared with an external data science community. Data scientists within the community created machine learning models using this synthetic data. In 11 out of 15 tests, the models developed from the synthetic data performed as well as those trained on real data.
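
A minimal sketch of the synthetic-data idea, assuming simple tabular data: per-class Gaussians are fitted to a small real sample (a fifth of the training set, mirroring the example above) and used to generate additional synthetic rows. Production tools, including the MIT work described above, use far richer generative models; this is only an illustration.

```python
# Minimal sketch: synthesize tabular training data from per-class Gaussians fitted
# to a small real sample, then train on a mix of real and synthetic rows.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Keep only a fifth of the real training data, as in the example above.
X_real, _, y_real, _ = train_test_split(X_train, y_train, train_size=0.2, random_state=0)

def synthesize(X_c, n, rng):
    """Sample n synthetic rows from a Gaussian fitted to one class."""
    mean = X_c.mean(axis=0)
    cov = np.cov(X_c, rowvar=False) + 1e-6 * np.eye(X_c.shape[1])  # small ridge for stability
    return rng.multivariate_normal(mean, cov, size=n)

rng = np.random.default_rng(0)
X_syn = np.vstack([synthesize(X_real[y_real == c], 400, rng) for c in (0, 1)])
y_syn = np.array([0] * 400 + [1] * 400)

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(np.vstack([X_real, X_syn]), np.concatenate([y_real, y_syn]))
print("test accuracy:", round(model.score(X_test, y_test), 3))
```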

Another technique that could reduce the need for training data is transfer learning. With this approach, a machine learning model is pre-trained on one large dataset and then fine-tuned on a smaller dataset in a related domain, such as language translation or image recognition. Some vendors offering machine learning tools claim their use of transfer learning can cut the number of training examples that customers need to provide by several orders of magnitude.
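
A minimal transfer-learning sketch with PyTorch and torchvision: a ResNet-18 pre-trained on ImageNet is reused as a frozen feature extractor, and only a small new classification head is trained. The five-class target task and the dummy batch are placeholders for a customer’s much smaller dataset.

```python
# Minimal transfer-learning sketch (illustrative): reuse a pre-trained backbone,
# train only a new head on the much smaller target dataset.
import torch
import torch.nn as nn
from torchvision import models

num_new_classes = 5  # hypothetical target task

model = models.resnet18(weights="IMAGENET1K_V1")  # downloads pre-trained weights; older versions: pretrained=True
for param in model.parameters():
    param.requires_grad = False                    # freeze the pre-trained backbone

model.fc = nn.Linear(model.fc.in_features, num_new_classes)  # new trainable head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on dummy data standing in for the small new dataset.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, num_new_classes, (8,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(float(loss))
```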

Accelerating training. Because of the large volumes of data and complex algorithms involved, the computational process of training a machine learning model can take a long time: hours, days, even weeks. Only then can the model be tested and refined. But now, semiconductor and computer manufacturers, both established companies and start-ups, are developing specialized processors such as graphics processing units (GPUs), field-programmable gate arrays (FPGAs), and application-specific integrated circuits (ASICs) to slash the time required to train machine learning models by accelerating the calculations and by speeding the transfer of data within the chip.

These dedicated processors help companies speed up machine learning training and execution multifold, which in turn brings down the associated costs. For instance, a Microsoft research team, using GPUs, completed in one year a system that recognizes conversational speech as capably as humans. Had the team used only CPUs, according to one of the researchers, it would have taken five years. Google stated that its own AI chip, the Tensor Processing Unit (TPU), incorporated into a computing system that also includes CPUs and GPUs, provided such a performance boost that it helped the company avoid the cost of building a dozen extra data centers.
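
From the practitioner’s side, taking advantage of these processors can be as simple as placing the model and data on the available accelerator. The PyTorch sketch below (illustrative, with a made-up model and random data) runs the same training loop on a GPU when one is present and falls back to the CPU otherwise.

```python
# Minimal sketch: the same training loop runs unchanged on CPU or GPU;
# moving the model and tensors to the device is the only change.
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Sequential(nn.Linear(100, 64), nn.ReLU(), nn.Linear(64, 10)).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()

x = torch.randn(256, 100, device=device)       # a batch of (random) training data on the device
y = torch.randint(0, 10, (256,), device=device)

for _ in range(10):                             # a few illustrative training steps
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
print(device, float(loss))
```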

Early adopters of these specialized AI chips include major technology vendors and research institutions in data science and machine learning, but adoption is spreading to sectors such as retail, financial services, and telecom. With every major cloud provider—including IBM, Microsoft, Google, and Amazon Web Services—offering GPU cloud computing, accelerated training will become available to data science teams in any organization, making it possible to increase their productivity and multiply the number of applications enterprises choose to undertake.

Explaining results. Machine learning models often suffer from a critical weakness: Many are black boxes, meaning it is impossible to explain with confidence how they made their decisions. This can make them unsuitable or unpalatable for many applications. Physicians and business leaders, for instance, may not accept a medical diagnosis or investment decision without a credible explanation for the decision. In some cases, regulations mandate such explanations. For example, the US banking industry adheres to SR 11-7, guidance published by the Federal Reserve, which among other things requires that model behavior be explained.

But techniques are emerging that help shine light inside the black box of certain machine learning models, making them more interpretable and accurate. MIT researchers, for instance, have demonstrated a method of training a neural network that delivers both accurate predictions and the rationales for those predictions. Some of these techniques are already appearing in commercial data science products.
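
The MIT rationale technique is not packaged in most toolkits, but simpler model-agnostic methods give a flavor of interpretability work. The sketch below uses scikit-learn’s permutation importance to show which features a trained model relies on most; it is an illustration of the general idea, not the specific technique described above.

```python
# Minimal interpretability sketch: permutation importance measures how much the
# test score drops when each feature is shuffled, giving a model-agnostic view
# of which inputs the model relies on.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(data.data, data.target, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Print the five most influential features for this model.
ranked = sorted(zip(result.importances_mean, data.feature_names), reverse=True)
for score, name in ranked[:5]:
    print(f"{name}: {score:.3f}")
```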

As it becomes possible to build interpretable machine learning models, companies in highly regulated industries such as financial services, life sciences, and health care will find attractive opportunities to use machine learning. Some of the potential application areas include credit scoring, recommendation engines, customer churn management, fraud detection, and disease diagnosis and treatment.

Deploying locally. The adoption of machine learning will grow along with the ability to deploy the technology where it can improve efficiency and outcomes. Advances in both software and hardware are making it increasingly viable to use the technology on mobile devices and smart sensors. On the software side, technology vendors such as Apple Inc., Facebook, Google, and Microsoft are creating compact machine learning models that require relatively little memory but can still handle tasks such as image recognition and language translation on mobile devices. Microsoft Research Lab’s compression efforts resulted in models that were 10 to 100 times smaller.
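
As a rough illustration of how such models are shrunk, the sketch below applies PyTorch’s post-training dynamic quantization to a toy network, storing linear-layer weights as 8-bit integers. The vendors mentioned above combine this with pruning, distillation, and architecture changes, so treat this only as a sketch.

```python
# Minimal sketch of shrinking a model for on-device use with post-training dynamic
# quantization (linear-layer weights stored as 8-bit integers).
import io
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(1024, 1024), nn.ReLU(), nn.Linear(1024, 10))
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

def size_mb(m):
    """Serialize the model's weights and report their size in megabytes."""
    buf = io.BytesIO()
    torch.save(m.state_dict(), buf)
    return buf.getbuffer().nbytes / 1e6

print(f"float32 model: {size_mb(model):.2f} MB")
print(f"int8 model:    {size_mb(quantized):.2f} MB")  # roughly 4x smaller
```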

On the hardware end, semiconductor vendors such as Intel, Nvidia, and Qualcomm, as well as Google and Microsoft, have developed or are developing their own power-efficient AI chips to bring machine learning to mobile devices.

The emergence of mobile devices as a machine learning platform is expanding the number of potential applications of the technology and inducing companies to develop applications in areas such as smart homes and cities, autonomous vehicles, wearable technology, and the industrial Internet of Things.

Prepare for the mainstreaming of machine learning

Collectively, the five vectors of machine learning progress can help reduce the friction that is preventing some companies from investing in machine learning. And they can help those already using the technology intensify their use of it. These advancements can also enable new applications across industries and help overcome constraints on limited resources such as talent, infrastructure, and the data needed to train models.

Companies should look for opportunities to automate some of the work of their oversubscribed data scientists—and ask consultants how they use data science automation. They should keep an eye on emerging techniques such as data synthesis and transfer learning that could ease the challenge of acquiring training data. They should learn what computing resources optimized for machine learning their cloud providers offer. If they are running workloads in their own data centers, they may want to investigate adding specialized hardware into the mix.

Though interpretability of machine learning is still in its early days, companies contemplating high-value applications may want to explore state-of-the-art techniques for improving interpretability. Finally, organizations considering mobile- or device-based machine learning applications should track the performance benchmarks being reported by makers of next-generation chips so they are ready when on-device deployment becomes feasible.

Machine learning has already shown itself to be a valuable technology in many applications. Progress along the five vectors can help overcome some of the obstacles to mainstream adoption.

Source: Deloitte

Preparing for an AI-Driven World

In the late 1960s and early 70s, the first computer-aided design (CAD) software packages began to appear. Initially, they were mostly used for high-end engineering tasks, but as they got cheaper and simpler to use, they became a basic tool to automate the work of engineers and architects.

According to a certain logic, with so much of the heavy work shifted to machines, a lot of engineers and architects should have been put out of work, but in fact just the opposite happened. There are far more of them today than 20 years ago, and employment in the sector is projected to grow another 7% by 2024.

Still, while the dystopian visions of robots taking our jobs are almost certainly overblown, Josh Sutton, Global Head, Data & Artificial Intelligence at Publicis.Sapient, sees significant disruption ahead. Unlike the fairly narrow effect of CAD software, AI will transform every industry, and not every organization will be able to make the shift. The time to prepare is now.

Shifting Value to Different Tasks

One of the most important distinctions Sutton makes is between jobs and tasks. Just as CAD software replaced the drudgery of drafting, allowing architects to spend more time with clients and come up with creative solutions to their needs, automation from AI is shifting work toward what humans excel at.

For example, in the financial industry, many of what were once considered core functions, such as trading, portfolio allocation and research, have been automated to a large extent. These were once considered high-level tasks that paid well, but computers do them much better and more cheaply.

However, the resources saved by automating those tasks are being shifted to ones that humans excel at, like long-term forecasting. “Humans are much better at that sort of thing,” Sutton says. He also points out that the effort saved on basic functions frees up time and has opened up a new market in “mass affluent” wealth management.

Finally, humans need to keep an eye on the machines, which for all of their massive computational prowess, still lack basic common sense. Earlier this year, when Dow Jones erroneously reported that Google was buying Apple for $9 billion — a report no thinking person would take seriously — the algorithms bought it and moved markets until humans stepped in.

Human-Machine Collaboration

Another aspect of the emerging AI-driven world is the opportunity for machine learning to extend the capabilities of humans. For example, when a freestyle chess tournament that included both humans and machines was organized, the winner was neither a chess master nor a supercomputer, but two amateurs running three simple programs in parallel.

In a similar way, Google, IBM’s Watson division, and many others are using machine learning to partner with humans to achieve results that neither could achieve alone. One study cited by a White House report during the Obama administration found that while machines had a 7.5 percent error rate in reading radiology images and humans had a 3.5 percent error rate, combining the two brought the error rate down to 0.5 percent.

There is also evidence that machine learning can vastly improve research. Back in 2005, when The Cancer Genome Atlas first began sequencing thousands of tumors, no one knew what to expect. But using artificial intelligence, researchers have been able to identify patterns in that huge mountain of data that humans would never have been able to find alone.

Sutton points out that we will never run out of problems to solve, especially when it comes to health, so increasing efficiency does not reduce the work for humans as much as it increases their potential to make a positive impact.

Making New Jobs Possible

A third aspect of the AI-driven world is that it is making it possible to do work that people couldn’t do without help from machines. Much like earlier machines extended our physical capabilities and allowed us to tunnel through mountains and build enormous skyscrapers, today’s cognitive systems are enabling us to extend our minds.

Sutton points to the work of his own agency as an example. In a campaign for Dove covering sporting events, algorithms scoured thousands of articles and highlighted coverage that focused on the appearance of female athletes rather than their performance. It sent a powerful message about the double standard to which women are subjected.

Sutton estimates that it would have taken a staff of hundreds of people reading articles every day to manage the campaign in real time, which wouldn’t have been feasible. However, with the help of sophisticated algorithms his firm designed, the same work was able to be done with just a few staffers.

Increasing efficiency through automation doesn’t necessarily mean jobs disappear. In fact, over the past eight years, as automation has increased, unemployment in the US has fallen from 10% to 4.2%, a rate associated with full employment. In manufacturing, where you would expect machines to replace humans at the fastest rate, there is actually a significant labor shortage.

The Lump of Labor Fallacy

The fear that robots will take our jobs is rooted in what economists call the lump of labor fallacy, the false notion that there is a fixed amount of work to do in an economy. Value rarely, if ever, disappears; it just moves to a new place. Automation, by shifting jobs, increases our effectiveness and creates the capacity to do new work, which increases our capacity for prosperity.

However, while machines will not replace humans, it has become fairly clear that they can disrupt businesses. For example, one thing we are seeing is a shift from cognitive skills to social skills, in which machines take over rote tasks and value shifts to human-centered activity. So it is imperative that every enterprise adapt to a new mode of value creation.

“The first step is understanding how leveraging cognitive capabilities will create changes in your industry,” Sutton says, “and that will help you understand the data and technologies you need to move forward. Then you have to look at how that can not only improve present operations, but open up new opportunities that will become feasible in an AI driven world.”

Today, an architect needs to be far more than a draftsman, a waiter needs to do more than place orders and a travel agent needs to do more than book flights. Automation has commoditized those tasks, but opened up possibilities to do far more. We need to focus less on where value is shifting from and more on where value is shifting to.

Source: Innovation Excellence

Decoding Machine Learning Methods

Machine Learning, thinking systems, expert systems, knowledge engineering, decision systems, neural networks – all loosely woven terms in the evolving fabric of Artificial Intelligence. Of these, Machine Learning (ML) and Artificial Intelligence (AI) are the most often debated and used interchangeably. Broadly speaking, AI describes a futuristic state of truly self-aware, learning machines, but for all practical purposes we deal more often with ML at present.

In very abstract terms, ML is a structured approach for deriving meaningful predictions and insights from both structured and unstructured data. ML methods employ complex algorithms that enable analytics based on data, history, and patterns. The field of data science continues to scale new heights, enabled by the exponential growth in computing power over the last decade. Data scientists are exploring new models and methods every day, and it can be hard even to keep pace with the trends. To keep matters simple, here is a clean starting point.

Below is a simplified visual representation of the popular ML methods used in the data science field, along with their classification. Each of these algorithms is implemented in languages such as R, Python, and Scala, giving data scientists a framework for solving complex, data-driven business problems. There is, however, an underlying maze of statistical and probabilistic concepts that data scientists need to navigate in order to put these methods to meaningful use.
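
As a small taste of how these methods look in code, the sketch below (our own example, not part of the original visual) contrasts a supervised method, which learns from labeled data, with an unsupervised one, which finds structure without labels, using scikit-learn and the classic Iris dataset.

```python
# Minimal sketch contrasting two method families: supervised classification
# (labels available) versus unsupervised clustering (no labels).
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Supervised learning: learn a mapping from features to known labels.
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("classification accuracy:", round(clf.score(X_test, y_test), 3))

# Unsupervised learning: discover structure without labels.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print("cluster sizes:", [int((clusters == k).sum()) for k in range(3)])
```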

[Figure: simplified classification of popular ML methods]

A brief summary of the above ML methods and how they model data is presented in the slides below.

Some of the business applications of these ML methods can be classified as shown in the visual below.

[Figure: business applications of ML methods]

As data becomes the new oil that drives these virtual machines, I conclude with the quote below:

“Without data you’re just a person with an opinion.” – W. Edwards Deming

Source: datasciencecentral.com

Accuracy and Speed of Customer Service Improved with AI

Artificial Intelligence (AI) and Machine Learning (ML) are becoming more commonplace in the workplace than ever before, and they are making it possible for customer service to improve significantly in both speed and accuracy. Businesses already taking advantage of AI and ML are ahead of the game, and those that are not have already fallen behind. Most businesses already have ML in mind: 90% of CIOs interviewed said they were either already using ML or planned to incorporate it into their business model soon. Here are some reasons you should start working with an artificial intelligence company as soon as possible.

Automation creates efficiency

Many routine customer service interactions can be automated to save time. For example, customers may text to ask about your store hours or return policy 20 times a day. With AI, those questions are answered immediately, allowing your customer service agents to focus on tasks that require human judgment instead of wasting time on mundane and repetitive questions. AI and ML are also very helpful with other routine tasks such as paperwork.
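
A minimal sketch of this kind of automation, with a made-up three-entry FAQ: incoming messages are matched against known questions with TF-IDF similarity, answered when the match is confident, and handed to a human otherwise. Production systems use far richer language models, so treat this only as an illustration.

```python
# Minimal sketch of automating routine customer questions: match an incoming
# message against a small FAQ and answer only when the match is confident.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

faq = {  # made-up entries for illustration
    "What are your store hours?": "We are open 9am-9pm, Monday through Saturday.",
    "What is your return policy?": "Items can be returned within 30 days with a receipt.",
    "Do you ship internationally?": "Yes, we ship to most countries within 7-10 days.",
}

vectorizer = TfidfVectorizer()
question_matrix = vectorizer.fit_transform(list(faq.keys()))

def answer(message, threshold=0.3):
    """Return the best FAQ answer, or route to a human if no confident match."""
    scores = cosine_similarity(vectorizer.transform([message]), question_matrix)[0]
    best = scores.argmax()
    if scores[best] >= threshold:
        return list(faq.values())[best]
    return "Routing you to a customer service agent."

print(answer("Hi, what are your hours?"))
print(answer("My package arrived damaged"))
```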

Accurate measurement and reporting

To determine which methods of communication are effective, your business should be running reports and measuring the effectiveness of what you are doing. With ML in place, you can run these reports and then use that data to create more effective AI programs for your customer service department.

Automate paperwork

Maybe it used to take your customer service representatives half their day to complete all the paperwork associated with each call or text they took, but it was something your business accepted to ensure you captured enough information. Fortunately, the right ML and AI can do that for you, making it possible for your customer service team to get to customers faster and be more efficient with the time they have.

Use ML for complex decisions

Many people know that AI and ML can be used to answer simple and mundane questions, but they can do more than that. ML can actually be very helpful in making complex decisions; 52% of CIOs said they are already using it for that very purpose.

Get decision automation

Another great benefit of using AI in your customer service is that it can help with decision automation. Over the next three years, AI in customer service is expected to drastically improve the speed and accuracy of decisions and to drive top-line growth.

Improve customer satisfaction

Customers are just as annoyed by long wait times and drawn-out customer service experiences as businesses are. AI can help improve the speed of the interaction, ensuring the customer is given the right information as quickly as possible or directed to the right customer service representative. There is almost nothing worse than waiting on hold for 20 minutes to talk to someone, only to be transferred around and put on hold again. Rather than having frustrated employees and frustrated customers, use AI and ML technology to improve customer satisfaction.

Source: becominghuman.ai

Understanding Data Roles

With the rise of Big Data has come an accompanying explosion in roles that in some way involve data. Most who are involved with enterprise technology are at least familiar with them by name, but it is sometimes helpful to look at them through a comprehensive lens that shows how they all fit together. In understanding how data roles mesh, think of them in terms of two pools: one responsible for making data ready for use, and another that puts that data to use. The latter includes the tightly woven roles of Data Analyst and Data Scientist; the former includes roles such as Database Administrator, Data Architect, and Data Governance Manager.

Ensuring the data is ready for use

Making Sure the Engine Works.

A car is only as good as its engine, and according to PC Magazine the Database Administrator (DBA) is “responsible for the physical design and management of the database and for the evaluation, selection and implementation of the DBMS.” Techopedia defines the position as one that “directs or performs all activities related to maintaining a successful database environment.” A DBA’s responsibilities include security, optimization, monitoring, troubleshooting, and ensuring the capacity needed to support activities. This of course requires a high level of technical expertise, particularly in SQL and increasingly in NoSQL. But while the role may be technical, TechTarget maintains that it may also require managerial functions, including “establishing policies and procedures pertaining to the management, security, maintenance, and use of the database management system.”

Directing the Vision. With the database engines in place, the task becomes one of creating an infrastructure for taking in, moving and accessing the data. If the DBA builds the car, then the Enterprise Data Architect (EDA) builds the freeway system, laying the framework for how data will be stored, shared and accessed by different departments, systems and applications, and aligning it to business strategy. Bob Lambert describes the skills as including an understanding of the system development life cycle; software project management approaches; data modeling, database design, and SQL development. The role is strategic, requiring an understanding of both existing and emerging technologies (NoSQL databases, analytics tools and visualization tools), and how those may support the organization’s objectives. The EDA’s role requires knowledge sufficient to direct the components of enterprise architecture, but not necessarily practical skills of implementation. With that said, Monster.com lists typical responsibilities as: determining database structural requirements, defining physical structure and functional capabilities, security, backup, and recovery specifications, as well as installing, maintaining and optimizing database performance.

Creating and Enforcing the Rules of Data Flow. A well-architected system requires order. A Data Governance Manager organizes and streamlines how data is collected, stored, shared, accessed, secured, and put to use. But don’t think of the role as a traffic cop: the rules of the road exist not only to prevent ‘accidents’ but also to ensure efficiency and value. The governance manager’s responsibilities include enforcing compliance, setting policies and standards, managing the lifecycle of data assets, and ensuring that data is secure, organized, and accessible by, and only by, appropriate users. In doing so, the data governance manager improves decision-making, eliminates redundancy, reduces the risk of fines and lawsuits, and ensures the security of proprietary and confidential information, so that the organization achieves maximum value at minimum risk. The position implies at least a functional knowledge of databases and associated technologies, and a thorough knowledge of industry regulations (FINRA, HIPAA, etc.).

Making Use of the Data

We create a system in which data is well organized and governed so that the business can make maximum use of it: informing day-to-day processes and deriving insight, through data analysts and data scientists, to improve efficiency and drive innovation.

Understand the past to guide future decisions. A Data Analyst performs statistical analysis and problem solving, taking organizational data and using it to facilitate better decisions on matters ranging from product pricing to customer churn. This requires statistical skill and critical thinking to draw supportable conclusions. An important part of the job is to make data meaningful to the C-suite, so an effective analyst is also an effective communicator. MastersinScience.org refers to data analysts as “data scientists in training” and points out that the line between the roles is often blurred.

Modeling the Future. Data Scientists combine advanced mathematical and statistical abilities with advanced programming abilities, including knowledge of machine learning and the ability to code in SQL, R, Python, or Scala. A key differentiator is that where the Data Analyst primarily analyzes batch/historical data to detect past trends, the Data Scientist builds programs that predict future outcomes. Furthermore, data scientists build machine learning models that continue to learn and refine their predictive ability as more data is collected.
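
To make the distinction concrete, the sketch below runs both styles of work on made-up customer churn data: a descriptive, analyst-style summary of what already happened, followed by a scientist-style predictive model scored on held-out customers. All numbers are synthetic and purely illustrative.

```python
# Minimal sketch of the analyst/scientist distinction on made-up churn data.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
tenure = rng.integers(1, 60, n)        # months as a customer (synthetic)
tickets = rng.poisson(2, n)            # support tickets filed (synthetic)
churned = (rng.random(n) < 1 / (1 + np.exp(0.08 * tenure - 0.5 * tickets))).astype(int)

# Analyst-style question: what happened? (descriptive statistics on historical data)
print("overall churn rate:", round(churned.mean(), 3))
print("churn rate, tenure under 12 months:", round(churned[tenure < 12].mean(), 3))

# Scientist-style question: what will happen? (a predictive model for new customers)
X = np.column_stack([tenure, tickets])
X_train, X_test, y_train, y_test = train_test_split(X, churned, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
print("held-out predictive accuracy:", round(model.score(X_test, y_test), 3))
```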

Of course, as data increasingly becomes the currency of business, as it is predicted to, we expect to see more roles develop and the ones just described evolve significantly. In fact, we haven’t even discussed a role that is now mandated by the EU’s GDPR initiative: the Data Protection Officer, or ‘DPO’.

Source: datasciencecentral.com