Artificial Intelligence: Salaries Heading Skyward

While the average salary for a software engineer is around $100,000 to $150,000, to make the big bucks you want to be an AI or machine learning specialist, scientist, or engineer.

Artificial intelligence salaries benefit from the perfect recipe for a sweet paycheck: a hot field and high demand for scarce talent. It’s the ever-reliable law of supply and demand, and right now, anything artificial intelligence-related is in very high demand.

According to Indeed.com, average salaries for jobs matching the keyword “artificial intelligence engineer” in the San Francisco area range from approximately $134,135 per year for a software engineer to $169,930 per year for a machine learning engineer.

However, pay can go much higher if you have the credentials firms need. One tenured professor was offered triple his $180,000 salary to join Google, an offer he declined in favor of a different teaching position.

The record so far was set in April, when the Japanese firm Start Today, which operates the fashion-shopping site Zozotown, posted job openings for seven “genius” AI tech experts, offering annual salaries of as much as 100 million yen, or just under US$1 million.

Key Sectors for AI Salaries

Scoring a top AI salary means working in the “right” sector. While plentiful, AI jobs are mainly in just a few sectors — namely tech — and confined to just a few big and expensive cities. Glassdoor, another popular job search site, notes that 67% of all AI jobs listed on its site are located in the Bay Area, Seattle, Los Angeles, and New York City.

It also listed Facebook, NVIDIA, Adobe, Microsoft, Uber, and Accenture among the best AI companies to work for in 2018, together accounting for almost 19% of open AI positions. The average annual base pay for an AI job listed on Glassdoor is $111,118 per year.

Glassdoor also found that financial services, consulting, and government agencies are actively hiring AI engineering and data science professionals. This includes top firms like Capital One, Fidelity, Goldman Sachs, Booz Allen Hamilton, EY, and McKinsey & Company, as well as NASA’s Jet Propulsion Laboratory, the U.S. Army, and the Federal Reserve Bank.

However, expect the number of jobs and fields to expand considerably in the near future. A recent report from Gartner said that AI will kill off 1.8 million jobs, mostly menial labor, but that the field will create 2.3 million new jobs by 2020, a point reinforced by a recent Capgemini report finding that 83% of companies using AI say they are adding jobs because of it.

Best Jobs for AI Salaries

The term “AI” is rather broad and covers a number of disciplines and tasks, including natural language generation and comprehension, speech recognition, chatbots, machine learning, decision management, deep learning, biometrics, and text analysis and processing. Given the level of specialization each requires, few professionals can master more than one discipline.

In short, finding the best AI salary calls for actively nurturing the right career path.

While the average pay for an AI programmer is around $100,000 to $150,000, depending on the region of the country, those figures reflect the developer/coder realm. To make the big money, you want to be an AI engineer. According to Paysa, yet another job search site, an artificial intelligence engineer earns an average of $171,715, ranging from $124,542 at the 25th percentile to $201,853 at the 75th percentile, with top earners making more than $257,530.

Why so high? Because many come from non-programming backgrounds. The IEEE notes that people with Ph.D.s in sciences like biology and physics are returning to school to learn AI and apply it to their fields. They need to straddle the technical side, knowing a multitude of languages and hardware architectures, while also understanding the data involved. That combination makes such engineers rare, and thus expensive.

Why Are AI Salaries So High?

The fact is, AI is not a discipline you can teach yourself as many developers do. A survey by Stack Overflow found 86.7% of developers were, in fact, self-taught. However, that is for languages like Java, Python, and PHP, not the esoteric art of artificial intelligence.

It requires advanced degrees in computer science, often a Ph.D. In a report, Paysa found that 35 percent of AI positions require a Ph.D. and 26 percent require a master’s degree. Why? Because AI is a rapidly growing field, and study at the Ph.D. level, with its academic projects that tend to be innovative if not bleeding edge, gives students the experience they need for the work environment.

Moreover, it requires command of multiple tools and disciplines, including C++, the STL, Perl, Perforce, and APIs like OpenGL and PhysX. In addition, because AI systems often perform scientifically significant calculations, a background in physics or some kind of life science is frequently necessary.

Therefore, to be an effective and in-demand AI developer you need a lot of skills, not just one or two. Indeed lists the top 10 skills you need to know for AI:

1) Machine learning

2) Python

3) R language

4) Data science

5) Hadoop

6) Big Data

7) Java

8) Data mining

9) Spark

10) SAS

As you can see, that is a wide range of skills, and none of them can be learned overnight. According to The New York Times, there are fewer than 10,000 qualified AI specialists in the world. Element AI, a Montreal company that consults on machine learning systems, published a report earlier this year estimating that there are 22,000 Ph.D.-level computer scientists in the world capable of building AI systems. Either way, that is far too few for the demand reported by Machine Learning News.

Competing Employers Drive Salaries Higher

With so few AI specialists available, tech companies are raiding academia. At the University of Washington, six of 20 artificial intelligence professors are now on leave or partial leave and working for outside companies. In the process, they are limiting the number of professors who can teach the technology, causing a vicious cycle.

U.S. News & World Report lists the top 20 schools for AI education. The top five are:

1) Carnegie Mellon University, Pittsburgh, PA

2) Massachusetts Institute of Technology, Cambridge, MA

3) Stanford University, Stanford, CA

4) University of California — Berkeley, Berkeley, CA

5) University of Washington, Seattle, WA

With academia being raided for talent, alternatives are popping up. Google, which is hiring any AI developer it can get its hands on, offers a course on deep learning and machine-learning tools via its Google Cloud Platform website, and Facebook, also deep in AI, hosts a series of videos on the fundamentals of AI, such as algorithms. If you want to take courses online, there are Coursera and Udacity.

Basic computer technology and math backgrounds are the backbone of most artificial intelligence programs. Linear algebra is as necessary as a programming language, since machine learning performs analysis on data within matrices, and linear algebra is all about operations on matrices. According to Computer Science Degree Hub, coursework for AI involves the study of advanced math, Bayesian networking or graphical modeling (including neural nets), physics, engineering and robotics, computer science, and cognitive science theory.
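As a concrete illustration of why matrices matter, here is a minimal NumPy sketch (with made-up numbers) of ordinary least squares regression, one of the simplest machine learning models; the entire fit is a few matrix products and one linear solve.

```python
import numpy as np

# Toy dataset: each row of X is an example, each column a feature.
X = np.array([[1.0, 2.0],
              [2.0, 0.5],
              [3.0, 1.5],
              [4.0, 3.0]])
y = np.array([8.0, 5.5, 10.0, 16.0])  # targets

# Append a column of ones so the model can learn an intercept.
Xb = np.hstack([X, np.ones((X.shape[0], 1))])

# Normal equations: solve (X^T X) w = X^T y for the weights w.
w = np.linalg.solve(Xb.T @ Xb, Xb.T @ y)

# Scoring every example is a single matrix-vector product.
predictions = Xb @ w
print("weights:", w)
print("predictions:", predictions)
```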

Some things cannot be taught. Working with artificial intelligence does not mean you get to offload the work onto the computer. It requires an analytical thought process, foresight about technological innovations, and the technical skill to design, maintain, and repair technology, software programs, and algorithms. It is easy to see why skilled people are so rare, which will only drive AI salaries higher.

Source: Medium

Can artificial intelligence help society as much as it helps business?

The answer is yes—but only if leaders start embracing technological social responsibility (TSR) as a new business imperative for the AI era.

In 1953, US senators grilled General Motors CEO Charles “Engine Charlie” Wilson about his large GM shareholdings: Would they cloud his decision making if he became the US secretary of defense and the interests of General Motors and the United States diverged? Wilson said that he would always put US interests first but that he could not imagine such a divergence taking place, because, “for years I thought what was good for our country was good for General Motors, and vice versa.” Although Wilson was confirmed, his remarks raised eyebrows due to widespread skepticism about the alignment of corporate and societal interests.

The skepticism of the 1950s looks quaint when compared with today’s concerns about whether business leaders will harness the power of artificial intelligence (AI) and workplace automation to pad their own pockets and those of shareholders—not to mention hurting society by causing unemployment, infringing upon privacy, creating safety and security risks, or worse. But is it possible that what is good for society can also be good for business—and vice versa?

Innovation and skill building

To answer this question, we need a balanced perspective that’s informed by history. Technology has long had positive effects on well-being beyond GDP—for example, increasing leisure or improving health and longevity—but it can also have a negative impact, especially in the short term, if adoption heightens stress, inequality, or risk aversion because of fears about job security. A relatively new strand of welfare economics has sought to calculate the value of both the upside and the downside of technology adoption. This is not just a theoretical exercise. What if workers in the automation era fear the future so much that this changes their behavior as consumers and crimps spending? What if stress levels rise to such an extent as workers interface with new technologies that labor productivity suffers?

Building and expanding on existing theories of welfare economics, we simulated how technology adoption today could play out across the economy. The key finding is that two dimensions will be decisive—and in both cases, business has a central role to play (Exhibit 1). The first dimension is the extent to which firms adopt technologies with a view to accelerating innovation-led growth, compared with a narrower focus on labor substitution and cost reduction. The second is the extent to which technology adoption is accompanied by measures to actively manage the labor transitions that will accompany it—in particular, raising skill levels and ensuring a more fluid labor market.

Both of these dimensions are in sync with our previous bottom-line-focused work on AI and automation adoption. In our research, digital leaders who reap the biggest benefits from technology adoption tend to be those who focus on new products or new markets and, as a result, are more likely to increase or stabilize their workforce than reduce it. At the same time, human capital is an essential element of their strategies, since having the talent able to implement and drive digital transformation is a prerequisite for successful execution. No wonder a growing number of companies, from Walmart to German software company SAP, are emphasizing in-house training programs to equip members of their workforce with the skills they will need for a more automated work environment. And both Amazon and Facebook have raised the minimum wage for their workers as a way to attract, retain, and reward talent.

TSR: Technological social responsibility

Given the potential for a win–win across business and society from a socially careful and innovation-driven adoption strategy, we believe the time has come for business leaders across sectors to embed a new imperative in their corporate strategy. We call this imperative technological social responsibility (TSR). It amounts to a conscious alignment between short- and medium-term business goals and longer-term societal ones.

Some of this may sound familiar. Like its cousin, corporate social responsibility, TSR embodies the lofty goal of enlightened self-interest. Yet the self-interest in this case goes beyond regulatory acceptance, consumer perception, or corporate image. By aligning business and societal interests along the twin axes of innovation focus and active transition management, we find that technology adoption can potentially increase productivity and economic growth in a powerful and measurable way.

In economic terms, innovation and transition management could, in a best-case scenario, double the potential growth in welfare—the sum of GDP and additional components of well-being, such as health, leisure, and equality—compared with an average scenario. The welfare growth to 2030 that emerges from this scenario could be even higher than the GDP and welfare gains we have seen in recent years from computers and early automation.

However, other scenarios that pay less heed to innovating or to managing disruptive transitions from tech adoption could slow income growth, increase inequality and unemployment risk, and lead to fewer improvements in leisure, health, and longevity. And that, in turn, would reduce the benefits to business.

At the company level, a workforce that is healthier, happier, better trained, and less stressed will also be more productive, more adaptable, and better able to drive the technology adoption and innovation surge that will boost revenue and earnings. At the broader level, a society whose overall welfare is improving, and improving faster than GDP, is a more resilient society, better able to handle sometimes painful transitions. In this spirit, New Zealand recently announced that it will shift its economic policy focus from GDP to broader societal well-being.

Leadership imperatives

For business leaders, three priorities will be essential. First, they will need to understand and be convinced of the argument that proactive management of technology transitions is not only in the interest of society at large but also in the more narrowly focused financial interest of companies themselves. Our research is just a starting point, and more work will be needed, including work to show how and where individual sectors and companies can benefit from adopting a proactive strategy. Work is already underway at international bodies such as the Organisation for Economic Co-operation and Development to measure welfare effects across countries.

Second, digital reinvention plans will need to have, at their core, a thoughtful and proactive workforce-management strategy. Talent is a key differentiating factor, and there is much talk about the need for training, retraining, and nurturing individuals with the skills needed to implement and operate updated business processes and equipment. But so far, “reskilling” remains an afterthought in many companies. That is shortsighted; our work on digital transformation continues to emphasize the importance of having the right people in the right places as machines increasingly complement humans in the workforce. From that perspective alone, active management of training and workforce mobility will be an essential task for boards in the future.

Third, CEOs must embrace new, farsighted partnerships for social good. The successful adoption of AI and other advanced technologies will require cooperation from multiple stakeholders, especially business leaders and the public sector. One example involves education and skills: business leaders can help inform education providers with a clearer sense of the skills that will be needed in the workplace of the future, even as they look to raise the specific skills of their own workforce. IBM, for one, is partnering with vocational schools to shape curricula and build a pipeline of future “new collar” workers—individuals with job profiles at the nexus of professional and trade work, combining technical skills with a higher educational background. AT&T has partnered with more than 30 universities and multiple online education platforms to enable employees to earn the credentials needed for new digital roles.

Other critical public-sector actions include supporting R&D and innovation; creating markets for public goods, such as healthcare, so that there is a business incentive to serve these markets; and collaborating with businesses on reskilling, helping them to match workers with the skills they need and with the digital-era jobs to which they could most easily transition. A more fluid labor market and better job matching will benefit companies and governments, accelerating the search for talent for the former and reducing the potential transition costs for the latter.

There are many aspects to TSR, and we are just starting to map out some of the most important ones. But as an idea and an imperative, the time has come for technological social responsibility to make a forceful entry into the consciousness and strategies of business leaders everywhere.

Source: McKinsey

What is Natural Language Processing and How Does it Benefit a Business?

We use natural language processing every day. It makes it easier for us to interact with computers and software and allows us to perform complex searches and tasks without the help of a programmer, developer or analyst.

What is Natural Language Processing (NLP) Driven Analytics?

Natural language processing (NLP) is an integral part of today’s advanced analytics. If you have clicked in the search window on Google and entered a question, you know NLP! When NLP is incorporated into the business intelligence environment, business users can enter a question in human language. For example, ‘which sales team member achieved the best numbers last month?’ or ‘which of our products sells best in New York?’

The system translates this natural language search into a more traditional analytics query, and returns the most appropriate answer in the most appropriate form, so users can benefit from smart visualization, tables, numbers or natural language descriptions that are easy to understand.
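As a rough illustration of that translation step (and not how any particular vendor implements it), here is a toy Python sketch that maps the example questions above onto SQL-style analytics queries with hand-written patterns. The `sales` table and its columns are hypothetical, and a production NLP engine would use a trained language model rather than regular expressions.

```python
import re

# Hypothetical question patterns mapped to SQL templates.
PATTERNS = [
    (re.compile(r"which of our products sells best in (?P<region>[\w ]+)\??", re.I),
     "SELECT product, SUM(units) AS total FROM sales "
     "WHERE region = '{region}' GROUP BY product ORDER BY total DESC LIMIT 1"),
    (re.compile(r"which sales team member achieved the best numbers last month\??", re.I),
     "SELECT rep, SUM(revenue) AS total FROM sales "
     "WHERE month = :last_month GROUP BY rep ORDER BY total DESC LIMIT 1"),
]

def to_query(question: str) -> str:
    """Translate a natural language question into an analytics query."""
    for pattern, template in PATTERNS:
        match = pattern.match(question.strip())
        if match:
            return template.format(**match.groupdict())
    raise ValueError("Question not understood")

print(to_query("Which of our products sells best in New York?"))
```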

How Does NLP-Based Analytics Benefit a Business Organization?

Perhaps the most important benefit of NLP is that it allows the business to implement augmented analytics in a self-serve environment with very little training required, which helps ensure that users will adopt business intelligence and analytics as tools to use every day.

NLP allows the organization to expand the use of business intelligence across the enterprise by offering business users an intuitive tool to ask for and receive crucial data, understand the analytical output, and share it with other users.

NLP opens and expands the data repositories and information in an organization in a way that is meaningful and easy to understand, so data is more accessible and answers are more valuable. This will improve the accuracy of planning and forecasting and allow for a better overall understanding of business results.

Natural language processing helps business users sort through integrated data sources (internal and external) to answer a question in a way the user can understand, and it provides a foundation to simplify and speed the decision process with fact-based, data-driven analysis.

The enterprise can find and use information using natural language queries, rather than complex queries, so business users can achieve results without the assistance of IT or business analysts.

NLP presents results through smart visualization and contextual information delivered in natural language. Because these tools are easy to use and to understand, users are more likely to adopt them and to add value to the organization.

With NLP searches and queries, business users are free to explore data and achieve accurate results and the organization can achieve rapid ROI and sustain low total cost of ownership (TCO) with tools as familiar as a Google search.

Users can combine NLP with plug-and-play predictive analysis or assisted predictive modeling, so the organization can achieve data democratization.

NLP and the advanced data discovery tools it supports can provide important, sophisticated capabilities in a user-friendly environment to suggest relationships, identify patterns and trends, and offer insight into previously hidden information, so business users can ‘discover’ subtle, crucial problems and opportunities.

NLP is an integral part of today’s advanced analytics. It establishes an easy-to-use, interactive environment where users can create a search query in natural language and, as such, will support user adoption and provide numerous benefits to the enterprise.

Source: dataversity.net

Here’s how AI can help fight climate change according to the field’s top thinkers

From monitoring deforestation to designing low-carbon materials

The AI renaissance of recent years has led many to ask how this technology can help with one of the greatest threats facing humanity: climate change. A new research paper authored by some of the field’s best-known thinkers aims to answer this question, giving a number of examples of how machine learning could help avert the worst of the damage.

The suggested use-cases are varied, ranging from using AI and satellite imagery to better monitor deforestation, to developing new materials that can replace steel and cement (the production of which accounts for nine percent of global greenhouse gas emissions).

But despite this variety, the paper (which we spotted via MIT Technology Review) returns time and time again to a few broad areas of deployment. Prominent among these are using machine vision to monitor the environment; using data analysis to find inefficiencies in emission-heavy industries; and using AI to model complex systems, like Earth’s own climate, so we can better prepare for future changes.

The authors of the paper — which include DeepMind CEO Demis Hassabis, Turing award winner Yoshua Bengio, and Google Brain co-founder Andrew Ng — say that AI could be “invaluable” in mitigating and preventing the worse effects of climate change, but note that it is not a “silver bullet” and that political action is desperately needed, too.

“Technology alone is not enough,” write the paper’s authors, who were led by David Rolnick, a postdoctoral fellow at the University of Pennsylvania. “[T]echnologies that would reduce climate change have been available for years, but have largely not been adopted at scale by society. While we hope that ML will be useful in reducing the costs associated with climate action, humanity also must decide to act.”

In total, the paper suggests 13 fields where machine learning could be deployed (from which we’ve selected eight examples), which are categorized by the time-frame of their potential impact, and whether or not the technology involved is developed enough to reap certain rewards. You can read the full paper for yourself here, or browse our list below.

  • Build better electricity systems. Electricity systems are “awash with data” but too little is being done to take advantage of this information. Machine learning could help by forecasting electricity generation and demand, allowing suppliers to better integrate renewable resources into national grids and reduce waste (a minimal forecasting sketch follows this list). Google’s UK lab DeepMind has demonstrated this sort of work already, using AI to predict the energy output of wind farms.
  • Monitor agricultural emissions and deforestation. Greenhouse gases aren’t just emitted by engines and power plants — a great deal comes from the destruction of trees, peatland, and other plant life that has captured carbon through photosynthesis over millions of years. Deforestation and unsustainable agriculture lead to this carbon being released back into the atmosphere, but using satellite imagery and AI, we can pinpoint where this is happening and protect these natural carbon sinks.
  • Create new low-carbon materials. The paper’s authors note that nine percent of all global emissions of greenhouse gases come from the production of concrete and steel. Machine learning could help reduce this figure by helping to develop low-carbon alternatives to these materials. AI helps scientists discover new materials by allowing them to model the properties and interactions of never-before-seen chemical compounds.
  • Predict extreme weather events. Many of the biggest effects of climate change in the coming decades will be driven by hugely complex systems, like changes in cloud cover and ice sheet dynamics. These are exactly the sort of problems AI is great at digging into. Modeling these changes will help scientists predict extreme weather events, like droughts and hurricanes, which in turn will help governments protect against their worst effects.
  • Make transportation more efficient. The transportation sector accounts for a quarter of global energy-related CO2 emissions, with two-thirds of this generated by road users. As with electricity systems, machine learning could make this sector more efficient, reducing the number of wasted journeys, increasing vehicle efficiency, and shifting freight to low-carbon options like rail. AI could also reduce car usage through the deployment of shared, autonomous vehicles, but the authors note that this technology is still not proven.
  • Reduce wasted energy from buildings. Energy consumed in buildings accounts for another quarter of global energy-related CO2 emissions, and presents some of “the lowest-hanging fruit” for climate action. Buildings are long-lasting and are rarely retrofitted with new technology. Adding just a few smart sensors to monitor air temperature, water temperature, and energy use can reduce energy usage by 20 percent in a single building, and large-scale projects monitoring whole cities could have an even greater impact.
  • Geoengineer a more reflective Earth. This use-case is probably the most extreme and speculative of all those mentioned, but it’s one some scientists are hopeful about. If we can find ways to make clouds more reflective or create artificial clouds using aerosols, we could reflect more of the Sun’s heat back into space. That’s a big if though, and modeling the potential side-effects of any schemes is hugely important. AI could help with this, but the paper’s authors note there would still be significant “governance challenges” ahead.
  • Give individuals tools to reduce their carbon footprint. According to the paper’s authors, it’s a “common misconception that individuals cannot take meaningful action on climate change.” But people do need to know how they can help. Machine learning could help by calculating an individual’s carbon footprint and flagging small changes they could make to reduce it — like using public transport more; buying meat less often; or reducing electricity use in their house. Adding up individual actions can create a big cumulative effect.
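To make the first item above concrete, here is a minimal sketch in Python of demand forecasting with lagged features. The data is synthetic and the model deliberately simple; this illustrates the general approach such systems build on, not DeepMind’s actual method.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Synthetic hourly electricity demand: a daily cycle plus noise
# (a stand-in for the real grid data the paper says is going unused).
hours = np.arange(24 * 60)  # 60 days of hourly readings
demand = 100 + 20 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 3, hours.size)

# Features for each hour: the hour of day and the demand 24 hours earlier.
lag = 24
X = np.column_stack([hours[lag:] % 24, demand[:-lag]])
y = demand[lag:]

# Train on everything except the final day, then forecast that day.
model = GradientBoostingRegressor().fit(X[:-24], y[:-24])
forecast = model.predict(X[-24:])
print("mean absolute error:", float(np.abs(forecast - y[-24:]).mean()))
```

A supplier running this kind of model at scale could schedule generation against the forecast instead of over-provisioning, which is where the waste reduction comes from.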

Source: The Verge

We Actually Went Driverless 100 Years Ago

In the aftermath of Uber’s recent fatal crash in Tempe, which involved a self-driving car, there has been a great deal of speculation about the future of the driverless automobile. As is often the case, trying to see beyond the near-term fear and natural trepidation that accompany handing over control of life-and-death decisions to machines can be exceptionally difficult. Yet this isn’t the first time we’ve encountered the driverless dilemma. There’s another example that’s nearly 100 years old.

Elevating Drivers

Coronado Island, just south of San Diego, is home to one of the world’s grande dame resorts, the Hotel Del Coronado, built in 1888. Much has changed at the Hotel Del in over a century, but one thing hasn’t: in the center of the magnificent main Victorian building is the Otis #61, a brass, accordion-doored manual elevator that still shuttles guests, just as it has for the last 130 years. And this elevator has a driver.

For hotel guests who never even knew that elevators were once run exclusively by “drivers,” the novelty is something they’re drawn to. Still, the look of apprehension and trepidation on many of their faces is clear as they approach an elevator that needs to be driven. You can imagine them thinking, “Is that really safe?” “Why can’t it operate on its own, the way real elevators do?” or “What if the driver makes a mistake and starts it up just as you’re getting in or out?” After all, he’s human, and humans are known to make mistakes.

Interestingly, although elevator operators were common through the mid-1900s, there were driverless elevators as far back as the early 1900s. There was just one problem: nobody trusted them. Given the choice between the stairs and an unattended automated elevator, riders took the stairs, and the elevator remained empty. It wasn’t until the middle of the twentieth century that the tipping point came for the driverless elevator, the result of a strike by the elevator operators’ union in New York City in 1945.

The strike was devastating, costing the city an estimated one hundred million dollars. Suddenly, there was an economic incentive to go back to the automatic elevator. Over the next decade there was a massive effort to build trust in automatic elevators, which resulted in the elimination of tens of thousands of elevator operator jobs.

Few of us today will step into an elevator and even casually think about the way it operates, how safe it is, or what the risks are. If you find yourself at the Hotel Del and decide to take the elevator, stop and think about just how radical change can be in reshaping our attitudes about what’s safe and normal.

Granted, an automatic elevator is a world apart from an autonomous vehicle, but the fundamental issue with the adoption of “driverless,” in both cases, isn’t so much the technology, which can be much safer without a human driver; it’s about trusting a machine to do something as well as we believe a human can do it. In a word, it’s all about perception.

Still doubtful? Perhaps you’re one of the few people who have a fear of elevators? After all, twenty-seven people die yearly as the result of faulty automatic elevators. Elevators definitely kill.

However, you might also be interested to learn that, according to the Centers for Disease Control and Prevention’s National Center for Health Statistics, a whopping 1,600 people die each year from falling down stairs. I’ll save you the math: that means you’re roughly sixty times as likely to have a fatal accident taking the stairs. Unfortunately, numbers alone rarely change perception.

In an interview for my upcoming book Revealing The Invisible, Amin Kashi, director of autonomous driving at Mentor, a Siemens business, told me, “I’m sure we will look back on this in the not too distant future and think to ourselves, how could we have wasted all of that time commuting, how could we have dealt with the inherent lack of safety in the way that we used to drive. All these issues will become so obvious and so clear. From where we stand right now we’re accustomed to a certain behavior so we live with it, but I think we will be amazed that we actually got through it.”

No doubt that it will take time to build a sufficient level of trust in autonomous vehicles. But there’s equally little doubt that one day our children’s children will have a look of apprehension and trepidation on their faces as they approach a car that needs to be driven by a human.

I imagine that they’ll be thinking, “Is that really safe?”

Source: Innovation Excellence

AI is all about instant customer satisfaction

Our brains are wired to love, and become addicted to, instant rewards. Any delay in satisfaction creates stress; just remember how you feel when a web page takes more than three seconds to load. We crave technologies that go even faster than our brains. The Googles, Amazons, Booking.coms, and Ubers of this world have been harnessing the benefits of instant reward to boost their sales for years. AI is just the next logical step, and there is no way back, because the faster you go, the more consumers buy from you, and the faster you want to go.

The goal is not to replace human work but to expand your capacity to deliver the instant value and relevance that your customers crave and that you are not currently able to provide. Hotels chronically complain about how understaffed they are and how hard it is to keep pace. So there are two choices here: embrace AI as an opportunity, or keep running a Formula 1 race on a bicycle.

Cloud AI — the opportunity for hotels

The market for AI is no longer the privilege of a few multi-billion-dollar companies. Cloud AI solutions have become widely available, and hotels can capitalize on their power at virtually no cost.

Big Data: A new generation of booking engines, led by companies such as Avvio, is able to learn from customer demographics and adapt its display to better fit the preferences of each customer.

Chatbots: Technologies such as Quicktext and Zoe bot engage customers on your direct channels, helping your online visitors immediately access relevant information while capturing data that either the chatbot or your team can act on to increase sales.

Grow out of your Terminator fantasy

Some people mix fiction and reality, either because they are afraid of AI’s potential or because, on the contrary, they expect to see a full human being. This confusion happens because we use terms such as intelligence, neural networks, and deep learning. It is true that AI is inspired by our brain, but we frequently draw on nature to solve challenges, and while we can usually recognize where the inspiration came from, the final product is usually quite far from the original model.

With AI it is exactly the same thing. We can use some basic logic, but it remains very focused on a specific use case. So, if you want to be able to profit from AI, you need to have realistic expectations. For instance, chatbots are currently able to manage frequently requested tasks such as giving particular information, booking a room, or finding relevant places around the hotel. They handle repetitive tasks that none of your employees want to do, and chatbots have become very good at them — even better than humans. However, virtual assistants are not able to serve customers outside of their perimeter; that’s when you move from autopilot to manual. Taking AI for what it is, rather than for your wildest dreams, will enable you to realize that it can benefit your business today.
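To illustrate what “very focused on a specific use case” means in practice, here is a deliberately crude Python sketch of a chatbot with a fixed perimeter of hotel intents and a hand-off to a human when a request falls outside it. Products such as Quicktext presumably use trained NLP models; the keywords and responses below are invented.

```python
# Keyword sets defining the chatbot's "perimeter" of supported intents.
INTENTS = {
    "book_room":  {"book", "reservation", "reserve", "room"},
    "hotel_info": {"wifi", "parking", "pool", "breakfast", "check-in"},
    "nearby":     {"restaurant", "museum", "beach", "nearby", "around"},
}

RESPONSES = {
    "book_room":  "I can help you book a room. What dates are you considering?",
    "hotel_info": "Here is the hotel information you asked about: ...",
    "nearby":     "Here are some popular places near the hotel: ...",
}

def reply(message: str) -> str:
    words = set(message.lower().replace("?", "").split())
    # Score each intent by keyword overlap; the highest overlap wins.
    scores = {intent: len(words & kws) for intent, kws in INTENTS.items()}
    best = max(scores, key=scores.get)
    if scores[best] == 0:
        # Outside the perimeter: switch from autopilot to manual.
        return "Let me connect you with our front desk for that."
    return RESPONSES[best]

print(reply("Can I reserve a room for two nights?"))
print(reply("What is the meaning of life?"))
```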

Source: Becoming Human

AI predictions for 2019 from Yann LeCun, Hilary Mason, Andrew Ng, and Rumman Chowdhury

Artificial intelligence is cast all at once as the technology that will save the world and end it.

To cut through the noise and hype, VentureBeat spoke with luminaries whose views on the right way to do AI have been informed by years of working with some of the biggest tech and industry companies on the planet.

Below find insights from Google Brain cofounder Andrew Ng, Cloudera general manager of ML and Fast Forward Labs founder Hilary Mason, Facebook AI Research founder Yann LeCun, and Accenture’s responsible AI global lead Dr. Rumman Chowdhury. We wanted to get a sense of what they saw as the key milestones of 2018 and hear what they think is in store for 2019.

Amid a recap of the year and predictions for the future, some said they were encouraged to be hearing fewer Terminator AI apocalypse scenarios, as more people understand what AI can and cannot do. But these experts also stressed a continued need for computer and data scientists in the field to adopt responsible ethics as they advance artificial intelligence.

Dr. Rumman Chowdhury

Dr. Rumman Chowdhury is managing director of the Applied Intelligence division at Accenture and global lead of its Responsible AI initiative, and was named to BBC’s 100 Women list in 2017. Last year, I had the honor of sharing the stage with her in Boston at Affectiva’s conference to discuss matters of trust surrounding artificial intelligence. She regularly speaks to audiences around the world on the topic.

For the sake of time, she responded to questions about AI predictions for 2019 via email. All responses from the other people in this article were shared in phone interviews.

Chowdhury said in 2018 she was happy to see growth in public understanding of the capabilities and limits of AI and to hear a more balanced discussion of the threats AI poses — beyond fears of a global takeover by intelligent machines as in The Terminator. “With that comes increasing awareness and questions about privacy and security, and the role AI may play in shaping us and future generations,” she said.

Public awareness of AI still isn’t where she thinks it needs to be, however, and in the year ahead Chowdhury hopes to see more people take advantage of educational resources to understand AI systems and be able to intelligently question AI decisions.

She has been pleasantly surprised by the speed with which tech companies and people in the AI ecosystem have begun to consider the ethical implications of their work. But she wants to see the AI community do more to “move beyond virtue signaling to real action.”

“As for the ethics and AI field — beyond the trolley problem — I’d like to see us digging into the difficult questions AI will raise, the ones that have no clear answer. What is the ‘right’ balance of AI- and IoT-enabled monitoring that allows for security but resists a punitive surveillance state that reinforces existing racial discrimination? How should we shape the redistribution of gains from advanced technology so we are not further increasing the divide between the haves and have-nots? What level of exposure to children allows them to be ‘AI natives’ but not manipulated or homogenized? How do we scale and automate education using AI but still enable creativity and independent thought to flourish?” she asked.

In the year ahead, Chowdhury expects to see more government scrutiny and regulation of tech around the world.

“AI and the power that is wielded by the global tech giants raises a lot of questions about how to regulate the industry and the technology,” she said. “In 2019, we will have to start coming up with the answers to these questions — how do you regulate a technology when it is a multipurpose tool with context-specific outcomes? How do you create regulation that doesn’t stifle innovation or favor large companies (who can absorb the cost of compliance) over small startups? At what level do we regulate? International? National? Local?”

She also expects to see the continued evolution of AI’s role in geopolitical matters.

“This is more than a technology, it is an economy- and society-shaper. We reflect, scale, and enforce our values in this technology, and our industry needs to be less naive about the implications of what we build and how we build it,” she said. For this to happen, she believes people need to move beyond the idea common in the AI industry that if we don’t build it, China will, as if creation alone is where power lies.

“I hope regulators, technologists, and researchers realize that our AI race is about more than just compute power and technical acumen, just like the Cold War was about more than nuclear capabilities,” she said. “We hold the responsibility of recreating the world in a way that is more just, more fair, and more equitable while we have the rare opportunity to do so. This moment in time is fleeting; let’s not squander it.”

On a consumer level, she believes 2019 will see more use of AI in the home. Many people have become much more accustomed to using smart speakers like Google Home and Amazon Echo, as well as a host of smart devices. On this front, she’s curious to see if anything especially interesting emerges from the Consumer Electronics Show — set to kick off in Las Vegas in the second week of January — that might further integrate artificial intelligence into people’s daily lives.

“I think we’re all waiting for a robot butler,” she said.

Andrew Ng

I always laugh more than I expect to when I hear Andrew Ng deliver a whiteboard session at a conference or in an online course. Perhaps because it’s easy to laugh with someone who is both passionate and having a good time.

Ng is an adjunct computer science professor at Stanford University whose name is well known in AI circles for a number of different reasons.

He’s the cofounder of Google Brain, an initiative to spread AI throughout Google’s many products, and the founder of Landing AI, a company that helps businesses integrate AI into their operations.

He’s also the instructor of some of the most popular machine learning courses on YouTube and Coursera, the online learning company he cofounded; he also founded deeplearning.ai and wrote the book Machine Learning Yearning.

In 2017, after more than three years in the role, he left his post as chief AI scientist at Baidu, another tech giant that he helped transform into an AI company.

Finally, he’s also part of the $175 million AI Fund and on the board of driverless car company Drive.ai.

Ng spoke with VentureBeat earlier this month when he released the AI Transformation Playbook, a short read about how companies can unlock the positive impact of artificial intelligence in their own organizations.

One major area of progress or change he expects to see in 2019 is AI being used in applications outside of tech or software companies. The biggest untapped opportunities in AI lie beyond the software industry, he said, citing use cases from a McKinsey report that found that AI will generate $13 trillion in GDP by 2030.

“I think a lot of the stories to be told next year [2019] will be in AI applications outside the software industry. As an industry, we’ve done a decent job helping companies like Google and Baidu but also Facebook and Microsoft — which I have nothing to do with — but even companies like Square and Airbnb, Pinterest, are starting to use some AI capabilities. I think the next massive wave of value creation will be when you can get a manufacturing company or agriculture devices company or a health care company to develop dozens of AI solutions to help their businesses.”

Like Chowdhury, Ng was surprised in 2018 by the growth in understanding of what AI can and cannot do, and pleased that conversations can now take place without focusing on the killer-robot scenario or fear of artificial general intelligence.

Ng said he intentionally responded to my questions with answers he didn’t expect many others to have.

“I’m trying to cite deliberately a couple of areas which I think are really important for practical applications. I think there are barriers to practical applications of AI, and I think there’s promising progress in some places on these problems,” he said.

In the year ahead, Ng is excited to see progress in two specific areas of AI/ML research that help advance the field as a whole. One is AI that can arrive at accurate conclusions with less data, something called “few-shot learning” by some in the field.

“I think the first wave of deep learning progress was mainly big companies with a ton of data training very large neural networks, right? So if you want to build a speech recognition system, train it on 100,000 hours of data. Want to train a machine translation system? Train it on a gazillion pairs of sentences of parallel corpora, and that creates a lot of breakthrough results,” Ng said. “Increasingly I’m seeing results on small data where you want to try to take in results even if you have 1,000 images.”
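One well-established route to results on small data (a narrow slice of the few-shot research Ng is describing, not the whole of it) is transfer learning: reuse features pretrained on a large corpus and train only a thin classifier on your roughly 1,000 images. The PyTorch sketch below assumes torchvision 0.13 or later and uses a random dummy batch in place of a real dataset.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a ResNet pretrained on ImageNet and freeze its feature extractor.
model = models.resnet18(weights="IMAGENET1K_V1")
for param in model.parameters():
    param.requires_grad = False

num_classes = 5  # hypothetical small labeled dataset
model.fc = nn.Linear(model.fc.in_features, num_classes)  # new trainable head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One training step on a dummy batch (stand-in for a real DataLoader).
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, num_classes, (8,))
optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
print("loss:", loss.item())
```

Because only the final layer’s few thousand parameters are trained, a dataset of around 1,000 images can be enough to reach useful accuracy, which is the spirit of the small-data results Ng mentions.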

The other is advances in computer vision referred to as “generalizability.” A computer vision system might work great when trained with pristine images from a high-end X-ray machine at Stanford University, and many advanced companies and researchers in the field have created systems that outperform a human radiologist, but these systems aren’t very nimble.

“But if you take your trained model and you apply it to an X-ray taken from a lower-end X-ray machine or taken from a different hospital, where the images are a bit blurrier and maybe the X-ray technician has the patient slightly turned to their right so the angle’s a little bit off, it turns out that human radiologists are much better at generalizing to this new context than today’s learning algorithms. And so I think interesting research [is on] trying to improve the generalizability of learning algorithms in new domains,” he said.

Yann LeCun

Yann LeCun is a professor at New York University, Facebook chief AI scientist, and founding director of Facebook AI Research (FAIR), a division of the company that created PyTorch 1.0 and Caffe2, as well as a number of AI systems — like the text translation AI tools Facebook uses billions of times a day or advanced reinforcement learning systems that play Go.

LeCun believes the open source policy FAIR adopts for its research and tools has helped nudge other large tech companies to do the same, something he believes has moved the AI field forward as a whole. LeCun spoke with VentureBeat last month ahead of the NeurIPS conference and the fifth anniversary of FAIR, an organization he describes as interested in the “technical, mathematical underbelly of machine learning that makes it all work.”

“It gets the entire field moving forward faster when more people communicate about the research, and that’s actually a pretty big impact,” he said. “The speed of progress you’re seeing today in AI is largely because of the fact that more people are communicating faster and more efficiently and doing more open research than they were in the past.”

On the ethics front, LeCun is happy to see progress in simply considering the ethical implications of work and the dangers of biased decision-making.

“The fact that this is seen as a problem that people should pay attention to is now well established. This was not the case two or three years ago,” he said.

LeCun said he does not believe ethics and bias in AI have yet become a major problem requiring immediate action, but he believes people should be ready for that.

“I don’t think there are … huge life and death issues yet that need to be urgently solved, but they will come and we need to … understand those issues and prevent those issues before they occur,” he said.

Like Ng, LeCun wants to see more AI systems capable of the flexibility that can lead to robust AI systems that do not require pristine input data or exact conditions for accurate output.

LeCun said researchers can already manage perception rather well with deep learning but that a missing piece is an understanding of the overall architecture of a complete AI system.

He said that teaching machines to learn through observation of the world will require self-supervised learning, or model-based reinforcement learning.

“Different people give it different names, but essentially human babies and animals learn how the world works by observing and figure out this huge amount of background information about it, and we don’t know how to do this with machines yet, but that’s one of the big challenges,” he said. “The prize for that is essentially making real progress in AI, as well as machines, to have a bit of common sense and virtual assistants that are not frustrating to talk to and have a wider range of topics and discussions.”
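A minimal sketch of the idea LeCun is describing: hide part of the input and train a network to fill it in, so the supervision comes from the data itself rather than from human labels. The toy PyTorch example below masks half of each synthetic sine-wave window; real systems apply the same principle to text, images, and video.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic "observations": overlapping windows of a sine wave.
data = torch.sin(torch.linspace(0, 50, 2000)).unfold(0, 20, 1)

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 20))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(500):
    batch = data[torch.randint(0, data.size(0), (32,))]
    corrupted = batch.clone()
    corrupted[:, 10:] = 0.0  # mask the second half of each window
    loss = nn.functional.mse_loss(model(corrupted), batch)  # target = the data itself
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print("final reconstruction loss:", round(loss.item(), 4))
```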

For applications that will help internally at Facebook, LeCun said significant progress toward self-supervised learning will be important, as well as AI that requires less data to return accurate results.

“On the way to solving that problem, we’re hoping to find ways to reduce the amount of data that’s necessary for any particular task like machine translation or image recognition or things like this, and we’re already making progress in that direction; we’re already making an impact on the services that are used by Facebook by using weakly supervised or self-supervised learning for translation and image recognition. So those are things that are actually not just long term, they also have very short term consequences,” he said.

In the future, LeCun wants to see progress made toward AI that can establish causal relationships between events. That’s the ability to not just learn by observation, but to have the practical understanding, for example, that if people are using umbrellas, it’s probably raining.

“That would be very important, because if you want a machine to learn models of the world by observation, it has to be able to know what it can influence to change the state of the world and that there are things you can’t do,” he said. “You know if you are in a room and a table is in front of you and there is an object on top of it like a water bottle, you know you can push the water bottle and it’s going to move, but you can’t move the table because it’s big and heavy — things like this related to causality.”

Hilary Mason

After Cloudera acquired Fast Forward Labs in 2017, Hilary Mason became Cloudera’s general manager of machine learning. Fast Forward Labs, while absorbed into Cloudera, is still in operation, producing applied machine learning reports and advising customers to help them see six months to two years into the future.

One advancement in AI that surprised Mason in 2018 was related to multitask learning, which can train a single neural network to apply multiple kinds of labels when inferring, for example, objects seen in an image.
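For readers unfamiliar with the term, multitask learning can be sketched as a single network with a shared trunk and one output head per kind of label. The PyTorch example below uses invented dimensions and random data; it illustrates the structure, not the specific system Mason was referring to.

```python
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    """One shared trunk feeding two label heads (e.g., object and scene)."""
    def __init__(self, in_features=128, n_objects=10, n_scenes=4):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(in_features, 64), nn.ReLU())
        self.object_head = nn.Linear(64, n_objects)
        self.scene_head = nn.Linear(64, n_scenes)

    def forward(self, x):
        shared = self.trunk(x)  # features shared by both tasks
        return self.object_head(shared), self.scene_head(shared)

net = MultiTaskNet()
x = torch.randn(16, 128)  # stand-in for image features
object_labels = torch.randint(0, 10, (16,))
scene_labels = torch.randint(0, 4, (16,))

object_logits, scene_logits = net(x)
loss_fn = nn.CrossEntropyLoss()
# Summing the task losses means one backward pass trains both heads
# and the shared trunk at once.
loss = loss_fn(object_logits, object_labels) + loss_fn(scene_logits, scene_labels)
loss.backward()
print("combined loss:", loss.item())
```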

Fast Forward Labs has also been advising customers on the ethical implications of AI systems. Mason sees a wider awareness for the necessity of putting some kind of ethical framework in place.

“This is something that since we founded Fast Forward — so, five years ago — we’ve been writing about ethics in every report but this year [2018] people have really started to pick up and pay attention, and I think next year we’ll start to see the consequences or some accountability in the space for companies and for people who pay no attention to this,” Mason said. “What I’m not saying very clearly is that I hope that the practice of data science and AI evolve as such that it becomes the default expectation that both technical folks and business leaders creating products with AI will be accounting for ethics and issues of bias and the development of those products, whereas today it is not the default that anyone thinks about those things.”

As more AI systems become part of business operations in the year ahead, Mason expects that product managers and product leaders will begin to make more contributions on the AI front because they’re in the best position to do so.

“I think it’s clearly the people who have the idea of the whole product in mind and understand the business understand what would be valuable and not valuable, who are in the best position to make these decisions about where they should invest,” she said. “So if you want my prediction, I think in the same way we expect all of those people to be minimally competent using something like spreadsheets to do simple modeling, we will soon expect them to be minimally competent in recognizing where AI opportunities in their own products are.”

The democratization of AI, or expansion to corners of a company beyond data science teams, is something that several companies have emphasized, including Google Cloud AI products like Kubeflow Pipelines and AI Hub as well as advice from the CI&T consultancy to ensure AI systems are actually utilized within a company.

Mason also thinks more and more businesses will need to form structures to manage multiple AI systems.

Invoking an analogy sometimes used to describe the challenges faced by people working in DevOps, Mason said that managing a single system can be done with hand-deployed custom scripts, and cron jobs can manage a few dozen. But when you’re managing tens or hundreds of systems, in an enterprise with security, governance, and risk requirements, you need professional, robust tooling.

Businesses are shifting from having pockets of competency or even brilliance to having a systematic way to pursue machine learning and AI opportunities, she said.

The emphasis on containers for deploying AI makes sense to Mason, since Cloudera recently launched its own container-based machine learning platform. She believes this trend will continue in years ahead so companies can choose between on-premise AI or AI deployed in the cloud.

Finally, Mason believes the business of AI will continue to evolve, with common practices across the industry, not just within individual companies.

“I think we will see a continuing evolution of the professional practice of AI,” she said. “Right now, if you’re a data scientist or an ML engineer at one company and you move to another company, your job will be completely different: different tooling, different expectations, different reporting structures. I think we’ll see consistency there.”

Source: venturebeat.com