A Peek into the Future of Higher Education – Can Artificial Intelligence Drive Remote Learning?

[Image source: Statista]
In its current form, Higher Education suffers from an exclusivity problem in many developed countries. While it is possible for anyone to attend university in the UK, tuition fees of £9,250 per year have left students feeling alienated. Elsewhere, universities can feel wholly inaccessible to young adults in nations with weaker transport infrastructure.
As part of this article, I spoke to the head of one of the very few universities aiming to use AI to its fullest potential in education.

Chatbots and customized courses
Further developments in Artificial Intelligence may soon change Higher Education forever. AI is already making its presence felt on campus, with universities like Staffordshire introducing Beacon, a chatbot designed to act as a 24-hour digital assistant for students. Deakin University in Australia has recently set a new standard, using IBM’s Watson AI technology to anticipate and answer more than 1,600 student questions in real time on topics including student life, admissions, local directions and financial aid.
Chatbots may not sound like the kind of technology to swoop in and make university more inclusive for disadvantaged students. But the money institutions save by using AI to handle queries, rather than piling workloads onto tutors, could play a role in making HE more affordable.
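To make the mechanism concrete, here is a minimal Python sketch of how an FAQ-style chatbot might match a student’s question to a canned answer using text similarity. It is an illustration only: the FAQ entries are invented, and this is not how Beacon or Watson actually works.

```python
# A toy FAQ chatbot: match the user's question to the most similar
# FAQ entry. Entries below are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

faq = {
    "How do I apply for financial aid?": "Start at the student finance portal.",
    "Where is the admissions office?": "The admissions office is in Building A.",
    "What time does the library close?": "The library closes at 10pm on weekdays.",
}

questions = list(faq)
vectorizer = TfidfVectorizer()
question_vectors = vectorizer.fit_transform(questions)

def answer(user_question: str) -> str:
    """Return the answer for the most similar FAQ question."""
    scores = cosine_similarity(
        vectorizer.transform([user_question]), question_vectors
    )[0]
    best = scores.argmax()
    if scores[best] < 0.2:  # low confidence: hand off to a human
        return "I'm not sure. Let me connect you to a member of staff."
    return faq[questions[best]]

print(answer("when does the library close?"))
```

Production systems use far richer language models and dialogue management, but the core loop of matching questions to answers, with a human fallback for low-confidence cases, is broadly the same idea.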

Chatbots point to a far larger role for AI. When University 20.35 was developed, the emphasis was firmly on utilising Artificial Intelligence to provide bespoke Higher Education programs to thousands of remote learners. Dmitry Peskov, head of the university, explained: “When we started dealing with this challenge, we saw that educational programs in traditional universities and the teaching methods applied therein didn’t correspond to the needs of either private companies or the state. Everything is changing very quickly, new specialisations are appearing, and the requirements for traditional ones are constantly expanding. We realised that we need a flexible, digital data-driven educational platform where everything would be personalised as much as possible through the use of AI.”
Although it seems highly ambitious, the notion of optimising each student’s personal experience through advanced technology isn’t necessarily new. Writing for EdTech, Dave Doucette acknowledged that delivering a ‘highly individualised experience’ would be every university’s top priority if funding were unlimited.
AI has the potential to bridge the gap between students and their course material – offering personalised tutoring as well as video captioning as a means of making course content more accessible.

Mass remote learning leveraged by AI
Today we’re used to favoring courses with lower student-to-tutor ratios, on the assumption that they can offer the best level of personal support. Right? “University 20.35 is not a university in the traditional sense. We don’t have classrooms, permanent staff, lecturers, rectors and deans – and we don’t teach students based on programs that are available at other universities. It is a digital platform driven by artificial intelligence – the Pushkin AI. In fact, we are an experimental training ground, where advanced EdTech and techniques are being developed that will be commonplace not just tomorrow, but ‘the day after tomorrow’ – in the year 2035. Hence the name of the University,” explains Peskov.
What does this mean for the HE classrooms and lecture halls of the future? If AI delivers on the promise it’s continually showing, learning will transition into the realm of the remote. Students will be able to study at home, complete assignments that a combination of Artificial Intelligence and machine learning has determined are suitable for them, then submit their work for the technology to assess automatically.
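What might “determining a suitable assignment” look like in code? Here is a deliberately simple sketch; the assignment names, difficulty scores, and mastery estimate are all invented for illustration, and real adaptive-learning platforms model students far more richly.

```python
# Toy adaptive-learning heuristic: recommend the easiest assignment
# that is still harder than the student's estimated mastery (0..1).
assignments = {
    "intro_exercises": 0.2,   # estimated difficulty
    "essay_draft": 0.5,
    "research_project": 0.8,
}

def next_assignment(mastery: float) -> str:
    """Pick the easiest task that still stretches the student."""
    harder = {name: d for name, d in assignments.items() if d > mastery}
    if not harder:
        # Nothing harder remains; give the most challenging task we have.
        return max(assignments, key=assignments.get)
    return min(harder, key=harder.get)

print(next_assignment(0.4))  # -> essay_draft
```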

[Image source: Statista]

So does this mean that we’ll no longer be looking for in-person courses with a student-to-tutor ratio of under 25 in the future? Peskov believes so: “We initially wanted to build a scalable digital platform through the use of AI. Therefore, a potentially unlimited number of participants will be able to get enrolled in the future at the university. But, practically, we plan that by 2020 up to 100,000 people will be connected to our system.”

The Higher Education revolution
University life is ever-changing. The disruptive power of the internet enabled students to learn in just about any location – whether it was through the use of library computers, on laptops at home, or on their mobile phones while traveling to take a morning exam.
Artificial Intelligence enables the next logical step: entire courses taken from home, in comfort and at scale. Does University 20.35 represent the start of a much wider movement? Dmitry thinks so: “This is a revolution.

The technology groups that are underlying this revolution can be applied in completely different fields: from how we teach children at the preschool age to the mass retraining of older generations. They can be used to fundamentally change the models of universities. They can be used for online courses, for working professions.
We want to make sure that people in the new digital economy are in demand and can easily adapt to any changes and requirements. And this can only be done with the help of AI and a complete revision of existing educational models.”

To Sum Up
Educational institutions have existed in a relatively familiar form for centuries. As they bid to adapt to the new millennium’s rapid advances in computing and technology, today’s universities can still be found guilty of charging tuition fees that fly in the face of inclusivity.

The development of Artificial Intelligence now offers the world a chance to revolutionize the way students access Higher Education, with affordable home learning and bespoke course content to suit each student’s needs. The notion of a ‘revolution’ may cause established institutions to baulk, but if it’s a revolution that brings greater inclusivity, it’s a revolution worth having.

Source: Hackernoon


Artificial Intelligence: Salaries Heading Skyward

While the average salary for a Software Engineer is around $100,000 to $150,000, to make the big bucks you want to be an AI or Machine Learning specialist, scientist, or engineer.


Artificial intelligence salaries benefit from the perfect recipe for a sweet paycheck: a hot field and high demand for scarce talent. It’s the ever-reliable law of supply and demand, and right now, anything artificial intelligence-related is in very high demand.

According to Indeed.com, salaries for AI-related roles (search keyword: “artificial intelligence engineer”) in the San Francisco area range from approximately $134,135 per year for a “software engineer” to $169,930 per year for a “machine learning engineer.”

However, it can go much higher if you have the credentials firms need. One tenured professor was offered triple his $180,000 salary to join Google, which he declined for a different teaching position.

The record so far, however, was set in April, when the Japanese firm Start Today, which operates the fashion-shopping site Zozotown, posted new job openings for seven “genius” AI tech experts, offering annual salaries of as much as 100 million yen, or just under US$1 million.

Key Sectors for AI Salaries

Scoring a top AI salary means working in the “right” sector. While plentiful, AI jobs are mainly in just a few sectors — namely tech — and confined to just a few big and expensive cities. Glassdoor, another popular job search site, notes that 67% of all AI jobs listed on its site are located in the Bay Area, Seattle, Los Angeles, and New York City.

It also listed Facebook, NVIDIA, Adobe, Microsoft, Uber, and Accenture as the best AI companies to work for in 2018, together accounting for almost 19% of open AI positions. The average annual base pay for an AI job listed on Glassdoor is $111,118 per year.

Glassdoor also found that financial services, consulting, and government agencies are actively hiring AI engineering and data science professionals. This includes top firms like Capital One, Fidelity, Goldman Sachs, Booz Allen Hamilton, EY, and McKinsey & Company, as well as NASA’s Jet Propulsion Laboratory, the U.S. Army, and the Federal Reserve Bank.

However, expect the number of jobs and fields to expand considerably in the near future. A recent report from Gartner said that AI will kill off 1.8 million jobs, mostly menial labor, but that the field will create 2.3 million new jobs by 2020. That projection is reinforced by a recent Capgemini report, which found that 83% of companies using AI say they are adding jobs because of AI.

Best Jobs for AI Salaries

The term “AI” is rather broad and covers a number of disciplines and tasks, including natural language generation and comprehension, speech recognition, chatbots, machine learning, decision management, deep learning, biometrics, and text analysis and processing. Given the level of specialization each requires, few professionals can master more than one discipline.

In short, finding the best AI salary calls for actively nurturing the right career path.

While the average pay for an AI programmer is around $100,000 to $150,000, depending on the region of the country, all of these are in the developer/coder realm. To make the big money you want to be an AI engineer. According to Paysa, yet another job search site, an artificial intelligence engineer earns an average of $171,715, ranging from $124,542 at the 25th percentile to $201,853 at the 75th percentile, with top earners making more than $257,530.

Why so high? Because many come from non-programming backgrounds. The IEEE notes that people with Ph.D.s in sciences like biology and physics are returning to school to learn AI and apply it to their fields. They need to straddle the technical side, knowing a multitude of languages and hardware architectures, while also understanding the data involved. That combination makes such engineers rare, and thus expensive.

Why Are AI Salaries So High?

The fact is, AI is not a discipline you can teach yourself the way many developers learn to code. A survey by Stack Overflow found that 86.7% of developers were, in fact, self-taught. However, that applies to languages like Java, Python, and PHP, not the esoteric art of artificial intelligence.

It requires advanced degrees in computer science, often a Ph.D. In a report, Paysa found that 35 percent of AI positions require a Ph.D. and 26 percent require a master’s degree. Why? Because AI is a rapidly growing field, and study at the Ph.D. level, with its academic projects, tends to be innovative if not bleeding edge, giving students the experience they need for the work environment.

Moreover, it requires fluency across multiple tools and technologies, including C++, the STL, Perl, Perforce, and APIs like OpenGL and PhysX. In addition, because AI systems perform demanding calculations, a background in physics or some form of life science is often necessary.

Therefore, to be an effective and in-demand AI developer you need a lot of skills, not just one or two. Indeed lists the top 10 skills you need to know for AI (a small worked example combining a few of them follows the list):

1) Machine learning

2) Python

3) R language

4) Data science

5) Hadoop

6) Big Data

7) Java

8) Data mining

9) Spark

10) SAS
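
To give a flavour of how a few of these skills combine in practice, here is a small, self-contained example using Python and scikit-learn, a common machine-learning library. The dataset is a standard teaching set bundled with the library, so the example is illustrative rather than representative of production work.

```python
# Minimal machine-learning workflow in Python: load data, split it,
# train a model, and measure accuracy on held-out examples.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)

print(f"Test accuracy: {accuracy_score(y_test, model.predict(X_test)):.2f}")
```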

As you can see, that is a wide range of skills, and none of them is learned overnight. According to The New York Times, there are fewer than 10,000 qualified AI specialists in the world. Element AI, a Montreal company that consults on machine learning systems, published a report earlier this year estimating that around 22,000 Ph.D.-level computer scientists in the world are capable of building AI systems. Either way, that is too few for the demand reported by Machine Learning News.

Competing Employers Drive Salaries Higher

With so few AI specialists available, tech companies are raiding academia. At the University of Washington, six of the 20 artificial intelligence professors are now on leave or partial leave and working for outside companies. In the process, they are shrinking the pool of professors who can teach the technology, creating a vicious cycle.

U.S. News & World Report lists the top 20 schools for AI education. The top five are:

1) Carnegie Mellon University, Pittsburgh, PA

2) Massachusetts Institute of Technology, Cambridge, MA

3) Stanford University, Stanford, CA

4) University of California — Berkeley, Berkeley, CA

5) University of Washington, Seattle, WA

With academia being raided for talent, alternatives are popping up. Google, which is hiring any AI developer it can get its hands on, offers a course on deep learning and machine-learning tools via its Google Cloud Platform website, and Facebook, also deep in AI, hosts a series of videos on the fundamentals of AI, such as algorithms. If you want to take courses online, there are Coursera and Udacity.

Basic computer technology and math backgrounds are the backbone of most artificial intelligence programs. Linear algebra is as necessary as a programming language, since machine learning performs its analysis on data held in matrices, and linear algebra is all about operations on matrices. According to Computer Science Degree Hub, coursework for AI involves the study of advanced math, Bayesian networking and graphical modeling (including neural nets), physics, engineering and robotics, computer science, and cognitive science theory.
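To see why linear algebra is so central, note that even a basic least-squares fit reduces entirely to matrix operations. Here is a minimal NumPy sketch on invented toy data, solving the normal equations (X^T X)w = X^T y:

```python
# Least-squares line fit as pure matrix algebra.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2 * x + 1 + rng.normal(scale=0.5, size=x.size)  # y = 2x + 1 plus noise

# Design matrix with a bias column; solve (X^T X) w = X^T y.
X = np.column_stack([x, np.ones_like(x)])
w = np.linalg.solve(X.T @ X, X.T @ y)

print(f"slope ~ {w[0]:.2f}, intercept ~ {w[1]:.2f}")  # close to 2 and 1
```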

Some things cannot be taught. Working with artificial intelligence does not mean you get to offload the work onto the computer. It requires an analytical thought process, foresight about technological innovation, the technical skill to design systems, and the ability to maintain and repair technology, software, and algorithms. It is easy to see why skilled people are so rare, and that scarcity will only drive AI salaries higher.

Source: Medium

Can artificial intelligence help society as much as it helps business?

The answer is yes—but only if leaders start embracing technological social responsibility (TSR) as a new business imperative for the AI era.


In 1953, US senators grilled General Motors CEO Charles “Engine Charlie” Wilson about his large GM shareholdings: Would they cloud his decision making if he became the US secretary of defense and the interests of General Motors and the United States diverged? Wilson said that he would always put US interests first but that he could not imagine such a divergence taking place, because, “for years I thought what was good for our country was good for General Motors, and vice versa.” Although Wilson was confirmed, his remarks raised eyebrows due to widespread skepticism about the alignment of corporate and societal interests.

The skepticism of the 1950s looks quaint when compared with today’s concerns about whether business leaders will harness the power of artificial intelligence (AI) and workplace automation to pad their own pockets and those of shareholders—not to mention hurting society by causing unemployment, infringing upon privacy, creating safety and security risks, or worse. But is it possible that what is good for society can also be good for business—and vice versa?

Innovation and skill building

To answer this question, we need a balanced perspective that’s informed by history. Technology has long had positive effects on well-being beyond GDP—for example, increasing leisure or improving health and longevity—but it can also have a negative impact, especially in the short term, if adoption heightens stress, inequality, or risk aversion because of fears about job security. A relatively new strand of welfare economics has sought to calculate the value of both the upside and the downside of technology adoption. This is not just a theoretical exercise. What if workers in the automation era fear the future so much that this changes their behavior as consumers and crimps spending? What if stress levels rise to such an extent as workers interface with new technologies that labor productivity suffers?

Building and expanding on existing theories of welfare economics, we simulated how technology adoption today could play out across the economy. The key finding is that two dimensions will be decisive—and in both cases, business has a central role to play (Exhibit 1). The first dimension is the extent to which firms adopt technologies with a view to accelerating innovation-led growth, compared with a narrower focus on labor substitution and cost reduction. The second is the extent to which technology adoption is accompanied by measures to actively manage the labor transitions that will accompany it—in particular, raising skill levels and ensuring a more fluid labor market.

[Exhibit 1]

Both of these dimensions are in sync with our previous bottom-line-focused work on AI and automation adoption. In our research, digital leaders who reap the biggest benefits from technology adoption tend to be those who focus on new products or new markets and, as a result, are more likely to increase or stabilize their workforce than reduce it. At the same time, human capital is an essential element of their strategies, since having the talent able to implement and drive digital transformation is a prerequisite for successful execution. No wonder a growing number of companies, from Walmart to German software company SAP, are emphasizing in-house training programs to equip members of their workforce with the skills they will need for a more automated work environment. And both Amazon and Facebook have raised the minimum wage for their workers as a way to attract, retain, and reward talent.

TSR: Technological social responsibility

Given the potential for a win–win across business and society from a socially careful and innovation-driven adoption strategy, we believe the time has come for business leaders across sectors to embed a new imperative in their corporate strategy. We call this imperative technological social responsibility (TSR). It amounts to a conscious alignment between short- and medium-term business goals and longer-term societal ones.

Some of this may sound familiar. Like its cousin, corporate social responsibility, TSR embodies the lofty goal of enlightened self-interest. Yet the self-interest in this case goes beyond regulatory acceptance, consumer perception, or corporate image. By aligning business and societal interests along the twin axes of innovation focus and active transition management, we find that technology adoption can potentially increase productivity and economic growth in a powerful and measurable way.

In economic terms, innovation and transition management could, in a best-case scenario, double the potential growth in welfare—the sum of GDP and additional components of well-being, such as health, leisure, and equality—compared with an average scenario. The welfare growth to 2030 that emerges from this scenario could be even higher than the GDP and welfare gains we have seen in recent years from computers and early automation.

However, other scenarios that pay less heed to innovating or to managing disruptive transitions from tech adoption could slow income growth, increase inequality and unemployment risk, and lead to fewer improvements in leisure, health, and longevity. And that, in turn, would reduce the benefits to business.

At the company level, a workforce that is healthier, happier, better trained, and less stressed will also be more productive, more adaptable, and better able to drive the technology adoption and innovation surge that will boost revenue and earnings. At the broader level, a society whose overall welfare is improving, and improving faster than GDP, is a more resilient society, better able to handle sometimes painful transitions. In this spirit, New Zealand recently announced that it will shift its economic policy focus from GDP to broader societal well-being.

Leadership imperatives

For business leaders, three priorities will be essential. First, they will need to understand and be convinced of the argument that proactive management of technology transitions is not only in the interest of society at large but also in the more narrowly focused financial interest of companies themselves. Our research is just a starting point, and more work will be needed, including to show how and where individual sectors and companies can benefit from adopting a proactive strategy. Work is already underway at international bodies such as the Organisation for Economic Co-operation and Development to measure welfare effects across countries.

Second, digital reinvention plans will need to have, at their core, a thoughtful and proactive workforce-management strategy. Talent is a key differentiating factor, and there is much talk about the need for training, retraining, and nurturing individuals with the skills needed to implement and operate updated business processes and equipment. But so far, “reskilling” remains an afterthought in many companies. That is shortsighted; our work on digital transformation continues to emphasize the importance of having the right people in the right places as machines increasingly complement humans in the workforce. From that perspective alone, active management of training and workforce mobility will be an essential task for boards in the future.

Third, CEOs must embrace new, farsighted partnerships for social good. The successful adoption of AI and other advanced technologies will require cooperation from multiple stakeholders, especially business leaders and the public sector. One example involves education and skills: business leaders can help inform education providers with a clearer sense of the skills that will be needed in the workplace of the future, even as they look to raise the specific skills of their own workforce. IBM, for one, is partnering with vocational schools to shape curricula and build a pipeline of future “new collar” workers—individuals with job profiles at the nexus of professional and trade work, combining technical skills with a higher educational background. AT&T has partnered with more than 30 universities and multiple online education platforms to enable employees to earn the credentials needed for new digital roles.

Other critical public-sector actions include supporting R&D and innovation; creating markets for public goods, such as healthcare, so that there is a business incentive to serve these markets; and collaborating with businesses on reskilling, helping them to match workers with the skills they need and with the digital-era jobs to which they could most easily transition. A more fluid labor market and better job matching will benefit companies and governments, accelerating the search for talent for the former and reducing the potential transition costs for the latter.

There are many aspects to TSR, and we are just starting to map out some of the most important ones. But as an idea and an imperative, the time has come for technological social responsibility to make a forceful entry into the consciousness and strategies of business leaders everywhere.

Source: McKinsey

What is Natural Language Processing and How Does it Benefit a Business?

We use natural language processing every day. It makes it easier for us to interact with computers and software and allows us to perform complex searches and tasks without the help of a programmer, developer or analyst.

What is Natural Language Processing (NLP) Driven Analytics?

Natural language processing (NLP) is an integral part of today’s advanced analytics. If you have clicked in the search window on Google and entered a question, you know NLP! When NLP is incorporated into the business intelligence environment, business users can enter a question in human language. For example, ‘which sales team member achieved the best numbers last month?’ or ‘which of our products sells best in New York?’

The system translates this natural language search into a more traditional analytics query and returns the most appropriate answer in the most appropriate form, so users can benefit from smart visualizations, tables, numbers, or natural language descriptions that are easy to understand.
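As a deliberately oversimplified sketch of that translation step, the snippet below maps one of the example questions onto a SQL-style query using keyword rules. Real NLP-driven BI tools rely on genuine language understanding and a semantic model of the data; the table and column names here are invented:

```python
# Toy natural-language-to-SQL translation using keyword rules.
import re

def to_query(question: str) -> str:
    q = question.lower()
    region_match = re.search(r"in ([a-z ]+)\?", q)
    region = region_match.group(1).title() if region_match else None
    if "sells best" in q:
        where = f" WHERE region = '{region}'" if region else ""
        return (
            "SELECT product, SUM(sales) AS total FROM sales"
            + where + " GROUP BY product ORDER BY total DESC LIMIT 1"
        )
    raise ValueError("Question not understood; fall back to a manual query.")

print(to_query("Which of our products sells best in New York?"))
```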

How Does NLP-Based Analytics Benefit a Business Organization?

Perhaps the most important benefit of NLP is that it allows the business to implement augmented analytics in a self-serve environment with very little training required, and it ensures that users will adopt business intelligence and analytics as a tool to use every day.

NLP allows the enterprise to expand the use of business intelligence across the enterprise by offering business users an intuitive tool to ask for and receive crucial data and to understand the analytical output and share it with other users.

NLP opens up and expands the data repositories and information in an organization in a way that is meaningful and easy to understand, so data is more accessible and answers are more valuable. This will improve the accuracy of planning and forecasting and allow for a better overall understanding of business results.

Natural language processing helps business users sort through integrated data sources (internal and external) to answer a question in a way the user can understand, and it provides a foundation to simplify and speed the decision process with fact-based, data-driven analysis.

The enterprise can find and use information using natural language queries, rather than complex queries, so business users can achieve results without the assistance of IT or business analysts.

NLP presents results through smart visualization and contextual information delivered in natural language. Because these tools are easy to use and to understand, users are more likely to adopt them and to add value to the organization.

With NLP searches and queries, business users are free to explore data and achieve accurate results and the organization can achieve rapid ROI and sustain low total cost of ownership (TCO) with tools as familiar as a Google search.

Users can combine NLP with plug n’ play predictive analysis or assisted predictive modeling so the organization can achieve data democratization.

NLP and the advanced data discovery tools it supports can provide sophisticated capabilities in a user-friendly environment: suggesting relationships, identifying patterns and trends, and offering insight into previously hidden information, so business users can ‘discover’ subtle, crucial problems and opportunities.

NLP is an integral part of today’s advanced analytics. It establishes an easy-to-use, interactive environment where users can create a search query in natural language and, as such, will support user adoption and provide numerous benefits to the enterprise.

Source: dataversity.net

Here’s how AI can help fight climate change according to the field’s top thinkers

From monitoring deforestation to designing low-carbon materials


The AI renaissance of recent years has led many to ask how this technology can help with one of the greatest threats facing humanity: climate change. A new research paper authored by some of the field’s best-known thinkers aims to answer this question, giving a number of examples of how machine learning could help fight it.

The suggested use-cases are varied, ranging from using AI and satellite imagery to better monitor deforestation, to developing new materials that can replace steel and cement (the production of which accounts for nine percent of global greenhouse gas emissions).

But despite this variety, the paper (which we spotted via MIT Technology Review) returns time and time again to a few broad areas of deployment. Prominent among these are using machine vision to monitor the environment; using data analysis to find inefficiencies in emission-heavy industries; and using AI to model complex systems, like Earth’s own climate, so we can better prepare for future changes.

The authors of the paper — which include DeepMind CEO Demis Hassabis, Turing Award winner Yoshua Bengio, and Google Brain co-founder Andrew Ng — say that AI could be “invaluable” in mitigating and preventing the worst effects of climate change, but note that it is not a “silver bullet” and that political action is desperately needed, too.

“Technology alone is not enough,” write the paper’s authors, who were led by David Rolnick, a postdoctoral fellow at the University of Pennsylvania. “[T]echnologies that would reduce climate change have been available for years, but have largely not been adopted at scale by society. While we hope that ML will be useful in reducing the costs associated with climate action, humanity also must decide to act.”

In total, the paper suggests 13 fields where machine learning could be deployed (from which we’ve selected eight examples), which are categorized by the time-frame of their potential impact, and whether or not the technology involved is developed enough to reap certain rewards. You can read the full paper for yourself here, or browse our list below.

  • Build better electricity systems. Electricity systems are “awash with data” but too little is being done to take advantage of this information. Machine learning could help by forecasting electricity generation and demand, allowing suppliers to better integrate renewable resources into national grids and reduce waste (a minimal forecasting sketch follows this list). Google’s UK lab DeepMind has demonstrated this sort of work already, using AI to predict the energy output of wind farms.
  • Monitor agricultural emissions and deforestation. Greenhouse gases aren’t just emitted by engines and power plants — a great deal comes from the destruction of trees, peatland, and other plant life that has captured carbon through photosynthesis over millions of years. Deforestation and unsustainable agriculture lead to this carbon being released back into the atmosphere, but using satellite imagery and AI, we can pinpoint where this is happening and protect these natural carbon sinks.
  • Create new low-carbon materials. The paper’s authors note that nine percent of all global emissions of greenhouse gases come from the production of concrete and steel. Machine learning could help reduce this figure by helping to develop low-carbon alternatives to these materials. AI helps scientists discover new materials by allowing them to model the properties and interactions of never-before-seen chemical compounds.
  • Predict extreme weather events. Many of the biggest effects of climate change in the coming decades will be driven by hugely complex systems, like changes in cloud cover and ice sheet dynamics. These are exactly the sort of problems AI is great at digging into. Modeling these changes will help scientists predict extreme weather events, like droughts and hurricanes, which in turn will help governments protect against their worst effects.
  • Make transportation more efficient. The transportation sector accounts for a quarter of global energy-related CO2 emissions, with two-thirds of this generated by road users. As with electricity systems, machine learning could make this sector more efficient, reducing the number of wasted journeys, increasing vehicle efficiency, and shifting freight to low-carbon options like rail. AI could also reduce car usage through the deployment of shared, autonomous vehicles, but the authors note that this technology is still not proven.
  • Reduce wasted energy from buildings. Energy consumed in buildings accounts for another quarter of global energy-related CO2 emissions, and presents some of “the lowest-hanging fruit” for climate action. Buildings are long-lasting and are rarely retrofitted with new technology. Adding just a few smart sensors to monitor air temperature, water temperature, and energy use can reduce a single building’s energy usage by 20 percent, and large-scale projects monitoring whole cities could have an even greater impact.
  • Geoengineer a more reflective Earth. This use-case is probably the most extreme and speculative of all those mentioned, but it’s one some scientists are hopeful about. If we can find ways to make clouds more reflective or create artificial clouds using aerosols, we could reflect more of the Sun’s heat back into space. That’s a big if though, and modeling the potential side-effects of any schemes is hugely important. AI could help with this, but the paper’s authors note there would still be significant “governance challenges” ahead.
  • Give individuals tools to reduce their carbon footprint. According to the paper’s authors, it’s a “common misconception that individuals cannot take meaningful action on climate change.” But people do need to know how they can help. Machine learning could help by calculating an individual’s carbon footprint and flagging small changes they could make to reduce it — like using public transport more; buying meat less often; or reducing electricity use in their house. Adding up individual actions can create a big cumulative effect.
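
As referenced in the first item above, here is a minimal sketch of the demand-forecasting idea: predict the next hour’s electricity demand from the previous 24 hours with a simple autoregressive model. The data is synthetic and the model is far simpler than anything a real grid operator (or DeepMind) would deploy:

```python
# Toy electricity-demand forecasting: learn to predict the next hour
# from the previous 24 hours of (synthetic) demand data.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
hours = np.arange(24 * 30)  # thirty days of hourly readings
demand = 100 + 20 * np.sin(2 * np.pi * hours / 24)  # daily cycle
demand += rng.normal(scale=2, size=hours.size)      # noise

window = 24  # use the past day as features
X = np.array([demand[i : i + window] for i in range(len(demand) - window)])
y = demand[window:]

model = LinearRegression().fit(X[:-100], y[:-100])  # hold out last 100 hours
print("Held-out R^2:", round(model.score(X[-100:], y[-100:]), 3))
```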

Source: The Verge

We Actually Went Driverless 100 Years Ago


In the aftermath of Uber’s recent fatal crash in Tempe, which involved a driverless car, there has been a great deal of speculation about the future of the driverless automobile. As is often the case, trying to see beyond the near-term fear and the natural trepidation that accompanies handing over control of life-and-death decisions to machines can be exceptionally difficult. Yet this isn’t the first time we’ve encountered the driverless dilemma. There’s another example that’s nearly 100 years old.

Elevating Drivers

Coronado Island, just south of San Diego, is home to one of the world’s grande dame resorts, the Hotel Del Coronado, built in 1888. Much has changed at the Hotel Del in over a century, but one thing hasn’t: in the center of the magnificent main Victorian building is the Otis #61, a brass, accordion-doored manual elevator that still shuttles guests, just as it has for the last 130 years. This elevator, however, has a driver.

For hotel guests who never knew that elevators were once run exclusively by “drivers,” the novelty is a draw. Still, the look of apprehension on many of their faces is clear as they approach an elevator that needs to be driven. You can imagine them thinking, “Is that really safe?” “Why can’t it operate on its own, the way real elevators do?” or “What if the driver makes a mistake and starts it up just as you’re getting in or out?” After all, he’s human, and humans are known to make mistakes.

Interestingly, although elevator operators were common through the mid-1900s, driverless elevators existed as far back as the early 1900s. There was just one problem: nobody trusted them. Given the choice between the stairs and an unattended automated elevator, people took the stairs, and the elevator remained empty. The tipping point for the driverless elevator came only in the middle of the twentieth century, as the result of a strike by the elevator operators’ union in New York City in 1945.

The strike was devastating, costing the city an estimated one hundred million dollars. Suddenly, there was an economic incentive to go back to the automatic elevator. Over the next decade there was a massive effort to build trust in automatic elevators, which resulted in the elimination of tens of thousands of elevator operator jobs.

Few of us will today step into an elevator and even casually think about the way it operates, how safe it is, or what the risks are. If you find yourself at the Hotel Del and decide to take the elevator, stop and think about just how radical change can be in reshaping our attitudes about what’s safe and normal.

Granted, an automatic elevator is a world apart from an autonomous vehicle. But the fundamental issue with the adoption of “driverless,” in both cases, isn’t so much the technology, which can be much safer without a human driver; it’s about trusting a machine to do something as well as we believe a human can do it. In a word, it’s all about perception.


Still doubtful? Perhaps you’re one of the few people who have a fear of elevators. After all, twenty-seven people die each year as the result of faulty automatic elevators. Elevators definitely kill.

However, you might also be interested to learn that, according to the Centers for Disease Control and Prevention’s National Center for Health Statistics, a whopping 1,600 people die from falling down stairs each year. I’ll save you the math: 1,600 divided by 27 is roughly 59, meaning you’re about sixty times as likely to have a fatal accident taking the stairs. Unfortunately, numbers alone rarely change perception.

In an interview for my upcoming book Revealing The Invisible, Amin Kashi, director of autonomous driving at Mentor, a Siemens business, told me: “I’m sure we will look back on this in the not too distant future and think to ourselves, how could we have wasted all of that time commuting, how could we have dealt with the inherent lack of safety in the way that we used to drive. All these issues will become so obvious and so clear. From where we stand right now we’re accustomed to a certain behavior so we live with it, but I think we will be amazed that we actually got through it.”

No doubt it will take time to build a sufficient level of trust in autonomous vehicles. But there’s equally little doubt that one day our children’s children will have a look of apprehension on their faces as they approach a car that needs to be driven by a human.

I imagine that they’ll be thinking, “Is that really safe?”

Source: Innovation Excellence

10 Reasons Why Every Leader Should be Data Literate


With rapid advances in technology and computing power, the rise of the Data Scientist, Artificial Intelligence, and Machine Learning, and the lure of gaining insight and meaning from a wealth of data now more attainable than ever before, “Data Literacy” has become a necessity for leaders and managers within organizations.

Here are 10 reasons why every leader needs to become Data Literate:

To assist in developing a Data-Driven Culture
This is especially applicable to companies that are not yet using data to power their decision-making or are, at best, early in their data journey.

Quite often a shift in the culture of the company is needed, a change in the way the company is used to working.

If you, as a leader, are data literate, the process of becoming “data-driven” will be a lot smoother and more efficient.

To help drill for Data
“Data is the new Oil” (Clive Humby, UK Mathematician).

Over 90 percent of the world’s data was generated in the last two years alone (Forbes.com), and 2.5 quintillion bytes of data are produced each day!

Structured and unstructured data, text files, images, videos, documents: data is everywhere.

Being data literate will enable you to take advantage of it, to know where to look in your domain of expertise.

To assist in building a slick, efficient team
Data Scientists, Data Engineers, Machine Learning Engineers, Data Developers, Data Architects, whatever the job title, all are needed to take advantage of data in an organization.

Be data literate and be able to identify the key personnel you need to exploit the knowledge and insights quickly and efficiently.

To ensure compliance with Data Security, Privacy, and Governance
Recent events have put the focus very much on how data is managed and secured, and on ensuring that people’s privacy is protected and respected.

Recent legislation such as GDPR has only added to its importance. Data literacy will enable a full appreciation of how to ensure these issues and concerns are fully addressed and adhered to.

To help ensure the correct tools and technology are available
We now live in a fast-paced world where technology changes at a rapid rate, with frequent advances, new tools, and new software.

Part of data literacy is not necessarily being an expert in this area, but being aware of what is available, what is possible, and what is coming.

Having this view enables your company and your team to be well positioned to use the relevant technology.

To help “spread the word” and form good habits
A good, data literate manager, when presented with an opinion or judgment from a team member, will not take it at face value but will ask them to provide the data to back it up.

This can only help promote the use of data and move the organization towards the data-driven culture we discussed previously.

A phrase often used in football coaching is “practice makes permanent”.

Being constantly asked by managers to back up your opinions with data will create a habit, and soon everyone in the organization will be utilizing the data.

To help ensure the right questions are asked of the data
Knowing your data, and what is available where in your organization, can only help ensure that the correct questions are asked of the data in order to achieve the most beneficial insights possible.

To help gain a competitive advantage
Companies that leverage their data the best, and utilize the insights gained from it, will ultimately gain an advantage over their competitors.

Data Literacy within Leaders is a bare minimum if you want to achieve this.

To gain respect and credence from your team and fellow professionals
Being knowledgeable and appreciative of all things data will only help in gaining the trust and respect of your fellow team members and others within your organization, and indeed your industry.

In order to survive in the future world of work
The workplace is only going one way in this digital, data-driven age. Do not stay data illiterate and risk being left behind.

Source: algorithmxlab.com