Can artificial intelligence help society as much as it helps business?

The answer is yes—but only if leaders start embracing technological social responsibility (TSR) as a new business imperative for the AI era.

In 1953, US senators grilled General Motors CEO Charles “Engine Charlie” Wilson about his large GM shareholdings: Would they cloud his decision making if he became the US secretary of defense and the interests of General Motors and the United States diverged? Wilson said that he would always put US interests first but that he could not imagine such a divergence taking place, because, “for years I thought what was good for our country was good for General Motors, and vice versa.” Although Wilson was confirmed, his remarks raised eyebrows due to widespread skepticism about the alignment of corporate and societal interests.

The skepticism of the 1950s looks quaint when compared with today’s concerns about whether business leaders will harness the power of artificial intelligence (AI) and workplace automation to pad their own pockets and those of shareholders—not to mention hurting society by causing unemployment, infringing upon privacy, creating safety and security risks, or worse. But is it possible that what is good for society can also be good for business—and vice versa?

Innovation and skill building

To answer this question, we need a balanced perspective that’s informed by history. Technology has long had positive effects on well-being beyond GDP—for example, increasing leisure or improving health and longevity—but it can also have a negative impact, especially in the short term, if adoption heightens stress, inequality, or risk aversion because of fears about job security. A relatively new strand of welfare economics has sought to calculate the value of both the upside and the downside of technology adoption. This is not just a theoretical exercise. What if workers in the automation era fear the future so much that this changes their behavior as consumers and crimps spending? What if stress levels rise to such an extent as workers interface with new technologies that labor productivity suffers?

Building and expanding on existing theories of welfare economics, we simulated how technology adoption today could play out across the economy. The key finding is that two dimensions will be decisive—and in both cases, business has a central role to play (Exhibit 1). The first dimension is the extent to which firms adopt technologies with a view to accelerating innovation-led growth, compared with a narrower focus on labor substitution and cost reduction. The second is the extent to which technology adoption is accompanied by measures to actively manage the labor transitions that will accompany it—in particular, raising skill levels and ensuring a more fluid labor market.

Both of these dimensions are in sync with our previous bottom-line-focused work on AI and automation adoption. In our research, digital leaders who reap the biggest benefits from technology adoption tend to be those who focus on new products or new markets and, as a result, are more likely to increase or stabilize their workforce than reduce it. At the same time, human capital is an essential element of their strategies, since having the talent able to implement and drive digital transformation is a prerequisite for successful execution. No wonder a growing number of companies, from Walmart to German software company SAP, are emphasizing in-house training programs to equip members of their workforce with the skills they will need for a more automated work environment. And both Amazon and Facebook have raised the minimum wage for their workers as a way to attract, retain, and reward talent.

TSR: Technological social responsibility

Given the potential for a win–win across business and society from a socially careful and innovation-driven adoption strategy, we believe the time has come for business leaders across sectors to embed a new imperative in their corporate strategy. We call this imperative technological social responsibility (TSR). It amounts to a conscious alignment between short- and medium-term business goals and longer-term societal ones.

Some of this may sound familiar. Like its cousin, corporate social responsibility, TSR embodies the lofty goal of enlightened self-interest. Yet the self-interest in this case goes beyond regulatory acceptance, consumer perception, or corporate image. By aligning business and societal interests along the twin axes of innovation focus and active transition management, we find that technology adoption can potentially increase productivity and economic growth in a powerful and measurable way.

In economic terms, innovation and transition management could, in a best-case scenario, double the potential growth in welfare—the sum of GDP and additional components of well-being, such as health, leisure, and equality—compared with an average scenario. The welfare growth to 2030 that emerges from this scenario could be even higher than the GDP and welfare gains we have seen in recent years from computers and early automation.

However, other scenarios that pay less heed to innovating or to managing disruptive transitions from tech adoption could slow income growth, increase inequality and unemployment risk, and lead to fewer improvements in leisure, health, and longevity. And that, in turn, would reduce the benefits to business.

At the company level, a workforce that is healthier, happier, better trained, and less stressed will also be more productive, more adaptable, and better able to drive the technology adoption and innovation surge that will boost revenue and earnings. At the broader level, a society whose overall welfare is improving, and faster than GDP, is a more resilient society better able to handle sometimes painful transitions. In this spirit, New Zealand recently announced that it will shift its economic policy focus from GDP to broader societal well-being.

Leadership imperatives

For business leaders, three priorities will be essential. First, they will need to understand and be convinced of the argument that proactive management of technology transitions is not only in the interest of society at large but also in the more narrowly focused financial interest of companies themselves. Our research is just a starting point, and more work will be needed, including to show how and where individual sectors and companies can benefit from adopting a proactive strategy. Work is already underway at international bodies such as the Organisation for Economic Co-operation and Development to measure welfare effects across countries.

Second, digital reinvention plans will need to have, at their core, a thoughtful and proactive workforce-management strategy. Talent is a key differentiating factor, and there is much talk about the need for training, retraining, and nurturing individuals with the skills needed to implement and operate updated business processes and equipment. But so far, “reskilling” remains an afterthought in many companies. That is shortsighted; our work on digital transformation continues to emphasize the importance of having the right people in the right places as machines increasingly complement humans in the workforce. From that perspective alone, active management of training and workforce mobility will be an essential task for boards in the future.

Third, CEOs must embrace new, farsighted partnerships for social good. The successful adoption of AI and other advanced technologies will require cooperation from multiple stakeholders, especially business leaders and the public sector. One example involves education and skills: business leaders can help inform education providers with a clearer sense of the skills that will be needed in the workplace of the future, even as they look to raise the specific skills of their own workforce. IBM, for one, is partnering with vocational schools to shape curricula and build a pipeline of future “new collar” workers—individuals with job profiles at the nexus of professional and trade work, combining technical skills with a higher educational background. AT&T has partnered with more than 30 universities and multiple online education platforms to enable employees to earn the credentials needed for new digital roles.

Other critical public-sector actions include supporting R&D and innovation; creating markets for public goods, such as healthcare, so that there is a business incentive to serve these markets; and collaborating with businesses on reskilling, helping them to match workers with the skills they need and with the digital-era jobs to which they could most easily transition. A more fluid labor market and better job matching will benefit companies and governments, accelerating the search for talent for the former and reducing the potential transition costs for the latter.

There are many aspects to TSR, and we are just starting to map out some of the most important ones. But as an idea and an imperative, the time has come for technological social responsibility to make a forceful entry into the consciousness and strategies of business leaders everywhere.

Source: McKinsey

What is Natural Language Processing and How Does it Benefit a Business?

We use natural language processing every day. It makes it easier for us to interact with computers and software and allows us to perform complex searches and tasks without the help of a programmer, developer or analyst.

What is Natural Language Processing (NLP) Driven Analytics?

Natural language processing (NLP) is an integral part of today’s advanced analytics. If you have clicked in the search window on Google and entered a question, you know NLP! When NLP is incorporated into the business intelligence environment, business users can enter a question in human language. For example, ‘which sales team member achieved the best numbers last month?’ or ‘which of our products sells best in New York?’

The system translates this natural language search into a more traditional analytics query, and returns the most appropriate answer in the most appropriate form, so users can benefit from smart visualization, tables, numbers or natural language descriptions that are easy to understand.
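
To make this concrete, here is a minimal, hypothetical sketch (in Python) of the kind of translation an NLP-driven analytics layer performs: a plain-language question is mapped onto a conventional SQL query. The table and column names (sales, product, region, amount) are invented for illustration, and a production system would rely on a trained language model rather than simple keyword rules.

```python
# Minimal illustration: map a natural-language question onto a SQL query.
# The schema (sales, product, region, amount) is hypothetical; real NLP-driven
# analytics tools use trained language models, not keyword matching.
import re

def question_to_sql(question: str) -> str:
    q = question.lower()
    # Rough intent detection: "best"/"top"/"most" implies ranking by sales.
    wants_ranking = any(word in q for word in ("best", "top", "most"))
    # Naive entity extraction: a region mentioned after "in " at the end.
    region_match = re.search(r"\bin ([a-z ]+)\??$", q)
    region = region_match.group(1).strip() if region_match else None

    sql = "SELECT product, SUM(amount) AS total_sales FROM sales"
    if region:
        sql += f" WHERE region = '{region.title()}'"
    sql += " GROUP BY product"
    if wants_ranking:
        sql += " ORDER BY total_sales DESC LIMIT 1"
    return sql

print(question_to_sql("Which of our products sells best in New York?"))
# SELECT product, SUM(amount) AS total_sales FROM sales
# WHERE region = 'New York' GROUP BY product ORDER BY total_sales DESC LIMIT 1
```

The business-intelligence layer then runs the generated query and chooses an appropriate presentation, such as a chart, a table, or a natural-language sentence, for the result.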

How Does NLP-Based Analytics Benefit a Business Organization?

Perhaps the most important benefit of NLP is that it allows the business to implement augmented analytics in a self-serve environment with very little required training and ensures that users will adopt business intelligence and analytics as a tool to use every day.

NLP allows the enterprise to expand the use of business intelligence across the enterprise by offering business users an intuitive tool to ask for and receive crucial data and to understand the analytical output and share it with other users.

NLP opens and expands the data repositories and information in an organization in a way that is meaningful, and easy to understand, so data is more accessible and answers are more valuable. This will improve the accuracy of planning and forecasting and allow for a better overall understanding of business results.

Natural language processing helps business users sort through integrated data sources (internal and external) to answer a question in the way the user can understand, and will provide a foundation to simplify and speed the decision process with fact-based, data-driven analysis.

The enterprise can find and use information using natural language queries, rather than complex queries, so business users can achieve results without the assistance of IT or business analysts.

NLP presents results through smart visualization and contextual information delivered in natural language. Because these tools are easy to use and to understand, users are more likely to adopt them and to add value to the organization.

With NLP searches and queries, business users are free to explore data and achieve accurate results and the organization can achieve rapid ROI and sustain low total cost of ownership (TCO) with tools as familiar as a Google search.

Users can combine NLP with plug n’ play predictive analysis or assisted predictive modeling so the organization can achieve data democratization.

NLP and the advanced data discovery tools it supports can provide important, sophisticated tools in a user-friendly environment to suggest relationships, identify patterns and trends, and offer insight to previously hidden information so business users can ‘discover’ subtle, crucial problems and opportunities.

NLP is an integral part of today’s advanced analytics. It establishes an easy-to-use, interactive environment where users can create a search query in natural language and, as such, will support user adoption and provide numerous benefits to the enterprise.

Source: dataversity.net

Here’s how AI can help fight climate change according to the field’s top thinkers

From monitoring deforestation to designing low-carbon materials

The AI renaissance of recent years has led many to ask how this technology can help with one of the greatest threats facing humanity: climate change. A new research paper authored by some of the field’s best-known thinkers aims to answer this question, giving a number of examples of how machine learning could help prevent human destruction.

The suggested use-cases are varied, ranging from using AI and satellite imagery to better monitor deforestation, to developing new materials that can replace steel and cement (the production of which accounts for nine percent of global greenhouse gas emissions).

But despite this variety, the paper (which we spotted via MIT Technology Review) returns time and time again to a few broad areas of deployment. Prominent among these are using machine vision to monitor the environment; using data analysis to find inefficiencies in emission-heavy industries; and using AI to model complex systems, like Earth’s own climate, so we can better prepare for future changes.

The authors of the paper — which include DeepMind CEO Demis Hassabis, Turing award winner Yoshua Bengio, and Google Brain co-founder Andrew Ng — say that AI could be “invaluable” in mitigating and preventing the worst effects of climate change, but note that it is not a “silver bullet” and that political action is desperately needed, too.

“Technology alone is not enough,” write the paper’s authors, who were led by David Rolnick, a postdoctoral fellow at the University of Pennsylvania. “[T]echnologies that would reduce climate change have been available for years, but have largely not been adopted at scale by society. While we hope that ML will be useful in reducing the costs associated with climate action, humanity also must decide to act.”

In total, the paper suggests 13 fields where machine learning could be deployed (from which we’ve selected eight examples), which are categorized by the time-frame of their potential impact, and whether or not the technology involved is developed enough to reap certain rewards. You can read the full paper for yourself here, or browse our list below.

  • Build better electricity systems. Electricity systems are “awash with data” but too little is being done to take advantage of this information. Machine learning could help by forecasting electricity generation and demand, allowing suppliers to better integrate renewable resources into national grids and reduce waste. Google’s UK lab DeepMind has demonstrated this sort of work already, using AI to predict the energy output of wind farms. (A minimal sketch of this kind of demand forecasting appears after this list.)
  • Monitor agricultural emissions and deforestation. Greenhouse gases aren’t just emitted by engines and power plants — a great deal comes from the destruction of trees, peatland, and other plant life that has captured carbon through the process of photosynthesis over millions of years. Deforestation and unsustainable agriculture lead to this carbon being released back into the atmosphere, but using satellite imagery and AI, we can pinpoint where this is happening and protect these natural carbon sinks.
  • Create new low-carbon materials. The paper’s authors note that nine percent of all global emissions of greenhouse gases come from the production of concrete and steel. Machine learning could help reduce this figure by helping to develop low-carbon alternatives to these materials. AI helps scientists discover new materials by allowing them to model the properties and interactions of never-before-seen chemical compounds.
  • Predict extreme weather events. Many of the biggest effects of climate change in the coming decades will be driven by hugely complex systems, like changes in cloud cover and ice sheet dynamics. These are exactly the sort of problems AI is great at digging into. Modeling these changes will help scientists predict extreme weather events, like droughts and hurricanes, which in turn will help governments protect against their worst effects.
  • Make transportation more efficient. The transportation sector accounts for a quarter of global energy-related CO2 emissions, with two-thirds of this generated by road users. As with electricity systems, machine learning could make this sector more efficient, reducing the number of wasted journeys, increasing vehicle efficiency, and shifting freight to low-carbon options like rail. AI could also reduce car usage through the deployment of shared, autonomous vehicles, but the authors note that this technology is still not proven.
  • Reduce wasted energy from buildings. Energy consumed in buildings accounts for another quarter of global energy-related CO2 emissions, and presents some of “the lowest-hanging fruit” for climate action. Buildings are long-lasting and are rarely retrofitted with new technology. Adding just a few smart sensors to monitor air temperature, water temperature, and energy use can reduce energy usage by 20 percent in a single building, and large-scale projects monitoring whole cities could have an even greater impact.
  • Geoengineer a more reflective Earth. This use-case is probably the most extreme and speculative of all those mentioned, but it’s one some scientists are hopeful about. If we can find ways to make clouds more reflective or create artificial clouds using aerosols, we could reflect more of the Sun’s heat back into space. That’s a big if though, and modeling the potential side-effects of any schemes is hugely important. AI could help with this, but the paper’s authors note there would still be significant “governance challenges” ahead.
  • Give individuals tools to reduce their carbon footprint. According to the paper’s authors, it’s a “common misconception that individuals cannot take meaningful action on climate change.” But people do need to know how they can help. Machine learning could help by calculating an individual’s carbon footprint and flagging small changes they could make to reduce it — like using public transport more; buying meat less often; or reducing electricity use in their house. Adding up individual actions can create a big cumulative effect.
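
Below is the minimal sketch referenced in the first bullet: a simple model, trained on synthetic hourly data, that predicts electricity demand from hour of day and temperature. Everything here is an illustrative assumption; real grid forecasting uses far richer features, weather forecasts, and more sophisticated models, and this is not a reproduction of DeepMind’s wind-farm work.

```python
# Sketch of demand forecasting on synthetic data: predict hourly electricity
# demand from hour-of-day and temperature, then forecast the next day.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Synthetic training data: 30 days of hourly observations (invented numbers).
hours = np.tile(np.arange(24), 30)
temps = 15 + 10 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 2, hours.size)
demand = (500 + 30 * np.cos(2 * np.pi * (hours - 18) / 24)
          + 5 * temps + rng.normal(0, 10, hours.size))

model = GradientBoostingRegressor().fit(np.column_stack([hours, temps]), demand)

# Forecast tomorrow's load given a (hypothetical) temperature forecast.
tomorrow_hours = np.arange(24)
tomorrow_temps = 15 + 10 * np.sin(2 * np.pi * tomorrow_hours / 24)
forecast = model.predict(np.column_stack([tomorrow_hours, tomorrow_temps]))
print(forecast.round(1))  # predicted demand for each hour of the next day
```

A supplier armed with forecasts like these can, in principle, integrate renewable generation more aggressively rather than keeping fossil-fuel plants spinning as a reserve.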

Source: The Verge

We Actually Went Driverless 100 Years Ago

In the aftermath of Uber’s recent fatal crash in Tempe, which involved a driverless car, there has been a great deal of speculation about the future of the driverless automobile. As is often the case, trying to see beyond the near-term fear and natural trepidation that accompany handing over control of life-and-death decisions to machines can be exceptionally difficult. Yet this isn’t the first time we’ve encountered the driverless dilemma. There’s another example that’s nearly 100 years old.

Elevating Drivers

Coronado Island, just south of San Diego, is home to one of the world’s grande dame resorts, the Hotel Del Coronado. The Hotel Del was built in 1888, and much has changed there in over a century. One thing, however, hasn’t: in the center of the magnificent main Victorian building is the Otis #61, a brass, accordion-doored manual elevator that still shuttles guests, just as it has for the last 130 years. And this elevator has a driver.

For hotel guests who never even knew that elevators were once run exclusively by “drivers,” the novelty is something they’re drawn to. Still, the look of apprehension and trepidation on many of their faces is clear as they approach an elevator that needs to be driven. You can imagine that they’re thinking, “Is that really safe?,” “Why can’t it operate on its own, the way real elevators do?” or “What if the driver makes a mistake and starts it up just as you’re getting in or out?” After all, he’s human, and humans are known to make mistakes.

Interestingly, although elevator operators were common through the mid-1900s, there were driverless elevators as far back as the early 1900s. There was just one problem. Nobody trusted them. Given the choice between the stairs and a lonely automated elevator, the elevator would remain empty. It wasn’t until the middle of the twentieth century that the tipping point came along for the driverless elevator as the result of a strike by the elevator operators’ union in New York City in 1945.

The strike was devastating, costing the city an estimated one hundred million dollars. Suddenly, there was an economic incentive to go back to the automatic elevator. Over the next decade there was a massive effort to build trust in automatic elevators, which resulted in the elimination of tens of thousands of elevator operator jobs.

Few of us will today step into an elevator and even casually think about the way it operates, how safe it is, or what the risks are. If you find yourself at the Hotel Del and decide to take the elevator, stop and think about just how radical change can be in reshaping our attitudes about what’s safe and normal.

Granted, an automatic elevator is a world apart from an autonomous vehicle, but the fundamental issue with the adoption of “driverless,” in both cases, isn’t so much the technology, which can be much safer without a human driver, as it is about trusting a machine to do something as well as we believe a human can do it. In a word, it’s all about perception.

Still doubtful? Perhaps you’re one of the few people who have a fear of elevators? After all, twenty-seven people die yearly as the result of faulty automatic elevators. Elevators definitely kill.

However, you might also be interested in learning that, according to the Centers for Disease Control and Prevention’s National Center for Health Statistics, a whopping one thousand six hundred people die each year from falling down stairs. I’ll save you the math; that means you’re roughly sixty times as likely to have a fatal accident taking the stairs. Unfortunately, numbers alone rarely change perception.

In an interview for my upcoming book, Revealing The Invisible, Amin Kashi, director of autonomous driving at Mentor, a Siemens business, told me, “I’m sure we will look back on this in the not too distant future and think to ourselves, how could we have wasted all of that time commuting, how could we have dealt with the inherent lack of safety in the way that we used to drive. All these issues will become so obvious and so clear. From where we stand right now we’re accustomed to a certain behavior so we live with it, but I think we will be amazed that we actually got through it.”

No doubt that it will take time to build a sufficient level of trust in autonomous vehicles. But there’s equally little doubt that one day our children’s children will have a look of apprehension and trepidation on their faces as they approach a car that needs to be driven by a human.

I imagine that they’ll be thinking, “Is that really safe?”

Source: Innovation Excellence

10 Reasons Why Every Leader Should be Data Literate

With rapid advances in technology and computing power, the rise of the data scientist, artificial intelligence, and machine learning, and the possibility of gaining insight and meaning from a wealth of data now greater than ever before, “Data Literacy” is needed among leaders and managers within organizations.

Here are 10 reasons why every leader needs to become Data Literate:

To assist in developing a Data-Driven Culture
Especially applicable to companies that are either not yet using data to power their decision-making or are, at best, in the early part of their data journey.

Quite often a shift in the culture of the company is needed, a change in the way the company is used to working.

If you as a leader are data literate, the process of becoming “data-driven” will be far smoother and more efficient.

To help drill for Data
“Data is the new Oil” (Clive Humby, UK Mathematician).

Over 90 percent of the data in the world was generated in the last two years alone (Forbes.com), and 2.5 quintillion bytes of data are produced each day!

Structured and unstructured data, text files, images, videos, documents: data is everywhere.

Being data literate will enable you to take advantage of it, to know where to look in your domain of expertise.

To assist in building a slick, efficient team
Data Scientists, Data Engineers, Machine Learning Engineers, Data Developers, Data Architects, whatever the job title, all are needed to take advantage of data in an organization.

Be data literate and be able to identify the key personnel you need to exploit the knowledge and insights quickly and efficiently.

To ensure compliance with Data Security, Privacy, and Governance
Recent events have meant the focus is now very much on how data is managed and secured, and on ensuring that people’s privacy is protected and respected.

Recent legislation such as GDPR has only added to the importance of this. Data literacy will enable a full appreciation of how to ensure these issues and concerns are properly addressed.

To help ensure the correct tools and technology are available
We now live in a fast-paced world where technology changes at a rapid rate and new advances, tools, and software appear constantly.

Part of data literacy is not necessarily being an expert in this area, but being aware of what is available, what is possible, and what is coming.

Having this view enables your company and your team to be well positioned to use the relevant technology.

To help “spread the word” and form good habits
A good, data literate manager, when presented with an opinion or judgment from a team member, will not take it at face value but will ask them to provide the data to back it up.

This can only help in promoting the use of data and in achieving the data-driven culture we discussed previously.

A phrase often used in football coaching is “practice makes permanent”.

Being constantly asked by managers to back up your opinions with data will create a habit, and soon everyone in the organization will be utilizing data.

To help ensure the right questions are asked of the data
Knowing your data and what is available where in your organization can only help ensure that the right questions are asked of it, in order to achieve the most beneficial insights possible.

To help gain a competitive advantage
Companies that leverage their data the best, and utilize the insights gained from it, will ultimately gain an advantage over their competitors.

Data literacy among leaders is the bare minimum if you want to achieve this.

To gain respect and credence from your team and fellow professionals
Being knowledgeable about and appreciative of all things data will only help you gain the trust and respect of your fellow team members, others within your organization, and indeed your industry.

In order to survive in the future world of work
The workplace is only going one way in this digital, data-driven age. Do not remain data illiterate and risk being left behind.

Source: algorithmxlab.com

What Is A Technology Adoption Curve?

The Five Stages Of A Technology Adoption Life Cycle
In his book, Crossing the Chasm: Marketing and Selling High-Tech Products to Mainstream Customers, Geoffrey A. Moore highlights a model that tries to dissect and represent the stages of adoption of high-tech products.

More precisely, this model goes through five stages. Each of those stages (innovators, early adopters, early majority, late majority, and laggards) has a specific psychographic profile that makes that group ready to adopt a tech product.

Why is the technology adoption life cycle useful?
There is a peculiar phase in the life cycle of a high-tech product that Moore calls a “chasm.” This is the phase in which a product is getting used by early adopters, but not yet by an early majority.

In that stage, there is a wide gap between those two psychographic profiles. Indeed, many startups fail because they don’t manage to have the early majority pick up where the early adopters left off.

Understanding the technology adoption life cycle helps you assess which stage a product is in and, when the chasm is near, how to close the gap so the early majority can fill the void left by the early adopters.

That void is created when the early adopters are ready to leave a product that is about to go mainstream. The market is full of examples of companies that tried to conquer the early majority but failed to do so, and in the process also lost the enthusiasts who made the product successful in the first place.

What are the stages of a technology adoption life cycle?
The technology adoption life cycle comprises five main psychographic profiles:

  • Innovators
  • Early Adopters
  • Early Majority
  • Late Majority
  • and Laggards

Innovators
Innovators are the first to take action and adopt a product, even though it might be buggy. These people are willing to take the risk, and they will be the ones ready to help you shape your product while it is not yet perfect.
Because they are in love with the innovation behind it, they are ready to sustain it. This psychographic profile is all about the innovation itself. As this is something of a hobby for them, they are ready and willing to take the risk of using something that doesn’t work perfectly but has great potential.

Early Adopters
Early adopters are among those ready to try out a product at an early stage. They don’t need you to explain why they should use the innovation.
The early adopter has already researched it and is passionate about the innovation behind it. However, while the innovator adopts a high-tech product for the sake of the innovation itself, the early adopter makes an informed buying decision.
At this stage, the product appeals only to a small niche of early adopters, even if it is great and ready.
Those early adopters feel different from the early majority, and if you “betray” them they may well leave you right away. That is where the chasm stands.

Early majority
The early majority is the psychographic profile made up of people who will help you “cross the chasm.” Getting traction means making a product appealing to the early majority. Indeed, the early majority is made up of more conscious consumers, who look for useful solutions but are also wary of possible fads.

Late Majority
The late majority kicks in only after a product is well established. These consumers take a more skeptical approach to technological innovation and feel comfortable adopting it only once it has gone mainstream.

Laggards
Laggards are the last in the technology adoption cycle. While the late majority is skeptical of technological innovation, the laggard is averse to it.
Thus, unless there is a clear, established advantage in using a technology, these people will hardly become adopters. For reasons that might be personal or economic, they are simply not looking to adopt a technology.

Other factors influencing technological adoption
One of my favorite authors is Jared Diamond, a polymath whose knowledge goes beyond books, education, or instruction. In fact, Jared Diamond is an ecologist, geographer, biologist, and anthropologist.

Whatever you want to label him, the truth is Jared Diamond is just one of the most curious people on earth. As we love to put a label on everything, we are impressed by how many labels one person can carry.

However, Jared Diamond has been simply a curious person looking for answers to compelling and hard questions about our civilization. The search for those answers has led him to become an expert in many disciplines.

In fact, even though he might not know the latest news about Google’s algorithm updates, Apple’s latest product launch, or the features of the new iPhone, I believe Jared Diamond is among the people best equipped to understand how the technological landscape evolves. The reason is that he has been looking at historical trends across thousands of years and dozens of cultures and civilizations.

He’s also lived for short periods throughout his life with small populations, like New Guineans. In his book Guns, Germs, and Steel there is a passage that tries to explain why Western civilizations were so technologically successful and advanced compared with other populations in the world, such as New Guinea.

For many in the modern, hyper-technological world, the answer seems trivial. With the advent of the digital world, even more. We love to read and get inspired every day by the incredible stories of geniuses and successful entrepreneurs that are changing the world.

Jared Diamond has a different explanation for how technology evolves and what influences its adoption throughout history, and it has only in part to do with the ability to make something that works better than what existed before.

Why the heroic theory of invention is flawed
If you read the accounts of many entrepreneurs who have influenced our modern society, they seem to resemble the stories of heroes, geniuses, and original thinkers. In short, if we hadn’t had Edison, Watt, Ford, and Carnegie, the Western world wouldn’t have been so wildly successful. However much we love this theory, it doesn’t match the historical record.

True, those people were in a way ahead of their times. They were geniuses, risk takers and in some cases mavericks. However, were they the only ones able to advance our society? That is not the case.

Even assuming those people were isolated geniuses able to come up with the unimaginable, if the culture around them had not been able to acknowledge their inventions, we would have no trace of those discoveries today. So what influenced technological adoption?

The four macro patterns of technological adoption
According to Jared Diamond, there are four patterns to consider when looking at technological adoption:

  1. a relative economic advantage compared with existing technology
  2. social value and prestige
  3. compatibility with vested interests
  4. the ease with which those advantages can be observed

Relative economic advantage with existing technology
The first point seems obvious. In fact, for one technology to win over another, it doesn’t just have to be better; it has to be far more effective. Think of a recent example: when Google entered the search industry, it was not the first player. It was a latecomer. Yet its algorithm, PageRank, was so superior to the competition that it quickly took off.
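
For readers curious about the algorithm itself, here is a minimal sketch of the power-iteration idea behind PageRank, run on a tiny, hypothetical four-page link graph. It is purely illustrative and, of course, not Google’s production system.

```python
# Power-iteration sketch of PageRank on a hypothetical four-page link graph.
import numpy as np

links = {0: [1, 2], 1: [2], 2: [0], 3: [0, 2]}  # page i -> pages it links to
n = len(links)
damping = 0.85

# Column-stochastic transition matrix: M[j, i] = probability of moving i -> j.
M = np.zeros((n, n))
for i, outs in links.items():
    for j in outs:
        M[j, i] = 1.0 / len(outs)

rank = np.full(n, 1.0 / n)
for _ in range(100):  # iterate until the ranking stabilizes
    rank = (1 - damping) / n + damping * M @ rank

print(rank.round(3))  # higher score = more "important" page
```

The underlying idea, that a page is important if important pages link to it, computed at web scale, is what gave Google its decisive relative advantage over keyword-only search engines.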

Social value and prestige
This is less intuitive. However much we love to think of ourselves as rational creatures, in reality we may be more social than rational. Thus, the social value and prestige of a technological innovation play as important a role in its adoption as its innovative aspects.

Think about Apple’s products. Apple follows what can be described as a razor-and-blade business model. In short, the company attracts users to its platforms, iTunes and the App Store, by selling music and apps at a convenient price, while selling its iPhones at very high margins.

However, it is undeniable that what makes Apple able to sell its computers and phones at a higher price than competitors is the brand the company has built over the years. In short, as of this writing, Apple still represents a status symbol, which makes the company highly profitable.

Compatibility with vested interests
To prove this point, Jared Diamond uses the story of the QWERTY keyboard in Guns, Germs, and Steel. This is the keyboard you are most probably using right now on your mobile device or computer. It is so called because its six left-most letters spell “QWERTY.”

Have you ever wondered why you use this standard? You might think it has to do with efficiency, but the opposite is true. The layout was invented at the end of the 1800s, when typewriters became the standard.

When typists typed too fast, those typewriters jammed (page 248 of Guns, Germs, and Steel). In short, the layout was designed to slow typists down so that typewriters wouldn’t jam anymore. Yet as more than a century went by and we moved to computers and mobile devices, instead of switching to a more efficient system we kept the old one. Why?

According to Jared Diamond, the most compelling reason for not being able to switch to a new standard was the vested interests of small lobbies of typists, typing teachers, typewriter and computer salespeople.

The ease with which those advantages can be observed
When a technological advancement can be easily recognized as the fruit of the success of an organization, country or enterprise, it will be adopted by anyone that wants to keep up with it. Think, for instance, about two countries going to war. One of them has a secret weapon that makes them win the war.

As soon as the side that lost the battle finds out, that weapon will be adopted by the losing side the next time around. Think also of a more recent example: just as big data became a secret technological weapon that Obama used to win his electoral campaign, Trump used it to overtake his competitors during the last US presidential campaign.

Now that we know the four macro patterns of technological adoption and how the technology adoption curve works, it might be easier for you to cross the chasm!

Source: FourWeekMBA

Monetizing Data: 4 Datasets You Need for More Reliable Forecasting

In the era of big data, the focus has long been on data collection and organization. But despite having access to more data than ever before, companies today are reporting a low return on their investment in analytics. Something’s not working. Today, business leaders are caught up in concerns that they don’t have enough data, that it’s not accessible, or that it simply isn’t good enough. Instead of focusing on making data sources bigger or better, companies should be thinking about how they can get more out of the data they already have.

Contrary to popular belief, a high volume of perfect data isn’t necessary to drive strategic insight and action. While that might have been the case with time-series analysis, forecasting using simulation allows companies to do more with less. With simulation software, you aren’t constrained by the hard data points you have for every input; it allows you to enter both qualitative and quantitative information, so you can use human intelligence to make estimates that are later validated for accuracy with observable outcomes. Companies can then use these simulations to test how the market will respond to strategic initiatives by quickly running scenarios before launch. Also, most businesses already have enough collective intelligence within their organization to create a reliable, predictive simulation.

By unifying analytics, building forecasts and accelerating analytic processes, simulation helps companies build a holistic picture of their business to optimize strategy and maximize revenue. Here are the four types of information that companies need to fuel simulation forecasting and monetize their data investments:

1. Sales Data: Define success

The first set of information needed for simulation forecasting is sales data. In building a simulation model, sales data is used to define the market by establishing the outcome you’re trying to influence. That said, simulations can forecast more than sales outcomes in terms of revenue – they can also simulate a variety of other outcomes tied to sales such as new subscribers, website visits, online application submissions or program enrollments. Whatever the outcome is that you’re measuring, it’s helpful to have the information broken out by segment. If you don’t have this level of detail to start, you can continue to integrate new data into the model to make it more comprehensive over time.

2. Competitive Data: Paint a full picture of your market

With simulation forecasting, you are recreating an entire market so you can test how your solution will play out amongst competitors. In order to understand how people within a certain category respond to all of the choices available to them, you will need sales and marketing information for your competition. Competitor data is usually accessible from syndicated sources. If you don’t have access to competitor data, you can use approximate information available from public sources, annual reports or analyses from business experts to build out the competitive market in your simulation.

3. Customer Data: Understand how your consumer thinks

The third area of information needed for simulation is customer intelligence. In order to predict the likelihood a consumer will choose one option instead of another, you need to understand how they think. This requires information around awareness, perceptions and the relative importance of different attributes in driving a decision. These datasets are often collected and available through surveys. But even if there isn’t data from a quantitative study, your brand experts can use their judgment to make initial estimates of these values, and the values will later be verified through calibration and forecasting of observed metrics like sales.

4. Marketing Data: Evaluate the impact of in-market strategies

Finally, to drive simulation forecasting, companies need data on past marketing activity. This information is essential to understand how messaging in the market has influenced consumer decision making. This can be as simple as marketing investments and impressions broken out by paid, owned and earned activity, or it can be as granular as the tactics and specific media channels within each area.

Once a company identifies sources for these four types of data, it’s time to find an effective way to monetize it. The best way to get value from your big data is to identify unanswered business questions. With simulation forecasting, reliable answers are accessible – and you may need less data than you think to get meaningful, trustworthy insight.
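
To illustrate how the four types of data might come together, here is a minimal, hypothetical sketch of a consumer-choice simulation: customer perception scores (customer data) and marketing pressure (marketing data) for our brand and a competitor (competitive data) feed a simple logit choice model, which is sanity-checked against observed market share (sales data) and then used to test a what-if scenario. All names, numbers, and coefficients are invented; real simulation software estimates and calibrates them from the datasets described above.

```python
# Hypothetical simulation-forecasting sketch: a logit choice model over two brands.
import numpy as np

# Index 0 = our brand, index 1 = the competitor (all values invented).
perception = np.array([6.8, 7.2])        # perceived quality (survey estimates)
marketing = np.array([1.0, 1.4])         # relative marketing pressure (impressions)
observed_share = np.array([0.42, 0.58])  # last period's sales-based market share

def simulate_share(perception, marketing, beta_p=0.6, beta_m=0.2):
    """Turn brand 'utilities' into choice probabilities (market shares)."""
    utility = beta_p * perception + beta_m * marketing
    expu = np.exp(utility - utility.max())
    return expu / expu.sum()

# Calibration check: the hand-tuned coefficients roughly reproduce observed share.
print("baseline:", simulate_share(perception, marketing).round(2),
      "observed:", observed_share)

# Scenario: double our marketing pressure next quarter and see the predicted shift.
scenario = simulate_share(perception, marketing * np.array([2.0, 1.0]))
print("scenario:", scenario.round(2))
```

The value is in the workflow: calibrate the model against outcomes you can already observe, then run candidate strategies through it before committing real budget.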

Source: InsideBIGDATA