Here’s how AI can help fight climate change according to the field’s top thinkers

From monitoring deforestation to designing low-carbon materials

The AI renaissance of recent years has led many to ask how this technology can help with one of the greatest threats facing humanity: climate change. A new research paper authored by some of the field’s best-known thinkers aims to answer this question, giving a number of examples of how machine learning could help prevent humanity’s destruction.

The suggested use-cases are varied, ranging from using AI and satellite imagery to better monitor deforestation, to developing new materials that can replace steel and cement (the production of which accounts for nine percent of global greenhouse gas emissions).

But despite this variety, the paper (which we spotted via MIT Technology Review) returns time and time again to a few broad areas of deployment. Prominent among these are using machine vision to monitor the environment; using data analysis to find inefficiencies in emission-heavy industries; and using AI to model complex systems, like Earth’s own climate, so we can better prepare for future changes.

The authors of the paper — who include DeepMind CEO Demis Hassabis, Turing Award winner Yoshua Bengio, and Google Brain co-founder Andrew Ng — say that AI could be “invaluable” in mitigating and preventing the worst effects of climate change, but note that it is not a “silver bullet” and that political action is desperately needed, too.

“Technology alone is not enough,” write the paper’s authors, who were led by David Rolnick, a postdoctoral fellow at the University of Pennsylvania. “[T]echnologies that would reduce climate change have been available for years, but have largely not been adopted at scale by society. While we hope that ML will be useful in reducing the costs associated with climate action, humanity also must decide to act.”

In total, the paper suggests 13 fields where machine learning could be deployed (from which we’ve selected eight examples), which are categorized by the time-frame of their potential impact, and whether or not the technology involved is developed enough to reap certain rewards. You can read the full paper for yourself here, or browse our list below.

  • Build better electricity systems. Electricity systems are “awash with data” but too little is being done to take advantage of this information. Machine learning could help by forecasting electricity generation and demand, allowing suppliers to better integrate renewable resources into national grids and reduce waste. Google’s UK lab DeepMind has already demonstrated this sort of work, using AI to predict the energy output of wind farms (a minimal forecasting sketch follows this list).
  • Monitor agricultural emissions and deforestation. Greenhouse gases aren’t just emitted by engines and power plants; a great deal comes from the destruction of trees, peatland, and other plant life that has captured carbon through photosynthesis over long periods of time. Deforestation and unsustainable agriculture lead to this carbon being released back into the atmosphere, but using satellite imagery and AI, we can pinpoint where this is happening and protect these natural carbon sinks.
  • Create new low-carbon materials. The paper’s authors note that nine percent of all global emissions of greenhouse gases come from the production of concrete and steel. Machine learning could help reduce this figure by helping to develop low-carbon alternatives to these materials. AI helps scientists discover new materials by allowing them to model the properties and interactions of never-before-seen chemical compounds.
  • Predict extreme weather events. Many of the biggest effects of climate change in the coming decades will be driven by hugely complex systems, like changes in cloud cover and ice sheet dynamics. These are exactly the sort of problems AI is great at digging into. Modeling these changes will help scientists predict extreme weather events, like droughts and hurricanes, which in turn will help governments protect against their worst effects.
  • Make transportation more efficient. The transportation sector accounts for a quarter of global energy-related CO2 emissions, with two-thirds of this generated by road users. As with electricity systems, machine learning could make this sector more efficient, reducing the number of wasted journeys, increasing vehicle efficiency, and shifting freight to low-carbon options like rail. AI could also reduce car usage through the deployment of shared, autonomous vehicles, but the authors note that this technology is still not proven.
  • Reduce wasted energy from buildings. Energy consumed in buildings accounts for another quarter of global energy-related CO2 emissions, and presents some of “the lowest-hanging fruit” for climate action. Buildings are long-lasting and are rarely retrofitted with new technology. Adding just a few smart sensors to monitor air temperature, water temperature, and energy use can reduce a single building’s energy usage by 20 percent, and large-scale projects monitoring whole cities could have an even greater impact.
  • Geoengineer a more reflective Earth. This use-case is probably the most extreme and speculative of all those mentioned, but it’s one some scientists are hopeful about. If we can find ways to make clouds more reflective or create artificial clouds using aerosols, we could reflect more of the Sun’s heat back into space. That’s a big if though, and modeling the potential side-effects of any schemes is hugely important. AI could help with this, but the paper’s authors note there would still be significant “governance challenges” ahead.
  • Give individuals tools to reduce their carbon footprint. According to the paper’s authors, it’s a “common misconception that individuals cannot take meaningful action on climate change.” But people do need to know how they can help. Machine learning could help by calculating an individual’s carbon footprint and flagging small changes they could make to reduce it, such as using public transport more, buying meat less often, or reducing electricity use at home. Individual actions add up to a big cumulative effect.
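
To make the first item above concrete, here is a minimal sketch of the kind of forecasting task it describes: predicting a wind farm’s power output from weather features so a grid operator can plan around it. The synthetic data, the chosen features, and the gradient-boosting model are illustrative assumptions for this sketch, not the method DeepMind actually used.

```python
# Hypothetical sketch: forecasting wind-farm power output from weather features.
# The synthetic data and model choice are illustrative; real systems use far
# richer inputs (numerical weather predictions, turbine telemetry, etc.).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n = 5000

# Features: forecast wind speed (m/s), wind direction (degrees), air temperature (C)
wind_speed = rng.uniform(0, 25, n)
wind_dir = rng.uniform(0, 360, n)
temperature = rng.uniform(-5, 35, n)

# Synthetic "true" power curve: roughly cubic in wind speed, capped at rated power
rated = 2.0  # assumed rated power in MW
power = np.clip(0.0008 * wind_speed**3, 0, rated) + rng.normal(0, 0.05, n)

X = np.column_stack([wind_speed, wind_dir, temperature])
y = power

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = GradientBoostingRegressor(n_estimators=200, max_depth=3)
model.fit(X_train, y_train)

pred = model.predict(X_test)
print(f"Mean absolute error: {mean_absolute_error(y_test, pred):.3f} MW")
```

In practice, a grid operator would feed day-ahead weather forecasts into a model along these lines to schedule renewable generation with less waste, which is the kind of efficiency gain the paper has in mind.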

Source: The Verge

We Actually Went Driverless 100 Years Ago

In the aftermath of Uber’s recent fatal crash in Tempe, which involved a driverless car, there has been a great deal of speculation about the future of the driverless automobile. As is often the case, it can be exceptionally difficult to see beyond the near-term fear and natural trepidation that accompany handing over control of life-and-death decisions to machines. Yet this isn’t the first time we’ve encountered the driverless dilemma. There’s another example that’s nearly 100 years old.

Elevating Drivers

Coronado Island, just south of San Diego, is home to one of the world’s grande dame resorts, the Hotel Del Coronado. The Hotel Del was built in 1888, and much has changed there in over a century. One thing, however, hasn’t: in the center of the magnificent main Victorian building is the Otis #61, a brass, accordion-doored manual elevator that still shuttles guests, just as it has for the last 130 years. This elevator, though, has a driver.

For hotel guests who never even knew that elevators were once run exclusively by “drivers,” the novelty is something they’re drawn to. Still, the look of apprehension and trepidation on many of their faces is clear as they approach an elevator that needs to be driven. You can imagine them thinking: “Is that really safe?” “Why can’t it operate on its own, the way real elevators do?” “What if the driver makes a mistake and starts it up just as you’re getting in or out?” After all, he’s human, and humans are known to make mistakes.

Interestingly, although elevator operators were common through the mid-1900s, there were driverless elevators as far back as the early 1900s. There was just one problem: nobody trusted them. Given the choice between the stairs and an unattended automated elevator, people took the stairs and the elevator remained empty. It wasn’t until the middle of the twentieth century that the tipping point for the driverless elevator came, as the result of a strike by the elevator operators’ union in New York City in 1945.

The strike was devastating, costing the city an estimated one hundred million dollars. Suddenly, there was an economic incentive to go back to the automatic elevator. Over the next decade there was a massive effort to build trust in automatic elevators, which resulted in the elimination of tens of thousands of elevator operator jobs.

Few of us today step into an elevator and give even a casual thought to the way it operates, how safe it is, or what the risks are. If you find yourself at the Hotel Del and decide to take the elevator, stop and think about just how radically change can reshape our attitudes about what’s safe and normal.

Granted, an automatic elevator is a world apart from an autonomous vehicle, but in both cases the fundamental issue with the adoption of “driverless” isn’t so much the technology, which can be much safer without a human at the controls. It’s about trusting a machine to do something as well as we believe a human can do it. In a word, it’s all about perception.

Still doubtful? Perhaps you’re one of the few people who have a fear of elevators? After all, twenty-seven people die yearly as the result of faulty automatic elevators. Elevators definitely kill.

However, you might also be interested in learning that, according to the Centers for Disease Control and Prevention’s National Center for Health Statistics, a whopping one thousand six hundred people die each year from falling down stairs. I’ll save you the math; that means you’re roughly sixty times as likely to have a fatal accident taking the stairs. Unfortunately, numbers alone rarely change perception.

In an interview for my upcoming book, Revealing The Invisible, Amin Kashi, director of autonomous driving at Mentor, a Siemens business, told me, “I’m sure we will look back on this in the not too distant future and think to ourselves, how could we have wasted all of that time commuting, how could we have dealt with the inherent lack of safety in the way that we used to drive. All these issues will become so obvious and so clear. From where we stand right now we’re accustomed to a certain behavior so we live with it, but I think we will be amazed that we actually got through it.”

No doubt that it will take time to build a sufficient level of trust in autonomous vehicles. But there’s equally little doubt that one day our children’s children will have a look of apprehension and trepidation on their faces as they approach a car that needs to be driven by a human.

I imagine that they’ll be thinking, “Is that really safe?”

Source: Innovation Excellence

10 Reasons Why Every Leader Should be Data Literate

With rapid advances in technology and computing power, the rise of the data scientist, artificial intelligence, and machine learning, and the lure of gaining insight and meaning from a wealth of data now more possible than ever before, “data literacy” has become a necessity for leaders and managers within organizations.

Here are 10 reasons why every leader needs to become Data Literate:

To assist in developing a Data-Driven Culture
Especially applicable to companies that are either not yet using data to power their decision-making, or are, at best, in the early stages of their data journey.

Quite often a shift in the culture of the company is needed, a change in the way the company is used to working.

If you as a leader are data literate, it will make the process of becoming “data-driven” a lot smoother and more efficient.

To help drill for Data
“Data is the new Oil” (Clive Humby, UK Mathematician).

In the last 2 years alone, over 90 percent of the data in the world was generated (Forbes.com), and 2.5 quintillion bytes of data are being produced each day!

Structured and unstructured data, text files, images, videos, documents: data is everywhere.

Being data literate will enable you to take advantage of it, to know where to look in your domain of expertise.

To assist in building a slick, efficient team
Data Scientists, Data Engineers, Machine Learning Engineers, Data Developers, Data Architects, whatever the job title, all are needed to take advantage of data in an organization.

Be data literate and be able to identify the key personnel you need to exploit the knowledge and insights quickly and efficiently.

To ensure compliance with Data Security, Privacy, and Governance
Recent events have meant the focus is now very much on how data is managed and secured, and on ensuring that people’s privacy is protected and respected.

Recent legislation such as GDPR has only added to the importance of this. Literacy with data will enable a full appreciation of how to ensure these issues and concerns are fully addressed and adhered to.

To help ensure the correct tools and technology are available
We now live in a fast-paced world where technology changes rapidly and new advances, tools, and software appear all the time.

Part of data literacy is not necessarily being an expert in this area, but being aware of what is available, what is possible, and what is coming.

Having this view enables your company and your team to be well positioned to use the relevant technology.

To help “spread the word” and form good habits
A good, data literate manager, when presented with an opinion or judgment from a team member, will not take it at face value but will ask them to provide the data to back it up.

This can only help in promoting the use of data and also towards achieving that data-driven culture we discussed previously.

A phrase often used in football coaching is “practice makes permanent”.

Being constantly asked by the managers in an organization to back up your opinions with data will create a habit, and soon everyone will be utilizing data.

To help ensure the right questions are asked of the data
Knowing your data and what is available where in your organization can only help ensure that the correct questions are asked of the data, in order to achieve the most beneficial insights possible.

To help gain a competitive advantage
Companies that leverage their data the best, and utilize the insights gained from it, will ultimately gain an advantage over their competitors.

Data literacy among leaders is the bare minimum if you want to achieve this.

To gain respect and credence from your team and fellow professionals
Being knowledgeable and appreciative of all things data will only help you gain the trust and respect of your fellow team members and others within your organization, and indeed your industry.

In order to survive in the future world of work
The workplace is only going one way in this digital, data-driven age. Do not remain data illiterate and risk being left behind.

Source: algorithmxlab.com

What Is A Technology Adoption Curve?

The Five Stages Of A Technology Adoption Life Cycle
In his book, Crossing the Chasm: Marketing and Selling High-Tech Products to Mainstream Customers, Geoffrey A. Moore highlights a model that tries to dissect and represent the stages of adoption of high-tech products.

More precisely, this model goes through five stages. Each of those stages (innovators, early adopters, early majority, late majority, and laggards) has a specific psychographic profile that makes that group ready to adopt a tech product.

Why is the technology adoption life cycle useful?
There is a peculiar phase in the life cycle of a high-tech product that Moore calls a “chasm.” This is the phase in which a product is being used by early adopters, but not yet by the early majority.

In that stage, there is a wide gap between those two psychographic profiles. Indeed, many startups fail because they don’t manage to have the early majority pick up where the early adopters left off.

Understanding the technology adoption life cycle of a product helps you assess which stage a product is in and, when the chasm is near, how to bridge the gap so that the early majority can fill the void left by the early adopters.

That void is created when the early adopters are ready to leave a product that is about to go mainstream. The market is full of examples of companies that tried to conquer the early majority but failed in doing so, and in the process also lost the enthusiasts that made the product successful in the first place.

What are the stages of a technology adoption life cycle?
The technology adoption life cycle comprises five main psychographic profiles:

  • Innovators
  • Early Adopters
  • Early Majority
  • Late Majority
  • Laggards

Innovators
Innovators are the first to take action and adopt a product, even though it might be buggy. These people are willing to take the risk, and they will be the ones ready to help you shape your product when it is not yet perfect.
Because they are in love with the innovative aspect behind it, they are ready to support it. This psychographic profile is all about the innovation itself. As this is a sort of hobby for them, they are ready and willing to take the risk of using something that doesn’t work perfectly but has great potential.

Early Adopters
Early adopters are among the first people ready to try out a product at an early stage. They don’t need you to explain why they should use that innovation.
The early adopter has already researched it and is passionate about the innovation behind it. However, while the innovator adopts a high-tech product for the sake of the innovation itself, the early adopter makes an informed buying decision.
At that stage, the product appeals only to a small niche of early adopters, but for them it is already great and ready.
Those early adopters feel different from the early majority, and if you “betray” them they may well leave you right away. That is where the chasm lies.

Early majority
The early majority is the psychographic profile made up of people who will help you “cross the chasm.” Getting traction means making a product appealing to the early majority. Indeed, the early majority consists of more conscious consumers who look for useful solutions but are also wary of possible fads.

Late Majority
The late majority kicks in only after a product is well established. These consumers have a more skeptical approach to technological innovation and feel comfortable adopting it only once the product has gone mainstream.

Laggards
Laggards are the last in the technology adoption cycle. While the late majority is skeptical of technological innovation, the laggard is averse to it.
Thus, unless there is a clear, established advantage in using a technology, these people will hardly become adopters. For reasons that may be tied to personal or economic circumstances, they are simply not looking to adopt new technology.

Other factors influencing technological adoption
One of my favorite authors is Jared Diamond, a polymath whose knowledge goes beyond books, education, or formal instruction. In fact, Jared Diamond is an ecologist, geographer, biologist, and anthropologist.

Whatever you want to label him, the truth is that Jared Diamond is simply one of the most curious people on earth. Because we love to put a label on everything, we are impressed by how many labels one person carries.

However, Jared Diamond has been just a curious person looking for answers to compelling and hard questions about our civilization. The search for those answers has led him to become an expert in many disciplines.

In fact, even though he might not know the latest news about Google’s algorithm updates, Apple’s latest product launch, or the features of the new iPhone, I believe Jared Diamond is among the people best equipped to understand how the technological landscape evolves. The reason is that Jared Diamond has looked at historical trends across thousands of years and dozens of cultures and civilizations.

He has also lived for short periods throughout his life with small populations, like New Guineans. In his book Guns, Germs, and Steel there is an excerpt that tries to explain why Western civilizations were so technologically successful and advanced compared to other populations in the world, say New Guinea.

For many in the modern, hyper-technological world, the answer seems trivial. With the advent of the digital world, even more. We love to read and get inspired every day by the incredible stories of geniuses and successful entrepreneurs that are changing the world.

Jared Diamond has a different explanation for how technology evolves and what influences its adoption throughout history, and it has only in part to do with the ability to make something that works better than what existed before.

Why the heroic theory of invention is flawed
If you read the accounts of many entrepreneurs who have influenced our modern society, they read like the stories of heroes, geniuses, and original thinkers. In short, if we hadn’t had Edison, Watt, Ford, and Carnegie, the Western world wouldn’t have been so wildly successful. As much as we love this theory, it doesn’t seem to match history.

True, those people were in a way ahead of their times. They were geniuses, risk takers and in some cases mavericks. However, were they the only ones able to advance our society? That is not the case.

Even assuming those people were isolated geniuses able to come up with the unimaginable, if the surrounding culture had not been able to acknowledge their inventions, we would have no trace of those discoveries today. So what influenced technological adoption?

The four macro patterns of technological adoption
According to Jared Diamond, there are four patterns to look at when assessing technological adoption:

  1. a relative economic advantage with existing technology
  2. social value and prestige
  3. compatibility with vested interests
  4. the ease with which those advantages can be observed

Relative economic advantage with existing technology
The first point seems obvious. In fact, for one technology to win over another, it doesn’t just have to be better; it has to be far more effective. Think of a recent example: Google in the search industry. When Google entered search, it was not the first player; it was a latecomer. Yet its algorithm, PageRank, was so superior to the competition that it quickly took off.
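
The core idea behind PageRank is worth sketching, since it shows what “far more effective” looked like in practice: a page is important if other important pages link to it, and that circular definition can be resolved by power iteration over the link graph. The tiny four-page graph below is invented purely for illustration; it is not Google’s production system.

```python
# Minimal illustration of the idea behind PageRank: repeatedly redistribute
# each page's score along its outgoing links, with a damping factor.
# The four-page link graph here is invented purely for illustration.
import numpy as np

links = {            # page -> pages it links to
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}
pages = sorted(links)
n = len(pages)
idx = {p: i for i, p in enumerate(pages)}

# Column-stochastic transition matrix: column j spreads page j's score to its targets
M = np.zeros((n, n))
for src, targets in links.items():
    for dst in targets:
        M[idx[dst], idx[src]] = 1.0 / len(targets)

d = 0.85                      # damping factor
rank = np.full(n, 1.0 / n)    # start from a uniform distribution
for _ in range(100):          # power iteration until (roughly) converged
    rank = (1 - d) / n + d * M @ rank

for p in pages:
    print(f"{p}: {rank[idx[p]]:.3f}")
```

In this toy graph, the page that everyone links to ends up with the highest score, which is the intuition behind ranking results by link structure rather than by keyword counts alone.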

Social value and prestige
This is less intuitive. As much as we love to think of ourselves as rational creatures, in reality we may be far more social than rational. Thus, the social value and prestige of a technological innovation play as important a role in its adoption as its innovative aspects.

Think about Apple’s products. Apple follows what can be described as a razor-and-blade business model. In short, the company attracts users to its platforms, iTunes and the App Store, by selling music and apps at a convenient price, while selling its iPhones at very high margins.

However, it is undeniable that what makes Apple able to sell its computers and phones at a higher price than its competitors is the brand the company has built over the years. In short, as of this writing, Apple still represents a status symbol, and that is what keeps the company highly profitable.

Compatibility with vested interests
In his book Guns, Germs, and Steel, Jared Diamond uses the story of the QWERTY keyboard to prove this point. This is most probably the keyboard you’re using right now on your mobile device or computer. It is so called because the six left-most letters on its top row spell “QWERTY.”

Have you ever wondered why you use this standard? You might think it has to do with efficiency, but it is quite the opposite. The layout was invented at the end of the 1800s, when typewriters were becoming the standard.

When typists typed too fast, those typewriters jammed (page 248 of Guns, Germs, and Steel). In short, a layout was devised that was meant to slow typists down so that typewriters would no longer jam. Yet as more than a century went by and we moved to computers and mobile devices, instead of switching to a more efficient system we kept the old one. Why?

According to Jared Diamond, the most compelling reason for not being able to switch to a new standard was the vested interests of small lobbies of typists, typing teachers, typewriter and computer salespeople.

The ease with which those advantages can be observed
When a technological advancement can be easily recognized as the fruit of the success of an organization, country, or enterprise, it will be adopted by anyone who wants to keep up. Think, for instance, about two countries going to war. One of them has a secret weapon that makes it win the war.

As soon as the side that lost the battle finds out, it will adopt that weapon the next time around. Think also of another, more recent example: big data became a secret technological weapon that Obama used to win his electoral campaigns, and Trump then used it to overtake his competitors during the last US presidential campaign.

Now that we know the four macro patterns of technological adoption and how the technology adoption curve works, it might be easier for you to cross the chasm!

Source: FourWeekMBA

Monetizing Data: 4 Datasets You Need for More Reliable Forecasting

In the era of big data, the focus has long been on data collection and organization. But despite having access to more data than ever before, companies today are reporting a low return on their investment in analytics. Something’s not working. Today, business leaders are caught up in concerns that they don’t have enough data, that it’s not accessible, or that it simply isn’t good enough. Instead of focusing on making data sources bigger or better, companies should be thinking about how they can get more out of the data they already have.

Contrary to popular belief, a high volume of perfect data isn’t necessary to drive strategic insight and action. While that might have been the case with time-series analysis, forecasting using simulation allows companies to do more with less. With simulation software, you aren’t constrained by the hard data points you have for every input; it allows you to enter both qualitative and quantitative information, so you can use human intelligence to make estimates that are later validated for accuracy with observable outcomes. Companies can then use these simulations to test how the market will respond to strategic initiatives by quickly running scenarios before launch. Also, most businesses already have enough collective intelligence within their organization to create a reliable, predictive simulation.
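
As a rough illustration of what such a simulation can look like, here is a minimal sketch of a toy consumer-choice model: market share follows each brand’s awareness and perceived utility, built from expert-style estimates rather than a large historical dataset, and a scenario is tested by changing one input. All brands, attributes, and numbers below are invented assumptions that, in a real model, would be calibrated against observed sales.

```python
# Hypothetical sketch of simulation-based forecasting: a toy consumer-choice
# model. Attribute scores and importances are rough, expert-style estimates;
# in practice they would be calibrated against observed sales data.
import numpy as np

attributes = ["price", "quality", "convenience"]
importance = np.array([0.5, 0.3, 0.2])       # estimated weights, summing to 1

# Perceived attribute scores (0-10) for our brand and two competitors
brands = {
    "OurBrand":    np.array([6.0, 7.0, 5.0]),
    "CompetitorA": np.array([7.0, 6.0, 6.0]),
    "CompetitorB": np.array([5.0, 8.0, 4.0]),
}
awareness = {"OurBrand": 0.6, "CompetitorA": 0.9, "CompetitorB": 0.7}

def simulate_shares(brands, awareness, importance):
    """Share of choice: awareness-weighted softmax over perceived utility."""
    names = list(brands)
    utility = np.array([importance @ brands[b] for b in names])
    weights = np.array([awareness[b] for b in names]) * np.exp(utility - utility.max())
    return dict(zip(names, weights / weights.sum()))

baseline = simulate_shares(brands, awareness, importance)

# Scenario: a marketing campaign lifts our awareness from 0.6 to 0.8
scenario_awareness = {**awareness, "OurBrand": 0.8}
scenario = simulate_shares(brands, scenario_awareness, importance)

market_size = 1_000_000  # assumed annual category sales, in units
for name in brands:
    print(f"{name}: {baseline[name]:.1%} -> {scenario[name]:.1%} "
          f"({(scenario[name] - baseline[name]) * market_size:+,.0f} units)")
```

Running a scenario like this before committing spend is what it means to test how the market will respond to a strategic initiative before launch.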

By unifying analytics, building forecasts and accelerating analytic processes, simulation helps companies build a holistic picture of their business to optimize strategy and maximize revenue. Here are the four types of information that companies need to fuel simulation forecasting and monetize their data investments:

1. Sales Data: Define success

The first set of information needed for simulation forecasting is sales data. In building a simulation model, sales data is used to define the market by establishing the outcome you’re trying to influence. That said, simulations can forecast more than sales outcomes in terms of revenue – they can also simulate a variety of other outcomes tied to sales such as new subscribers, website visits, online application submissions or program enrollments. Whatever the outcome is that you’re measuring, it’s helpful to have the information broken out by segment. If you don’t have this level of detail to start, you can continue to integrate new data into the model to make it more comprehensive over time.

2. Competitive Data: Paint a full picture of your market

With simulation forecasting, you are recreating an entire market so you can test how your solution will play out amongst competitors. In order to understand how people within a certain category respond to all of the choices available to them, you will need sales and marketing information for your competition. Competitor data is usually accessible from syndicated sources. If you don’t have access to competitor data, you can use approximate information available from public sources, annual reports or analyses from business experts to build out the competitive market in your simulation.

3. Customer Data: Understand how your consumer thinks

The third area of information needed for simulation is customer intelligence. In order to predict the likelihood a consumer will choose one option instead of another, you need to understand how they think. This requires information around awareness, perceptions and the relative importance of different attributes in driving a decision. These datasets are often collected and available through surveys. But even if there isn’t data from a quantitative study, your brand experts can use their judgment to make initial estimates of these values, and the values will later be verified through calibration and forecasting of observed metrics like sales.

4. Marketing Data: Evaluate the impact of in-market strategies

Finally, to drive simulation forecasting, companies need data on past marketing activity. This information is essential to understand how messaging in the market has influenced consumer decision making. This can be as simple as marketing investments and impressions broken out by paid, owned and earned activity, or it can be as granular as the tactics and specific media channels within each area.

Once a company identifies sources for these four types of data, it’s time to find an effective way to monetize it. The best way to get value from your big data is to identify unanswered business questions. With simulation forecasting, reliable answers are accessible – and you may need less data than you think to get meaningful, trustworthy insight.

Source: InsideBIGDATA

AI is all about instant customer satisfaction

Our brains are wired to love and become addicted to instant rewards. Any delay in satisfaction creates stress. Just remember how you feel when a web page takes over 3 seconds to load. We crave technologies that go even faster than our brains. The Googles, Amazons, Booking.coms, and Ubers of this world have been harnessing the benefits of instant rewards to boost their sales for years. AI is just the next logical step, and there is no way back: the faster you go, the more consumers buy from you, and the faster you want to go.

The goal is not to replace human work but to expand your capacity to deliver the instant value and relevance that your customers crave and that you are not currently able to provide. Hotels chronically complain about how understaffed they are and how hard it is to keep pace. So there are two choices here: (1) embrace AI as an opportunity, or (2) keep running a Formula 1 race on a bicycle.

Cloud AI — the opportunity for hotels

The market for AI is no longer the privilege of a few multi-billion dollar companies. Cloud AI solutions have become widely available for hotels that can massively capitalize on its power at virtually no cost.

Big Data: A new generation of booking engines, led by companies such as Avvio, is able to learn from customer demographics and adapt its display to better fit the preferences of each customer.

Chatbots: Technologies such as Quicktext and Zoe bot engage customers on your direct channels, helping your online visitors access relevant information immediately while capturing data on them that either the chatbot or you can act on to increase sales.

Grow out of your terminator fantasy

Some people mix fiction and reality, either because they are afraid of AI’s potential or because, on the contrary, they expect to see a full human being. This confusion happens because we use terms such as intelligence, neural networks, and deep learning. It is true that AI is inspired by our brain, but more broadly we frequently get inspired by nature to solve challenges. Most of the time we can recognize where the inspiration comes from, but the final product is usually quite far from the original model.

With AI it is exactly the same thing. We can use some basic logic, but it remains very focused on a specific use case. So, if you want to be able to profit from AI, you need to have realistic expectations. For instance, chatbots are currently able to manage frequently requested tasks such as providing specific information, booking a room, or finding relevant places around the hotel. They deal with repetitive tasks that none of your employees want to do, and chatbots have become very good at them, even better than humans. However, virtual assistants are not able to serve customers outside of that scope. That’s when you move from autopilot to manual. Taking AI for what it is, rather than for your wildest dreams, will enable you to realize that it can benefit your business today.

Source: Becoming Human

How open-source software took over the world

It was just five years ago that there was an ample dose of skepticism from investors about the viability of open source as a business model. The common thesis was that Red Hat was a snowflake and that no other open-source company would be significant in the software universe.

Fast-forward to today and we’ve witnessed the growing excitement in the space: Red Hat is being acquired by IBM for $32 billion (3x its market cap from 2014); MuleSoft was acquired after going public for $6.5 billion; MongoDB is now worth north of $4 billion; Elastic’s IPO now values the company at $6 billion; and, through the merger of Cloudera and Hortonworks, a new company with a market cap north of $4 billion will emerge. In addition, there’s a growing cohort of impressive OSS companies working their way through the growth stages of their evolution: Confluent, HashiCorp, DataBricks, Kong, Cockroach Labs and many others. Given the relative multiples that Wall Street and private investors are assigning to these open-source companies, it seems pretty clear that something special is happening.

So, why did this movement that once represented the bleeding edge of software become the hot place to be? There are a number of fundamental changes that have advanced open-source businesses and their prospects in the market.

From open source to open core to SaaS
The original open-source projects were not really businesses, they were revolutions against the unfair profits that closed-source software companies were reaping. Microsoft, Oracle, SAP and others were extracting monopoly-like “rents” for software, which the top developers of the time didn’t believe was world class. So, beginning with the most broadly used components of software – operating systems and databases – progressive developers collaborated, often asynchronously, to author great pieces of software. Everyone could not only see the software in the open, but through a loosely knit governance model, they added, improved and enhanced it.

The software was originally created by and for developers, which meant that at first it wasn’t the most user-friendly. But it was performant, robust and flexible. These merits gradually percolated across the software world and, over a decade, Linux became the second most popular OS for servers (next to Windows); MySQL mirrored that feat by eating away at Oracle’s dominance.

The first entrepreneurial ventures attempted to capitalize on this adoption by offering “enterprise-grade” support subscriptions for these software distributions. Red Hat emerged the winner in the Linux race and MySQL (the company) for databases. These businesses had some obvious limitations – it was harder to monetize software with just support services, but the market size for OS’s and databases was so large that, in spite of more challenged business models, sizeable companies could be built.

The successful adoption of Linux and MySQL laid the foundation for the second generation of open-source companies – the poster children of this generation were Cloudera and Hortonworks. These open-source projects and businesses were fundamentally different from the first generation on two dimensions. First, the software was principally developed within an existing company and not by a broad, unaffiliated community (in the case of Hadoop, the software took shape within Yahoo!). Second, these businesses were based on the model that only parts of software in the project were licensed for free, so they could charge customers for use of some of the software under a commercial license. The commercial aspects were specifically built for enterprise production use and thus easier to monetize. These companies, therefore, had the ability to capture more revenue even if the market for their product didn’t have quite as much appeal as operating systems and databases.

However, there were downsides to this second generation model of open-source business. The first was that no company singularly held ‘moral authority’ over the software – and therefore the contenders competed for profits by offering increasing parts of their software for free. Second, these companies often balkanized the evolution of the software in an attempt to differentiate themselves. To make matters more difficult, these businesses were not built with a cloud service in mind. Therefore, cloud providers were able to use the open-source software to create SaaS businesses on top of the same software base. Amazon’s EMR is a great example of this.

The latest evolution came when entrepreneurial developers grasped the business model challenges existent in the first two generations – Gen 1 and Gen 2 – of open-source companies, and evolved the projects with two important elements. The first is that the open-source software is now developed largely within the confines of businesses. Often, more than 90% of the lines of code in these projects are written by the employees of the company that commercialized the software. Second, these businesses offer their own software as a cloud service from very early on. In a sense, these are Open Core / Cloud service hybrid businesses with multiple pathways to monetize their product. By offering the products as SaaS, these businesses can interweave open-source software with commercial software so customers no longer have to worry about which license they should be taking. Companies like Elastic, Mongo, and Confluent with services like Elastic Cloud, Confluent Cloud, and MongoDB Atlas are examples of this Gen 3. The implications of this evolution are that open-source software companies now have the opportunity to become the dominant business model for software infrastructure.

The role of the community
While the products of these Gen 3 companies are definitely more tightly controlled by the host companies, the open-source community still plays a pivotal role in the creation and development of the open-source projects. For one, the community still discovers the most innovative and relevant projects. They star the projects on GitHub, download the software in order to try it, and evangelize what they perceive to be the better project so that others can benefit from great software. Much like how a good blog post or a tweet spreads virally, great open-source software leverages network effects. It is the community that is the source of promotion for that virality.

The community also ends up effectively being the “product manager” for these projects. It asks for enhancements and improvements; it points out the shortcomings of the software. The feature requests are not in a product requirements document, but on GitHub, comment threads and Hacker News. And, if an open-source project diligently responds to the community, it will shape itself to the features and capabilities that developers want.

The community also acts as the QA department for open-source software. It will identify bugs and shortcomings in the software; test 0.x versions diligently; and give the companies feedback on what is working or what is not. The community will also reward great software with positive feedback, which will encourage broader use.

What has changed though, is that the community is not as involved as it used to be in the actual coding of the software projects. While that is a drawback relative to Gen 1 and Gen 2 companies, it is also one of the inevitable realities of the evolving business model.

Rise of the developer
It is also important to realize the increasing importance of the developer for these open-source projects. The traditional go-to-market model of closed source software targeted IT as the purchasing center of software. While IT still plays a role, the real customers of open source are the developers who often discover the software, and then download and integrate it into the prototype versions of the projects that they are working on. Once “infected” by open-source software, these projects work their way through the development cycles of organizations from design, to prototyping, to development, to integration and testing, to staging, and finally to production. By the time the open-source software gets to production it is rarely, if ever, displaced. Fundamentally, the software is never “sold”; it is adopted by the developers who appreciate the software more because they can see it and use it themselves rather than being subject to it based on executive decisions.

In other words, open-source software permeates itself through the true experts, and makes the selection process much more grassroots than it has ever been historically. The developers basically vote with their feet. This is in stark contrast to how software has traditionally been sold.

Virtues of the open-source business model
The resulting business model of an open-source company looks quite different than a traditional software business. First of all, the revenue line is different. Side-by-side, a closed source software company will generally be able to charge more per unit than an open-source company. Even today, customers do have some level of resistance to paying a high price per unit for software that is theoretically “free.” But, even though open-source software is lower cost per unit, it makes up for it in total market size by leveraging the elasticity in the market. When something is cheaper, more people buy it. That’s why open-source companies have such massive and rapid adoption when they achieve product-market fit.

Another great advantage of open-source companies is their far more efficient and viral go-to-market motion. The first and most obvious benefit is that a user is already a “customer” before she even pays for it. Because so much of the initial adoption of open-source software comes from developers organically downloading and using the software, the companies themselves can often bypass both the marketing pitch and the proof-of-concept stage of the sales cycle. The sales pitch is more along the lines of, “you already use 500 instances of our software in your environment, wouldn’t you like to upgrade to the enterprise edition and get these additional features?” This translates to much shorter sales cycles, the need for far fewer sales engineers per account executive, and much quicker payback periods of the cost of selling. In fact, in an ideal situation, open-source companies can operate with favorable Account Executives to Systems Engineer ratios and can go from sales qualified lead (SQL) to closed sales within one quarter.

This virality allows for open-source software businesses to be far more efficient than traditional software businesses from a cash consumption basis. Some of the best open-source companies have been able to grow their business at triple-digit growth rates well into their life while maintaining moderate cash burn rates. This is hard to imagine in a traditional software company. Needless to say, less cash consumption equals less dilution for the founders.

Open source to freemium
One last aspect of the changing open-source business that is worth elaborating on is the gradual movement from true open-source to community-assisted freemium. As mentioned above, the early open-source projects leveraged the community as key contributors to the software base. In addition, even for slight elements of commercially-licensed software, there was significant pushback from the community. These days the community and the customer base are much more knowledgeable about the open-source business model, and there is an appreciation for the fact that open-source companies deserve to have a “paywall” so that they can continue to build and innovate.

In fact, from a customer perspective the two value propositions of open-source software are that you can a) read the code and b) treat it as freemium. The notion of freemium is that you can basically use it for free until it’s deployed in production or in some degree of scale. Companies like Elastic and Cockroach Labs have gone as far as actually open sourcing all their software but applying a commercial license to parts of the software base. The rationale being that real enterprise customers would pay whether the software is open or closed, and they are more incentivized to use commercial software if they can actually read the code. Indeed, there is a risk that someone could read the code, modify it slightly, and fork the distribution. But in developed economies, where much of the rents exist anyway, it’s unlikely that enterprise companies will elect the copycat as a supplier.

A key enabler to this movement has been the more modern software licenses that companies have either originally embraced or migrated to over time. Mongo’s new license, as well as those of Elastic and Cockroach, are good examples of these. Unlike the Apache-incubated license, which was often the starting point for open-source projects a decade ago, these licenses are far more business-friendly, and most modern open-source businesses are adopting them.

The future
When we originally penned this article on open source four years ago, we aspirationally hoped that we would see the birth of iconic open-source companies. At a time where there was only one model – Red Hat – we believed that there would be many more. Today, we see a healthy cohort of open-source businesses, which is quite exciting. I believe we are just scratching the surface of the kind of iconic companies that we will see emerge from the open-source gene pool. From one perspective, these companies valued in the billions are a testament to the power of the model. What is clear is that open source is no longer a fringe approach to software. When top companies around the world are polled, few of them intend to have their core software systems be anything but open source. And if the Fortune 5000 migrate their spend on closed source software to open source, we will see the emergence of a whole new landscape of software companies, with the leaders of this new cohort valued in the tens of billions of dollars.

Clearly, that day is not tomorrow. These open-source companies will need to grow and mature and develop their products and organization in the coming decade. But the trend is undeniable and here at Index we’re honored to have been here for the early days of this journey.

Source: Techcrunch.com