10 Reasons Why Every Leader Should be Data Literate


With rapid advances in technology and computing power, the rise of the Data Scientist, Artificial Intelligence, and Machine Learning, and the prospect of gaining insight and meaning from a wealth of data now more attainable than ever, "Data Literacy" has become a necessity for leaders and managers within organizations.

Here are 10 reasons why every leader needs to become Data Literate:

To assist in developing a Data-Driven Culture
Especially applicable to companies that are not yet using data to power their decision-making, or are, at best, in the early part of their data journey.

Quite often a shift in the culture of the company is needed, a change in the way the company is used to working.

If you as a leader are data literate, the process of becoming "data-driven" will be far smoother and more efficient.

To help drill for Data
“Data is the new Oil” (Clive Humby, UK Mathematician).

Over 90 percent of the data in the world was generated in the last two years alone (Forbes.com), and 2.5 quintillion bytes of data are produced each day!

Structured and unstructured data, text files, images, videos, documents: data is everywhere.

Being data literate will enable you to take advantage of it, to know where to look in your domain of expertise.

To assist in building a slick, efficient team
Data Scientists, Data Engineers, Machine Learning Engineers, Data Developers, Data Architects, whatever the job title, all are needed to take advantage of data in an organization.

Be data literate and be able to identify the key personnel you need to exploit the knowledge and insights quickly and efficiently.

To ensure compliance with Data Security, Privacy, and Governance
Recent events have meant the focus is now very much on how data is managed and secured, and on ensuring that people's privacy is protected and respected.

Recent legislation such as GDPR has only added to the importance of this. Data literacy will enable a full appreciation of how to ensure these issues and concerns are addressed and adhered to.

To help ensure the correct tools and technology are available
We now live in a fast-paced world where technology changes rapidly, with frequent advances, new tools, and new software.

Part of data literacy is not necessarily being an expert in this area, but being aware of what is available, what is possible, and what is coming.

Having this view enables your company and your team to be well positioned to use the relevant technology.

To help “spread the word” and form good habits
A good, data literate manager, when presented with an opinion or judgment from a team member, will not take it at face value but will ask them to provide the data to back it up.

This can only help in promoting the use of data and also towards achieving that data-driven culture we discussed previously.

A phrase often used in football coaching is "practice makes permanent".

Being constantly asked by the managers in an organization to back up your opinions with data will create a habit, and soon everyone will be utilizing the data.

To help ensure the right questions are asked of the data
Knowing your data and what is available where in your organization can only assist in ensuring that the correct questions are asked of the data, in order to achieve the most beneficial insights possible.

To help gain a competitive advantage
Companies that leverage their data the best, and utilize the insights gained from it, will ultimately gain an advantage over their competitors.

Data literacy among leaders is the bare minimum if you want to achieve this.

To gain respect and credence from your team and fellow professionals
Being knowledgeable and appreciative of all things data will only help you gain the trust and respect of your fellow team members and others within your organization, and indeed your industry.

In order to survive in the future world of work
The workplace is only going one way in this digital, data-driven age. Do not remain data illiterate and risk being left behind.

Source: algorithmxlab.com


What Is A Technology Adoption Curve?


The Five Stages Of A Technology Adoption Life Cycle
In his book, Crossing the Chasm: Marketing and Selling High-Tech Products to Mainstream Customers, Geoffrey A. Moore highlights a model that tries to dissect and represent the stages of adoption of high-tech products.

More precisely, this model goes through five stages. Each of those stages (innovators, early adopters, early majority, late majority, and laggards) has a specific psychographic profile that makes that group ready to adopt a tech product.

Why is the technology adoption life cycle useful?
There is a peculiar phase in the life cycle of a high-tech product that Moore calls a “chasm.” This is the phase in which a product is getting used by early adopters, but not yet by an early majority.

In that stage, there is a wide gap between those two psychographic profiles. Indeed, many startups fail because they don't manage to have the early majority pick up where the early adopters left off.

Understanding the technology adoption life cycle helps you assess which stage a product is in and, when the chasm is near, how to fill the gap and allow the early majority to fill the void left by the early adopters.

That void is created when the early adopters are ready to leave a product that is about to go mainstream. The market is full of examples of companies that tried to conquer the early majority but failed, and in the process also lost the enthusiasts that made the product successful in the first place.

What are the stages of a technology adoption life cycle?
The technology adoption life cycle comprises five main psychographic profiles:

  • Innovators
  • Early Adopters
  • Early Majority
  • Late Majority
  • and Laggards

Innovators
Innovators are the first to take action and adopt a product, even though it might be buggy. These people are willing to take the risk, and they will be the ones ready to help you shape your product while it is not yet perfect.
Because they are in love with the innovation behind it, they are ready to sustain it. This psychographic profile is all about the innovation itself. As this is something of a hobby for them, they are ready and willing to take the risk of using something that doesn't work perfectly but has great potential.

Early Adopters
Early adopters are among those people ready to try out a product at an early stage. They don't need you to explain why they should use that innovation.
The early adopter has already researched it and is passionate about the innovation behind it. However, while the innovator adopts the high-tech product for the sake of the innovation itself, the early adopter makes an informed buying decision.
At that stage, even though the product appeals only to a small niche of early adopters, to them it feels great and ready.
Those early adopters feel different from the early majority, and if you "betray" them they may well leave you right away. That is where the chasm stands.

Early majority
The early majority is the psychographic profile made up of people who will help you "cross the chasm." Getting traction means making a product appealing to the early majority. Indeed, the early majority is made up of more conscious consumers who look for useful solutions but are also wary of possible fads.

Late Majority
The late majority kicks in only after a product is well established. These consumers have a more skeptical approach to technological innovation and feel comfortable adopting it only once the product has gone mainstream.

Laggards
Laggards are the last in the technology adoption cycle. While the late majority is skeptical of technological innovation, the laggard is averse to it.
Thus, unless there is a clear, established advantage to using a technology, those people will hardly become adopters. For reasons that may be personal or economic, they are simply not looking to adopt a technology.
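
Taken together, the five profiles are usually drawn as a bell curve over the whole market. As a rough numeric illustration, the small sketch below uses the segment shares commonly cited for this bell-curve model (an assumption on our part, not figures from this article) and marks where Moore's chasm sits:

```python
# Rough illustration of the five adopter segments and where the "chasm" sits.
# The percentages are the shares commonly cited for this bell-curve model;
# they are an assumption for illustration, not numbers from the article.
segments = [
    ("Innovators", 2.5),
    ("Early Adopters", 13.5),
    ("Early Majority", 34.0),
    ("Late Majority", 34.0),
    ("Laggards", 16.0),
]

cumulative = 0.0
for name, share in segments:
    cumulative += share
    chasm = (" <-- the chasm Moore describes opens just after this group"
             if name == "Early Adopters" else "")
    print(f"{name:<15} {share:>4.1f}%  (cumulative {cumulative:>5.1f}%){chasm}")
```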

Other factors influencing technological adoption
One of my favorite authors is Jared Diamond, a polymath whose knowledge goes beyond books, education, or formal instruction. In fact, Jared Diamond is an ecologist, geographer, biologist, and anthropologist.

Whatever you want to label him, the truth is that Jared Diamond is simply one of the most curious people on earth. As we love to put a label on everything, we are impressed by how many labels one person carries.

However, Jared Diamond has been just a curious person looking for answers to compelling and hard questions about our civilization. The search for those answers has brought him to become an expert in many disciplines.

In fact, even though he might not know the latest news about Google's algorithm update, Apple's latest product launch, or what features the new iPhone has, I believe Jared Diamond is the person best equipped to understand how the technological landscape evolves. The reason is that Jared Diamond has been looking at historical trends across thousands of years and dozens of cultures and civilizations.

He has also lived for short periods throughout his life with small populations, such as New Guineans. In his book Guns, Germs, and Steel there is an excerpt that tries to explain why Western civilizations were so technologically successful and advanced compared to other populations in the world, say New Guinea.

For many in the modern, hyper-technological world, the answer seems trivial; with the advent of the digital world, even more so. We love to read and get inspired every day by the incredible stories of geniuses and successful entrepreneurs who are changing the world.

Jared Diamond has a different explanation for how technology evolves and what influences its adoption throughout history, and it has only in part to do with the ability to make something that works better than what existed before.

Why the heroic theory of invention is flawed
If you read the accounts of many entrepreneurs that have influenced our modern society, they seem to resemble the stories of heroes, geniuses, and original thinkers. In short, if we hadn't had Edison, Watt, Ford, and Carnegie, the western world wouldn't have been so wildly successful. For how much we love this theory, history doesn't seem to bear it out.

True, those people were in a way ahead of their times. They were geniuses, risk takers and in some cases mavericks. However, were they the only ones able to advance our society? That is not the case.

Assuming those people were isolated geniuses able to come up with the unimaginable: if the culture around them had not been able to acknowledge those inventions, we would have no trace of those discoveries today. So what influenced technological adoption?

The four macro patterns of technological adoption
According to Jared Diamond, there are four patterns to look at when looking for technological adoption:

  1. a relative economic advantage over existing technology
  2. social value and prestige
  3. compatibility with vested interests
  4. the ease with which those advantages can be observed

Relative economic advantage over existing technology
The first point seems obvious. In fact, for one technology to win over another it doesn't just have to be better; it has to be far more effective. Consider a recent example: when Google entered the search industry, it was not the first player. It was a latecomer. Yet its algorithm, PageRank, was so superior to the competition that it quickly took off.
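
To make the idea of a decisive technical advantage concrete, here is a minimal sketch of the core mechanism behind PageRank: plain power iteration over a tiny link graph. The four-page graph and the damping factor are textbook-style choices made up for illustration; this is a sketch of the published algorithm's basic form, not Google's production system.

```python
# Minimal power-iteration sketch of the basic PageRank idea.
# The four-page link graph below is invented purely for illustration.
links = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}

damping = 0.85                               # chance of following a link vs. jumping randomly
pages = list(links)
rank = {p: 1.0 / len(pages) for p in pages}  # start from a uniform score

for _ in range(50):                          # iterate until the scores settle
    new_rank = {}
    for page in pages:
        incoming = sum(
            rank[src] / len(outs)
            for src, outs in links.items()
            if page in outs
        )
        new_rank[page] = (1 - damping) / len(pages) + damping * incoming
    rank = new_rank

for page, score in sorted(rank.items(), key=lambda kv: -kv[1]):
    print(f"{page}: {score:.3f}")
```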


Social value and prestige
This is less intuitive. In fact, for how much we love to think of ourselves as rational creatures, in reality we might be far more social than we are rational. Thus, the social value and prestige of a technological innovation play as key a role in its adoption as its innovative aspects.

Think about Apple's products. Apple follows what can be described as a reversed razor-and-blade business model. In short, the company attracts users to its platforms, iTunes and the App Store, by selling music and apps at a convenient price, while selling its iPhones at very high margins.

However, it is undeniable that what makes Apple able to sell its computers and phones at a higher price than its competitors is the brand the company has built over the years. In short, as of this writing, Apple still carries a status symbol that makes the company highly profitable.

Compatibility with vested interests
In Guns, Germs, and Steel, Jared Diamond uses the story of the QWERTY keyboard to prove this point. This is the keyboard you are most probably using right now on your mobile device or computer. It is so called because its six left-most letters spell "QWERTY."

Have you ever wondered why you use this standard? You might think it has to do with efficiency, but the opposite is true. The layout was invented at the end of the 1800s, when typewriters became the standard.

When typists typed too fast, those typewriters jammed (page 248 of Guns, Germs, and Steel). In short, manufacturers came up with a layout designed to slow typists down so that typewriters would no longer jam. Yet as more than a century went by and we moved to computers and mobile devices, instead of switching to a more efficient system we kept the old one. Why?

According to Jared Diamond, the most compelling reason for not being able to switch to a new standard was the vested interests of small lobbies of typists, typing teachers, typewriter and computer salespeople.

The ease with which those advantages can be observed
When a technological advancement can be easily recognized as the fruit of the success of an organization, country or enterprise, it will be adopted by anyone that wants to keep up with it. Think, for instance, about two countries going to war. One of them has a secret weapon that makes them win the war.

As soon as the enemy that lost the battle finds that out, the next time that weapon will also be adopted by the losing side. Think of another, more recent example: just as big data became a secret technological weapon that Obama used to win his electoral campaign, so Trump used it to overtake his competitors in the last US presidential campaign.

Now that we know the four macro patterns of technological adoption and how the technology adoption curve works, it might be easier for you to cross the chasm!

Source: FourWeekMBA

Monetizing Data: 4 Datasets You Need for More Reliable Forecasting

In the era of big data, the focus has long been on data collection and organization. But despite having access to more data than ever before, companies today are reporting a low return on their investment in analytics. Something’s not working. Today, business leaders are caught up in concerns that they don’t have enough data, it’s not accessible or it simply isn’t good enough. Instead of focusing on making data sources bigger or better, companies should be thinking about how they can get more out of the data they already have.

Contrary to popular belief, a high volume of perfect data isn’t necessary to drive strategic insight and action. While that might have been the case with time-series analysis, forecasting using simulation allows companies to do more with less. With simulation software, you aren’t constrained by the hard data points you have for every input; it allows you to enter both qualitative and quantitative information, so you can use human intelligence to make estimates that are later validated for accuracy with observable outcomes. Companies can then use these simulations to test how the market will respond to strategic initiatives by quickly running scenarios before launch. Also, most businesses already have enough collective intelligence within their organization to create a reliable, predictive simulation.
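
As a rough sketch of what this can look like in code, the snippet below runs a tiny Monte Carlo simulation: judgment-based estimates of market size, base choice probability, and an uncertain campaign lift (all hypothetical numbers, not taken from any real model) are replayed across many scenarios to produce a range of outcomes rather than a single point forecast. Estimates like these can later be calibrated against observed sales.

```python
# A minimal Monte Carlo sketch of simulation-based forecasting.
# Every input is a hypothetical, judgment-based estimate of the kind described above.
import random

random.seed(42)

MARKET_SIZE = 10_000            # estimated number of potential buyers
BASE_CHOICE_PROB = 0.04         # estimated share who choose our product today
CAMPAIGN_LIFT = (0.00, 0.02)    # uncertain lift from a planned marketing push

def simulate_one_quarter():
    lift = random.uniform(*CAMPAIGN_LIFT)    # draw one uncertain scenario
    prob = BASE_CHOICE_PROB + lift
    # Each potential buyer independently chooses us with probability `prob`.
    return sum(random.random() < prob for _ in range(MARKET_SIZE))

runs = sorted(simulate_one_quarter() for _ in range(200))
n = len(runs)
print(f"median forecast: {runs[n // 2]:,} buyers")
print(f"10th-90th percentile: {runs[n // 10]:,} to {runs[9 * n // 10]:,} buyers")
```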

By unifying analytics, building forecasts and accelerating analytic processes, simulation helps companies build a holistic picture of their business to optimize strategy and maximize revenue. Here are the four types of information that companies need to fuel simulation forecasting and monetize their data investments:

1. Sales Data: Define success

The first set of information needed for simulation forecasting is sales data. In building a simulation model, sales data is used to define the market by establishing the outcome you’re trying to influence. That said, simulations can forecast more than sales outcomes in terms of revenue – they can also simulate a variety of other outcomes tied to sales such as new subscribers, website visits, online application submissions or program enrollments. Whatever the outcome is that you’re measuring, it’s helpful to have the information broken out by segment. If you don’t have this level of detail to start, you can continue to integrate new data into the model to make it more comprehensive over time.

2. Competitive Data: Paint a full picture of your market

With simulation forecasting, you are recreating an entire market so you can test how your solution will play out amongst competitors. In order to understand how people within a certain category respond to all of the choices available to them, you will need sales and marketing information for your competition. Competitor data is usually accessible from syndicated sources. If you don’t have access to competitor data, you can use approximate information available from public sources, annual reports or analyses from business experts to build out the competitive market in your simulation.

3. Customer Data: Understand how your consumer thinks

The third area of information needed for simulation is customer intelligence. In order to predict the likelihood a consumer will choose one option instead of another, you need to understand how they think. This requires information around awareness, perceptions and the relative importance of different attributes in driving a decision. These datasets are often collected and available through surveys. But even if there isn’t data from a quantitative study, your brand experts can use their judgment to make initial estimates of these values, and the values will later be verified through calibration and forecasting of observed metrics like sales.

4. Marketing Data: Evaluate the impact of in-market strategies

Finally, to drive simulation forecasting, companies need data on past marketing activity. This information is essential to understand how messaging in the market has influenced consumer decision making. This can be as simple as marketing investments and impressions broken out by paid, owned and earned activity, or it can be as granular as the tactics and specific media channels within each area.

Once a company identifies sources for these four types of data, it’s time to find an effective way to monetize it. The best way to get value from your big data is to identify unanswered business questions. With simulation forecasting, reliable answers are accessible – and you may need less data than you think to get meaningful, trustworthy insight.

Source: InsideBIGDATA

AI is all about instant customer satisfaction


Our brains are wired to love and become addicted to instant rewards. Any delay in satisfaction creates stress. Just remember how you feel when a web page takes over 3 seconds to load. We crave technologies that go even faster than our brains. The likes of Google, Amazon, Booking.com, and Uber have been harnessing the benefits of instant reward to boost their sales for years. AI is just the next logical step, and there is no way back, because the faster you go, the more consumers buy from you, and the faster you want to go.

The goal is not to replace human work but to expand your capacity to deliver the instant value and relevance that your customers crave and that you are not currently able to provide. Hotels chronically complain about how understaffed they are and how hard it is to keep pace. So there are two choices here: embrace AI as an opportunity, or keep running a Formula 1 race on a bicycle.

Cloud AI — the opportunity for hotels

The market for AI is no longer the privilege of a few multi-billion dollar companies. Cloud AI solutions have become widely available for hotels that can massively capitalize on its power at virtually no cost.

Big Data: A new generation of booking engines, led by companies such as Avvio, is able to learn from customer demographics and adapt its display to better fit the preferences of each customer.

Chatbots: Technologies such as Quicktext and Zoe bot engage customers on your direct channels to help your online visitors access relevant information immediately, while capturing data on them that either the chatbot or you can act on to increase sales.

Grow out of your Terminator fantasy

Some people mix fiction and reality, either because they are afraid of AI's potential or because, on the contrary, they expect to see a full human being. This confusion happens because we use terms such as intelligence, neural networks, deep learning, etc. It is true that AI is inspired by our brain, but overall we frequently draw inspiration from nature to solve challenges. Most of the time we can recognize where the inspiration comes from, but the final product is usually quite far from the original model.

With AI it is exactly the same thing. We can use some basic logic, but it remains very focused on a specific use case. So, if you want to be able to profit from AI, you need to have realistic expectations. For instance, chatbots are currently able to manage frequently requested tasks such as giving specific information, booking a room, or locating relevant places around the hotel. They deal with some of the repetitive tasks that none of your employees want to do, and chatbots have become very good at it, even better than humans. However, virtual assistants are not able to serve customers outside of their perimeter. That's when you move from autopilot to manual. Taking AI for what it is, rather than for your wildest dreams, will enable you to realize that it can benefit your business today.
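
As a small illustration of that "autopilot to manual" hand-off, here is a minimal keyword-based sketch of a hotel chatbot; the intents, keywords, and answers are invented, and real products such as those named above will work quite differently under the hood.

```python
# Minimal intent-matching sketch of a hotel FAQ chatbot with a human fallback.
# Intents, keywords, and answers are invented for illustration only.
INTENTS = {
    "checkin_time": (["check-in", "check in", "arrival"],
                     "Check-in starts at 3 pm."),
    "parking":      (["parking", "car park", "garage"],
                     "We offer on-site parking for 20 EUR per night."),
    "book_room":    (["book", "reservation", "reserve"],
                     "I can help you book a room: which dates would you like?"),
}

def reply(message: str) -> str:
    text = message.lower()
    for keywords, answer in INTENTS.values():
        if any(keyword in text for keyword in keywords):
            return answer                       # inside the bot's perimeter
    # Outside the perimeter: switch from "autopilot to manual".
    return "Let me hand you over to our front desk team."

print(reply("What time is check-in?"))          # scripted answer
print(reply("Can I bring my pet iguana?"))      # falls back to a human
```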

Source: Becoming Human

How open-source software took over the world


It was just five years ago that there was an ample dose of skepticism from investors about the viability of open source as a business model. The common thesis was that Red Hat was a snowflake and that no other open-source company would be significant in the software universe.

Fast-forward to today and we've witnessed the growing excitement in the space: Red Hat is being acquired by IBM for $32 billion (3x its market cap from 2014); MuleSoft was acquired after going public for $6.5 billion; MongoDB is now worth north of $4 billion; Elastic's IPO now values the company at $6 billion; and, through the merger of Cloudera and Hortonworks, a new company with a market cap north of $4 billion will emerge. In addition, there's a growing cohort of impressive OSS companies working their way through the growth stages of their evolution: Confluent, HashiCorp, DataBricks, Kong, Cockroach Labs and many others. Given the relative multiples that Wall Street and private investors are assigning to these open-source companies, it seems pretty clear that something special is happening.

So, why did this movement that once represented the bleeding edge of software become the hot place to be? There are a number of fundamental changes that have advanced open-source businesses and their prospects in the market.

From open source to open core to SaaS
The original open-source projects were not really businesses, they were revolutions against the unfair profits that closed-source software companies were reaping. Microsoft, Oracle, SAP and others were extracting monopoly-like “rents” for software, which the top developers of the time didn’t believe was world class. So, beginning with the most broadly used components of software – operating systems and databases – progressive developers collaborated, often asynchronously, to author great pieces of software. Everyone could not only see the software in the open, but through a loosely knit governance model, they added, improved and enhanced it.

The software was originally created by and for developers, which meant that at first it wasn’t the most user-friendly. But it was performant, robust and flexible. These merits gradually percolated across the software world and, over a decade, Linux became the second most popular OS for servers (next to Windows); MySQL mirrored that feat by eating away at Oracle’s dominance.

The first entrepreneurial ventures attempted to capitalize on this adoption by offering “enterprise-grade” support subscriptions for these software distributions. Red Hat emerged the winner in the Linux race and MySQL (the company) for databases. These businesses had some obvious limitations – it was harder to monetize software with just support services, but the market size for OS’s and databases was so large that, in spite of more challenged business models, sizeable companies could be built.

The successful adoption of Linux and MySQL laid the foundation for the second generation of open-source companies – the poster children of this generation were Cloudera and Hortonworks. These open-source projects and businesses were fundamentally different from the first generation on two dimensions. First, the software was principally developed within an existing company and not by a broad, unaffiliated community (in the case of Hadoop, the software took shape within Yahoo!). Second, these businesses were based on the model that only parts of the software in the project were licensed for free, so they could charge customers for use of some of the software under a commercial license. The commercial aspects were specifically built for enterprise production use and thus easier to monetize. These companies, therefore, had the ability to capture more revenue even if the market for their product didn't have quite as much appeal as operating systems and databases.

However, there were downsides to this second generation model of open-source business. The first was that no company singularly held ‘moral authority’ over the software – and therefore the contenders competed for profits by offering increasing parts of their software for free. Second, these companies often balkanized the evolution of the software in an attempt to differentiate themselves. To make matters more difficult, these businesses were not built with a cloud service in mind. Therefore, cloud providers were able to use the open-source software to create SaaS businesses of the same software base. Amazon’s EMR is a great example of this.

The latest evolution came when entrepreneurial developers grasped the business model challenges existent in the first two generations – Gen 1 and Gen 2 – of open-source companies, and evolved the projects with two important elements. The first is that the open-source software is now developed largely within the confines of businesses. Often, more than 90% of the lines of code in these projects are written by the employees of the company that commercialized the software. Second, these businesses offer their own software as a cloud service from very early on. In a sense, these are Open Core / Cloud service hybrid businesses with multiple pathways to monetize their product. By offering the products as SaaS, these businesses can interweave open-source software with commercial software so customers no longer have to worry about which license they should be taking. Companies like Elastic, Mongo, and Confluent with services like Elastic Cloud, Confluent Cloud, and MongoDB Atlas are examples of this Gen 3. The implications of this evolution are that open-source software companies now have the opportunity to become the dominant business model for software infrastructure.

The role of the community
While the products of these Gen 3 companies are definitely more tightly controlled by the host companies, the open-source community still plays a pivotal role in the creation and development of the open-source projects. For one, the community still discovers the most innovative and relevant projects. They star the projects on GitHub, download the software in order to try it, and evangelize what they perceive to be the better project so that others can benefit from great software. Much like how a good blog post or a tweet spreads virally, great open-source software leverages network effects. It is the community that is the source of promotion for that virality.

The community also ends up effectively being the "product manager" for these projects. It asks for enhancements and improvements; it points out the shortcomings of the software. The feature requests are not in a product requirements document, but on GitHub, comment threads and Hacker News. And, if an open-source project diligently responds to the community, it will shape itself to the features and capabilities that developers want.

The community also acts as the QA department for open-source software. It will identify bugs and shortcomings in the software; test 0.x versions diligently; and give the companies feedback on what is working or what is not. The community will also reward great software with positive feedback, which will encourage broader use.

What has changed though, is that the community is not as involved as it used to be in the actual coding of the software projects. While that is a drawback relative to Gen 1 and Gen 2 companies, it is also one of the inevitable realities of the evolving business model.

Rise of the developer
It is also important to recognize the increasing importance of the developer for these open-source projects. The traditional go-to-market model of closed source software targeted IT as the purchasing center of software. While IT still plays a role, the real customers of open source are the developers who often discover the software, and then download and integrate it into the prototype versions of the projects that they are working on. Once "infected" by open-source software, these projects work their way through the development cycles of organizations from design, to prototyping, to development, to integration and testing, to staging, and finally to production. By the time the open-source software gets to production it is rarely, if ever, displaced. Fundamentally, the software is never "sold"; it is adopted by the developers who appreciate the software more because they can see it and use it themselves rather than being subject to it based on executive decisions.

In other words, open-source software permeates itself through the true experts, and makes the selection process much more grassroots than it has ever been historically. The developers basically vote with their feet. This is in stark contrast to how software has traditionally been sold.

Virtues of the open-source business model
The resulting business model of an open-source company looks quite different from that of a traditional software business. First of all, the revenue line is different. Side-by-side, a closed source software company will generally be able to charge more per unit than an open-source company. Even today, customers do have some level of resistance to paying a high price per unit for software that is theoretically "free." But, even though open-source software is lower cost per unit, it makes up for that in total market size by leveraging the elasticity in the market. When something is cheaper, more people buy it. That's why open-source companies have such massive and rapid adoption when they achieve product-market fit.

Another great advantage of open-source companies is their far more efficient and viral go-to-market motion. The first and most obvious benefit is that a user is already a “customer” before she even pays for it. Because so much of the initial adoption of open-source software comes from developers organically downloading and using the software, the companies themselves can often bypass both the marketing pitch and the proof-of-concept stage of the sales cycle. The sales pitch is more along the lines of, “you already use 500 instances of our software in your environment, wouldn’t you like to upgrade to the enterprise edition and get these additional features?” This translates to much shorter sales cycles, the need for far fewer sales engineers per account executive, and much quicker payback periods of the cost of selling. In fact, in an ideal situation, open-source companies can operate with favorable Account Executives to Systems Engineer ratios and can go from sales qualified lead (SQL) to closed sales within one quarter.

This virality allows open-source software businesses to be far more efficient than traditional software businesses on a cash-consumption basis. Some of the best open-source companies have been able to grow their business at triple-digit growth rates well into their life while maintaining moderate cash burn rates. This is hard to imagine in a traditional software company. Needless to say, less cash consumption equals less dilution for the founders.

Open source to freemium
One last aspect of the changing open-source business that is worth elaborating on is the gradual movement from true open-source to community-assisted freemium. As mentioned above, the early open-source projects leveraged the community as key contributors to the software base. In addition, even for slight elements of commercially-licensed software, there was significant pushback from the community. These days the community and the customer base are much more knowledgeable about the open-source business model, and there is an appreciation for the fact that open-source companies deserve to have a “paywall” so that they can continue to build and innovate.

In fact, from a customer perspective the two value propositions of open-source software are that you can a) read the code and b) treat it as freemium. The notion of freemium is that you can basically use it for free until it's deployed in production or at some degree of scale. Companies like Elastic and Cockroach Labs have gone as far as actually open sourcing all their software but applying a commercial license to parts of the software base. The rationale is that real enterprise customers would pay whether the software is open or closed, and they are more incentivized to use commercial software if they can actually read the code. Indeed, there is a risk that someone could read the code, modify it slightly, and fork the distribution. But in developed economies, where much of the rents exist anyway, it's unlikely that enterprise companies will elect the copycat as a supplier.

A key enabler to this movement has been the more modern software licenses that companies have either originally embraced or migrated to over time. Mongo's new license, as well as those of Elastic and Cockroach, are good examples of these. Unlike the Apache-incubated license, which was often the starting point for open-source projects a decade ago, these licenses are far more business-friendly, and most model open-source businesses are adopting them.

The future
When we originally penned this article on open source four years ago, we aspirationally hoped that we would see the birth of iconic open-source companies. At a time where there was only one model – Red Hat – we believed that there would be many more. Today, we see a healthy cohort of open-source businesses, which is quite exciting. I believe we are just scratching the surface of the kind of iconic companies that we will see emerge from the open-source gene pool. From one perspective, these companies valued in the billions are a testament to the power of the model. What is clear is that open source is no longer a fringe approach to software. When top companies around the world are polled, few of them intend to have their core software systems be anything but open source. And if the Fortune 5000 migrate their spend on closed source software to open source, we will see the emergence of a whole new landscape of software companies, with the leaders of this new cohort valued in the tens of billions of dollars.

Clearly, that day is not tomorrow. These open-source companies will need to grow and mature and develop their products and organization in the coming decade. But the trend is undeniable and here at Index we’re honored to have been here for the early days of this journey.

Source: Techcrunch.com

AI predictions for 2019 from Yann LeCun, Hilary Mason, Andrew Ng, and Rumman Chowdhury


Artificial intelligence is cast all at once as the technology that will save the world and end it.

To cut through the noise and hype, VentureBeat spoke with luminaries whose views on the right way to do AI have been informed by years of working with some of the biggest tech and industry companies on the planet.

Below find insights from Google Brain cofounder Andrew Ng, Cloudera general manager of ML and Fast Forward Labs founder Hilary Mason, Facebook AI Research founder Yann LeCun, and Accenture’s responsible AI global lead Dr. Rumman Chowdhury. We wanted to get a sense of what they saw as the key milestones of 2018 and hear what they think is in store for 2019.

Amid a recap of the year and predictions for the future, some said they were encouraged to be hearing fewer Terminator AI apocalypse scenarios, as more people understand what AI can and cannot do. But these experts also stressed a continued need for computer and data scientists in the field to adopt responsible ethics as they advance artificial intelligence.

Dr. Rumman Chowdhury

Dr. Rumman Chowdhury is managing director of the Applied Intelligence division at Accenture and global lead of its Responsible AI initiative, and was named to BBC’s 100 Women list in 2017. Last year, I had the honor of sharing the stage with her in Boston at Affectiva’s conference to discuss matters of trust surrounding artificial intelligence. She regularly speaks to audiences around the world on the topic.

For the sake of time, she responded to questions about AI predictions for 2019 via email. All responses from the other people in this article were shared in phone interviews.

Chowdhury said in 2018 she was happy to see growth in public understanding of the capabilities and limits of AI and to hear a more balanced discussion of the threats AI poses — beyond fears of a global takeover by intelligent machines as in The Terminator. “With that comes increasing awareness and questions about privacy and security, and the role AI may play in shaping us and future generations,” she said.

Public awareness of AI still isn’t where she thinks it needs to be, however, and in the year ahead Chowdhury hopes to see more people take advantage of educational resources to understand AI systems and be able to intelligently question AI decisions.

She has been pleasantly surprised by the speed with which tech companies and people in the AI ecosystem have begun to consider the ethical implications of their work. But she wants to see the AI community do more to “move beyond virtue signaling to real action.”

“As for the ethics and AI field — beyond the trolley problem — I’d like to see us digging into the difficult questions AI will raise, the ones that have no clear answer. What is the ‘right’ balance of AI- and IoT-enabled monitoring that allows for security but resists a punitive surveillance state that reinforces existing racial discrimination? How should we shape the redistribution of gains from advanced technology so we are not further increasing the divide between the haves and have-nots? What level of exposure to children allows them to be ‘AI natives’ but not manipulated or homogenized? How do we scale and automate education using AI but still enable creativity and independent thought to flourish?” she asked.

In the year ahead, Chowdhury expects to see more government scrutiny and regulation of tech around the world.

“AI and the power that is wielded by the global tech giants raises a lot of questions about how to regulate the industry and the technology,” she said. “In 2019, we will have to start coming up with the answers to these questions — how do you regulate a technology when it is a multipurpose tool with context-specific outcomes? How do you create regulation that doesn’t stifle innovation or favor large companies (who can absorb the cost of compliance) over small startups? At what level do we regulate? International? National? Local?”

She also expects to see the continued evolution of AI’s role in geopolitical matters.

“This is more than a technology, it is an economy- and society-shaper. We reflect, scale, and enforce our values in this technology, and our industry needs to be less naive about the implications of what we build and how we build it,” she said. For this to happen, she believes people need to move beyond the idea common in the AI industry that if we don’t build it, China will, as if creation alone is where power lies.

“I hope regulators, technologists, and researchers realize that our AI race is about more than just compute power and technical acumen, just like the Cold War was about more than nuclear capabilities,” she said. “We hold the responsibility of recreating the world in a way that is more just, more fair, and more equitable while we have the rare opportunity to do so. This moment in time is fleeting; let’s not squander it.”

On a consumer level, she believes 2019 will see more use of AI in the home. Many people have become much more accustomed to using smart speakers like Google Home and Amazon Echo, as well as a host of smart devices. On this front, she's curious to see if anything especially interesting emerges from the Consumer Electronics Show — set to kick off in Las Vegas in the second week of January — that might further integrate artificial intelligence into people's daily lives.

“I think we’re all waiting for a robot butler,” she said.

Andrew Ng

I always laugh more than I expect to when I hear Andrew Ng deliver a whiteboard session at a conference or in an online course. Perhaps because it’s easy to laugh with someone who is both passionate and having a good time.

Ng is an adjunct computer science professor at Stanford University whose name is well known in AI circles for a number of different reasons.

He’s the cofounder of Google Brain, an initiative to spread AI throughout Google’s many products, and the founder of Landing AI, a company that helps businesses integrate AI into their operations.

He's also the instructor of some of the most popular machine learning courses on YouTube and Coursera, the online learning company he cofounded, and he founded deeplearning.ai and wrote the book Machine Learning Yearning.

In 2017, after more than three years at Baidu, he left his post as the company's chief AI scientist; Baidu is another tech giant that he helped transform into an AI company.

Finally, he’s also part of the $175 million AI Fund and on the board of driverless car company Drive.ai.

Ng spoke with VentureBeat earlier this month when he released the AI Transformation Playbook, a short read about how companies can unlock the positive impacts of artificial intelligence for their own companies.

One major area of progress or change he expects to see in 2019 is AI being used in applications outside of tech or software companies. The biggest untapped opportunities in AI lie beyond the software industry, he said, citing use cases from a McKinsey report that found that AI will generate $13 trillion in GDP by 2030.

“I think a lot of the stories to be told next year [2019] will be in AI applications outside the software industry. As an industry, we’ve done a decent job helping companies like Google and Baidu but also Facebook and Microsoft — which I have nothing to do with — but even companies like Square and Airbnb, Pinterest, are starting to use some AI capabilities. I think the next massive wave of value creation will be when you can get a manufacturing company or agriculture devices company or a health care company to develop dozens of AI solutions to help their businesses.”

Like Chowdhury, Ng was surprised by the growth in understanding of what AI can and cannot do in 2018, and pleased that conversations can take place without focusing on the killer-robot scenario or fear of artificial general intelligence.

Ng said he intentionally responded to my questions with answers he didn’t expect many others to have.

“I’m trying to cite deliberately a couple of areas which I think are really important for practical applications. I think there are barriers to practical applications of AI, and I think there’s promising progress in some places on these problems,” he said.

In the year ahead, Ng is excited to see progress in two specific areas of AI/ML research that help advance the field as a whole. One is AI that can arrive at accurate conclusions with less data, something called "few-shot learning" by some in the field.

“I think the first wave of deep learning progress was mainly big companies with a ton of data training very large neural networks, right? So if you want to build a speech recognition system, train it on 100,000 hours of data. Want to train a machine translation system? Train it on a gazillion pairs of sentences of parallel corpora, and that creates a lot of breakthrough results,” Ng said. “Increasingly I’m seeing results on small data where you want to try to take in results even if you have 1,000 images.”
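
As a rough illustration of the "less data" idea (a generic nearest-centroid baseline often used in few-shot settings, not Ng's or any specific lab's method), the sketch below classifies a new item from only three labeled examples per class by comparing it to each class's average feature vector; the toy vectors are invented.

```python
# Toy nearest-centroid ("prototype") classifier, a common few-shot baseline.
# The 2-D feature vectors below are invented stand-ins for learned embeddings.
import numpy as np

# Three labeled examples per class form the entire "training set".
support = {
    "cat": np.array([[0.90, 0.10], [0.80, 0.20], [1.00, 0.15]]),
    "dog": np.array([[0.10, 0.90], [0.20, 0.80], [0.15, 1.00]]),
}

# Each class prototype is simply the mean of its few examples.
prototypes = {label: vecs.mean(axis=0) for label, vecs in support.items()}

def classify(x):
    """Assign x to the class whose prototype is closest in Euclidean distance."""
    return min(prototypes, key=lambda label: np.linalg.norm(x - prototypes[label]))

query = np.array([0.85, 0.25])
print(classify(query))  # -> "cat" for this made-up query point
```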

The other is advances in computer vision referred to as “generalizability.” A computer vision system might work great when trained with pristine images from a high-end X-ray machine at Stanford University. And many advanced companies and researchers in the field have created systems that outperform a human radiologist, but they aren’t very nimble.

“But if you take your trained model and you apply it to an X-ray taken from a lower-end X-ray machine or taken from a different hospital, where the images are a bit blurrier and maybe the X-ray technician has the patient slightly turned to their right so the angle’s a little bit off, it turns out that human radiologists are much better at generalizing to this new context than today’s learning algorithms. And so I think interesting research [is on] trying to improve the generalizability of learning algorithms in new domains,” he said.

Yann LeCun

Yann LeCun is a professor at New York University, Facebook chief AI scientist, and founding director of Facebook AI Research (FAIR), a division of the company that created PyTorch 1.0 and Caffe2, as well as a number of AI systems — like the text translation AI tools Facebook uses billions of times a day or advanced reinforcement learning systems that play Go.

LeCun believes the open source policy FAIR adopts for its research and tools has helped nudge other large tech companies to do the same, something he believes has moved the AI field forward as a whole. LeCun spoke with VentureBeat last month ahead of the NeurIPS conference and the fifth anniversary of FAIR, an organization he describes as interested in the “technical, mathematical underbelly of machine learning that makes it all work.”

“It gets the entire field moving forward faster when more people communicate about the research, and that’s actually a pretty big impact,” he said. “The speed of progress you’re seeing today in AI is largely because of the fact that more people are communicating faster and more efficiently and doing more open research than they were in the past.”

On the ethics front, LeCun is happy to see progress in simply considering the ethical implications of work and the dangers of biased decision-making.

“The fact that this is seen as a problem that people should pay attention to is now well established. This was not the case two or three years ago,” he said.

LeCun said he does not believe ethics and bias in AI have become a major problem that requires immediate action yet, but he believes people should be ready for that.

“I don’t think there are … huge life and death issues yet that need to be urgently solved, but they will come and we need to … understand those issues and prevent those issues before they occur,” he said.

Like Ng, LeCun wants to see more AI systems capable of the flexibility that can lead to robust AI systems that do not require pristine input data or exact conditions for accurate output.

LeCun said researchers can already manage perception rather well with deep learning but that a missing piece is an understanding of the overall architecture of a complete AI system.

He said that teaching machines to learn through observation of the world will require self-supervised learning, or model-based reinforcement learning.

“Different people give it different names, but essentially human babies and animals learn how the world works by observing and figure out this huge amount of background information about it, and we don’t know how to do this with machines yet, but that’s one of the big challenges,” he said. “The prize for that is essentially making real progress in AI, as well as machines, to have a bit of common sense and virtual assistants that are not frustrating to talk to and have a wider range of topics and discussions.”

For applications that will help internally at Facebook, LeCun said significant progress toward self-supervised learning will be important, as well as AI that requires less data to return accurate results.

“On the way to solving that problem, we’re hoping to find ways to reduce the amount of data that’s necessary for any particular task like machine translation or image recognition or things like this, and we’re already making progress in that direction; we’re already making an impact on the services that are used by Facebook by using weakly supervised or self-supervised learning for translation and image recognition. So those are things that are actually not just long term, they also have very short term consequences,” he said.

In the future, LeCun wants to see progress made toward AI that can establish causal relationships between events. That’s the ability to not just learn by observation, but to have the practical understanding, for example, that if people are using umbrellas, it’s probably raining.

“That would be very important, because if you want a machine to learn models of the world by observation, it has to be able to know what it can influence to change the state of the world and that there are things you can’t do,” he said. “You know if you are in a room and a table is in front of you and there is an object on top of it like a water bottle, you know you can push the water bottle and it’s going to move, but you can’t move the table because it’s big and heavy — things like this related to causality.”

Hilary Mason

After Cloudera acquired Fast Forward Labs in 2017, Hilary Mason became Cloudera’s general manager of machine learning. Fast Forward Labs, while absorbed into Cloudera, is still in operation, producing applied machine learning reports and advising customers to help them see six months to two years into the future.

One advancement in AI that surprised Mason in 2018 was related to multitask learning, which can train a single neural network to apply multiple kinds of labels when inferring, for example, objects seen in an image.
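
A minimal sketch of that multitask idea, assuming a generic setup rather than any specific system discussed here: one network with a shared trunk and a separate output head for each kind of label, written in PyTorch with arbitrary layer sizes.

```python
# Minimal multitask network: one shared trunk, two label heads.
# A generic illustration with arbitrary sizes, not any particular production model.
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    def __init__(self, in_dim=512, n_objects=10, n_scenes=5):
        super().__init__()
        # Shared representation used by every task.
        self.trunk = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU())
        # One head per kind of label the model predicts.
        self.object_head = nn.Linear(128, n_objects)  # e.g. object categories
        self.scene_head = nn.Linear(128, n_scenes)    # e.g. scene type

    def forward(self, x):
        shared = self.trunk(x)
        return self.object_head(shared), self.scene_head(shared)

model = MultiTaskNet()
features = torch.randn(4, 512)                  # a fake batch of image features
object_logits, scene_logits = model(features)

# Training simply sums one loss per head over the shared trunk.
loss = (nn.functional.cross_entropy(object_logits, torch.randint(0, 10, (4,)))
        + nn.functional.cross_entropy(scene_logits, torch.randint(0, 5, (4,))))
loss.backward()
print(object_logits.shape, scene_logits.shape)
```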

Fast Forward Labs has also been advising customers on the ethical implications of AI systems. Mason sees a wider awareness for the necessity of putting some kind of ethical framework in place.

“This is something that since we founded Fast Forward — so, five years ago — we’ve been writing about ethics in every report but this year [2018] people have really started to pick up and pay attention, and I think next year we’ll start to see the consequences or some accountability in the space for companies and for people who pay no attention to this,” Mason said. “What I’m not saying very clearly is that I hope that the practice of data science and AI evolve as such that it becomes the default expectation that both technical folks and business leaders creating products with AI will be accounting for ethics and issues of bias and the development of those products, whereas today it is not the default that anyone thinks about those things.”

As more AI systems become part of business operations in the year ahead, Mason expects that product managers and product leaders will begin to make more contributions on the AI front because they’re in the best position to do so.

“I think it’s clearly the people who have the idea of the whole product in mind and understand the business understand what would be valuable and not valuable, who are in the best position to make these decisions about where they should invest,” she said. “So if you want my prediction, I think in the same way we expect all of those people to be minimally competent using something like spreadsheets to do simple modeling, we will soon expect them to be minimally competent in recognizing where AI opportunities in their own products are.”

The democratization of AI, or expansion to corners of a company beyond data science teams, is something that several companies have emphasized, including Google Cloud AI products like Kubeflow Pipelines and AI Hub as well as advice from the CI&T consultancy to ensure AI systems are actually utilized within a company.

Mason also thinks more and more businesses will need to form structures to manage multiple AI systems.

Like an analogy sometimes used to describe challenges faced by people working in DevOps, Mason said, managing a single system can be done with hand-deployed custom scripts, and cron jobs can manage a few dozen. But when you’re managing tens or hundreds of systems, in an enterprise that has security, governance, and risk requirements, you need professional, robust tooling.

Businesses are shifting from having pockets of competency or even brilliance to having a systematic way to pursue machine learning and AI opportunities, she said.

The emphasis on containers for deploying AI makes sense to Mason, since Cloudera recently launched its own container-based machine learning platform. She believes this trend will continue in years ahead so companies can choose between on-premise AI or AI deployed in the cloud.

Finally, Mason believes the business of AI will continue to evolve, with common practices across the industry, not just within individual companies.

“I think we will see a continuing evolution of the professional practice of AI,” she said. “Right now, if you’re a data scientist or an ML engineer at one company and you move to another company, your job will be completely different: different tooling, different expectations, different reporting structures. I think we’ll see consistency there,” she said.

Source: venturebeat.com

When AI and Data Analytics Meet Healthcare


The Telegraph covered a robotic revolution in the healthcare sector and predicted an increase in robotic systems in hospitals in the coming decade. Insights from 2016 indicate that about 86% of healthcare provider organizations and technology vendors to healthcare are using artificial intelligence technology. Institutions across the globe, including doctors, hospitals, insurance companies, and industries with ties to healthcare, are adopting automation, machine learning, and artificial intelligence (AI).

Here are a few of the many ways AI and data analytics are paving the road to better healthcare.

1. Mining Medical Records and Devising Treatment Plans

In a day, a radiologist attends to almost 200 patients and 3,000 medical images. Today, every person who visits a medical practitioner has a medical record created, and the number of records will only grow in the coming years. Analyzing this data and determining a treatment plan consumes valuable time. AI can help reduce the workload and expedite the medical process with the help of a technique called patient data mining.

MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) created ‘ICU Intervene’, a machine-learning approach that collects a significant amount of ICU data ranging from medical to demographic details. Through this data, the AI can determine the types of treatment the patients need and quicken the diagnosis to save critical time.

“Data gathered and presented by AI algorithms will enable healthcare providers and doctors to see patients’ health risks and take more precise, early action to prevent, lessen the impact of or forestall disease progression. These interventions will curb healthcare costs and lead to improved patient health outcomes,” said Derek Gordon, COO of Lumiat, to Cygnismedia.

2. Assisting in Repetitive Jobs and Future Prediction

Routine jobs such as X-rays, CT scans, and data entry can be offloaded to an AI assistant.

In cardiology and radiology, the analysis and compilation of data not only consumes crucial time but is also prone to error. AI can prove to be more accurate and helpful in such scenarios. It can read CT scans and medical reports and provide a diagnosis by comparing them with similar images stored in the database.

In fact, a Chicago start-up, Careskore, uses a cloud-based predictive analytics platform. Using its Zeus algorithm in real time, Careskore predicts the likelihood of an individual's hospitalization after studying a range of behavioral, demographic, and clinical data.
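
To make "predictive analytics" concrete, here is a generic sketch of how a hospitalization-risk model can be put together with off-the-shelf tools: a logistic regression over demographic, behavioral, and clinical features. This is not Careskore's Zeus algorithm; every column and number below is invented for illustration.

```python
# Generic hospitalization-risk sketch: logistic regression on invented features.
# This is NOT Careskore's Zeus algorithm; all columns and numbers are made up.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Hypothetical features: age, number of chronic conditions, missed appointments.
X = np.column_stack([
    rng.integers(20, 90, n),   # age
    rng.poisson(1.5, n),       # chronic conditions
    rng.poisson(0.5, n),       # missed appointments in the last year
])
# Synthetic label: older patients with more conditions are admitted more often.
logits = 0.04 * X[:, 0] + 0.8 * X[:, 1] + 0.5 * X[:, 2] - 5.0
y = rng.random(n) < 1 / (1 + np.exp(-logits))

model = LogisticRegression(max_iter=1000).fit(X, y)

new_patient = np.array([[72, 3, 1]])           # age 72, 3 conditions, 1 missed visit
risk = model.predict_proba(new_patient)[0, 1]  # estimated probability of admission
print(f"estimated hospitalization risk: {risk:.1%}")
```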

3. Blending Physical and Virtual Consultations

Chatbots used in the healthcare sector interact with patients through telephone, text, or website to schedule appointments and follow-ups, handle billing, process urgent customer-care requests 24×7, and so on. They help reduce the overall administrative cost of the hospital.

Medical Virtual Assistants (MVA) collect and compile a patient’s medical and demographic details. M-health apps help people track their health and notify patients about upcoming appointments. They are also programmed to answer the basic health-related or medical queries of a patient.

4. Medication Management

AI-enabled systems can track patients' data and suggest treatments based on analysis. An Israeli start-up has developed AI algorithms that are nearly as accurate as, or even more precise than, humans at the early detection of conditions such as coronary aneurysms, brain bleeds, malignant tissue in breast mammography, and osteoporosis. Such assistance can significantly augment the medical procedure.

IBM Watson launched a unique programme for oncologists in which AI studies all of a patient's structured and unstructured data and suggests treatment pathways to the doctor.

5. Finding the right talent in Healthcare

As the healthcare industry grows, there is always a need for qualified healthcare professionals. Often, hiring managers receive hundreds of resumes per open role. When shortlisting candidates for interviews, they use various data points such as filtering out candidates with too many or too few years of experience. Beyond this level of filtering, many companies are using AI chatbot software for recruitment. For example, Accenture uses Min, an AI virtual recruiter to hire data scientists in Singapore. This helps recruiters save time, improve efficiency, and make fair hiring decisions. For candidates, the chatbot engages, interviews, and shortlists them 24/7.

6. Helping People Make Better Health Choices

Based on the demographic, behavioral, and medical data of people, AI-enabled systems can predict health risks in advance and warn people accordingly. Six months after El Camino Hospital in Silicon Valley applied artificial intelligence, the rate of patients with fatal diseases fell by 39 percent.

As per OECD estimates and figures from The United States Institute of Medicine, the top 15 countries by healthcare expenditure waste an average of between $1,100 and $1,700 per person annually. AI-powered health app solutions help healthcare systems avoid needless hospitalizations.

Not only does AI help doctors by advising on treatment options, it also enables people to lead better, healthier lifestyles.

Source: datasciencecentral.com