Stanford is Using Machine Learning on Satellite Images to Predict Poverty

•Stanford’s machine learning model can predict poverty
•It uses satellite imagery to gather data and runs it through the algorithm
•Nighttime images are cross-checked with daytime images to predict the economic status of a region
•It’s open source; the code is available on GitHub in both R and Python


Eliminating poverty is the number one goal of most countries around the world. However, the process of going around rural areas and manually tracking census data is time consuming, labor intensive and expensive.

Considering that, a group of researchers at Stanford have pioneered an approach that combines machine learning with satellite images to make predicting poverty quicker, easier and less expensive.

How does the algorithm work?

Given satellite images of a location, the model predicts the area’s per capita consumption expenditure. The algorithm runs through millions of images of rural regions around the world, comparing a region’s appearance in daytime imagery with the presence of light there at night to estimate its economic activity. This approach, in which a model trained on one task is reused to inform a related one, is called transfer learning.

The algorithm cross-references the images captured at night with the daytime images to gauge the infrastructure in an area. In general, a brightly lit area is powered by electricity and is likely better off than an unlit one.

Before being used to make predictions, the algorithm’s results were cross-checked against actual survey data to improve its accuracy.
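To make the two-step idea concrete, here is a minimal, hypothetical sketch in Python of the final regression stage of such a pipeline: image features (assumed to come from a CNN trained to predict nighttime light intensity from daytime imagery) are regressed against surveyed per capita consumption. The data, dimensions, and variable names below are stand-ins, not the Stanford team’s actual code.

```python
# A minimal sketch (not the Stanford code) of the regression stage:
# CNN-derived image features -> surveyed per capita consumption.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Stand-in data: 500 survey clusters, 4096-dim CNN features per cluster.
# In the real pipeline these features would come from a network fine-tuned
# to predict nighttime light intensity from daytime satellite tiles.
cnn_features = rng.normal(size=(500, 4096))
consumption = rng.lognormal(mean=0.5, sigma=0.6, size=500)

# Ridge regression maps image features to (log) consumption expenditure
model = RidgeCV(alphas=np.logspace(-3, 3, 13))
scores = cross_val_score(model, cnn_features, np.log(consumption),
                         cv=5, scoring="r2")

# With random stand-in data this score is meaningless; with real CNN
# features it would measure how well the imagery tracks survey consumption.
print("cross-validated R^2:", scores.mean())
```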

So far, the study has covered regions in five countries – Nigeria, Uganda, Tanzania, Rwanda, and Malawi. Check out a short video on the study below:

Our take on this

Anything that helps eliminate poverty is good in our book, and when machine learning is doing the work, even better. Stanford claims that its model predicts poverty almost as well as manually collected data, which makes it a feasible option for survey administrators.

It’s also an open-source project, and the team has made the code available on GitHub here. It’s available in both R and Python, so anyone with an interest in the subject can try it on their own system.

Apart from Stanford, researchers at the University of Buffalo are also using machine learning and satellite images to predict poverty. Their approach differs from Stanford’s as they have added cell phone data to their model. The Pentagon is also offering $100,000 to anyone who can read the data from satellite images in the same way that Stanford’s model does.

Source: analyticsvidhya.com


Machine learning and the five vectors of progress

What’s keeping leaders from adopting machine learning? Well, tools are still evolving, practitioners are scarce, and the technology is a bit too inscrutable for comfort. But five vectors of progress are making it easier, faster, and cheaper to deploy machine learning and could bring it into the mainstream.


Though nearly every industry is finding applications for machine learning—the artificial intelligence technology that feeds on data to automatically discover patterns and anomalies and make predictions—most companies are not yet taking advantage. However, five vectors of progress are making it easier, faster, and cheaper to deploy machine learning and could eventually help to bring the technology into the mainstream. With barriers to use beginning to fall, every enterprise can begin exploring applications of this transformative technology.

Signals
•Tech vendors claim they can reduce the need for training data by several orders of magnitude, using a technique called transfer learning.
•Specialized chips dramatically accelerate the training of machine learning models; at Microsoft, they cut the time to develop a speech recognition system by 80 percent.
•Researchers at MIT have demonstrated a method of training a neural network that delivered both accurate predictions and the rationales for those predictions.
•Major technology vendors are finding ways to cram powerful machine learning models onto mobile devices.
•New tools aim to automate tasks that occupy up to 80 percent of data scientists’ time.

Use of machine learning faces obstacles

Machine learning is one of the most powerful and versatile information technologies available today. But most companies have not begun to put it to use. One recent survey of 3,100 executives in small, medium, and large companies across 17 countries found that fewer than 10 percent were investing in machine learning.

A number of factors are restraining the adoption of machine learning. Qualified practitioners are in short supply. Tools and frameworks for doing machine learning work are immature and still evolving. It can be difficult, time-consuming, and costly to obtain the large datasets that some machine learning model-development techniques require.

Then there is the black-box problem. Even when machine learning models appear to generate valuable information, many executives seem reluctant to deploy them in production. Why? In part, because their inner workings are inscrutable, and some people are uncomfortable with the idea of running their operations on logic they don’t understand and can’t clearly describe. Others may be constrained by regulations that require businesses to offer explanations for their decisions or to prove that decisions do not discriminate against protected classes of people. In such situations, it’s hard to deploy black-box models, no matter how accurate or useful their outputs.

Progress in five areas can help overcome barriers to adoption

These barriers are beginning to fall. Deloitte has identified five key vectors of progress that should help foster significantly greater adoption of machine learning in the enterprise. Three of these advancements—automation, data reduction, and training acceleration—make machine learning easier, cheaper, and/or faster. The others—model interpretability and local machine learning—open up applications in new areas.

The five vectors of progress, ordered by breadth of application, with the widest first:

Automating data science. Developing machine learning solutions requires skills from the discipline of data science, an often-misunderstood field practiced by specialists in high demand but short supply. Data science is a mix of art and science—and digital grunt work. The reality is that as much as 80 percent of the work on which data scientists spend their time can be fully or partially automated. This work might include:
•Data wrangling—preprocessing and normalizing data, filling in missing values, or determining whether to interpret the data in a column as a number or a date
•Exploratory data analysis—seeking to understand the broad characteristics of the data to help formulate hypotheses about it
•Feature engineering and selection—selecting the variables in the data that are most likely correlated with what the model is supposed to predict
•Algorithm selection and evaluation—testing potentially thousands of algorithms in order to choose those that produce the most accurate results

Automating these tasks can make data scientists not only more productive but more effective. For instance, while building customer lifetime value models for guests and hosts, data scientists at Airbnb used an automation platform to test multiple algorithms and design approaches, which they would not have otherwise had the time to do. This enabled them to discover changes they could make to their algorithm that increased its accuracy by more than 5 percent, resulting in a material impact.
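As a toy illustration of just the algorithm-selection-and-evaluation step (not the Airbnb platform or any specific commercial tool), the sketch below tries several candidate scikit-learn models with cross-validation and keeps the best performer on a bundled dataset.

```python
# Toy automated algorithm selection: evaluate several candidate models with
# cross-validation and report the best. Real automation platforms also handle
# data wrangling, feature engineering, and much larger search spaces.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

candidates = {
    "logistic_regression": make_pipeline(StandardScaler(),
                                         LogisticRegression(max_iter=1000)),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
}

# Score each candidate with 5-fold cross-validation and keep the best
results = {name: cross_val_score(model, X, y, cv=5).mean()
           for name, model in candidates.items()}
best = max(results, key=results.get)
print(results, "-> best:", best)
```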

A growing number of tools and techniques for data science automation, some offered by established companies and others by venture-backed start-ups, can help reduce the time required to execute a machine learning proof of concept from months to days. And automating data science means augmenting data scientists’ productivity, so even in the face of severe talent shortages, enterprises that employ data science automation technologies should be able to significantly expand their machine learning activities.

Reducing need for training data. Training a machine learning model might require up to millions of data elements. This can be a major barrier: Acquiring and labeling data can be time-consuming and costly. Consider, as an example, a medical diagnosis project that requires MRI images labeled with a diagnosis. It might cost over $30,000 to hire a radiologist to review and label 1,000 images at six images an hour. Privacy and confidentiality concerns can also make it difficult to obtain data to work with.

A number of promising techniques for reducing the amount of training data required for machine learning are emerging. One involves the use of synthetic data, generated algorithmically to mimic the characteristics of the real data. This can work surprisingly well. A Deloitte LLP team tested a tool that made it possible to build an accurate model with only a fifth of the training data previously required, by synthesizing the remaining 80 percent.

Synthetic data can not only make it easier to get training data—it may make it easier for organizations to tap into outside data science talent. A number of organizations have successfully engaged third parties, or used crowdsourcing, to devise machine learning models, posting their datasets online for outside data scientists to work with. But this may not be an option if the datasets are proprietary. Researchers at MIT demonstrated a workaround to this conundrum, using synthetic data: They used a real dataset to create a synthetic alternative that they shared with an external data science community. Data scientists within the community created machine learning models using this synthetic data. In 11 out of 15 tests, the models developed from the synthetic data performed as well as those trained on real data.
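Here is a minimal sketch of the general idea behind synthetic data (not the specific Deloitte or MIT tooling): fit a simple generative model to a real tabular dataset and sample new rows with similar statistical structure.

```python
# Minimal synthetic-data sketch: fit a Gaussian mixture to real tabular data
# and sample synthetic rows that mimic its statistical structure. Production
# tools are considerably more sophisticated; this only illustrates the idea.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.mixture import GaussianMixture

real_X, _ = load_iris(return_X_y=True)

generator = GaussianMixture(n_components=3, random_state=0).fit(real_X)
synthetic_X, _ = generator.sample(n_samples=1000)

# Compare simple summary statistics of real vs. synthetic data
print("real means:     ", np.round(real_X.mean(axis=0), 2))
print("synthetic means:", np.round(synthetic_X.mean(axis=0), 2))
```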

Another technique that could reduce the need for training data is transfer learning. With this approach, a machine learning model is pre-trained on one dataset as a shortcut to learning a new dataset in a similar domain such as language translation or image recognition. Some vendors offering machine learning tools claim their use of transfer learning can cut the number of training examples that customers need to provide by several orders of magnitude.
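A common way to apply transfer learning in practice, sketched below under the assumption of an image-classification task in PyTorch: start from a network pre-trained on ImageNet, freeze its feature extractor, and retrain only a small final layer, so far fewer labeled examples are needed. The five target classes and the commented training loop are purely illustrative.

```python
# Minimal transfer-learning sketch: reuse an ImageNet-pretrained ResNet and
# retrain only the final classification layer on a new, smaller dataset.
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pre-trained feature extractor
for param in backbone.parameters():
    param.requires_grad = False

# Replace the final layer for the new task (5 classes here, chosen arbitrarily)
backbone.fc = nn.Linear(backbone.fc.in_features, 5)

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Training then proceeds as usual on the (small) labeled dataset:
# for images, labels in loader:
#     optimizer.zero_grad()
#     loss = loss_fn(backbone(images), labels)
#     loss.backward()
#     optimizer.step()
```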

Accelerating training. Because of the large volumes of data and complex algorithms involved, the computational process of training a machine learning model can take a long time: hours, days, even weeks to run. Only then can the model be tested and refined. But now, semiconductor and computer manufacturers—both established companies and start-ups—are developing specialized processors such as graphics processing units (GPUs), field-programmable gate arrays (FPGAs), and application-specific integrated circuits (ASICs) to slash the time required to train machine learning models by accelerating the calculations and by speeding the transfer of data within the chip.

These dedicated processors help companies speed up machine learning training and execution multifold, which in turn brings down the associated costs. For instance, a Microsoft research team—in one year, using GPUs—completed a system to recognize conversational speech as capably as humans. Had the team used only CPUs, according to one of the researchers, it would have taken five years. Google stated that its own AI chip, the Tensor Processing Unit (TPU), incorporated into a computing system that also includes CPUs and GPUs, provided such a performance boost that it helped the company avoid the cost of building a dozen extra data centers.
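From a developer’s point of view, taking advantage of this hardware is often a one-line change. The PyTorch sketch below (illustrative, not tied to the Microsoft or Google systems described above) runs the same training step on a GPU when one is available and on a CPU otherwise.

```python
# The same training code runs on a GPU when available, which is where the
# speedups described above come from for large models and datasets.
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Sequential(nn.Linear(1024, 512), nn.ReLU(), nn.Linear(512, 10)).to(device)
x = torch.randn(256, 1024, device=device)      # a batch of training inputs
y = torch.randint(0, 10, (256,), device=device)  # matching labels

loss = nn.CrossEntropyLoss()(model(x), y)
loss.backward()
print("ran on:", device)
```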

Early adopters of these specialized AI chips include major technology vendors and research institutions in data science and machine learning, but adoption is spreading to sectors such as retail, financial services, and telecom. With every major cloud provider—including IBM, Microsoft, Google, and Amazon Web Services—offering GPU cloud computing, accelerated training will become available to data science teams in any organization, making it possible to increase their productivity and multiplying the number of applications enterprises choose to undertake.

Explaining results. Machine learning models often suffer from a critical weakness: Many are black boxes, meaning it is impossible to explain with confidence how they made their decisions. This can make them unsuitable or unpalatable for many applications. Physicians and business leaders, for instance, may not accept a medical diagnosis or investment decision without a credible explanation for the decision. In some cases, regulations mandate such explanations. For example, the US banking industry adheres to SR 11-7, guidance published by the Federal Reserve, which among other things requires that model behavior be explained.

But techniques are emerging that help shine light inside the black box of certain machine learning models, making them more interpretable and accurate. MIT researchers, for instance, have demonstrated a method of training a neural network that delivers both accurate predictions and the rationales for those predictions. Some of these techniques are already appearing in commercial data science products.
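The MIT technique mentioned above is specific to neural networks, but simpler interpretability tools are already widely available. The sketch below uses permutation importance, a model-agnostic method that measures how much a trained model’s accuracy drops when each input feature is shuffled; it is offered here only as an illustration of the general idea, not as the method cited in the research.

```python
# Permutation importance: a simple, model-agnostic way to see which inputs a
# trained model actually relies on.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in held-out accuracy
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
top = result.importances_mean.argsort()[::-1][:5]
for i in top:
    print(f"{X.columns[i]}: {result.importances_mean[i]:.3f}")
```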

As it becomes possible to build interpretable machine learning models, companies in highly regulated industries such as financial services, life sciences, and health care will find attractive opportunities to use machine learning. Some of the potential application areas include credit scoring, recommendation engines, customer churn management, fraud detection, and disease diagnosis and treatment.

Deploying locally. The adoption of machine learning will grow along with the ability to deploy the technology where it can improve efficiency and outcomes. Advances in both software and hardware are making it increasingly viable to use the technology on mobile devices and smart sensors. On the software side, technology vendors such as Apple Inc., Facebook, Google, and Microsoft are creating compact machine learning models that require relatively little memory but can still handle tasks such as image recognition and language translation on mobile devices. Microsoft Research Lab’s compression efforts resulted in models that were 10 to 100 times smaller.
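One widely used compression technique (not necessarily the one behind Microsoft Research’s results cited above) is post-training quantization, which stores weights as 8-bit integers instead of 32-bit floats. A minimal PyTorch sketch:

```python
# Post-training dynamic quantization: shrink a model by storing linear-layer
# weights as 8-bit integers, a common step before on-device deployment.
import os
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 10))

quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

def size_mb(m: nn.Module) -> float:
    """Serialize the model's weights and report the file size in megabytes."""
    torch.save(m.state_dict(), "tmp.pt")
    return os.path.getsize("tmp.pt") / 1e6

print("fp32 model:", round(size_mb(model), 2), "MB")
print("int8 model:", round(size_mb(quantized), 2), "MB")
```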

On the hardware end, semiconductor vendors such as Intel, Nvidia, and Qualcomm, as well as Google and Microsoft, have developed or are developing their own power-efficient AI chips to bring machine learning to mobile devices.

The emergence of mobile devices as a machine learning platform is expanding the number of potential applications of the technology and inducing companies to develop applications in areas such as smart homes and cities, autonomous vehicles, wearable technology, and the industrial Internet of Things.

Prepare for the mainstreaming of machine learning

Collectively, the five vectors of machine learning progress can help reduce the friction that is preventing some companies from investing in machine learning. And they can help those already using the technology to intensify their use of it. These advancements can also enable new applications across industries and help overcome the constraints of limited resources including talent, infrastructure, or data to train the models.

Companies should look for opportunities to automate some of the work of their oversubscribed data scientists—and ask consultants how they use data science automation. They should keep an eye on emerging techniques such as data synthesis and transfer learning that could ease the challenge of acquiring training data. They should learn what computing resources optimized for machine learning their cloud providers offer. If they are running workloads in their own data centers, they may want to investigate adding specialized hardware into the mix.

Though interpretability of machine learning is still in its early days, companies contemplating high-value applications may want to explore state-of-the-art techniques for improving interpretability. Finally, organizations considering mobile- or device-based machine learning applications should track the performance benchmarks being reported by makers of next-generation chips so they are ready when on-device deployment becomes feasible.

Machine learning has already shown itself to be a valuable technology in many applications. Progress along the five vectors can help overcome some of the obstacles to mainstream adoption.

Source: Deloitte

Decoding Machine Learning Methods

Machine Learning, thinking systems, expert systems, knowledge engineering, decision systems, neural networks – all are loosely woven, near-synonymous threads in the evolving fabric of Artificial Intelligence. Of these, Machine Learning (ML) and Artificial Intelligence (AI) are most often debated and used interchangeably. Broadly speaking, AI refers to a futuristic state of truly self-aware, learning machines, but for all practical purposes we deal more often with ML at present.

In very abstract terms, ML is a structured approach to deriving meaningful predictions and insights from both structured and unstructured data. ML methods employ complex algorithms that enable analytics based on data, history, and patterns. The field of data science continues to scale new heights, enabled by the exponential growth in computing power over the last decade. Data scientists explore new models and methods every day, and it can be daunting just to keep pace with the trends. To keep matters simple, however, here is a clean starting point.

Below is an attempt at a simplified visual representation of the popular ML methods used in data science, along with their classification. Each of these algorithms is implemented in languages such as R, Python, and Scala to give data scientists a framework for solving complex, data-driven business problems. There is, however, an underlying maze of statistics and probability that data scientists must navigate in order to put these methods to meaningful use.

[Visual: a classification of popular machine learning methods]
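As a minimal illustration of two of the method families such a classification typically distinguishes, the Python sketch below trains a supervised classifier (which learns from labeled examples) and an unsupervised clustering algorithm (which finds structure without labels) on the same bundled dataset.

```python
# Supervised vs. unsupervised learning on the same data, as a tiny example of
# two of the ML method families referred to in the visual above.
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Supervised: predict the known label from features
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("classification accuracy:", clf.score(X_test, y_test))

# Unsupervised: group the same data into clusters without using labels
clusters = KMeans(n_clusters=3, random_state=0, n_init=10).fit_predict(X)
print("cluster sizes:", [int((clusters == k).sum()) for k in range(3)])
```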

A brief summary of the above ML methods and how they work is presented in the slides below.

Some of the business applications of these ML methods can be classified as shown in the visual below.

[Visual: business applications of ML methods]

As data becomes the new oil that drives virtual machines, I will conclude with the quote below:

“Without data you’re just a person with an opinion.” – W. Edwards Deming

Source: datasciencecentral.com

Accuracy and Speed of Customer Service Improved with AI


Artificial Intelligence (AI) and Machine Learning (ML) are becoming more commonplace in the workplace than ever before, making it possible for customer service to improve significantly in both speed and accuracy. Businesses already taking advantage of AI and ML are ahead of the game, and those that are not have already fallen behind. Most businesses have ML in mind: 90% of CIOs interviewed said they were either already using ML or planned to incorporate it into their business model very soon. Here are some reasons you should start working with an artificial intelligence company as soon as possible.

Automation creates efficiency

Many customer service tasks can be automated to save time. For example, customers may text to ask about your store hours or return policy 20 times a day. With AI, those questions are answered immediately, allowing your customer service agents to focus on tasks that require human judgment instead of wasting time on mundane, repetitive questions. AI and ML are also very helpful with other routine tasks such as paperwork.
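A hypothetical sketch of this kind of automation is shown below: an incoming message is matched against a small FAQ using TF-IDF similarity and answered with a canned response, falling back to a human agent when nothing matches well. Production systems use far richer language models; the questions, answers, and threshold here are made up.

```python
# Hypothetical FAQ automation: match a customer message against known
# questions and return the canned answer, or hand off to a human agent.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

faq = {
    "What are your store hours?": "We are open 9am-6pm, Monday through Saturday.",
    "What is your return policy?": "Items can be returned within 30 days with a receipt.",
    "Do you ship internationally?": "Yes, we ship to most countries within 7-10 days.",
}

questions = list(faq.keys())
vectorizer = TfidfVectorizer().fit(questions)
question_vectors = vectorizer.transform(questions)

def answer(message: str, threshold: float = 0.3) -> str:
    similarity = cosine_similarity(vectorizer.transform([message]), question_vectors)[0]
    best = similarity.argmax()
    if similarity[best] < threshold:
        return "Let me connect you with a customer service agent."
    return faq[questions[best]]

print(answer("What are the store hours on Saturday?"))  # canned hours answer
print(answer("Can I return an item for a refund?"))     # canned returns answer
print(answer("My package arrived damaged"))             # hand-off to a human
```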

Accurate measurements and reports

To determine which methods of communication are effective, your business should be running reports and measuring the effectiveness of what you are doing. With ML in place, you can run these reports and then use that data to create more effective AI programs for your customer service department.

Automate paperwork

Maybe it used to take your customer service representatives half their day to complete all the paperwork associated with each call or text they took, but it was something your business dealt with to ensure you got enough information. Fortunately, the right ML and AI can do that for you, making it possible for your customer service team to get to customers faster and be more efficient with the time they have.

Use ML for complex decisions

Many people know that AI and ML can be used to answer simple and mundane questions, but it is more than that. ML can actually be very helpful in making complex decisions, as 52% of CIOs said they already are using it for that very purpose.

Get decision automation

Another great benefit of using AI in your customer service is that it can help with decision automation. Over the next three years, AI in customer service is expected to drastically improve both the speed and accuracy of decisions and to drive top-line growth.

Improve customer satisfaction

Customers are just as annoyed by long wait times and drawn-out customer service experiences as businesses are. AI can help improve the speed of the interaction, ensuring the customer gets the right information as quickly as possible or is directed to the right customer service representative. There is almost nothing worse than waiting on hold for 20 minutes to talk to someone, only to be transferred around and put on hold again. Rather than having frustrated employees and frustrated customers, use AI and ML technology to improve your customer satisfaction.

Source: becominghuman.ai

What Is Machine Learning?

Machine Learning for Dummies


Amazon uses it. Target uses it. Google uses it. “It” is machine learning, and it’s revolutionizing the way companies do business worldwide.

Machine learning is the ability for computer programs to analyze big data, extract information automatically, and learn from it. With 250 million active customers and tens of millions of products, Amazon’s machine learning makes accurate product recommendations based on the customer’s browsing and purchasing behavior almost instantly. No humans could do that.

Target uses machine learning to predict the offline buying behaviors of shoppers. A memorable case study highlights how Target knew a high school girl was pregnant before her parents did.

Google’s driverless cars are using machine learning to make our roads safer, and IBM’s Watson is making waves in healthcare with its machine learning and cognitive computing power.

Is your business next? Can you think of any deep data analysis or predictions that your company can produce? What impact would it have on your business’s bottom line, or how could it give you a competitive edge?

Why Is Machine Learning Important?

Data is being generated faster than at any other time in history. We are now at a point where data analysis cannot be done manually due to the sheer amount of data. This has driven the rise of ML — the ability for computer programs to analyze big data and extract information automatically.

The purpose of machine learning is to produce more positive outcomes with increasingly precise predictions. These outcomes are defined by what matters most to you and your company, such as higher sales and increased efficiency.

Every time you search on Google for a local service, you are feeding valuable data into Google’s machine learning algorithms. This allows Google to produce increasingly relevant rankings for local businesses that provide that service.

Big Big Data

It’s important to remember that the data itself will not produce anything; it’s critical to draw accurate insights from that data. The success of machine learning depends on pairing the right learning algorithm with accurate data sets, allowing a machine to obtain the most useful insights possible from the information provided. As with human data analysts, one model may catch an error that another could miss.

Digital Transformation

Machine learning and digital technologies are disrupting every industry. According to Gartner, “Smart machines will enter mainstream adoption by 2021.” Adopting early may provide your organization with a major competitive edge. Personally, I’m extremely excited by the trend and recently spent time at Harvard attending its Competing on Business Analytics and Big Data program along with 60 senior global executives from various industries.

Interested In Bringing The Power Of Machine Learning To Your Company?

Here are my recommendations to get started with the help of the right tools and experts:

  1. Secure all of the past data you have collected (offline and online sales data, accounting, customer information, product inventory, etc.). In case you might think your company doesn’t generate enough data to require machine learning, I can assure you that there is more data out there than you think, starting with general industry data. Next, think about how you can gather even more data points from all silos of your organization and elsewhere, like chatter about your brand on social media.
  2. Identify the business insights that you would benefit from most. For example, some companies are using learning algorithms for sales lead scoring (see the sketch after this list).
  3. Create a strategy with clear executables to produce the desired outcomes such as fraud protection, higher sales, increased profit margin and the ability to predict customer behavior. Evaluate and revisit this strategy regularly.
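Here is a hypothetical sketch of the lead-scoring example from point 2: train a classifier on historical leads (their attributes and whether they converted) and then rank new leads by predicted conversion probability. All of the data and feature names below are synthetic and purely illustrative.

```python
# Hypothetical lead scoring: learn conversion probability from historical
# leads, then score new leads for the sales team. Data here is synthetic.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
leads = pd.DataFrame({
    "pages_viewed": rng.poisson(5, n),
    "emails_opened": rng.poisson(2, n),
    "company_size": rng.integers(1, 500, n),
})

# Synthetic ground truth: engagement drives conversion
logit = 0.3 * leads["pages_viewed"] + 0.5 * leads["emails_opened"] - 3
converted = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(leads, converted, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Score new leads: a higher probability means a warmer lead
scores = model.predict_proba(X_test)[:, 1]
print("top 5 lead scores:", np.round(np.sort(scores)[::-1][:5], 2))
```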

Source: Forbes

Machine Learning and Prediction in Medicine — Beyond the Peak of Inflated Expectations

Big data, we have all heard, promise to transform health care with the widespread capture of electronic health records and high-volume data streams from sources ranging from insurance claims and registries to personal genomics and biosensors. Artificial-intelligence and machine-learning predictive algorithms, which can already automatically drive cars, recognize spoken language, and detect credit card fraud, are the keys to unlocking the data that can precisely inform real-time decisions. But in the “hype cycle” of emerging technologies, machine learning now rides atop the “peak of inflated expectations.”

Prediction is not new to medicine. From risk scores to guide anticoagulation (CHADS2) and the use of cholesterol medications (ASCVD) to risk stratification of patients in the intensive care unit (APACHE), data-driven clinical predictions are routine in medical practice. In combination with modern machine learning, clinical data sources enable us to rapidly generate prediction models for thousands of similar clinical questions. From early-warning systems for sepsis to superhuman imaging diagnostics, the potential applicability of these approaches is substantial.
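For context, the risk scores mentioned above are often simple, hand-built point systems rather than learned models. The sketch below implements the CHADS2 stroke-risk score as it is commonly described (one point each for congestive heart failure, hypertension, age 75 or older, and diabetes; two points for prior stroke or TIA); it is an illustration of how such scores work, not clinical guidance.

```python
# CHADS2 as commonly described: a hand-built point system, in contrast to the
# learned prediction models discussed in this article. Illustrative only.
def chads2(heart_failure: bool, hypertension: bool, age: int,
           diabetes: bool, prior_stroke_or_tia: bool) -> int:
    score = 0
    score += 1 if heart_failure else 0
    score += 1 if hypertension else 0
    score += 1 if age >= 75 else 0
    score += 1 if diabetes else 0
    score += 2 if prior_stroke_or_tia else 0
    return score

# Example: hypertensive, diabetic 80-year-old with no other risk factors -> 3
print(chads2(heart_failure=False, hypertension=True, age=80,
             diabetes=True, prior_stroke_or_tia=False))
```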

Yet there are problems with real-world data sources. Whereas conventional approaches are largely based on data from cohorts that are carefully constructed to mitigate bias, emerging data sources are typically less structured, since they were designed to serve a different purpose (e.g., clinical care and billing). Issues ranging from patient self-selection to confounding by indication to inconsistent availability of outcome data can result in inadvertent bias, and even racial profiling, in machine predictions. Awareness of such challenges may keep the hype from outpacing the hope for how data analytics can improve medical decision making.

Machine-learning methods are particularly suited to predictions based on existing data, but precise predictions about the distant future are often fundamentally impossible. Prognosis models for HER2-positive breast cancer had to be inverted in the face of targeted therapies, and the predicted efficacy of influenza vaccination varies with disease prevalence and community immunization rates. Given that the practice of medicine is constantly evolving in response to new technology, epidemiology, and social phenomena, we will always be chasing a moving target.

The rise and fall of Google Flu remind us that forecasting an annual event on the basis of 1 year of data is effectively using only a single data point and thus runs into fundamental time-series problems. Yet if the future will not necessarily resemble the past, simply accumulating mass data over time has diminishing returns. Research into decision-support algorithms that automatically learn inpatient medical practice patterns from electronic health records reveals that accumulating multiple years of historical data is worse than simply using the most recent year of data. When our goal is learning how medicine should be practiced in the future, the relevance of clinical data decays with an effective “half-life” of about 4 months. To assess the usefulness of prediction models, we must evaluate them not on their ability to recapitulate historical trends, but instead on their accuracy in predicting future events.
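To make the half-life figure concrete, the small sketch below down-weights training examples by age using exponential decay with a four-month half-life. The weighting scheme is only an illustration of the idea, not the method used in the cited research.

```python
# Exponential recency weighting: with a 4-month half-life, a one-year-old
# example carries only about 12.5% of the weight of a fresh one.
import numpy as np

HALF_LIFE_MONTHS = 4.0

def recency_weight(age_months: np.ndarray) -> np.ndarray:
    return 0.5 ** (age_months / HALF_LIFE_MONTHS)

ages = np.array([0, 4, 8, 12, 24])  # age of each training example, in months
print(dict(zip(ages.tolist(), np.round(recency_weight(ages), 3).tolist())))
# {0: 1.0, 4: 0.5, 8: 0.25, 12: 0.125, 24: 0.016}
```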

Although machine-learning algorithms can improve the accuracy of prediction over the use of conventional regression models by capturing complex, nonlinear relationships in the data, no amount of algorithmic finesse or computing power can squeeze out information that is not present. That’s why clinical data alone have relatively limited predictive power for hospital readmissions that may have more to do with social determinants of health.

The apparent solution is to pile on greater varieties of data, including anything from sociodemographics to personal genomics to mobile-sensor readouts to a patient’s credit history and Web-browsing logs. Incorporating the correct data stream can substantially improve predictions, but even with a deterministic (nonrandom) process, chaos theory explains why even simple nonlinear systems cannot be precisely predicted into the distant future. The so-called butterfly effect refers to the future’s extreme sensitivity to initial conditions. Tiny variations, which seem dismissible as trivial rounding errors in measurements, can accumulate into massively different future events. Identical twins with the same observable demographic characteristics, lifestyle, medical care, and genetics necessarily generate the same predictions — but can still end up with completely different real outcomes.

Though no method can precisely predict the date you will die, for example, that level of precision is generally not necessary for predictions to be useful. By reframing complex phenomena in terms of limited multiple-choice questions (e.g., Will you have a heart attack within 10 years? Are you more or less likely than average to end up back in the hospital within 30 days?), predictive algorithms can operate as diagnostic screening tests to stratify patient populations by risk and inform discrete decision making.

Research continues to improve the accuracy of clinical predictions, but even a perfectly calibrated prediction model may not translate into better clinical care. An accurate prediction of a patient outcome does not tell us what to do if we want to change that outcome — in fact, we cannot even assume that it’s possible to change the predicted outcomes.

Machine-learning approaches are powered by identification of strong, but theory-free, associations in the data. Confounding makes it a substantial leap in causal inference to identify modifiable factors that will actually alter outcomes. It is true, for instance, that palliative care consults and norepinephrine infusions are highly predictive of patient death, but it would be irrational to conclude that stopping either will reduce mortality. Models accurately predict that a patient with heart failure, coronary artery disease, and renal failure is at high risk for postsurgical complications, but they offer no opportunity for reducing that risk (other than forgoing the surgery). Moreover, many such predictions are “highly accurate” mainly for cases whose likely outcome is already obvious to practicing clinicians. The last mile of clinical implementation thus ends up being the far more critical task of predicting events early enough for a relevant intervention to influence care decisions and outcomes.

With machine learning situated at the peak of inflated expectations, we can soften a subsequent crash into a “trough of disillusionment” by fostering a stronger appreciation of the technology’s capabilities and limitations. Before we hold computerized systems (or humans) up against an idealized and unrealizable standard of perfection, let our benchmark be the real-world standards of care whereby doctors grossly misestimate the positive predictive value of screening tests for rare diagnoses, routinely overestimate patient life expectancy by a factor of 3, and deliver care of widely varied intensity in the last 6 months of life.
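The point about misestimating positive predictive value is easy to see with a short calculation. With illustrative numbers (not taken from the article), even a test with 99 percent sensitivity and 95 percent specificity, applied to a condition with 0.1 percent prevalence, yields a positive predictive value of only about 2 percent:

```python
# Why PPV is easy to misjudge for rare conditions: Bayes' rule with
# illustrative (not article-sourced) numbers.
sensitivity = 0.99
specificity = 0.95
prevalence = 0.001

true_positives = sensitivity * prevalence
false_positives = (1 - specificity) * (1 - prevalence)
ppv = true_positives / (true_positives + false_positives)
print(f"PPV = {ppv:.1%}")  # roughly 1.9%
```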

Although predictive algorithms cannot eliminate medical uncertainty, they already improve allocation of scarce health care resources, helping to avert hospitalization for patients with low-risk pulmonary embolisms (PESI) and fairly prioritizing patients for liver transplantation by means of MELD scores. Early-warning systems that once would have taken years to create can now be rapidly developed and optimized from real-world data, just as deep-learning neural networks routinely yield state-of-the-art image-recognition capabilities previously thought to be impossible.

Whether such artificial-intelligence systems are “smarter” than human practitioners makes for a stimulating debate — but is largely irrelevant. Combining machine-learning software with the best human clinician “hardware” will permit delivery of care that outperforms what either can do alone. Let’s move past the hype cycle and on to the “slope of enlightenment,” where we use every information and data resource to consistently improve our collective health.

Source: The New England Journal of Medicine

Past, Present and Future of AI / Machine Learning (Google I/O ’17)

 

We are in the middle of a major shift in computing that’s transitioning us from a mobile-first world into one that’s AI-first. AI will touch every industry and transform the products and services we use daily. Breakthroughs in machine learning have enabled dramatic improvements in the quality of Google Translate, made your photos easier to organize with Google Photos, and enabled improvements in Search, Maps, YouTube, and more.