AI-powered diagnostic device takes out Tricorder XPrize


Launched in 2012, the Qualcomm Tricorder XPrize tasked competing teams with developing a portable and versatile medical diagnostics machine that would give people “unprecedented access” to information about their health. The contest has now been run and won, with an AI-powered device awarded top honors and US$2.5 million for its trouble.

This particular XPrize – a series of competitions aimed at solving global issues – was created to encourage the development of a device that mimicked the iconic Tricorder from Star Trek. More specifically, this meant the ability to diagnose 13 conditions including anemia, diabetes, sleep apnea and urinary tract infections, along with the ability to detect three of five additional diseases: HIV, hypertension, melanoma, shingles and strep throat.

The competition was whittled down to ten finalists in 2014, and then again to two in December last year. The Taiwan-based Dynamical Biomarkers Group took second place with its prototype for a smartphone-based diagnostics device, but was beaten out by Final Frontier Medical Devices from Pennsylvania.

The winning machine is called DxtER and uses artificial intelligence to teach itself to diagnose medical conditions. It does this by using a set of non-invasive sensors to check vital signs, body chemistry and biological functions, and by drawing on data from clinical emergency medicine and actual patients. All this data is then synthesized by the AI engine and the device spits out a “quick and accurate assessment.”
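The article doesn't detail DxtER's internal workings, so the following is only a hypothetical sketch of the general pattern described above: non-invasive sensor readings turned into a feature vector and fed to a model trained on clinical data, which returns an assessment. All names, features, values and labels here are invented for illustration.

```python
# Hypothetical sketch only; DxtER's actual pipeline is not public.
# Pattern: sensor readings -> feature vector -> trained model -> assessment.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Assumed feature order: [heart_rate, spo2, systolic_bp, glucose_mg_dl, temp_c]
X_train = np.array([
    [72, 98, 118,  95, 36.8],   # toy record labeled "healthy"
    [88, 93, 145, 210, 37.1],   # toy record labeled "diabetes_risk"
    [95, 89, 150, 100, 38.4],   # toy record labeled "infection"
])
y_train = ["healthy", "diabetes_risk", "infection"]

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)

new_reading = np.array([[90, 94, 140, 190, 37.0]])   # one patient's sensor readings
print(model.predict(new_reading))                     # toy "assessment"
```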

In addition to the $2.5 million, the Final Frontier and Dynamical Biomarkers Group teams (which received a not-too-shabby $1 million for second place) will benefit from ongoing support and funding from XPrize and its partners. This includes R&D partnerships with the US Food and Drug Administration and the University of California San Diego. Meanwhile, Lowe’s Home Improvements has committed to distributing a consumer-ready version of the device, while the General Hospital of Maputo in Mozambique will provide it to its doctors, nurses and patients.

“We could not be more pleased with the quality of innovation and performance of the teams who competed, particularly with teams Final Frontier and Dynamical Biomarkers Group,” said Marcus Shingles, CEO of the XPrize Foundation. “Although this XPrize competition phase has ended, XPrize, Qualcomm Foundation, and a network of strategic partners are committed and excited to now be entering a new phase which will support these teams in their attempt to scale impact and the continued evolution of the Tricorder device through a series of new post-competition initiatives.”

Source: Newatlas.com

Machine Learning Provides Competitive Edge in Retail


A simple concept underlies machine learning: software can access a dataset and learn to produce results from it. That concept is also the most crucial element in providing meaningful, personalized service to customers.

In marshaling its resources, Amazon has begun to school retailers and search engines on how crucial an element machine learning is to a competitive environment.

Several Amazon advertising services are starting to rival Google in one of Google’s most significant businesses: online advertising. Amazon has long offered Product Display Ads that feature product images and text related to people’s searches. It has just launched a few more advanced advertising services, such as a cloud-based header bidding service, according to MarTech.

More to the point of machine learning, Amazon is now beefing up services related to this technology. The company has announced a new program that will allow developers to build and host most Alexa skills using Amazon Web Services for free. It also introduced three new AI services — Amazon Rekognition, which can perform image recognition, categorization, and facial analysis; Amazon Polly, a deep learning-driven text-to-speech (TTS) service; and Amazon Lex, a natural language and speech recognition program. The initiatives will bolster Amazon Web Services (AWS) against Microsoft and Google.
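As a rough sketch of how a developer might call two of these services from Python using the AWS SDK (boto3); the bucket, file and region names below are placeholders, not values from the article.

```python
import boto3

# Detect labels (objects, scenes) in an image stored in S3 with Amazon Rekognition.
rekognition = boto3.client("rekognition", region_name="us-east-1")
labels = rekognition.detect_labels(
    Image={"S3Object": {"Bucket": "my-example-bucket", "Name": "storefront.jpg"}},
    MaxLabels=5,
)
for label in labels["Labels"]:
    print(label["Name"], round(label["Confidence"], 1))

# Convert text to speech with Amazon Polly.
polly = boto3.client("polly", region_name="us-east-1")
speech = polly.synthesize_speech(Text="Your order has shipped.",
                                 OutputFormat="mp3", VoiceId="Joanna")
with open("order_update.mp3", "wb") as f:
    f.write(speech["AudioStream"].read())
```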

These product milestones for Amazon occur as the retail industry — the most frequent user of personalization ads — confronts a complex puzzle of technology and trends. Retailers face a massive distribution transformation and are shifting away from their traditional locations. For instance, mid-level malls are losing stores such as JCPenney, Sears, and Macy’s. Other retailers are experimenting with smaller stores and kiosks in an effort to adjust their floor space. Even once online-only retailers such as Warby Parker and — you guessed it, Amazon — have added small brick-and-mortar stores to establish a cohesive consumer experience.

Changing Consumer Behaviors
Changing consumer digital behaviors are adding to the challenge for retailers. Behaviors such as “webrooming” and “showrooming” have become more popular over the last five holiday shopping seasons and are now standard. Showrooming is when shoppers visit physical stores but use their smartphones to comparison shop, check competitors’ prices, and even place orders with a store’s competitor; webrooming is the reverse, researching products online before buying them in a store. The adoption of these behaviors meant retailers had to improve their mobile sites, launch apps, examine beacons, and consider virtual reality to create a customer experience that supports the brand and retains sales.

All of this has raised the bar for correlating data variety for trends — new sources, new contexts, and new intentions, all at different times. Managers who had just converted to the church of analytics now must listen to a new measurement sermon: where does machine learning fit within their business? And because of Amazon, retail managers are experiencing an urgency to learn machine learning protocols and also plan how to execute strategy in a world becoming dominated by a giant competitor.

Through its operational prowess, scale of services, and inroads into IoT devices and cloud solutions, Amazon has positioned itself to make a myriad of correlations between business metrics and technical metrics. I mentioned in recent posts that nascent search activity emerging from Amazon site visitors is rivaling search engines as a consumer starting point for researching products and services. Amazon can now take significant advantage with machine learning. Much of machine learning relies on data preparation, addressing data quality such as treating missing variables. Amazon has an opportunity to provide better context with the search conducted, and play a central role with partners who want to better understand how their products are received.
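To make the data-preparation point concrete, here is a small generic sketch (not Amazon’s pipeline) of treating missing variables before a model ever sees the data; the column names are illustrative.

```python
import pandas as pd
from sklearn.impute import SimpleImputer

# Toy retail-search records with gaps, standing in for messy clickstream data.
df = pd.DataFrame({
    "search_rank": [1, 3, None, 2],
    "price":       [19.99, None, 42.00, 15.50],
    "converted":   [1, 0, 0, 1],
})

# Fill missing numeric values with the column median before modeling.
imputer = SimpleImputer(strategy="median")
df[["search_rank", "price"]] = imputer.fit_transform(df[["search_rank", "price"]])
print(df)
```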

Amazon can then leverage its discoveries into meaningful customer and business value. A potential example is implementing tactics influenced by BizDevOps, a blend of front-end software development with business development and operations tactics. Its purpose is to align app development to customer and business value in upfront planning. That alignment has become critical as analytics has shifted from singular inferences from website activity into a central measurement of various activity across digital media and IoT devices. If you do a Google search, you’ll find more than a few posts on the topic of BizDevOps mentioning Amazon as a model example.

Retail’s Machine Learning Future
Amazon’s potential with machine learning is a long way from the early years, when Wall Street analysts criticized the then books-only retailer over its quarterly losses. Amazon’s machine learning potential also has far-reaching implications.

Amazon’s interest in personalization ads and its growing machine learning prowess are tantalizing to supporters of programmatic advertising, which aims messages at, and gains access to, highly targeted, highly valued audiences. Marketers can better predict how ad creative, products, and services can be combined to appeal to customers in different cycles of the customer experience or a sale. Amazon can ultimately play a central role with platform partners who want to better understand how their products are received.

If this Amazon news makes your strategic team feel that they are behind the curve, take heart. The good news is that machine learning is in its early stages, with retailers still seeking ways to integrate data and the devices that produce it. Retailers turn to Google for search and paid ads because it covers a large number of industries, so Amazon will remain a retail niche for now.

But if business managers want to find potential success like Amazon has found, they must look internally with technology teams to see how machine learning techniques can be the operational glue between business resources and personalized experience for customers.

Source: allanalytics.com

Machine learning creates living atlas of the planet

Machine learning, combined with satellite imagery and Cloud computing, is enabling understanding of the world and making the food supply chain more efficient.


There are more than 7 billion people on Earth now, and roughly one in eight does not have enough to eat. According to the World Bank, the human population will hit an astounding 9 billion by 2050. With a rapidly increasing population, the growing need for food is becoming a grave concern.

The burden is now on technology to help avert the looming food crises of the coming decades. Fortunately, there is no shortage of ideas, and innovative minds are seeking solutions to combat this problem.

Machine learning to the rescue
Descartes Labs, a Los Alamos, New Mexico-based start-up is using machine learning to analyze satellite imagery to predict food supplies months in advance of current methods employed by the US government, a technique that could help predict food crises before they happen.

Descartes Labs pulls images from public databases like NASA’s Landsat and MODIS, ESA’s Sentinel missions and other private satellite imagery providers, including Planet. It also monitors public datasets on Google Earth and Amazon Web Services. This continuous, up-to-date imagery is referred to as the ‘Living Atlas of the Planet’.

The commercial atlas, designed to provide real-time forecasts of commodity agriculture, uses decades of remotely sensed images stored on the Cloud to offer land use and land change analysis.

Descartes Labs cross-references the satellite information with other relevant data such as weather forecasts and prices of agricultural products. This data is then entered into the machine learning software, which tracks and calculates future food supplies with impressive accuracy. By processing these images and data with its machine learning algorithms, Descartes Labs extracts remarkably in-depth information, such as distinguishing individual crop fields and determining a specific field’s crop by analyzing how sunlight reflects off its surface. After the type of crop has been established, the machine learning program then monitors the field’s production levels.
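Descartes Labs hasn’t published its exact model here, so the following is only a toy illustration of the general idea the paragraph describes: per-field spectral reflectance features fed to a supervised classifier that predicts the crop type. The bands, values and labels are invented.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Toy per-field features: mean reflectance in [red, near-infrared, shortwave-IR] bands.
X_train = np.array([
    [0.08, 0.45, 0.20],   # field labeled "corn" in this invented training set
    [0.10, 0.50, 0.22],   # corn
    [0.12, 0.35, 0.30],   # soy
    [0.14, 0.33, 0.28],   # soy
])
y_train = ["corn", "corn", "soy", "soy"]

clf = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Classify a new field from its reflectance signature.
print(clf.predict([[0.09, 0.47, 0.21]]))
```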

“With machine learning techniques, we look at tons of pixels from satellites, and that tells us what’s growing,” says Mark Johnson, CEO and Co-founder, Descartes Labs.

How to tackle a data deluge
The total database includes approximately a petabyte — or 10¹⁵ bytes — of data. Descartes has reprocessed the entire 40-year archive, starting with the first Landsat satellite imagery, to offer a completely cloud-free view of land use and land change and to create this ‘Living Atlas of the Planet’.

The data platform is said to have analyzed over 2.8 quadrillion multispectral pixels for this. It enables processing at rates of petabytes per day, using multi-source data to produce calibrated, georeferenced imagery stacks at desired points in time and space. These can be used for pixel-level or global-scale analysis, or for visualizing or measuring changes such as floods or changes in the condition of crops. “The platform is built for analysis. It is not built to store the data. This is a vastly different philosophy than traditional data platforms,” says Daniela Moody, Remote Sensing and Machine Learning Specialist, Descartes Labs.

The platform produces imagery for specific locations and times at different wavelengths, offering unique insights into land cover changes over broad swaths of land. For instance, the NDVI (normalized difference vegetation index) reveals live green vegetation using a combination of red and near-infrared spectral bands. Combining NDVI with visible spectral bands allows a user to examine the landscape through many lenses. The platform offers both Web and API interfaces: the Web interface offers options for visualizing data, while the API allows the user to interact directly with the data for specific analyses. The platform’s scalable Cloud infrastructure quickly ingests, analyzes, and creates predictions from the imagery.
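NDVI itself is a simple band ratio, (NIR - Red) / (NIR + Red); a minimal NumPy sketch with made-up reflectance values:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index: (NIR - Red) / (NIR + Red)."""
    nir = np.asarray(nir, dtype="float64")
    red = np.asarray(red, dtype="float64")
    return (nir - red) / (nir + red + 1e-9)   # tiny epsilon avoids division by zero

# Toy 2x2 reflectance arrays standing in for satellite band rasters.
nir_band = [[0.50, 0.45], [0.30, 0.20]]
red_band = [[0.08, 0.10], [0.15, 0.18]]
print(ndvi(nir_band, red_band))   # values near +1 indicate dense green vegetation
```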

Change is the only constant
The ability to have such fine-grained data on agricultural production will help in making the food supply chain more efficient. As Descartes Labs adds more geospatial data to its already robust database of earth imagery, these models will get even more accurate. Cloud computing and storage, combined with recent advances in machine learning and open software, are enabling understanding of the world at an unprecedented scale and detail.

Earth is not a static place, and researchers who study it need tools that keep up with the constant change. “We designed this platform to answer the problems of commodity agriculture,” Moody adds, “and in doing so we created a platform that is incredible and allows us to have a living atlas of the world.”

Source: geospatialworld.net

Our Fear of Artificial Intelligence

A true AI might ruin the world—but that assumes it’s possible at all.


Years ago I had coffee with a friend who ran a startup. He had just turned 40. His father was ill, his back was sore, and he found himself overwhelmed by life. “Don’t laugh at me,” he said, “but I was counting on the singularity.”

My friend worked in technology; he’d seen the changes that faster microprocessors and networks had wrought. It wasn’t that much of a step for him to believe that before he was beset by middle age, the intelligence of machines would exceed that of humans—a moment that futurists call the singularity. A benevolent superintelligence might analyze the human genetic code at great speed and unlock the secret to eternal youth. At the very least, it might know how to fix your back.

But what if it wasn’t so benevolent? Nick Bostrom, a philosopher who directs the Future of Humanity Institute at the University of Oxford, describes the following scenario in his book Superintelligence, which has prompted a great deal of debate about the future of artificial intelligence. Imagine a machine that we might call a “paper-clip maximizer”—that is, a machine programmed to make as many paper clips as possible. Now imagine that this machine somehow became incredibly intelligent. Given its goals, it might then decide to create new, more efficient paper-clip-manufacturing machines—until, King Midas style, it had converted essentially everything to paper clips.

No worries, you might say: you could just program it to make exactly a million paper clips and halt. But what if it makes the paper clips and then decides to check its work? Has it counted correctly? It needs to become smarter to be sure. The superintelligent machine manufactures some as-yet-uninvented raw-computing material (call it “computronium”) and uses that to check each doubt. But each new doubt yields further digital doubts, and so on, until the entire earth is converted to computronium. Except for the million paper clips.

Bostrom does not believe that the paper-clip maximizer will come to be, exactly; it’s a thought experiment, one designed to show how even careful system design can fail to restrain extreme machine intelligence. But he does believe that superintelligence could emerge, and while it could be great, he thinks it could also decide it doesn’t need humans around. Or do any number of other things that destroy the world. The title of chapter 8 is: “Is the default outcome doom?”

If this sounds absurd to you, you’re not alone. Critics such as the robotics pioneer Rodney Brooks say that people who fear a runaway AI misunderstand what computers are doing when we say they’re thinking or getting smart. From this perspective, the putative superintelligence Bostrom describes is far in the future and perhaps impossible.

Yet a lot of smart, thoughtful people agree with Bostrom and are worried now. Why?

Volition
The question “Can a machine think?” has shadowed computer science from its beginnings. Alan Turing proposed in 1950 that a machine could be taught like a child; John McCarthy, inventor of the programming language LISP, coined the term “artificial intelligence” in 1955. As AI researchers in the 1960s and 1970s began to use computers to recognize images, translate between languages, and understand instructions in normal language and not just code, the idea that computers would eventually develop the ability to speak and think—and thus to do evil—bubbled into mainstream culture. Even beyond the oft-referenced HAL from 2001: A Space Odyssey, the 1970 movie Colossus: The Forbin Project featured a large blinking mainframe computer that brings the world to the brink of nuclear destruction; a similar theme was explored 13 years later in WarGames. The androids of 1973’s Westworld went crazy and started killing.

When AI research fell far short of its lofty goals, funding dried up to a trickle, beginning long “AI winters.” Even so, the torch of the intelligent machine was carried forth in the 1980s and ’90s by sci-fi authors like Vernor Vinge, who popularized the concept of the singularity; researchers like the roboticist Hans Moravec, an expert in computer vision; and the engineer/entrepreneur Ray Kurzweil, author of the 1999 book The Age of Spiritual Machines. Whereas Turing had posited a humanlike intelligence, Vinge, Moravec, and Kurzweil were thinking bigger: when a computer became capable of independently devising ways to achieve goals, it would very likely be capable of introspection—and thus able to modify its software and make itself more intelligent. In short order, such a computer would be able to design its own hardware.

As Kurzweil described it, this would begin a beautiful new era. Such machines would have the insight and patience (measured in picoseconds) to solve the outstanding problems of nanotechnology and spaceflight; they would improve the human condition and let us upload our consciousness into an immortal digital form. Intelligence would spread throughout the cosmos.

You can also find the exact opposite of such sunny optimism. Stephen Hawking has warned that because people would be unable to compete with an advanced AI, it “could spell the end of the human race.” Upon reading Superintelligence, the entrepreneur Elon Musk tweeted: “Hope we’re not just the biological boot loader for digital superintelligence. Unfortunately, that is increasingly probable.” Musk then followed with a $10 million grant to the Future of Life Institute. Not to be confused with Bostrom’s center, this is an organization that says it is “working to mitigate existential risks facing humanity,” the ones that could arise “from the development of human-level artificial intelligence.”

No one is suggesting that anything like superintelligence exists now. In fact, we still have nothing approaching a general-purpose artificial intelligence or even a clear path to how it could be achieved. Recent advances in AI, from automated assistants such as Apple’s Siri to Google’s driverless cars, also reveal the technology’s severe limitations; both can be thrown off by situations that they haven’t encountered before. Artificial neural networks can learn for themselves to recognize cats in photos. But they must be shown hundreds of thousands of examples and still end up much less accurate at spotting cats than a child.

This is where skeptics such as Brooks, a founder of iRobot and Rethink Robotics, come in. Even if it’s impressive—relative to what earlier computers could manage—for a computer to recognize a picture of a cat, the machine has no volition, no sense of what cat-ness is or what else is happening in the picture, and none of the countless other insights that humans have. In this view, AI could possibly lead to intelligent machines, but it would take much more work than people like Bostrom imagine. And even if it could happen, intelligence will not necessarily lead to sentience. Extrapolating from the state of AI today to suggest that superintelligence is looming is “comparable to seeing more efficient internal combustion engines appearing and jumping to the conclusion that warp drives are just around the corner,” Brooks wrote recently on Edge.org. “Malevolent AI” is nothing to worry about, he says, for a few hundred years at least.

Insurance policy
Even if the odds of a superintelligence arising are very long, perhaps it’s irresponsible to take the chance. One person who shares Bostrom’s concerns is Stuart J. Russell, a professor of computer science at the University of California, Berkeley. Russell is the author, with Peter Norvig (a peer of Kurzweil’s at Google), of Artificial Intelligence: A Modern Approach, which has been the standard AI textbook for two decades.

“There are a lot of supposedly smart public intellectuals who just haven’t a clue,” Russell told me. He pointed out that AI has advanced tremendously in the last decade, and that while the public might understand progress in terms of Moore’s Law (faster computers are doing more), in fact recent AI work has been fundamental, with techniques like deep learning laying the groundwork for computers that can automatically increase their understanding of the world around them.

Because Google, Facebook, and other companies are actively looking to create an intelligent, “learning” machine, he reasons, “I would say that one of the things we ought not to do is to press full steam ahead on building superintelligence without giving thought to the potential risks. It just seems a bit daft.” Russell made an analogy: “It’s like fusion research. If you ask a fusion researcher what they do, they say they work on containment. If you want unlimited energy you’d better contain the fusion reaction.” Similarly, he says, if you want unlimited intelligence, you’d better figure out how to align computers with human needs.

Bostrom’s book is a research proposal for doing so. A superintelligence would be godlike, but would it be animated by wrath or by love? It’s up to us (that is, the engineers). Like any parent, we must give our child a set of values. And not just any values, but those that are in the best interest of humanity. We’re basically telling a god how we’d like to be treated. How to proceed?

Bostrom draws heavily on an idea from a thinker named Eliezer Yudkowsky, who talks about “coherent extrapolated volition”—the consensus-derived “best self” of all people. AI would, we hope, wish to give us rich, happy, fulfilling lives: fix our sore backs and show us how to get to Mars. And since humans will never fully agree on anything, we’ll sometimes need it to decide for us—to make the best decisions for humanity as a whole. How, then, do we program those values into our (potential) superintelligences? What sort of mathematics can define them? These are the problems, Bostrom believes, that researchers should be solving now. Bostrom says it is “the essential task of our age.”

For the civilian, there’s no reason to lose sleep over scary robots. We have no technology that is remotely close to superintelligence. Then again, many of the largest corporations in the world are deeply invested in making their computers more intelligent; a true AI would give any one of these companies an unbelievable advantage. They also should be attuned to its potential downsides and figuring out how to avoid them.

This somewhat more nuanced suggestion—without any claims of a looming AI-mageddon—is the basis of an open letter on the website of the Future of Life Institute, the group that got Musk’s donation. Rather than warning of existential disaster, the letter calls for more research into reaping the benefits of AI “while avoiding potential pitfalls.” This letter is signed not just by AI outsiders such as Hawking, Musk, and Bostrom but also by prominent computer scientists (including Demis Hassabis, a top AI researcher). You can see where they’re coming from. After all, if they develop an artificial intelligence that doesn’t share the best human values, it will mean they weren’t smart enough to control their own creations.

Source: MIT Technology Review

Whose story are you telling?

In an Age When ‘Storytelling’ Is King, Is Yours Based on Substance?

As researchers, we love a good quote. We use them to emphasize a point in the findings. They allow us to tell stories and dimensionalize our implications. But sometimes quotes taken from research can take on a life of their own. Without the right context, they can be misinterpreted. It might be difficult for clients to trust that a verbatim is representative of the sample (let alone the greater population). And then there’s bias … how easy it is for any human to create a biased narrative with just a little data and some imagination.

Yes, a biased narrative. The plague that can hit anyone in the research field. Researchers must mitigate the risk of biases in their research. They work hard to avoid selection bias, which is especially challenging within segmentation research but necessary. They avoid cognitive bias and analytical bias. And then there’s confirmation bias.

Need a refresher? Confirmation bias is the tendency to interpret new evidence as confirmation of one’s existing beliefs or theories. With little information, an observer can create a very complex narrative about motivations and emotions that might not be accurately representative of the research. (An example of how easy it is to create narratives is the Heider-Simmel experiment [Heider-Simmel Animation]. Go ahead, watch and reflect).

How does this relate to open ended survey questions?
Traditionally, researchers must read — or must rely on others to read and report to them — every open-ended response in a study and create a narrative of the key points and themes contained in that unstructured text. Their reports include verbatims or quotes from respondents that support their findings and conclusions. That’s why text analytics is so helpful. A researcher, who knows all about biases and does their best to mitigate them, can rely on the very statistical measures they tout to help them do their work better. With text analytics, researchers can find the important themes in the open-ended responses and quantify and validate that data.
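OdinText’s internals aren’t described here, so as a generic illustration of quantifying themes in open-ended responses, here is a small scikit-learn sketch that counts the most frequent terms across toy survey verbatims.

```python
from sklearn.feature_extraction.text import CountVectorizer

# Toy open-ended survey responses.
responses = [
    "The checkout was slow and the staff seemed rushed",
    "Great staff, but checkout took forever",
    "Loved the selection, will come back",
    "Slow checkout line, otherwise fine",
]

# Count simple candidate themes (unigrams and bigrams), ignoring common stop words.
vectorizer = CountVectorizer(stop_words="english", ngram_range=(1, 2))
counts = vectorizer.fit_transform(responses)

totals = counts.sum(axis=0).A1
top_terms = sorted(vectorizer.vocabulary_.items(), key=lambda kv: -totals[kv[1]])[:5]
for term, idx in top_terms:
    print(f"{term}: mentioned {totals[idx]} times")
```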

Text analytics – and in particular OdinText – lets researchers add both power and validity to their selected verbatims, and avoid bias by confirming that the stories they tell from the unstructured data are actually supported by the data.

In an age where so much emphasis is put on story telling, let’s not forget that stories are just that unless they are supported by data.

Next time you find yourself overwhelmed with unstructured data, and are tempted to just “tell a story”, please reach out. I’d love to show you with your own data how modern text analytics can ensure your story is based on fact!

Source: odintext.com

Deep Learning Is Going to Teach Us All the Lesson of Our Lives: Jobs Are for Machines


On December 2nd, 1942, a team of scientists led by Enrico Fermi came back from lunch and watched as humanity created the first self-sustaining nuclear reaction inside a pile of bricks and wood underneath a football field at the University of Chicago. Known to history as Chicago Pile-1, it was celebrated in silence with a single bottle of Chianti, for those who were there understood exactly what it meant for humankind, without any need for words.

Now, something new has occurred that, again, quietly changed the world forever. Like a whispered word in a foreign language, it was quiet in that you may have heard it, but its full meaning may not have been comprehended. However, it’s vital we understand this new language, and what it’s increasingly telling us, for the ramifications are set to alter everything we take for granted about the way our globalized economy functions, and the ways in which we as humans exist within it.

The language is a new class of machine learning known as deep learning, and the “whispered word” was a computer’s use of it to defeat, seemingly out of nowhere, three-time European Go champion Fan Hui, not once but five times in a row. Many who read this news considered it impressive, but in no way comparable to a match against Lee Se-dol, whom many consider to be one of the world’s best living Go players, if not the best. Imagining such a grand duel of man versus machine, China’s top Go player predicted that Lee would not lose a single game, and Lee himself confidently expected to lose one at most.

What actually ended up happening when they faced off? Lee went on to lose all but one of their match’s five games. An AI named AlphaGo is now a better Go player than any human and has been granted the “divine” rank of 9 dan. In other words, its level of play borders on godlike. Go has officially fallen to machine, just as Jeopardy did before it to Watson, and chess before that to Deep Blue.

So, what is Go? Very simply, think of Go as Super Ultra Mega Chess. This may still sound like a small accomplishment, another feather in the cap of machines as they continue to prove themselves superior in the fun games we play, but it is no small accomplishment, and what’s happening is no game.

AlphaGo’s historic victory is a clear signal that we’ve gone from linear to parabolic. Advances in technology are now so visibly exponential in nature that we can expect to see many more milestones crossed long before we would otherwise expect. And we are entirely unprepared for these exponential advances, most notably in forms of artificial intelligence limited to specific tasks, as long as we continue to insist upon employment as our primary source of income.

This may all sound like exaggeration, so let’s take a few decade steps back, and look at what computer technology has been actively doing to human employment so far:
[Chart: U.S. employment by job type (routine vs. nonroutine, cognitive vs. manual), with routine work stagnating after 1990]

Let the above chart sink in. Do not be fooled into thinking this conversation about the automation of labor is set in the future. It’s already here. Computer technology is already eating jobs and has been since 1990.

Routine Work
All work can be divided into four types: routine and nonroutine, cognitive and manual. Routine work is the same stuff day in and day out, while nonroutine work varies. Within these two varieties is the work that requires mostly our brains (cognitive) and the work that requires mostly our bodies (manual). Where once all four types saw growth, the work that is routine stagnated back in 1990. This happened because routine labor is easiest for technology to shoulder. Rules can be written for work that doesn’t change, and that work can be better handled by machines.

Distressingly, it’s exactly routine work that once formed the basis of the American middle class. It’s routine manual work that Henry Ford transformed by paying people middle class wages to perform, and it’s routine cognitive work that once filled US office spaces. Such jobs are now increasingly unavailable, leaving only two kinds of jobs with rosy outlooks: jobs that require so little thought, we pay people little to do them, and jobs that require so much thought, we pay people well to do them.

If we imagine our economy as a plane with four engines that can still fly on just two of them as long as they both keep roaring, we can avoid worrying about a crash. But what happens when our two remaining engines also fail? That’s what the advancing fields of robotics and AI represent to those final two engines, because for the first time, we are successfully teaching machines to learn.

Neural Networks
I’m a writer at heart, but my educational background happens to be in psychology and physics. I’m fascinated by both of them so my undergraduate focus ended up being in the physics of the human brain, otherwise known as cognitive neuroscience. I think once you start to look into how the human brain works, how our mass of interconnected neurons somehow results in what we describe as the mind, everything changes. At least it did for me.

As a quick primer in the way our brains function, they’re a giant network of interconnected cells. Some of these connections are short, and some are long. Some cells are only connected to one other, and some are connected to many. Electrical signals then pass through these connections, at various rates, and subsequent neural firings happen in turn. It’s all kind of like falling dominoes, but far faster, larger, and more complex. The result amazingly is us, and what we’ve been learning about how we work, we’ve now begun applying to the way machines work.

One of these applications is the creation of deep neural networks – kind of like pared-down virtual brains. They provide an avenue to machine learning that’s made incredible leaps that were previously thought to be much further down the road, if even possible at all. How? It’s not just the obvious growing capability of our computers and our expanding knowledge in the neurosciences, but the vastly growing expanse of our collective data, aka big data.

Big Data
Big data isn’t just some buzzword. It’s information, and when it comes to information, we’re creating more and more of it every day. In fact we’re creating so much that a 2013 report by SINTEF estimated that 90% of all information in the world had been created in the prior two years. This incredible rate of data creation is even doubling every 1.5 years thanks to the Internet, where in 2015 every minute we were liking 4.2 million things on Facebook, uploading 300 hours of video to YouTube, and sending 350,000 tweets. Everything we do is generating data like never before, and lots of data is exactly what machines need in order to learn to learn. Why?

Imagine programming a computer to recognize a chair. You’d need to enter a ton of instructions, and the result would still be a program detecting chairs that aren’t, and not detecting chairs that are. So how did we learn to detect chairs? Our parents pointed at a chair and said, “chair.” Then we thought we had that whole chair thing all figured out, so we pointed at a table and said “chair”, which is when our parents told us that was “table.” This is called reinforcement learning. The label “chair” gets connected to every chair we see, such that certain neural pathways are weighted and others aren’t. For “chair” to fire in our brains, what we perceive has to be close enough to our previous chair encounters. Essentially, our lives are big data filtered through our brains.

Deep Learning
The power of deep learning is that it’s a way of using massive amounts of data to get machines to operate more like we do without giving them explicit instructions. Instead of describing “chairness” to a computer, we instead just plug it into the Internet and feed it millions of pictures of chairs. It can then have a general idea of “chairness.” Next we test it with even more images. Where it’s wrong, we correct it, which further improves its “chairness” detection. Repetition of this process results in a computer that knows what a chair is when it sees it, for the most part as well as we can. The important difference though is that unlike us, it can then sort through millions of images within a matter of seconds.
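A toy Keras sketch of that loop follows: show the network labeled images, let training nudge the weights where its predictions are wrong, and repeat. The data and architecture here are placeholders (random pixels standing in for millions of real photos), not how any production system was actually trained.

```python
import numpy as np
from tensorflow import keras

# Placeholder data: 200 tiny 32x32 RGB "images" labeled 1 = chair, 0 = not chair.
# In practice this would be millions of real, labeled photographs.
images = np.random.rand(200, 32, 32, 3).astype("float32")
labels = np.random.randint(0, 2, size=(200,))

model = keras.Sequential([
    keras.layers.Conv2D(16, 3, activation="relu", input_shape=(32, 32, 3)),
    keras.layers.MaxPooling2D(),
    keras.layers.Flatten(),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),   # probability the image shows a chair
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Each pass over the data adjusts connection weights where predictions were wrong.
model.fit(images, labels, epochs=3, batch_size=32, verbose=0)
print(model.predict(images[:1]))   # "chairness" score for one image
```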

This combination of deep learning and big data has resulted in astounding accomplishments just in the past year. Aside from the incredible accomplishment of AlphaGo, Google’s DeepMind AI learned how to read and comprehend what it read through hundreds of thousands of annotated news articles. DeepMind also taught itself to play dozens of Atari 2600 video games better than humans, just by looking at the screen and its score, and playing games repeatedly. An AI named Giraffe taught itself how to play chess in a similar manner using a dataset of 175 million chess positions, attaining International Master level status in just 72 hours by repeatedly playing itself. In 2015, an AI even passed a visual Turing test by learning to learn in a way that enabled it to be shown an unknown character in a fictional alphabet, then instantly reproduce that letter in a way that was entirely indistinguishable from a human given the same task. These are all major milestones in AI.

However, despite all these milestones, when experts were asked to estimate when a computer would defeat a prominent Go player, the answer, even just months prior to Google’s announcement of AlphaGo’s victory, was essentially, “Maybe in another ten years.” A decade was considered a fair guess because Go is a game so complex that I’ll just let Ken Jennings of Jeopardy fame, another former champion human defeated by AI, describe it:

Go is famously a more complex game than chess, with its larger board, longer games, and many more pieces. Google’s DeepMind artificial intelligence team likes to say that there are more possible Go boards than atoms in the known universe, but that vastly understates the computational problem. There are about 10¹⁷⁰ board positions in Go, and only 10⁸⁰ atoms in the universe. That means that if there were as many parallel universes as there are atoms in our universe (!), then the total number of atoms in all those universes combined would still fall short of the possibilities on a single Go board.

Such confounding complexity makes any brute-force approach, scanning every possible move to determine the next best move, impossible. But deep neural networks get around that barrier the same way our own minds do: by learning to estimate what feels like the best move. We do this through observation and practice, and so did AlphaGo, by analyzing millions of professional games and playing itself millions of times. So the answer to when the game of Go would fall to machines wasn’t even close to ten years. The correct answer ended up being, “Any time now.”
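As a heavily simplified, hypothetical sketch of that idea (not AlphaGo’s actual architecture), a policy-style model scores every legal point on the board directly instead of enumerating continuations; here the weights are random, whereas AlphaGo learned its parameters from professional games and self-play.

```python
import numpy as np

rng = np.random.default_rng(0)

BOARD_POINTS = 19 * 19   # a Go board has 361 intersections

# Toy "policy network": a single linear layer followed by a softmax over all points.
weights = rng.normal(size=(BOARD_POINTS, BOARD_POINTS)) * 0.01

def suggest_move(board_vector, legal_mask):
    """Return the index of the most promising legal point for the current position."""
    logits = weights @ board_vector
    logits[~legal_mask] = -np.inf             # never choose an occupied point
    probs = np.exp(logits - logits[legal_mask].max())
    probs /= probs.sum()
    return int(np.argmax(probs))

board = rng.choice([-1.0, 0.0, 1.0], size=BOARD_POINTS)   # -1/0/+1 = white/empty/black
legal = board == 0.0
print("suggested point:", suggest_move(board, legal))
```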

Nonroutine Automation
Any time now. That’s the new go-to response in the 21st century for any question involving something new machines can do better than humans, and we need to try to wrap our heads around it.

We need to recognize what it means for exponential technological change to be entering the labor market space for nonroutine jobs for the first time ever. Machines that can learn mean nothing humans do as a job is uniquely safe anymore. From hamburgers to healthcare, machines can be created to successfully perform such tasks with no need or less need for humans, and at lower costs than humans.

Amelia is just one AI out there currently being beta-tested in companies right now. Created by IPsoft over the past 16 years, she’s learned how to perform the work of call center employees. She can learn in seconds what takes us months, and she can do it in 20 languages. Because she’s able to learn, she’s able to do more over time. In one company putting her through the paces, she successfully handled one of every ten calls in the first week, and by the end of the second month, she could resolve six of ten calls. Because of this, it’s been estimated that she can put 250 million people out of a job, worldwide.

Viv is an AI coming soon from the creators of Siri who’ll be our own personal assistant. She’ll perform tasks online for us, and even function as a Facebook News Feed on steroids by suggesting we consume the media she’ll know we’ll like best. In doing all of this for us, we’ll see far fewer ads, and that means the entire advertising industry — that industry the entire Internet is built upon — stands to be hugely disrupted.

A world with Amelia and Viv — and the countless other AI counterparts coming online soon — in combination with robots like Boston Dynamics’ next-generation Atlas is a world where machines can do all four types of jobs, and that demands serious societal reconsideration. If a machine can do a job instead of a human, should any human be forced at the threat of destitution to perform that job? Should income itself remain coupled to employment, such that having a job is the only way to obtain income, when jobs for many are entirely unobtainable? If machines are performing an increasing percentage of our jobs for us, and not getting paid to do them, where does that money go instead? And what does it no longer buy? Is it even possible that many of the jobs we’re creating don’t need to exist at all, and only do because of the incomes they provide? These are questions we need to start asking, and fast.

Decoupling Income From Work
Fortunately, people are beginning to ask these questions, and there’s an answer that’s building momentum. The idea is to put machines to work for us, but empower ourselves to seek out the forms of remaining work we as humans find most valuable, by simply providing everyone a monthly paycheck independent of work. This paycheck would be granted to all citizens unconditionally, and its name is universal basic income. By adopting UBI, aside from immunizing against the negative effects of automation, we’d also be decreasing the risks inherent in entrepreneurship and the sizes of the bureaucracies necessary to boost incomes. It’s for these reasons that it has cross-partisan support, and is even now in the beginning stages of possible implementation in countries like Switzerland, Finland, the Netherlands, and Canada.

The future is a place of accelerating changes. It seems unwise to continue looking at the future as if it were the past, assuming that just because new jobs have historically appeared, they always will. The WEF started 2016 off by estimating the creation of 2 million new jobs by 2020 alongside the elimination of 7 million. That’s a net loss of 5 million jobs, not a net gain. In a frequently cited paper, an Oxford study estimated the automation of about half of all existing jobs by 2033. Meanwhile self-driving vehicles, again thanks to machine learning, have the capability of drastically impacting all economies — especially the US economy, as I wrote last year about automating truck driving — by eliminating millions of jobs within a short span of time.

And now even the White House, in a stunning report to Congress, has put the probability at 83 percent that a worker making less than $20 an hour in 2010 will eventually lose their job to a machine. Even workers making as much as $40 an hour face odds of 31 percent. To ignore odds like these is tantamount to our now laughable “duck and cover” strategies for avoiding nuclear blasts during the Cold War.

All of this is why it’s those most knowledgeable in the AI field who are now actively sounding the alarm for basic income. During a panel discussion at the end of 2015 at Singularity University, prominent data scientist Jeremy Howard asked “Do you want half of people to starve because they literally can’t add economic value, or not?” before going on to suggest, ”If the answer is not, then the smartest way to distribute the wealth is by implementing a universal basic income.”

AI pioneer Chris Eliasmith, director of the Centre for Theoretical Neuroscience, warned about the immediate impacts of AI on society in an interview with Futurism, “AI is already having a big impact on our economies… My suspicion is that more countries will have to follow Finland’s lead in exploring basic income guarantees for people.”

Moshe Vardi expressed the same sentiment after speaking at the 2016 annual meeting of the American Association for the Advancement of Science about the emergence of intelligent machines, “we need to rethink the very basic structure of our economic system… we may have to consider instituting a basic income guarantee.”

Even Baidu’s chief scientist and founder of Google’s “Google Brain” deep learning project, Andrew Ng, during an onstage interview at this year’s Deep Learning Summit, expressed the shared notion that basic income must be “seriously considered” by governments, citing “a high chance that AI will create massive labor displacement.”

When those building the tools begin warning about the implications of their use, shouldn’t those wishing to use those tools listen with the utmost attention, especially when it’s the very livelihoods of millions of people at stake? If not then, what about when Nobel prize winning economists begin agreeing with them in increasing numbers?

No nation is yet ready for the changes ahead. High labor force non-participation leads to social instability, and a lack of consumers within consumer economies leads to economic instability. So let’s ask ourselves, what’s the purpose of the technologies we’re creating? What’s the purpose of a car that can drive for us, or artificial intelligence that can shoulder 60% of our workload? Is it to allow us to work more hours for even less pay? Or is it to enable us to choose how we work, and to decline any pay/hours we deem insufficient because we’re already earning the incomes that machines aren’t?

What’s the big lesson to learn, in a century when machines can learn?

I offer it’s that jobs are for machines, and life is for people.

Source: medium.com

4 Critical Factors in Digital Transformation Success


The term ‘Digital Transformation’ has taken its place in today’s business vocabulary. It’s on the lips of virtually every IT vendor, most management consultants and an increasing number of executives.

Much of the buzz around digital transformation has substance. Some is just plain hype. It is clear that the combination of social and mobile technologies, analytics & big data and the cloud – popularly known by the acronym SMAC – represent powerful forces that will shape the nature of competition in the foreseeable future. This trend is being further accelerated by the evolution of cognitive computing and robotics. Yet – it’s true that there are a lot of misconceptions around digital transformation. Digital transformation occurs when businesses are focused on integrating digital technologies, such as social, mobile, analytics and cloud, to transform how their businesses create value for customers. If there’s not a clear emphasis on customer value creation it’s not transformation. If the effort is simply focused on applying IT tools to the same business model – it’s not transformation. If the initiative is largely around cost reduction – in the context of the existing business model – then it is surely not digital transformation.

There is good reason to consider jumping on the digital transformation bandwagon. The pace of change today is astounding. According to research conducted by Singularity University, the average half-life of a business competency dropped from 30 years in 1984 to a low of 5 years in 2014. Equally noteworthy is that nearly 90% of the Fortune 500 companies listed in 1955 were no longer on the list in 2014, and the average life of an S&P company has decreased from 67 years to 15 years. Perhaps even more significant is the prediction by the authors of Exponential Organizations that as many as 40% of the companies listed on the S&P 500 in 2014 will cease to exist by 2024.

A new breed of upstart organizations is X times faster and better than traditional firms – these are the so-called “exponential organizations” that deploy digital technologies in new and novel ways. These companies have demonstrated the ability to reach a market cap of a billion dollars much faster than the typical Fortune 500 company, or even Google for that matter, and include firms such as Facebook, Tesla, Uber and WhatsApp.

The cumulative effect of digital technologies, exponential organizations, and external threats means that more traditional organizations will have to change the way they do business to compete in the evolving environment. That’s why it will become increasingly important for companies to focus on large scale change. As a 2015 MIT Sloan Management Review global study found “strategy, not technology, drives digital transformation.” Successful companies understood the importance of integrating digital technologies such as social, mobile, analytics, big data, the cloud and even cognitive computing in transforming how their business operated – while less advanced firms stressed solving discrete business problems using individual technologies.

Given this background, let’s consider the four critical factors in digital transformation:
1. Measure what matters to customers
2. Challenge the current operating model
3. Adopt an end-to-end business process based view of work
4. Make it easier for employees to serve customers

The fastest and arguably the most effective way of drawing attention to performance problems is to measure what matters to customers. This typically involves metrics around the timeliness and quality of the products and services provided, and these set the stage for viewing the business in the context of the value-creating, cross-functional business processes. Don’t underestimate the shift in management attention this involves: most companies still tend to focus on financial measures of performance and place insufficient emphasis on critical-to-customer metrics such as perfect order performance, variance to promise and responsiveness.
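As a small, hedged sketch of what measuring what matters to customers can look like in practice (the field names and records below are illustrative, not taken from any company mentioned here), this computes a perfect order rate and a mean time to repair from toy service data:

```python
import pandas as pd

# Toy service records: one row per completed order/service ticket.
tickets = pd.DataFrame({
    "opened":             pd.to_datetime(["2017-03-01 09:00", "2017-03-02 14:00", "2017-03-03 08:30"]),
    "repaired":           pd.to_datetime(["2017-03-01 15:00", "2017-03-03 10:00", "2017-03-03 12:30"]),
    "delivered_on_time":  [True, False, True],
    "delivered_complete": [True, True, True],
})

# Mean time to repair, in hours.
mttr_hours = (tickets["repaired"] - tickets["opened"]).dt.total_seconds().mean() / 3600

# Perfect order rate: share of orders that were both on time and complete.
perfect_order_rate = (tickets["delivered_on_time"] & tickets["delivered_complete"]).mean()

print(f"MTTR: {mttr_hours:.1f} hours, perfect order rate: {perfect_order_rate:.0%}")
```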

Then, in addition to thinking about just customer-touching actions, it’s useful to model both the customer journey and the high-level cross-functional business processes that create value for customers. As the old saying goes, a picture is worth a thousand words, yet most companies simply do not have pictures of the flow of work that creates value for customers.

The case of Tokyo Electron America (TEA), a subsidiary of semiconductor manufacturer Tokyo Electron Limited, represents one example of an organization that applied these four critical factors of digital transformation success. TEA set the objective of moving from a product-focused to a customer-outcome-focused support organization.

They realized a key success factor would be to transform how the field service group – a 500-strong team of field service engineers (FSEs) – worked in creating value for customers. Their digital initiative gave this group real-time access to relevant customer, product and service information.

TEA monitored critical-to-customer metrics such as “mean time to repair.” As a result of this program, TEA’s field service team realized major benefits: shorter mean time to repair, lower cost per service inquiry, reduced equipment downtime, and higher customer satisfaction scores.

While the specific factors will vary from one organization to the next, challenging the current business model, measuring what matters to customers, viewing work in terms of large cross-functional business processes, and making it easier for employees to serve customers are invariably four of the critical factors in digital transformation success.

Source: bpminstitute.org