Past, Present and Future of AI / Machine Learning (Google I/O ’17)

 

We are in the middle of a major shift in computing that’s transitioning us from a mobile-first world into one that’s AI-first. AI will touch every industry and transform the products and services we use daily. Breakthroughs in machine learning have enabled dramatic improvements in the quality of Google Translate, made your photos easier to organize with Google Photos, and enabled improvements in Search, Maps, YouTube, and more.

 

Our machines now have knowledge we'll never understand

The new availability of huge amounts of data, along with the statistical tools to crunch these numbers, offers a whole new way of understanding the world. Correlation supersedes causation, and science can advance even without coherent models, unified theories, or really any mechanistic explanation at all.


So wrote Wired’s Chris Anderson in 2008. It kicked up a little storm at the time, as Anderson, the magazine’s editor, undoubtedly intended. For example, an article in a journal of molecular biology asked, “…if we stop looking for models and hypotheses, are we still really doing science?” The answer clearly was supposed to be: “No.”

But today — not even a decade since Anderson’s article — the controversy sounds quaint. Advances in computer software, enabled by our newly capacious, networked hardware, are enabling computers not only to start without models — rule sets that express how the elements of a system affect one another — but to generate their own, albeit ones that may not look much like what humans would create. It’s even becoming a standard method, as any self-respecting tech company has now adopted a “machine-learning first” ethic.

We are increasingly relying on machines that derive conclusions from models that they themselves have created, models that are often beyond human comprehension, models that “think” about the world differently than we do.

But this comes with a price. This infusion of alien intelligence is bringing into question the assumptions embedded in our long Western tradition. We thought knowledge was about finding the order hidden in the chaos. We thought it was about simplifying the world. It looks like we were wrong. Knowing the world may require giving up on understanding it.

Models Beyond Understanding

In a series on machine learning, Adam Geitgey explains the basics, from which this new way of “thinking” is emerging:

[T]here are generic algorithms that can tell you something interesting about a set of data without you having to write any custom code specific to the problem. Instead of writing code, you feed data to the generic algorithm and it builds its own logic based on the data.

For example, you give a machine learning system thousands of scans of sloppy, handwritten 8s and it will learn to identify 8s in a new scan. It does so, not by deriving a recognizable rule, such as “An 8 is two circles stacked vertically,” but by looking for complex patterns of darker and lighter pixels, expressed as matrices of numbers — a task that would stymie humans. In a recent agricultural example, the same technique of numerical patterns taught a computer how to sort cucumbers.
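
A minimal sketch of that idea, assuming scikit-learn's bundled handwritten-digit dataset as a stand-in for the scans described above: a generic algorithm is handed labeled pixel data and builds its own decision logic, with no digit-specific rules written by hand.

```python
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

digits = load_digits()  # 8x8 pixel scans of handwritten digits, flattened to 64 numbers each
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)   # the generic algorithm builds its own logic from the pixel data
print("accuracy on unseen scans:", clf.score(X_test, y_test))
```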

Then you can take machine learning further by creating an artificial neural network that models in software how the human brain processes signals.[1] Nodes in an irregular mesh turn on or off depending on the data coming to them from the nodes connected to them; those connections have different weights, so some are more likely to flip their neighbors than others. Although artificial neural networks date back to the 1950s, they are truly coming into their own only now because of advances in computing power, storage, and mathematics. The results from this increasingly sophisticated branch of computer science can be deep learning that produces outcomes based on so many different variables under so many different conditions being transformed by so many layers of neural networks that humans simply cannot comprehend the model the computer has built for itself.
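
To make the passage above concrete, here is a toy forward pass in NumPy; the layer sizes and weights are arbitrary placeholders, not anything a real system would use. Each node sums the weighted signals arriving from the previous layer and "fires" through an activation function, and a trained network's "model" is nothing more than the values of all those weights.

```python
import numpy as np

def layer(inputs, weights, biases):
    """One layer of nodes: sum the weighted incoming signals, then 'fire' via tanh."""
    return np.tanh(inputs @ weights + biases)

rng = np.random.default_rng(0)
x = rng.normal(size=4)                      # signal arriving at the input nodes

# Connection weights between layers; in a trained network these numbers
# encode the model the machine has built for itself.
w1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
w2, b2 = rng.normal(size=(8, 3)), np.zeros(3)

hidden = layer(x, w1, b1)                   # first layer of nodes
output = layer(hidden, w2, b2)              # second layer: the network's "answer"
print(output)
```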

Yet it works. It's how Google's AlphaGo program came to defeat the third-highest ranked Go player in the world. Programming a machine to play Go is more than a little more daunting than sorting cukes, given that the game has 10^350 possible moves; there are 10^123 possible moves in chess, and 10^80 atoms in the universe. Google's hardware wasn't even as ridiculously overpowered as it might have been: It had only 48 processors, plus eight graphics processors that happen to be well-suited for the required calculations.

AlphaGo was trained on thirty million board positions that occurred in 160,000 real-life games, noting the moves taken by actual players, along with an understanding of what constitutes a legal move and some other basics of play. Using deep learning techniques, in which each layer of the neural network refines the patterns recognized by the previous layer, the system trained itself on which moves were most likely to succeed.

Although AlphaGo has proven itself to be a world class player, it can’t spit out practical maxims from which a human player can learn. The program works not by developing generalized rules of play — e.g., “Never have more than four sets of unconnected stones on the board” — but by analyzing which play has the best chance of succeeding given a precise board configuration. In contrast, Deep Blue, the dedicated IBM chess-playing computer, has been programmed with some general principles of good play. As Christof Koch writes in Scientific American, AlphaGo’s intelligence is in the weights of all those billions of connections among its simulated neurons. It creates a model that enables it to make decisions, but that model is ineffably complex and conditional. Nothing emerges from this mass of contingencies, except victory against humans.

As a consequence, if you, with your puny human brain, want to understand why AlphaGo chose a particular move, the “explanation” may well consist of the networks of weighted connections that then pass their outcomes to the next layer of the neural network. Your brain can’t remember all those weights, and even if it could, it couldn’t then perform the calculation that resulted in the next state of the neural network. And even if it could, you would have learned nothing about how to play Go, or, in truth, how AlphaGo plays Go—just as internalizing a schematic of the neural states of a human player would not constitute understanding how she came to make any particular move.

Go is just a game, so it may not seem to matter that we can't follow AlphaGo's decision path. But what do we say about the neural networks that are enabling us to analyze the interactions of genes in two-locus genetic diseases? How about the use of neural networks to discriminate the decay pattern of single and multiple particles at the Large Hadron Collider? How about the use of machine learning to help identify which of the 20 climate change models tracked by the Intergovernmental Panel on Climate Change is most accurate at any point? Such machines give us good results — for example: “Congratulations! You just found a Higgs boson!” — but we cannot follow their “reasoning.”

Clearly our computers have surpassed us in their power to discriminate, find patterns, and draw conclusions. That’s one reason we use them. Rather than reducing phenomena to fit a relatively simple model, we can now let our computers make models as big as they need to. But this also seems to mean that what we know depends upon the output of machines the functioning of which we cannot follow, explain, or understand.

Since we first started carving notches in sticks, we have used things in the world to help us to know that world. But never before have we relied on things that did not mirror human patterns of reasoning — we knew what each notch represented — and that we could not later check to see how our non-sentient partners in knowing came up with those answers. If knowing has always entailed being able to explain and justify our true beliefs — Plato’s notion, which has persisted for over two thousand years — what are we to make of a new type of knowledge, in which that task of justification is not just difficult or daunting but impossible?

Source: backchannel.com

Alexa learns to talk like a human with whispers, pauses & emotion


Amazon’s Alexa is going to sound more human. The company announced this week the addition of a new set of speaking skills for the virtual assistant, which will allow her to do things like whisper, take a breath to pause for emphasis, adjust the rate, pitch and volume of her speech, and more. She’ll even be able to “bleep” out words – which may not be all that human, actually, but is certainly clever.

These new tools were provided to Alexa app developers in the form of a standardized markup language called Speech Synthesis Markup Language, or SSML, which will let them code Alexa’s speech patterns into their applications. This will allow for the creation of voice apps – “Skills” on the Alexa platform – where developers can control the pronunciation, intonation, timing and emotion of their Skill’s text responses.

Alexa today already has a lot of personality – something that can help endear people to their voice assistants. Taking a cue from how Apple's Siri surprises people with her humorous responses, Alexa responds to questions about herself, tells jokes, answers "I love you," and will even sing you a song if you ask. But her voice can still sound robotic at times – especially if she's reading out longer phrases and sentences where there should be natural breaks and changes in tone.

As Amazon explains, developers could have used these new tools to make Alexa talk like E.T., but that's not really the point. To ensure developers make use of the tools as intended – to humanize Alexa's speaking patterns – Amazon has set limits on the amount of change developers are able to apply to the rate, pitch, and volume. (There will be no high-pitched squeaks and screams, I guess.)

In total, there are five new SSML tags that can be put into practice: whispers, expletive beeps, emphasis, sub (which lets Alexa say something other than what's written), and prosody, which controls the volume, pitch, and rate of speech.
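
As a rough illustration, here is how a Skill's response might exercise those five tags. The response structure follows the Alexa custom-skill JSON format, and the tag spellings and attribute values are a best-effort sketch rather than a copy of Amazon's documentation; the official SSML reference is the authority on exact syntax and allowed ranges.

```python
# Illustrative SSML exercising whisper, expletive beep, emphasis, sub, and prosody.
ssml = (
    "<speak>"
    '<amazon:effect name="whispered">I have a secret to tell you.</amazon:effect> '
    '<say-as interpret-as="expletive">darn</say-as>, that was '
    "<emphasis level='strong'>close</emphasis>. "
    '<sub alias="Seattle">SEA</sub> is lovely, '
    '<prosody rate="slow" pitch="low" volume="soft">is it not?</prosody>'
    "</speak>"
)

# A Skill would return the SSML inside its outputSpeech payload.
response = {
    "version": "1.0",
    "response": {
        "outputSpeech": {"type": "SSML", "ssml": ssml},
        "shouldEndSession": True,
    },
}
```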

To show how these changes could work in a real Alexa app, Amazon created a quiz game template that uses the new tags, but can also be modified by developers to test out Alexa’s new voice tricks.

In addition to the tags, Amazon also introduced “speechcons” to developers in the U.K. and Germany. These are special words and phrases that Alexa knows to express in a more colorful way to make her interactions engaging and personal. Some speechcons were already available in the U.S., for a number of words, like “abracadabra!,” “ahem,” “aloha,” “eureka!,” “gotcha,” “kapow,” “yay,” and many more.

But with their arrival in the new markets, Alexa Skill developers can use regionally specific terms such as “Blimey” and “Bob’s your uncle,” in the U.K. and “Da lachen ja die Hühner” and “Donnerwetter” in Germany.

There are now over 12,000 Alexa Skills on the marketplace, but it's unknown how many developers will actually put the new voice tags to work.

After all, this humanization of Alexa relies on having an active developer community. And that’s something that requires Amazon to do more than build out clever tricks to be put to use – it has to be able to support an app economy, where developers don’t just build things for fun, but because there are real businesses that can be run atop Amazon’s voice computing infrastructure.

Source: techcrunch.com

Machine learning algorithms surpass doctors at predicting heart attacks

Between 15 and 20 million people die every year from heart attacks and related illnesses worldwide, but now, artificial intelligence could help reduce that number with better predictive abilities.


Doctors are not clairvoyant, but it looks like technology is getting awfully close. Thanks to a team of researchers at the University of Nottingham in the United Kingdom, we could be closer than ever before to predicting the future when it comes to patients’ health risks. The scientists have managed to develop an algorithm that outperforms medical doctors when it comes to predicting heart attacks. And this, experts say, could save thousands or even millions of lives every year.

As it stands, around 20 million people fall victim each year to cardiovascular disease, which includes heart attacks, strokes, and blocked arteries. Today, doctors depend on guidelines similar to those of the American College of Cardiology/American Heart Association (ACC/AHA) in order to predict individuals' risks. These guidelines include factors like age, cholesterol level, and blood pressure.

Unfortunately, that’s often insufficient. “There’s a lot of interaction in biological systems,” Stephen Weng, an epidemiologist at the University of Nottingham, told Science Magazine. And some of them make less sense than others. “That’s the reality of the human body,” Weng continued. “What computer science allows us to do is to explore those associations.”

In employing computer science, Weng took the ACC/AHA guidelines and compared them to four machine-learning algorithms: random forest, logistic regression, gradient boosting, and neural networks. The artificially intelligent algorithms began to train themselves using existing data to look for patterns and create their own “rules.” Then, they began testing these guidelines against other records. And as it turns out, all four of these methods “performed significantly better than the ACC/AHA guidelines,” Science reports.
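
A hedged sketch of how such a four-way comparison might be set up in scikit-learn: the synthetic data below is only a stand-in for the study's patient records, and cross-validated ROC AUC is a common scoring choice for risk models rather than the paper's exact protocol.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

# Placeholder "patient records": imbalanced binary outcome, 30 features.
X, y = make_classification(n_samples=5000, n_features=30, weights=[0.9],
                           random_state=0)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "gradient boosting": GradientBoostingClassifier(random_state=0),
    "neural network": MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                                    random_state=0),
}

for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean AUC = {auc:.3f}")
```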

The most successful algorithm, the neural network, actually was correct 7.6 percent more often than the ACC/AHA method, and resulted in 1.6 percent fewer false positives. That means that in a sample size of around 83,000 patient records, 355 additional lives could have been saved.

“I can’t stress enough how important it is,” Elsie Ross, a vascular surgeon at Stanford University in Palo Alto, California, who was not involved with the work, told Science, “and how much I really hope that doctors start to embrace the use of artificial intelligence to assist us in care of patients.”

Source: digitaltrends.com

Machine learning creates living atlas of the planet

Machine learning, combined with satellite imagery and Cloud computing, is enabling understanding of the world and making the food supply chain more efficient.


There are more than 7 billion people on Earth now, and roughly one in eight people do not have enough to eat. According to the World Bank, the human population will hit an astounding 9 billion by 2050. With the population rapidly increasing, the growing need for food is becoming a grave concern.

The burden now falls on technology to avert the looming food crises of the coming decades. Fortunately, there is no shortage of ideas, and innovative minds are seeking solutions to this problem.

Machine learning to the rescue
Descartes Labs, a Los Alamos, New Mexico-based start-up, is using machine learning to analyze satellite imagery and predict food supplies months ahead of the current methods employed by the US government, a technique that could help predict food crises before they happen.

Descartes Labs pulls images from public databases like NASA's Landsat and MODIS and ESA's Sentinel missions, as well as from private satellite imagery providers, including Planet. It also draws on public datasets hosted on Google Earth and Amazon Web Services. This continuously updated imagery is referred to as the 'Living Atlas of the Planet'.

The commercial atlas, designed to provide real-time forecasts of commodity agriculture, uses decades of remotely sensed images stored on the Cloud to offer land use and land change analysis.

Descartes Labs cross-references the satellite information with other relevant data such as weather forecasts and prices of agricultural products. This data is then fed into machine-learning software that tracks and forecasts future food supplies with remarkable accuracy. By processing these images and data with its advanced machine-learning algorithms, Descartes Labs extracts remarkably detailed information, such as distinguishing individual crop fields and determining each field's crop by analyzing how sunlight reflects off its surface. Once the type of crop has been established, the machine-learning program then monitors the field's production levels.

“With machine learning techniques, we look at tons of pixels from satellites, and that tells us what’s growing,” says Mark Johnson, CEO and Co-founder, Descartes Labs.

How to tackle a data deluge
The total database includes approximately a petabyte — or 10^15 bytes — of data. Descartes has reprocessed the entire 40-year archive, starting with the first Landsat satellite imagery, to offer a completely cloud-free view of land use and land change and to create this 'Living Atlas of the Planet'.

The data platform is said to have analyzed over 2.8 quadrillion multispectral pixels for this. It enables processing at petabyte-per-day rates, using multi-source data to produce calibrated, georeferenced imagery stacks at desired points in time and space; these can be used for pixel-level or global-scale analysis, or for visualizing and measuring changes such as floods or shifts in crop condition. “The platform is built for analysis. It is not built to store the data. This is a vastly different philosophy than traditional data platforms,” says Daniela Moody, Remote Sensing and Machine Learning Specialist, Descartes Labs.

The platform produces imagery for specific locations and times at different wavelengths, offering unique insights into land-cover changes over broad swaths of land. For instance, the NDVI (normalized difference vegetation index) reveals live green vegetation using a combination of the red and near-infrared spectral bands. Combining NDVI with visible spectral bands allows a user to examine the landscape through many lenses. The platform offers both Web and API interfaces: the Web interface offers options for visualizing data, while the API allows the user to interact directly with the data for specific analyses. The platform's scalable Cloud infrastructure quickly ingests, analyzes, and creates predictions from the imagery.
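
NDVI itself is a standard index computed from those two bands as (NIR - red) / (NIR + red). A minimal sketch, with the small arrays standing in for two co-registered rasters of the same scene:

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized difference vegetation index from near-infrared and red bands."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + eps)   # values near +1 indicate dense green vegetation

# Placeholder reflectance values; real inputs would be full satellite rasters.
red_band = np.array([[0.10, 0.12], [0.30, 0.08]])
nir_band = np.array([[0.55, 0.60], [0.32, 0.50]])
print(ndvi(nir_band, red_band))
```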

Change is the only constant
The ability to have such fine-grained data on agricultural production will help in making the food supply chain more efficient. As Descartes Labs adds more geospatial data to its already robust database of earth imagery, these models will get even more accurate. Cloud computing and storage, combined with recent advances in machine learning and open software, are enabling understanding of the world at an unprecedented scale and detail.

Earth is not a static place, and researchers who study it need tools that keep up with the constant change. “We designed this platform to answer the problems of commodity agriculture,” Moody adds, “and in doing so we created a platform that is incredible and allows us to have a living atlas of the world.”

Source: geospatialworld.net

Deep Learning Is Going to Teach Us All the Lesson of Our Lives: Jobs Are for Machines


On December 2nd, 1942, a team of scientists led by Enrico Fermi came back from lunch and watched as humanity created the first self-sustaining nuclear reaction inside a pile of bricks and wood underneath a football field at the University of Chicago. Known to history as Chicago Pile-1, it was celebrated in silence with a single bottle of Chianti, for those who were there understood exactly what it meant for humankind, without any need for words.

Now, something new has occurred that, again, quietly changed the world forever. Like a whispered word in a foreign language, it was quiet in that you may have heard it, but its full meaning may not have been comprehended. However, it’s vital we understand this new language, and what it’s increasingly telling us, for the ramifications are set to alter everything we take for granted about the way our globalized economy functions, and the ways in which we as humans exist within it.

The language is a new class of machine learning known as deep learning, and the “whispered word” was a computer's use of it to defeat three-time European Go champion Fan Hui, seemingly out of nowhere, not once but five times in a row. Many who read this news considered it impressive, but in no way comparable to a match against Lee Se-dol, whom many consider to be one of the world's best living Go players, if not the best. Imagining such a grand duel of man versus machine, China's top Go player predicted that Lee would not lose a single game, and Lee himself confidently expected to lose one game at the most.

What actually ended up happening when they faced off? Lee went on to lose all but one of their match’s five games. An AI named AlphaGo is now a better Go player than any human and has been granted the “divine” rank of 9 dan. In other words, its level of play borders on godlike. Go has officially fallen to machine, just as Jeopardy did before it to Watson, and chess before that to Deep Blue.

So, what is Go? Very simply, think of Go as Super Ultra Mega Chess. This may still sound like a small accomplishment, another feather in the cap of machines as they continue to prove themselves superior in the fun games we play, but it is no small accomplishment, and what’s happening is no game.

AlphaGo’s historic victory is a clear signal that we’ve gone from linear to parabolic. Advances in technology are now so visibly exponential in nature that we can expect to see a lot more milestones crossed long before we would otherwise expect. We are entirely unprepared for these exponential advances, most notably in forms of artificial intelligence limited to specific tasks, as long as we continue to insist upon employment as our primary source of income.

This may all sound like exaggeration, so let’s take a few decade steps back, and look at what computer technology has been actively doing to human employment so far:
[Chart: growth of routine vs. nonroutine employment in the US since 1990]

Let the above chart sink in. Do not be fooled into thinking this conversation about the automation of labor is set in the future. It’s already here. Computer technology is already eating jobs and has been since 1990.

Routine Work
All work can be divided into four types: routine and nonroutine, cognitive and manual. Routine work is the same stuff day in and day out, while nonroutine work varies. Within these two varieties is the work that requires mostly our brains (cognitive) and the work that requires mostly our bodies (manual). Where once all four types saw growth, the routine stuff stagnated back in 1990. This happened because routine labor is easiest for technology to shoulder. Rules can be written for work that doesn’t change, and that work can be better handled by machines.

Distressingly, it’s exactly routine work that once formed the basis of the American middle class. It’s routine manual work that Henry Ford transformed by paying people middle class wages to perform, and it’s routine cognitive work that once filled US office spaces. Such jobs are now increasingly unavailable, leaving only two kinds of jobs with rosy outlooks: jobs that require so little thought, we pay people little to do them, and jobs that require so much thought, we pay people well to do them.

If we can now imagine our economy as a plane with four engines, where it can still fly on only two of them as long as they both keep roaring, we can avoid concerning ourselves with crashing. But what happens when our two remaining engines also fail? That’s what the advancing fields of robotics and AI represent to those final two engines, because for the first time, we are successfully teaching machines to learn.

Neural Networks
I’m a writer at heart, but my educational background happens to be in psychology and physics. I’m fascinated by both of them so my undergraduate focus ended up being in the physics of the human brain, otherwise known as cognitive neuroscience. I think once you start to look into how the human brain works, how our mass of interconnected neurons somehow results in what we describe as the mind, everything changes. At least it did for me.

As a quick primer in the way our brains function, they’re a giant network of interconnected cells. Some of these connections are short, and some are long. Some cells are only connected to one other, and some are connected to many. Electrical signals then pass through these connections, at various rates, and subsequent neural firings happen in turn. It’s all kind of like falling dominoes, but far faster, larger, and more complex. The result amazingly is us, and what we’ve been learning about how we work, we’ve now begun applying to the way machines work.

One of these applications is the creation of deep neural networks – kind of like pared-down virtual brains. They provide an avenue to machine learning that’s made incredible leaps that were previously thought to be much further down the road, if even possible at all. How? It’s not just the obvious growing capability of our computers and our expanding knowledge in the neurosciences, but the vastly growing expanse of our collective data, aka big data.

Big Data
Big data isn’t just some buzzword. It’s information, and when it comes to information, we’re creating more and more of it every day. In fact we’re creating so much that a 2013 report by SINTEF estimated that 90% of all information in the world had been created in the prior two years. This incredible rate of data creation is even doubling every 1.5 years thanks to the Internet, where in 2015 every minute we were liking 4.2 million things on Facebook, uploading 300 hours of video to YouTube, and sending 350,000 tweets. Everything we do is generating data like never before, and lots of data is exactly what machines need in order to learn to learn. Why?

Imagine programming a computer to recognize a chair. You’d need to enter a ton of instructions, and the result would still be a program detecting chairs that aren’t, and not detecting chairs that are. So how did we learn to detect chairs? Our parents pointed at a chair and said, “chair.” Then we thought we had that whole chair thing all figured out, so we pointed at a table and said “chair”, which is when our parents told us that was “table.” This is called reinforcement learning. The label “chair” gets connected to every chair we see, such that certain neural pathways are weighted and others aren’t. For “chair” to fire in our brains, what we perceive has to be close enough to our previous chair encounters. Essentially, our lives are big data filtered through our brains.

Deep Learning
The power of deep learning is that it’s a way of using massive amounts of data to get machines to operate more like we do without giving them explicit instructions. Instead of describing “chairness” to a computer, we instead just plug it into the Internet and feed it millions of pictures of chairs. It can then have a general idea of “chairness.” Next we test it with even more images. Where it’s wrong, we correct it, which further improves its “chairness” detection. Repetition of this process results in a computer that knows what a chair is when it sees it, for the most part as well as we can. The important difference though is that unlike us, it can then sort through millions of images within a matter of seconds.
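
A schematic sketch of that feed-examples-then-correct loop, using synthetic feature vectors and a simple hidden rule in place of real chair photos; an actual system would train a deep network on millions of labeled images rather than a small classifier on random numbers.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def labeled_batch(n):
    """Stand-in for labeled images: 1 means 'chair', 0 means 'not chair'."""
    X = rng.normal(size=(n, 64))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)   # the hidden pattern the learner must pick up
    return X, y

X_seen, y_seen = labeled_batch(1000)          # the examples we've been "pointed at" so far
model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
model.fit(X_seen, y_seen)

for round_ in range(3):                       # show new examples, collect mistakes, correct
    X_new, y_new = labeled_batch(500)
    wrong = model.predict(X_new) != y_new     # the "that's a table, not a chair" moments
    X_seen = np.vstack([X_seen, X_new[wrong]])
    y_seen = np.concatenate([y_seen, y_new[wrong]])
    model.fit(X_seen, y_seen)                 # refit with the corrections folded in
    print(f"round {round_}: {wrong.mean():.1%} of new examples needed correcting")
```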

This combination of deep learning and big data has resulted in astounding accomplishments just in the past year. Aside from the incredible accomplishment of AlphaGo, Google’s DeepMind AI learned how to read and comprehend what it read through hundreds of thousands of annotated news articles. DeepMind also taught itself to play dozens of Atari 2600 video games better than humans, just by looking at the screen and its score, and playing games repeatedly. An AI named Giraffe taught itself how to play chess in a similar manner using a dataset of 175 million chess positions, attaining International Master level status in just 72 hours by repeatedly playing itself. In 2015, an AI even passed a visual Turing test by learning to learn in a way that enabled it to be shown an unknown character in a fictional alphabet, then instantly reproduce that letter in a way that was entirely indistinguishable from a human given the same task. These are all major milestones in AI.

However, despite all these milestones, when experts were asked, even just months before Google announced AlphaGo’s victory, to estimate when a computer would defeat a prominent Go player, the answer was essentially, “Maybe in another ten years.” A decade was considered a fair guess because Go is a game so complex that I’ll let Ken Jennings of Jeopardy fame, another former human champion defeated by AI, describe it:

Go is famously a more complex game than chess, with its larger board, longer games, and many more pieces. Google’s DeepMind artificial intelligence team likes to say that there are more possible Go boards than atoms in the known universe, but that vastly understates the computational problem. There are about 10¹⁷⁰ board positions in Go, and only 10⁸⁰ atoms in the universe. That means that if there were as many parallel universes as there are atoms in our universe (!), then the total number of atoms in all those universes combined would be close to the possibilities on a single Go board.

Such confounding complexity makes any brute-force approach, scanning every possible move to determine the next best one, impossible. But deep neural networks get around that barrier in the same way our own minds do: by learning to estimate what feels like the best move. We do this through observation and practice, and so did AlphaGo, by analyzing millions of professional games and playing itself millions of times. So the answer to when the game of Go would fall to machines wasn’t even close to ten years. The correct answer ended up being, “Any time now.”
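
To make "estimating the best move" concrete, here is a toy sketch, emphatically not AlphaGo's actual architecture: a small, untrained policy-style network assigns every point on the board a score, and the program simply plays the highest-scoring legal point instead of enumerating all possible continuations.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(361, 64)), rng.normal(size=(64, 361))  # 19x19 board = 361 points

def move_scores(board_flat):
    """Score every board point as a potential next move for the current position."""
    hidden = np.tanh(board_flat @ W1)
    return hidden @ W2                      # higher score = move judged more promising

def choose_move(board_flat, legal_mask):
    scores = np.where(legal_mask, move_scores(board_flat), -np.inf)
    return int(np.argmax(scores))           # index of the best-looking legal point

board = np.zeros(361)                       # empty board; 1 = our stones, -1 = opponent's
legal = board == 0                          # every empty point is a legal candidate here
print("chosen point:", choose_move(board, legal))
```

In a trained system the weights would be learned from games rather than drawn at random, which is exactly the part no human can later unpack.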

Nonroutine Automation
Any time now. That’s the new go-to response in the 21st century for any question involving something new machines can do better than humans, and we need to try to wrap our heads around it.

We need to recognize what it means for exponential technological change to be entering the labor market space for nonroutine jobs for the first time ever. Machines that can learn mean that nothing humans do as a job is uniquely safe anymore. From hamburgers to healthcare, machines can be created to perform such tasks successfully, with less need for humans or none at all, and at lower cost than humans.

Amelia is just one AI currently being beta-tested in companies. Created by IPsoft over the past 16 years, she’s learned how to perform the work of call center employees. She can learn in seconds what takes us months, and she can do it in 20 languages. Because she’s able to learn, she’s able to do more over time. In one company putting her through the paces, she successfully handled one of every ten calls in the first week, and by the end of the second month, she could resolve six of ten calls. Because of this, it’s been estimated that she could put 250 million people out of a job, worldwide.

Viv, an AI coming soon from the creators of Siri, will be our own personal assistant. She’ll perform tasks online for us, and even function as a Facebook News Feed on steroids by suggesting we consume the media she knows we’ll like best. In doing all of this for us, we’ll see far fewer ads, and that means the entire advertising industry — the industry the entire Internet is built upon — stands to be hugely disrupted.

A world with Amelia and Viv — and the countless other AI counterparts coming online soon — in combination with robots like Boston Dynamics’ next-generation Atlas, is a world where machines can do all four types of jobs, and that means serious societal reconsideration. If a machine can do a job instead of a human, should any human be forced, at the threat of destitution, to perform that job? Should income itself remain coupled to employment, such that having a job is the only way to obtain income, when jobs for many are entirely unobtainable? If machines are performing an increasing percentage of our jobs for us, and not getting paid to do them, where does that money go instead? And what does it no longer buy? Is it even possible that many of the jobs we’re creating don’t need to exist at all, and only do because of the incomes they provide? These are questions we need to start asking, and fast.

Decoupling Income From Work
Fortunately, people are beginning to ask these questions, and there’s an answer that’s building momentum. The idea is to put machines to work for us, but empower ourselves to seek out the forms of remaining work we as humans find most valuable, by simply providing everyone a monthly paycheck independent of work. This paycheck would be granted to all citizens unconditionally, and its name is universal basic income. By adopting UBI, aside from immunizing against the negative effects of automation, we’d also be decreasing the risks inherent in entrepreneurship and the sizes of the bureaucracies necessary to boost incomes. It’s for these reasons that it has cross-partisan support, and it is even now in the beginning stages of possible implementation in countries like Switzerland, Finland, the Netherlands, and Canada.

The future is a place of accelerating change. It seems unwise to keep looking at the future as if it were the past, assuming that just because new jobs have historically appeared, they always will. The WEF started 2016 off by estimating the creation of 2 million new jobs by 2020 alongside the elimination of 7 million. That’s a net loss of 5 million jobs, not a net gain. In a frequently cited paper, an Oxford study estimated the automation of about half of all existing jobs by 2033. Meanwhile self-driving vehicles, again thanks to machine learning, have the capability of drastically impacting all economies — especially the US economy, as I wrote last year about automating truck driving — by eliminating millions of jobs within a short span of time.

And now even the White House, in a stunning report to Congress, has put the probability at 83 percent that a worker making less than $20 an hour in 2010 will eventually lose their job to a machine. Even workers making as much as $40 an hour face odds of 31 percent. To ignore odds like these is tantamount to our now laughable “duck and cover” strategies for avoiding nuclear blasts during the Cold War.

All of this is why it’s those most knowledgeable in the AI field who are now actively sounding the alarm for basic income. During a panel discussion at the end of 2015 at Singularity University, prominent data scientist Jeremy Howard asked “Do you want half of people to starve because they literally can’t add economic value, or not?” before going on to suggest, “If the answer is not, then the smartest way to distribute the wealth is by implementing a universal basic income.”

AI pioneer Chris Eliasmith, director of the Centre for Theoretical Neuroscience, warned about the immediate impacts of AI on society in an interview with Futurism, “AI is already having a big impact on our economies… My suspicion is that more countries will have to follow Finland’s lead in exploring basic income guarantees for people.”

Moshe Vardi expressed the same sentiment after speaking at the 2016 annual meeting of the American Association for the Advancement of Science about the emergence of intelligent machines: “We need to rethink the very basic structure of our economic system… we may have to consider instituting a basic income guarantee.”

Even Baidu’s chief scientist and founder of Google’s “Google Brain” deep learning project, Andrew Ng, during an onstage interview at this year’s Deep Learning Summit, expressed the shared notion that basic income must be “seriously considered” by governments, citing “a high chance that AI will create massive labor displacement.”

When those building the tools begin warning about the implications of their use, shouldn’t those wishing to use those tools listen with the utmost attention, especially when it’s the very livelihoods of millions of people at stake? If not then, what about when Nobel prize winning economists begin agreeing with them in increasing numbers?

No nation is yet ready for the changes ahead. High labor force non-participation leads to social instability, and a lack of consumers within consumer economies leads to economic instability. So let’s ask ourselves, what’s the purpose of the technologies we’re creating? What’s the purpose of a car that can drive for us, or artificial intelligence that can shoulder 60% of our workload? Is it to allow us to work more hours for even less pay? Or is it to enable us to choose how we work, and to decline any pay/hours we deem insufficient because we’re already earning the incomes that machines aren’t?

What’s the big lesson to learn, in a century when machines can learn?

I offer it’s that jobs are for machines, and life is for people.

Source: medium.com

How Machine Learning May Help Tackle Depression

By detecting trends that humans are unable to spot, researchers hope to treat the disorder more effectively.


Depression is a simple-sounding condition with complex origins that aren’t fully understood. Now, machine learning may enable scientists to unpick some of its mysteries in order to provide better treatment.

For patients to be diagnosed with Major Depressive Disorder, which is thought to be the result of a blend of genetic, environmental, and psychological factors, they have to display several of a long list of symptoms, such as fatigue or lack of concentration. Once diagnosed, they may receive cognitive behavioral therapy or medication to help ease their condition. But not every treatment works for every patient, as symptoms can vary widely.

Recently, many artificial intelligence researchers have begun to develop ways to apply machine learning to medical situations. Such approaches are able to spot trends and details across huge data sets that humans would never be able to, teasing out results that can be used to diagnose other patients. The New Yorker recently ran a particularly interesting essay about using the technique to make diagnoses from medical scans.

Similar approaches are being used to shed light on depression. A study published in Psychiatry Research earlier this year showed that MRI scans can be analyzed by machine-learning algorithms to establish the likelihood of someone suffering from the condition. By identifying subtle differences in scans of people who were and were not sufferers, the team found they could identify, with roughly 75 percent accuracy, which unseen patients were suffering from major depressive disorder.

Perhaps more interestingly, Vox reports that researchers from Weill Cornell Medical College are following a similar tack to identify different types of depression. By having machine-learning algorithms interrogate data captured when the brain is in a resting state, the scientists have been able to categorize four different subtypes of the condition that manifest as different mixtures of anxiety and lack of pleasure.

Not all attempts to infer such fine-grained diagnoses from MRI scans have been successful in the past, of course. But the use of AI does provide much better odds of spotting a signal than when individual doctors pore over scans. At the very least, the experiments lend weight to the notion that there are different types of depression.

The approach could be just one part of a broader effort to use machine learning to spot subtle clues related to the condition. Researchers at New York University’s Langone Medical Center, for instance, are using machine-learning techniques to pick out vocal patterns that are particular to people with depression, as well as conditions like PTSD.

And the idea that there may be many types of depression could prove useful, according to Vox. It notes another recent study carried out by researchers at Emory University that found that machine learning was able to identify different patterns of brain activity in fMRI scans that correlated with the effectiveness of different forms of treatment.

In other words, it may be possible not just to use AI to identify unique types of depression, but also to establish how best to treat them. Such approaches are still a long way from providing clinically relevant results, but they do show that it may be possible to identify better ways to help sufferers in the future.

In the meantime, some researchers are also trying to develop AIs to ensure that depression doesn’t lead to tragic outcomes like self-harm or suicide. Last month, for instance, Wired reported that scientists at Florida State University had developed machine-learning software that analyzes patterns in health records to flag patients that may be at risk of suicidal thoughts. And Facebook claims it can do something similar by analyzing user content—but it remains to be seen how effective its interventions might be.

Source: MIT Technology Review