Work In Retail? There’s A Robot Getting Ready To Take Your Job

Cashiers–74% of whom are women–are likely to be the first to be overtaken by the automation wave. 

As retailers install self-checkout systems, proximity beacons that flash offers to shoppers’ phones, and invest in robots that replenish shelves, they’re likely to need fewer and fewer workers in the coming decade. A new analysis finds that up to 7.5 million jobs are at risk in U.S. retail, with women and rural areas particularly affected.

The last two years have seen a string of retail bankruptcies and store closures, with once-storied names like J.C. Penney, RadioShack, Macy’s, and Sears under pressure as never before. Now analysts say retailers are likely to turn to automation as they try to end the so-called “Great Retail Apocalypse.”

“Labor productivity has been stagnant in the retail industry for a long time and now we’re seeing minimum wage increases around the country and a tight labor market that’s forcing up wages,” says John Wilson, head of research at Cornerstone Capital, a financial services firm that focuses on sustainable investing. “That’s putting pressure on companies to solve these problems at a time when a lot of these technologies are coming into play.”

The research was commissioned by the Investor Responsibility Research Center Institute, a nonprofit group, and prepared by Cornerstone Capital. The job loss estimates are based on well-known research from Oxford University and figures from the U.S. Bureau of Labor Statistics. The U.S. retail industry employs about 10% of the total workforce.

Wilson says cashiers–74% of whom are women–are likely to be the first overtaken by the automation wave. Also likely to be affected are retail salespeople, who may not be needed as shoppers increasingly consult their phones for information about sizes, colors, and availability. “Smartphones have all kinds of information about the products you want to buy, so the need for salespeople is considerably less,” he says.

For example, Bloomingdale’s has tested smart fitting rooms with wall-mounted tablets allowing customers to scan items and view other colors and sizes and receive recommendations to “complete the look.” Home Depot says four self-checkout systems occupy the space of three normal aisles and obviate the need for two human cashiers. Amazon’s Go concept stores have no cashiers at all, enabling shoppers to pay for everything through their phones.

Worryingly, the report says automation could affect areas where unemployment is already higher than the national average. “WalMart and other large retailers have greater market share in communities with less than 500,000 people,” it says. “If employment trends correlate to market share location, retail automation by retailers could disproportionately impact these smaller communities.”

Wilson cautions retailers against going all in for convenience at the expense of retail experience, lest they simply become higher cost versions of online stores. “If the technology simply allows you to reduce costs by reducing the number of employees, that may not be a winning strategy,” he says. “They [may need] to create an experience. You go into the store and it’s fun. You have a relationship with the people who work there and you’re discovering new products. Most companies are headed in that direction and that requires an investment in both labor and technology.”

Source: Fast Company

Google is now using deep learning to measure store visits

Google is announcing a major update to its store visits measurement tool today at its Google Marketing Next conference. Google has used anonymized location and contextual data since 2014 to estimate brick and mortar store visits spurred by online ads. The company is augmenting its existing models with deep learning to bring insights to even more customers.

Omnichannel marketing is as big a buzzword as they come. As obnoxious as the term is, the problem it underscores is a serious one for marketers. How does anyone combine data from old-world physical retail stores with data from online shopping in a way that actually informs business decisions?

Google has gotten fairly good at using Wi-Fi signals, location, mapping, and calibration data to estimate store visits, but the company still struggles to deliver insights to customers that operate in dense cities and multi-story malls. Long-tail use cases like these elude traditional estimation techniques.

To address the unreliability, Google is turning to deep learning. Its hope is that it can restore accuracy by funneling a greater amount of diverse training data into a deep learning model to account for more use cases.

“We do this with machine learning at the core,” said Jerry Dischler, VP of product management for AdWords in an interview. “We couldn’t measure store visits without ML.”

In just three years, Google says it has measured five billion store visits. In an effort to boost this number even higher, the company is expanding its compatible campaigns to include YouTube TrueView. This is a logical next step in a world that is increasingly driven by video.

In addition to adding YouTube support, Google is also announcing to marketers that it plans to push store sales management to the device and campaign level. Integrating point-of-sale data into AdWords will further help to differentiate a visit from a conversion.

Source: techcrunch.com

MindMaze offers VR treatment for stroke victims in the U.S.

Swiss startup MindMaze, which has raised $100 million in funding, has launched its MindMotion Pro virtual reality treatment for stroke recovery patients in the U.S.

The launch is the latest example of how VR, which stands at the intersection of neuroscience and entertainment, is spreading far beyond games. And that’s important, as consumer VR sales have been slower than expected.

Based on a decade of research and testing in “neuro VR,” MindMotion Pro is an upper-limb neurorehabilitation platform that uses proprietary 3D motion-tracking cameras to help patients recovering from traumatic injuries and those suffering from acute and chronic strokes.

MindMaze introduced MindMotion Pro to the European market in 2013, and hundreds of patients have used the platform for rehabilitation therapy.

Lausanne, Switzerland-based MindMaze already delivered a thought-powered virtual reality, augmented reality, and motion capture game system in MindLeap. The company, whose board includes some of the world’s leading doctors and neuroscientists, will be applying its multisensory computing platform to numerous new fields, which include robotics and transportation.

The U.S. Food and Drug Administration has granted MindMaze 510(k) clearance. MindMaze also said that it has completed 261 patient trials of MindMotion Go, a portable neurotechnology device that uses VR to continue therapy after a patient leaves the hospital. Those trials were conducted in the United Kingdom, Germany, and Switzerland.

Each year in the U.S., about 800,000 people suffer a stroke, resulting in debilitating health effects as well as direct and indirect losses of economic activity of $65 billion according to research published in The American Journal of Managed Care.

MindMotion Pro works by mapping a patient’s movements onto 3D avatars in customized interactive exercises that are based on standardized neurorehabilitation principles of upper limb rehabilitation and cognitive paradigms. By doing so, it reactivates damaged neural pathways and activates new ones.

MindMaze’s technology is specifically designed to help stroke patients and those with traumatic brain injuries start recovery early and continue it for maximum gains.

MindMotion Pro uses VR games to keep patients engaged with therapies for recovery. Its custom tracking technology provides accurate, real-time patient tracking for both bedside and wheelchair use, and it helps the therapist support the patient during the activities.

“Our work at the forefront of neuroscience and virtual reality allows patients to recover faster and return more fully to the life they lived before injury,” said MindMaze founder and CEO Tej Tadi in a statement. “Over the last decade, we’ve honed this therapy to be cost-effective for both patients and healthcare providers.”

Due to the motivating effects of the virtual-reality based games, patients can engage in 10 to 15 times more exercising repetitions than with standard rehabilitation programs, and because the system offers real-time multisensory feedback, therapists can assess progress and tailor therapy to patient performance.

Research from the leading rehabilitation facility Clinique Romande de Réadaptation/EPFL in Sion, Switzerland, found that 90 percent of chronic stroke patients using MindMotion Pro reported heightened motivation to perform rehab and increased potential for motor function recovery, and that their training intensity doubled within the first 10 sessions of using the platform. Additionally, Lausanne University Hospital-CHUV reported that 100 percent of patients forgot they were in a hospital.

Source: venturebeat.com

The Very Strange and Fascinating Ideas Behind Quantum Computing

In 1952, Remington Rand’s UNIVAC computer debuted on CBS to forecast the 1952 election as early results came in. By 8:30, the “electronic brain” was predicting a landslide, with Eisenhower taking 438 electoral votes to Stevenson’s 93. The CBS brass scoffed at the unlikely result, but by the end of the night UNIVAC proved to be uncannily accurate.

It was that night that the era of digital computing truly began, and it was a big blow to IBM, the leader in punch card calculators at the time. Its Research division, however, was already working on more advanced digital technology. In 1964, it launched its System/360 and dominated the industry for the next two decades.

Today, we’ve reached a similar inflection point. Moore’s law, the paradigm which has driven computing for half a century, will reach its limits in about five years. And much like back in the 1950s, IBM has been working on a new quantum computer that may dominate the industry for decades to come. If that sounds unlikely, wait till you hear the ideas behind it.

A 90-Year-Old Argument
In the early 20th century, one of the fundamental assumptions was an idea, sometimes known as Laplace’s demon, that the universe was perfectly deterministic. In other words, if you knew the precise location and momentum of every particle in the universe, you could calculate all of their past and future values. Every effect has a cause, or so it was thought.

Yet by the 1920s, many began to question that idea, and the issue came to a head in a series of debates between Albert Einstein and Niels Bohr. It was then that Einstein famously said, “God does not play dice with the universe.” To which Bohr cleverly retorted, “Einstein, stop telling God what to do!”

At issue were two ideas in particular. The first was quantum superposition, or the principle that particles can take on an almost ghostly combination of many states at the same time. The second was quantum entanglement, which says that it is possible for one particle with unpredictable behavior to allow you to perfectly predict the behavior of another one.

These are hard ideas to accept because they run counter to what we experience in normal life. Everyday physical objects don’t simply appear and disappear, or start jetting off in one direction for no particular reason. Einstein, who certainly did not lack imagination, could never accept them and devised a thought experiment, called the EPR paradox, to disprove them.

Yet it is exactly these ideas that IBM is betting on now. To help me wrap my head around it all, I spent several hours talking to Charlie Bennett, an IBM Fellow considered to be one of the founders of quantum information theory.

A Geek Before Geeks Were Cool
Growing up in the quiet Westchester village of Croton-on-Hudson, about a half hour from IBM’s headquarters in Armonk, NY, Bennett was, as he put it to me, “a geek before geeks were cool.” While other teenage boys were riding bikes and playing baseball, he usually had his head buried in a copy of Scientific American, wrapping himself in its world of crazy ideas.

And in the 1950s, there were more than enough fantastical discoveries to go around. Many things we take for granted today, like computers that work as “electronic brains” and nuclear energy, were novel back then and just beginning to be understood. However, what enthralled him the most at the time was Watson and Crick’s discovery of the structure of DNA.

So when he went off to college at Brandeis, Bennett was determined to become a biochemist. Unfortunately, the university didn’t offer that as a major, so he got his degree in chemistry and then went to Harvard to study molecular dynamics under David Turnbull and Berni Alder, two giants in the field.

Yet even that heady work was unable to quench his curiosity, so Bennett branched out. He took a course on mathematical logic and the theory of computing, which introduced him to the ideas of Kurt Gödel and Alan Turing, while also working as a teaching assistant for James Watson, who had won the Nobel Prize for the discovery of the structure and function of DNA just a few years earlier.

Oddly, he found his two extracurricular activities to be two sides of the same coin, with the DNA transcription machinery eerily similar to Turing’s ideas about a universal computer. It was that insight—that the world of computation could be more than a sequence of ones and zeros—that set him on his course. He began to see strange forms of computation almost everywhere he looked.

A Witches’ Brew of Crazy Ideas
As a graduate student, Bennett went to see a talk by an IBM scientist named Rolf Landauer and learned about his principle that erasing bits necessarily dissipates energy, which implies that computation that avoids erasure can, in principle, conserve energy. With his background in chemistry, Bennett was able to further Landauer’s work and make important breakthroughs in reversible computing. Bennett was soon thoroughly hooked on computing—and on IBM.

Although he had planned on a career in academia, he found that, “being at the Yorktown lab gave me the opportunity, within one building, to interact with physicists, engineers, and computer scientists and learn about their fields. Over the subsequent 44 years, I’ve had the freedom to think about what I wanted, and to visit and collaborate with scientists at universities and laboratories all over the world.”

It was that ability to explore new horizons without limits that drove Bennett’s work. For example, his friend Stephen Wiesner came up with the idea of quantum money that, because of the rules of quantum mechanics, would be impossible to counterfeit. It was the first time someone had a concrete plan to use quantum mechanics for informational purposes.

Wiesner’s insight led Bennett, along with Gilles Brassard, to develop the concept of quantum cryptography, which has a similar logic to it. Anybody attempting to eavesdrop on a quantum-encrypted message would destroy the message. These were breakthrough ideas, but what came next was even more impressive.

Einstein’s Last Stand
As noted above, Einstein could never bring himself to accept quantum mechanics, especially entanglement, because he thought that such “spooky action at a distance” violated the laws of physics. How could observing a particle in one place tell you about a particle in another place, without affecting it in some way?

Einstein felt so strongly about the idea that he devised an experiment, called the EPR paradox, to finally prove or disprove the concept. In a nutshell, he proposed to test the principle of entanglement by using one particle to predict the behavior of another one. John Bell showed this could indeed be done, and other scientists verified his results in a lab a few years later.

Armed with their insights from quantum cryptography, Bennett and Brassard, along with a number of colleagues, took Bell’s work a step further in the famous quantum teleportation work of 1993, which not only made clear that Einstein was wrong, but that quantum entanglement could actually be far more useful than anyone had dreamed.

Yet Bennett still had his sights set on an even bigger prize—using quantum states to compute, rather than just relay, information. What he was proposing seemed almost incomprehensible at the time—a computer based on quantum states potentially millions of times more powerful than conventional technology. In 1993, he wrote down four laws that would guide the field.

A New Quantum Universe of Computing
To understand how a quantum computer works, we first have to think about how a classical computer, sometimes known as a Turing machine, works. In essence, today’s computers transform long series of ones and zeros — called bits — into logical statements and functions according to a set of rules called Boolean logic.

Now, ordinarily, this would be an incredibly foolish way to go about things because you need a lot of ones and zeros to explain anything, but today’s computers can do literally billions of calculations per second. So at this point, we are able to communicate with machines in a fairly reasonable way, such as typing on a keyboard or even talking into a microphone.

To get an understanding of how this works, let’s look at a character. Eight bits give us 2^8, or 256, possible combinations, which is plenty of space to accommodate letters, numbers, punctuation and other symbols. With processors able to handle billions of bits per second, we can get quite a lot done even with basic, everyday machines.
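
To see that arithmetic concretely, here is a minimal Python sketch (the character “A” is just an arbitrary example):

```python
# Eight bits give 2**8 = 256 distinct bit patterns -- enough room for
# letters, digits, punctuation, and other symbols.
n_bits = 8
print(2 ** n_bits)               # 256

# A single character maps to one of those 256 patterns.
ch = "A"
print(ord(ch))                   # 65
print(format(ord(ch), "08b"))    # 01000001 -- the 8-bit pattern for 'A'
```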

The math of quantum computers works in a somewhat similar way, except because of superposition and entanglement, instead of combinations, it produces “states.” These states do not conform to any physical reality we would be familiar with, but roughly represent separate dimensions in which a quantum calculation may take place.

So an eight-quantum-bit (or qubit) computer can be in a superposition of 256 different states (or dimensions), while a 300-qubit computer can be simultaneously doing more calculations than there are atoms in the universe.
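
A quick back-of-the-envelope check of those figures, as a minimal Python sketch (using the rough 10^80 estimate for atoms in the observable universe cited above):

```python
# An n-qubit register is described by 2**n complex amplitudes, so the state
# space a classical simulator would have to track doubles with every qubit.
for n in (8, 50, 300):
    print(f"{n} qubits -> 2**{n} = {2 ** n} basis states")

# The 300-qubit state space dwarfs the rough 10**80 count of atoms
# in the observable universe.
print(2 ** 300 > 10 ** 80)    # True
```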

There is, however, a problem. These “states” represent only possibilities. To get a quantum computer to focus on a single concrete answer is a very complicated business. When the quantum computer is being used to answer a quantum question, such as how the human body interacts with a new drug, this focusing happens automatically.  But in other cases, such as when a quantum computer is used to answer a classical question, major difficulties arise.

The potential of quantum computing is immense, so computer scientists at IBM and elsewhere are working feverishly to smooth out the kinks—and making impressive progress. IBM has also made a prototype quantum computer available in the cloud, where even college students can learn how to program it.

We Are Entering a New Quantum Era
The ideas surrounding quantum computing are so strange that I must confess that while talking to Dr. Bennett, I sometimes wondered whether I had somehow wandered into a late night dorm room discussion that had gone on too long. As the legendary physicist Richard Feynman confessed, the ideas behind quantum mechanics are pretty hard to accept.

Yet as Feynman also pointed out, these are truths that we will have to accept, because they are truths inherent to the universe we live in. They are part of what I call the visceral abstract— unlikely ideas that violate our basic notions of common sense, but nevertheless play an important part in our lives.

We can, for example, deny Einstein’s notions about the relativity of time and space, but if our GPS navigators are not calibrated according to his equations, we’re going to have a hard time getting to where we’re going. We can protest all we want that it doesn’t make any sense, but the universe doesn’t give us a vote.

That’s what’s amazing about people like Charlie Bennett. Where most people would say, “Gee, that’s weird,” he sees a system of rules that he can exploit to create things few others could ever imagine, almost as if he was playing the George Clooney character in Ocean’s 11. But instead of scamming a casino, he’s gaming the universe for our benefit.

“Charlie is one of the deepest thinkers I know,” says IBM’s Heike Riel. “Today we can see that those theoretical concepts have come to fruition. We are on the path to a truly practical quantum computer, which, when it’s built, will be one of the greatest milestones not just for the IBM company, but in the history of information technology.”

So we now find ourselves in something much like those innocent days before 1952, when few could imagine something like UNIVAC could outsmart a team of human experts. In a decade or two, we’ll most likely have to explain to a new generation what it was like to live in a world without quantum computers, before the new era began.

Source: Innovationexcellence.com

Expeditions AR brings volcanoes and DNA molecules to the classroom

Google’s popular education-focused Expeditions program has allowed over two million students to immerse themselves in new environments and get a close look at monuments and other items of interest using the Cardboard VR headsets. Now the program is moving from virtual to augmented reality.

Expeditions AR uses Tango-compatible smartphones like the Lenovo Phab 2 Pro to put the study subjects directly in the classroom.

Launching this fall through Google’s Pioneer Program, Expeditions AR will let users point their AR-ready devices at specific points in the classroom and find volcanoes, the Statue of David, DNA molecules, and more awaiting them. The objects are fully interactive; Google’s demo video shows a volcano erupting, billowing out smoke and lava.

Much like the original Expeditions for VR, Expeditions AR looks to be an exciting new project that will undoubtedly get students more excited and involved in their studies.

Source: 9to5google.com

Our machines now have knowledge we’ll never understand

The new availability of huge amounts of data, along with the statistical tools to crunch these numbers, offers a whole new way of understanding the world. Correlation supersedes causation, and science can advance even without coherent models, unified theories, or really any mechanistic explanation at all.

So wrote Wired’s Chris Anderson in 2008. It kicked up a little storm at the time, as Anderson, the magazine’s editor, undoubtedly intended. For example, an article in a journal of molecular biology asked, “…if we stop looking for models and hypotheses, are we still really doing science?” The answer clearly was supposed to be: “No.”

But today — not even a decade since Anderson’s article — the controversy sounds quaint. Advances in computer software, enabled by our newly capacious, networked hardware, are enabling computers not only to start without models — rule sets that express how the elements of a system affect one another — but to generate their own, albeit ones that may not look much like what humans would create. It’s even becoming a standard method, as any self-respecting tech company has now adopted a “machine-learning first” ethic.

We are increasingly relying on machines that derive conclusions from models that they themselves have created, models that are often beyond human comprehension, models that “think” about the world differently than we do.

But this comes with a price. This infusion of alien intelligence is bringing into question the assumptions embedded in our long Western tradition. We thought knowledge was about finding the order hidden in the chaos. We thought it was about simplifying the world. It looks like we were wrong. Knowing the world may require giving up on understanding it.

Models Beyond Understanding

In a series on machine learning, Adam Geitgey explains the basics, from which this new way of “thinking” is emerging:

“[T]here are generic algorithms that can tell you something interesting about a set of data without you having to write any custom code specific to the problem. Instead of writing code, you feed data to the generic algorithm and it builds its own logic based on the data.”

For example, you give a machine learning system thousands of scans of sloppy, handwritten 8s and it will learn to identify 8s in a new scan. It does so, not by deriving a recognizable rule, such as “An 8 is two circles stacked vertically,” but by looking for complex patterns of darker and lighter pixels, expressed as matrices of numbers — a task that would stymie humans. In a recent agricultural example, the same pattern-recognition technique taught a computer how to sort cucumbers.
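
As a rough illustration of the “feed data to a generic algorithm” idea, here is a minimal sketch using scikit-learn’s small bundled digit scans; it only shows the shape of the approach, not the systems described here:

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Small 8x8 grayscale scans of handwritten digits; each row is 64 pixel values.
digits = load_digits()
X = digits.data
y = (digits.target == 8).astype(int)   # 1 if the scan shows an "8", else 0

# No hand-written rule like "two stacked circles" -- the classifier simply
# learns a weight for every pixel position from the labeled examples.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```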

Then you can take machine learning further by creating an artificial neural network that models in software how the human brain processes signals. Nodes in an irregular mesh turn on or off depending on the data coming to them from the nodes connected to them; those connections have different weights, so some are more likely to flip their neighbors than others. Although artificial neural networks date back to the 1950s, they are truly coming into their own only now because of advances in computing power, storage, and mathematics. The results from this increasingly sophisticated branch of computer science can be deep learning that produces outcomes based on so many different variables under so many different conditions being transformed by so many layers of neural networks that humans simply cannot comprehend the model the computer has built for itself.
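
For a sense of what those weighted connections look like in code, here is a toy two-layer network in NumPy; the sizes and random weights are purely illustrative and bear no relation to the systems discussed here:

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, weights, biases):
    # Each node sums its weighted inputs and squashes the result; the weights
    # decide how strongly each input pushes the node toward on or off.
    return np.tanh(x @ weights + biases)

x = rng.normal(size=(1, 64))                                  # e.g. 64 input signals
hidden = layer(x, rng.normal(size=(64, 32)), np.zeros(32))    # hidden layer
output = layer(hidden, rng.normal(size=(32, 1)), np.zeros(1)) # output score
print(output)

# "Deep" learning stacks many such layers and tunes all the weights from
# data, which is why the resulting model is so hard to read back out.
```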

Yet it works. It’s how Google’s AlphaGo program came to defeat the third-highest ranked Go player in the world. Programming a machine to play Go is considerably more daunting than sorting cukes, given that the game has 10^350 possible moves; there are 10^123 possible moves in chess, and 10^80 atoms in the universe. Google’s hardware wasn’t even as ridiculously overpowered as it might have been: It had only 48 processors, plus eight graphics processors that happen to be well-suited for the required calculations.

AlphaGo was trained on thirty million board positions that occurred in 160,000 real-life games, noting the moves taken by actual players, along with an understanding of what constitutes a legal move and some other basics of play. Using deep learning techniques that refine the patterns recognized by the layer of the neural network above it, the system trained itself on which moves were most likely to succeed.

Although AlphaGo has proven itself to be a world class player, it can’t spit out practical maxims from which a human player can learn. The program works not by developing generalized rules of play — e.g., “Never have more than four sets of unconnected stones on the board” — but by analyzing which play has the best chance of succeeding given a precise board configuration. In contrast, Deep Blue, the dedicated IBM chess-playing computer, was programmed with some general principles of good play. As Christof Koch writes in Scientific American, AlphaGo’s intelligence is in the weights of all those billions of connections among its simulated neurons. It creates a model that enables it to make decisions, but that model is ineffably complex and conditional. Nothing emerges from this mass of contingencies, except victory against humans.

As a consequence, if you, with your puny human brain, want to understand why AlphaGo chose a particular move, the “explanation” may well consist of the networks of weighted connections that then pass their outcomes to the next layer of the neural network. Your brain can’t remember all those weights, and even if it could, it couldn’t then perform the calculation that resulted in the next state of the neural network. And even if it could, you would have learned nothing about how to play Go, or, in truth, how AlphaGo plays Go—just as internalizing a schematic of the neural states of a human player would not constitute understanding how she came to make any particular move.

Go is just a game, so it may not seem to matter that we can’t follow AlphaGo’s decision path. But what do we say about the neural networks that are enabling us to analyze the interactions of genes in two-locus genetic diseases? How about the use of neural networks to discriminate the decay pattern of single and multiple particles at the Large Hadron Collider? Or the use of machine learning to help identify which of the 20 climate change models tracked by the Intergovernmental Panel on Climate Change is most accurate at any point? Such machines give us good results — for example: “Congratulations! You just found a Higgs boson!” — but we cannot follow their “reasoning.”

Clearly our computers have surpassed us in their power to discriminate, find patterns, and draw conclusions. That’s one reason we use them. Rather than reducing phenomena to fit a relatively simple model, we can now let our computers make models as big as they need to. But this also seems to mean that what we know depends upon the output of machines the functioning of which we cannot follow, explain, or understand.

Since we first started carving notches in sticks, we have used things in the world to help us to know that world. But never before have we relied on things that did not mirror human patterns of reasoning — we knew what each notch represented — and that we could not later check to see how our non-sentient partners in knowing came up with those answers. If knowing has always entailed being able to explain and justify our true beliefs — Plato’s notion, which has persisted for over two thousand years — what are we to make of a new type of knowledge, in which that task of justification is not just difficult or daunting but impossible?

Source: backchannel.com

Artificial intelligence prevails at predicting Supreme Court decisions

“See you in the Supreme Court!” President Donald Trump tweeted last week, responding to lower court holds on his national security policies. But is taking cases all the way to the highest court in the land a good idea? Artificial intelligence may soon have the answer. A new study shows that computers can do a better job than legal scholars at predicting Supreme Court decisions, even with less information.

Several other studies have guessed at justices’ behavior with algorithms. A 2011 project, for example, used the votes of any eight justices from 1953 to 2004 to predict the vote of the ninth in those same cases, with 83% accuracy. A 2004 paper tried seeing into the future, by using decisions from the nine justices who’d been on the court since 1994 to predict the outcomes of cases in the 2002 term. That method had an accuracy of 75%.

The new study draws on a much richer set of data to predict the behavior of any set of justices at any time. Researchers used the Supreme Court Database, which contains information on cases dating back to 1791, to build a general algorithm for predicting any justice’s vote at any time. They drew on 16 features of each vote, including the justice, the term, the issue, and the court of origin. Researchers also added other factors, such as whether oral arguments were heard.

For each year from 1816 to 2015, the team created a machine-learning statistical model called a random forest. It looked at all prior years and found associations between case features and decision outcomes. Decision outcomes included whether the court reversed a lower court’s decision and how each justice voted. The model then looked at the features of each case for that year and predicted decision outcomes. Finally, the algorithm was fed information about the outcomes, which allowed it to update its strategy and move on to the next year.
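
A minimal sketch of that rolling, year-by-year setup, using random stand-in data and illustrative feature counts rather than the actual Supreme Court Database variables the researchers used:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Stand-in data: one row per vote, 16 coded case features (justice, term,
# issue, court of origin, and so on in the real study; random numbers here).
n_votes, n_features = 5000, 16
X = rng.integers(0, 10, size=(n_votes, n_features))
y = rng.integers(0, 2, size=n_votes)            # 1 = reverse, 0 = affirm
year = rng.integers(1816, 2016, size=n_votes)

correct = total = 0
for term in range(1900, 2016):                  # walk forward one term at a time
    train, test = year < term, year == term
    if not test.any():
        continue
    model = RandomForestClassifier(n_estimators=50, random_state=0)
    model.fit(X[train], y[train])               # only prior years are visible
    correct += int((model.predict(X[test]) == y[test]).sum())
    total += int(test.sum())

print("out-of-sample accuracy:", correct / total)
```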

From 1816 until 2015, the algorithm correctly predicted 70.2% of the court’s 28,000 decisions and 71.9% of the justices’ 240,000 votes, the authors report in PLOS ONE. That bests the popular betting strategy of “always guess reverse,” which would have been correct in 63% of Supreme Court cases over the last 35 terms. It’s also better than another strategy that uses rulings from the previous 10 years to automatically go with a “reverse” or an “affirm” prediction. Even knowledgeable legal experts are only about 66% accurate at predicting cases, the 2004 study found. “Every time we’ve kept score, it hasn’t been a terribly pretty picture for humans,” says the study’s lead author, Daniel Katz, a law professor at Illinois Institute of Technology in Chicago.

Roger Guimerà, a physicist at Rovira i Virgili University in Tarragona, Spain, and lead author of the 2011 study, says the new algorithm “is rigorous and well done.” Andrew Martin, a political scientist at the University of Michigan in Ann Arbor and an author of the 2004 study, commends the new team for producing an algorithm that works well over 2 centuries. “They’re curating really large data sets and using state-of-the-art methods,” he says. “That’s scientifically really important.”

Outside the lab, bankers and lawyers might put the new algorithm to practical use. Investors could bet on companies that might benefit from a likely ruling. And appellants could decide whether to take a case to the Supreme Court based on their chances of winning. “The lawyers who typically argue these cases are not exactly bargain basement priced,” Katz says.

Attorneys might also plug different variables into the model to forge their best path to a Supreme Court victory, including which lower court circuits are likely to rule in their favor, or the best type of plaintiff for a case. Michael Bommarito, a researcher at Chicago-Kent College of Law and study co-author, offers a real example in National Federation of Independent Business v. Sebelius, in which the Affordable Care Act was on the line: “One of the things that made that really interesting was: Was it about free speech, was it about taxation, was it about some kind of health rights issues?” The algorithm might have helped the plaintiffs decide which issue to highlight.

Future extensions of the algorithm could include the full text of oral arguments or even expert predictions. According to Katz: “We believe the blend of experts, crowds, and algorithms is the secret sauce for the whole thing.”

Source: Science Magazine