The Very Strange and Fascinating Ideas Behind Quantum Computing

In 1952, Remington Rand’s UNIVAC computer debuted on CBS to forecast the presidential election as early results came in. By 8:30, the “electronic brain” was predicting a landslide, with Eisenhower taking 438 electoral votes to Stevenson’s 93. The CBS brass scoffed at the unlikely result, but by the end of the night UNIVAC had proved uncannily accurate.

It was that night that the era of digital computing truly began, and it was a big blow to IBM, then the leader in punch card calculators. Its research division, however, was already working on more advanced digital technology. In 1964, the company launched its System/360 and dominated the industry for the next two decades.

Today, we’ve reached a similar inflection point. Moore’s Law, the paradigm that has driven computing for half a century, will reach its limits in about five years. And much as in the 1950s, IBM has been working on a new quantum computer that may dominate the industry for decades to come. If that sounds unlikely, wait till you hear the ideas behind it.

A 90-Year-Old Argument
In the early 20th century, one of science’s fundamental assumptions was the idea, sometimes known as Laplace’s demon, that the universe is perfectly deterministic. In other words, if you knew the precise location and momentum of every particle in the universe, you could calculate all of their past and future states. Every effect has a cause, or so it was thought.

Yet by the 1920s, many began to question that idea, and the issue came to a head in a series of debates between Albert Einstein and Niels Bohr. It was then that Einstein famously said, “God does not play dice with the universe.” To which Bohr cleverly retorted, “Einstein, stop telling God what to do!”

At issue were two ideas in particular. The first was quantum superposition, the principle that particles can take on an almost ghostly combination of many states at the same time. The second was quantum entanglement, which says that it is possible for one particle with unpredictable behavior to allow you to perfectly predict the behavior of another.

These are hard ideas to accept because they run counter to what we experience in everyday life. Ordinary physical objects don’t simply appear and disappear, or start jetting off in one direction for no particular reason. Einstein, who certainly did not lack imagination, could never accept them and devised a thought experiment, called the EPR paradox, to disprove them.

Yet it is exactly these ideas that IBM is betting on now. To help me wrap my head around it all, I spent several hours talking to Charlie Bennett, an IBM Fellow considered to be one of the founders of quantum information theory.

A Geek Before Geeks Were Cool
Growing up in the quiet Westchester village of Croton-on-Hudson, about a half hour from IBM’s headquarters in Armonk, NY, Bennett was, as he put it to me, “a geek before geeks were cool.” While other teenage boys were riding bikes and playing baseball, he usually had his head buried in a copy of Scientific American, wrapping himself in its world of crazy ideas.

And in the 1950s, there were more than enough fantastical discoveries to go around. Many things we take for granted today, like computers that work as “electronic brains” and nuclear energy, were novel back then and just beginning to be understood. What enthralled him most at the time, however, was Watson and Crick’s discovery of the structure of DNA.

So when he went off to college at Brandeis, Bennett was determined to become a biochemist. Unfortunately, the university didn’t offer that as a major, so he got his degree in chemistry and then went to Harvard to study molecular dynamics under David Turnbull and Berni Alder, two giants in the field.

Yet even that heady work was unable to quench his curiosity, so Bennett branched out. He took a course on mathematical logic and the theory of computing, which introduced him to the ideas of Kurt Gödel and Alan Turing, while at the same time working as a teaching assistant for James Watson, who had won the Nobel Prize for the discovery of the structure and function of DNA just a few years earlier.

Oddly, he found his two extracurricular activities to be two sides of the same coin, with the DNA transcription machinery eerily similar to Turing’s ideas about a universal computer. It was that insight—that the world of computation could be more than a sequence of ones and zeros—that set him on his course. He began to see strange forms of computation almost everywhere he looked.

A Witches’ Brew of Crazy Ideas
As a graduate student, Bennett went to see a talk by an IBM scientist named Rolf Landauer and learned about his principle that erasing information necessarily dissipates energy, which implies that computation that avoids erasing bits can, in principle, be done without wasting energy. With his background in chemistry, Bennett was able to further Landauer’s work and make important breakthroughs in reversible computing. He was soon thoroughly hooked on computing—and on IBM.

Although he had planned on a career in academia, he found that, “being at the Yorktown lab gave me the opportunity, within one building, to interact with physicists, engineers, and computer scientists and learn about their fields. Over the subsequent 44 years, I’ve had the freedom to think about what I wanted, and to visit and collaborate with scientists at universities and laboratories all over the world.”

It was that ability to explore new horizons without limits that drove Bennett’s work. For example, his friend Stephen Wiesner came up with the idea of quantum money that, because of the rules of quantum mechanics, would be impossible to counterfeit. It was the first time someone had a concrete plan to use quantum mechanics for informational purposes.

Wiesner’s insight led Bennett, along with Gilles Brassard, to develop the concept of quantum cryptography, which has a similar logic to it. Anybody attempting to eavesdrop on a quantum-encoded message would disturb it, revealing the intrusion. These were breakthrough ideas, but what came next was even more impressive.
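
To make that logic concrete, here is a minimal Python sketch (my own illustration of the core idea behind Bennett and Brassard’s BB84 scheme, not their actual protocol or code): a sender encodes random bits in randomly chosen bases, and an eavesdropper who measures and re-sends the photons in the wrong basis introduces errors that the legitimate parties can detect by comparing a sample of their bits.

    import random

    def encode(bit, basis):
        # A photon carrying one bit in either the rectilinear ('+') or diagonal ('x') basis.
        return (bit, basis)

    def measure(photon, basis):
        bit, encoded_basis = photon
        if basis == encoded_basis:
            return bit                     # matching basis: the encoded bit is recovered exactly
        return random.randint(0, 1)        # mismatched basis: the outcome is random

    def error_rate(n=20000, eavesdrop=False):
        errors = kept = 0
        for _ in range(n):
            bit = random.randint(0, 1)
            alice_basis = random.choice("+x")
            photon = encode(bit, alice_basis)
            if eavesdrop:                  # Eve measures in a random basis and re-sends the photon
                eve_basis = random.choice("+x")
                photon = encode(measure(photon, eve_basis), eve_basis)
            bob_basis = random.choice("+x")
            result = measure(photon, bob_basis)
            if bob_basis == alice_basis:   # keep only rounds where sender and receiver used the same basis
                kept += 1
                errors += (result != bit)
        return errors / kept

    print(error_rate())                    # ~0.0: the channel is clean
    print(error_rate(eavesdrop=True))      # ~0.25: eavesdropping leaves a detectable fingerprint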

Einstein’s Last Stand
As noted above, Einstein could never bring himself to accept quantum mechanics, especially entanglement, because he thought that such “spooky action at a distance” violated the laws of physics. How could observing a particle in one place tell you about a particle in another place, without affecting it in some way?

Einstein felt so strongly about the idea that he devised a thought experiment, called the EPR paradox, to finally prove or disprove the concept. In a nutshell, he proposed to test the principle of entanglement by using one particle to predict the behavior of another. John Bell later showed this could indeed be done, and other scientists verified his results in the lab a few years later.

Armed with their insights from quantum cryptography, Bennett and Brassard, along with a number of colleagues, took Bell’s work a step further with the famous quantum teleportation proposal published in 1993, which not only made clear that Einstein was wrong, but showed that quantum entanglement could actually be far more useful than anyone had dreamed.

Yet Bennett still had his sights set on an even bigger prize—using quantum states to compute, rather than just relay, information. What he was proposing seemed almost incomprehensible at the time—a computer based on quantum states, potentially millions of times more powerful than conventional technology. In 1993, he wrote down four laws that would guide the field.

A New Quantum Universe of Computing
To understand how a quantum computer works, we first have to think about how a classical computer, sometimes known as a Turing machine, works. In essence, today’s computers transform long series of ones and zeros — called bits — into logical statements and functions according to a set of rules called Boolean logic.

Now, ordinarily, this would be an incredibly foolish way to go about things, because you need a lot of ones and zeros to represent anything, but today’s computers can do literally billions of calculations per second. So at this point, we are able to communicate with machines in a fairly natural way, such as typing on a keyboard or even talking into a microphone.

To get an understanding of how this works, let’s look at a character. Eight bits gives us 2^8, or 256, possible combinations, which is plenty of space to accommodate letters, numbers, punctuation and other symbols. With processors able to handle billions of bits per second, we can get quite a lot done even with basic, everyday machines.
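
A couple of lines of Python (a small illustration of the arithmetic, using the standard ASCII encoding) show the same count:

    print(2 ** 8)                    # 256 possible combinations of eight bits

    bits = format(ord("A"), "08b")   # the eight-bit pattern that encodes the character "A"
    print(bits)                      # 01000001
    print(chr(int(bits, 2)))         # and back again: A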

The math of quantum computers works in a somewhat similar way, except because of superposition and entanglement, instead of combinations, it produces “states.” These states do not conform to any physical reality we would be familiar with, but roughly represent separate dimensions in which a quantum calculation may take place.

So an eight-qubit (quantum bit) computer can be in a superposition of 256 different states (or dimensions), while a 300-qubit computer can simultaneously explore more states than there are atoms in the universe.
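
Here is a small Python sketch (my own back-of-the-envelope simulation, not IBM’s software) that makes the scaling visible: representing an n-qubit register classically takes a vector of 2^n amplitudes, which is why an eight-qubit state has 256 components and a 300-qubit state could never be written down on conventional hardware.

    import numpy as np

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)    # Hadamard gate acting on a single qubit

    def uniform_superposition(n_qubits):
        # Start in |00...0> and apply a Hadamard to every qubit, producing an equal
        # superposition over all 2**n_qubits classical bit strings.
        state = np.zeros(2 ** n_qubits)
        state[0] = 1.0
        gate = H
        for _ in range(n_qubits - 1):
            gate = np.kron(gate, H)                  # build the full-register Hadamard
        return gate @ state

    state = uniform_superposition(8)
    print(state.size)                                # 256 amplitudes for eight qubits
    print(np.allclose(state ** 2, 1 / 256))          # every outcome is equally likely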

There is, however, a problem. These “states” represent only possibilities. To get a quantum computer to focus on a single concrete answer is a very complicated business. When the quantum computer is being used to answer a quantum question, such as how the human body interacts with a new drug, this focusing happens automatically.  But in other cases, such as when a quantum computer is used to answer a classical question, major difficulties arise.

The potential of quantum computing is immense, so computer scientists at IBM and elsewhere are working feverishly to smooth out the kinks—and making impressive progress. IBM has also made a prototype quantum computer available in the cloud, where even college students can learn how to program it.

We Are Entering a New Quantum Era
The ideas surrounding quantum computing are so strange that I must confess that while talking to Dr. Bennett, I sometimes wondered whether I had somehow wandered into a late night dorm room discussion that had gone on too long. As the legendary physicist Richard Feynman confessed, the ideas behind quantum mechanics are pretty hard to accept.

Yet as Feynman also pointed out, these are truths that we will have to accept, because they are truths inherent to the universe we live in. They are part of what I call the visceral abstract— unlikely ideas that violate our basic notions of common sense, but nevertheless play an important part in our lives.

We can, for example, deny Einstein’s notions about the relativity of time and space, but if our GPS navigators are not calibrated according to his equations, we’re going to have a hard time getting to where we’re going. We can protest all we want that it doesn’t make any sense, but the universe doesn’t give us a vote.

That’s what’s amazing about people like Charlie Bennett. Where most people would say, “Gee, that’s weird,” he sees a system of rules that he can exploit to create things few others could ever imagine, almost as if he was playing the George Clooney character in Ocean’s 11. But instead of scamming a casino, he’s gaming the universe for our benefit.

“Charlie is one of the deepest thinkers I know,” says IBM’s Heike Riel. “Today we can see that those theoretical concepts have come to fruition. We are on the path to a truly practical quantum computer, which, when it’s built, will be one of the greatest milestones not just for the IBM company, but in the history of information technology.”

So we now find ourselves in something much like those innocent days before 1952, when few could imagine that something like UNIVAC could outsmart a team of human experts. In a decade or two, we’ll most likely have to explain to a new generation what it was like to live in a world without quantum computers, before the new era began.

Source: Innovationexcellence.com

Expeditions AR brings volcanoes and DNA molecules to the classroom


Google’s popular education-focused Expeditions program has allowed over two million students to immerse themselves in new environments and get a close look at monuments and other items of interest using Cardboard VR headsets. Now the program is moving from virtual to augmented reality.

Expeditions AR uses Tango-compatible smartphones like the Lenovo Phab 2 Pro to put the study subjects directly in the classroom.

Launching this fall through Google’s Pioneer Program, Expeditions AR will let users point their AR-ready devices at specific points in the classroom and find volcanoes, the Statue of David, DNA molecules, and more awaiting them. The objects are fully interactive; Google’s demo video shows a volcano erupting, billowing out smoke and lava.

Much like the original Expeditions for VR, Expeditions AR looks to be an exciting new project that will undoubtedly get students more excited and involved in their studies.

Source: 9to5google.com

Our machines now have knowledge we’ll never understand

The new availability of huge amounts of data, along with the statistical tools to crunch these numbers, offers a whole new way of understanding the world. Correlation supersedes causation, and science can advance even without coherent models, unified theories, or really any mechanistic explanation at all.


So wrote Wired’s Chris Anderson in 2008. It kicked up a little storm at the time, as Anderson, the magazine’s editor, undoubtedly intended. For example, an article in a journal of molecular biology asked, “…if we stop looking for models and hypotheses, are we still really doing science?” The answer clearly was supposed to be: “No.”

But today — not even a decade since Anderson’s article — the controversy sounds quaint. Advances in computer software, enabled by our newly capacious, networked hardware, are enabling computers not only to start without models — rule sets that express how the elements of a system affect one another — but to generate their own, albeit ones that may not look much like what humans would create. It’s even becoming a standard method, as any self-respecting tech company has now adopted a “machine-learning first” ethic.

We are increasingly relying on machines that derive conclusions from models that they themselves have created, models that are often beyond human comprehension, models that “think” about the world differently than we do.

But this comes with a price. This infusion of alien intelligence is bringing into question the assumptions embedded in our long Western tradition. We thought knowledge was about finding the order hidden in the chaos. We thought it was about simplifying the world. It looks like we were wrong. Knowing the world may require giving up on understanding it.

Models Beyond Understanding

In a series on machine learning, Adam Geitgey explains the basics, from which this new way of “thinking” is emerging:

[T]here are generic algorithms that can tell you something interesting about a set of data without you having to write any custom code specific to the problem. Instead of writing code, you feed data to the generic algorithm and it builds its own logic based on the data.

For example, you give a machine learning system thousands of scans of sloppy, handwritten 8s and it will learn to identify 8s in a new scan. It does so, not by deriving a recognizable rule, such as “An 8 is two circles stacked vertically,” but by looking for complex patterns of darker and lighter pixels, expressed as matrices of numbers — a task that would stymie humans. In a recent agricultural example, the same technique of numerical patterns taught a computer how to sort cucumbers.
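
As a rough illustration of what Geitgey means (my own toy example, using scikit-learn’s small built-in digits dataset rather than the handwritten-8 scans he describes), the “generic algorithm” below is never told what an 8 looks like; it is only handed labeled pixel values and builds its own decision logic from them:

    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    digits = load_digits()                        # 8x8 grayscale scans, flattened into 64 numbers each
    is_eight = (digits.target == 8)               # the task: is this scan an 8 or not?

    X_train, X_test, y_train, y_test = train_test_split(
        digits.data, is_eight, test_size=0.25, random_state=0)

    model = LogisticRegression(max_iter=1000)     # a generic algorithm with no digit-specific code
    model.fit(X_train, y_train)                   # "builds its own logic based on the data"
    print(model.score(X_test, y_test))            # accuracy on scans it has never seen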

Then you can take machine learning further by creating an artificial neural network that models in software how the human brain processes signals. Nodes in an irregular mesh turn on or off depending on the data coming to them from the nodes connected to them; those connections have different weights, so some are more likely to flip their neighbors than others. Although artificial neural networks date back to the 1950s, they are truly coming into their own only now because of advances in computing power, storage, and mathematics. The results from this increasingly sophisticated branch of computer science can be deep learning that produces outcomes based on so many different variables, under so many different conditions, being transformed by so many layers of neural networks that humans simply cannot comprehend the model the computer has built for itself.
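
A toy sketch of that weighted-connection idea (my own illustration, nothing like a production network): each node takes a weighted sum of its inputs and responds only if the total clears zero, and deep learning stacks many such layers with millions of learned weights.

    import numpy as np

    def layer(inputs, weights, biases):
        # One layer of nodes: weighted sum of incoming signals, then a nonlinear response
        # (ReLU), so a node "turns on" only when its weighted input is large enough.
        return np.maximum(0.0, weights @ inputs + biases)

    rng = np.random.default_rng(0)
    x = rng.random(4)                              # four input signals

    w1, b1 = rng.normal(size=(5, 4)), np.zeros(5)  # weights are random here; training would tune them
    w2, b2 = rng.normal(size=(2, 5)), np.zeros(2)

    hidden = layer(x, w1, b1)
    output = layer(hidden, w2, b2)
    print(output)                                  # the network's response to this input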

Yet it works. It’s how Google’s AlphaGo program came to defeat the third-highest ranked Go player in the world. Programming a machine to play Go is more than a little more daunting than sorting cukes, given that the game has 10^350 possible moves; there are 10^123 possible moves in chess, and 10^80 atoms in the universe. Google’s hardware wasn’t even as ridiculously overpowered as it might have been: It had only 48 processors, plus eight graphics processors that happen to be well-suited for the required calculations.

AlphaGo was trained on thirty million board positions that occurred in 160,000 real-life games, noting the moves taken by actual players, along with an understanding of what constitutes a legal move and some other basics of play. Using deep learning techniques that refine the patterns recognized by the layer of the neural network above it, the system trained itself on which moves were most likely to succeed.

Although AlphaGo has proven itself to be a world-class player, it can’t spit out practical maxims from which a human player can learn. The program works not by developing generalized rules of play — e.g., “Never have more than four sets of unconnected stones on the board” — but by analyzing which play has the best chance of succeeding given a precise board configuration. In contrast, Deep Blue, the dedicated IBM chess-playing computer, was programmed with some general principles of good play. As Christof Koch writes in Scientific American, AlphaGo’s intelligence is in the weights of all those billions of connections among its simulated neurons. It creates a model that enables it to make decisions, but that model is ineffably complex and conditional. Nothing emerges from this mass of contingencies, except victory against humans.

As a consequence, if you, with your puny human brain, want to understand why AlphaGo chose a particular move, the “explanation” may well consist of the networks of weighted connections that then pass their outcomes to the next layer of the neural network. Your brain can’t remember all those weights, and even if it could, it couldn’t then perform the calculation that resulted in the next state of the neural network. And even if it could, you would have learned nothing about how to play Go, or, in truth, how AlphaGo plays Go—just as internalizing a schematic of the neural states of a human player would not constitute understanding how she came to make any particular move.

Go is just a game, so it may not seem to matter that we can’t follow AlphaGo’s decision path. But what do we say about the neural networks that are enabling us to analyze the interactions of genes in two-locus genetic diseases? How about the use of neural networks to discriminate the decay pattern of single and multiple particles at the Large Hadron Collider? Or the use of machine learning to help identify which of the 20 climate change models tracked by the Intergovernmental Panel on Climate Change is most accurate at any point? Such machines give us good results — for example: “Congratulations! You just found a Higgs boson!” — but we cannot follow their “reasoning.”

Clearly our computers have surpassed us in their power to discriminate, find patterns, and draw conclusions. That’s one reason we use them. Rather than reducing phenomena to fit a relatively simple model, we can now let our computers make models as big as they need to. But this also seems to mean that what we know depends upon the output of machines the functioning of which we cannot follow, explain, or understand.

Since we first started carving notches in sticks, we have used things in the world to help us to know that world. But never before have we relied on things that did not mirror human patterns of reasoning — we knew what each notch represented — and that we could not later check to see how our non-sentient partners in knowing came up with those answers. If knowing has always entailed being able to explain and justify our true beliefs — Plato’s notion, which has persisted for over two thousand years — what are we to make of a new type of knowledge, in which that task of justification is not just difficult or daunting but impossible?

Source: backchannel.com

Artificial intelligence prevails at predicting Supreme Court decisions


“See you in the Supreme Court!” President Donald Trump tweeted last week, responding to lower court holds on his national security policies. But is taking cases all the way to the highest court in the land a good idea? Artificial intelligence may soon have the answer. A new study shows that computers can do a better job than legal scholars at predicting Supreme Court decisions, even with less information.

Several other studies have guessed at justices’ behavior with algorithms. A 2011 project, for example, used the votes of any eight justices from 1953 to 2004 to predict the vote of the ninth in those same cases, with 83% accuracy. A 2004 paper tried seeing into the future, by using decisions from the nine justices who’d been on the court since 1994 to predict the outcomes of cases in the 2002 term. That method had an accuracy of 75%.

The new study draws on a much richer set of data to predict the behavior of any set of justices at any time. Researchers used the Supreme Court Database, which contains information on cases dating back to 1791, to build a general algorithm for predicting any justice’s vote at any time. They drew on 16 features of each vote, including the justice, the term, the issue, and the court of origin. Researchers also added other factors, such as whether oral arguments were heard.

For each year from 1816 to 2015, the team created a machine-learning statistical model called a random forest. It looked at all prior years and found associations between case features and decision outcomes. Decision outcomes included whether the court reversed a lower court’s decision and how each justice voted. The model then looked at the features of each case for that year and predicted decision outcomes. Finally, the algorithm was fed information about the outcomes, which allowed it to update its strategy and move on to the next year.
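
A schematic of that year-by-year procedure might look like the sketch below (my own simplification with hypothetical file and column names, not the authors’ code; it assumes the case features have already been numerically encoded, as they are in the Supreme Court Database):

    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier

    # Hypothetical extract: one row per justice vote, with numerically coded case features
    votes = pd.read_csv("scdb_votes.csv")
    features = ["justice", "issue_area", "court_of_origin", "oral_argument_heard"]

    accuracies = []
    for term in range(1816, 2016):
        train = votes[votes["term"] < term]        # everything the model could have known before this term
        test = votes[votes["term"] == term]        # the term it must predict
        if train.empty or test.empty:
            continue
        model = RandomForestClassifier(n_estimators=200, random_state=0)
        model.fit(train[features], train["vote_to_reverse"])
        accuracies.append((model.predict(test[features]) == test["vote_to_reverse"]).mean())
        # after scoring, this term's actual outcomes simply become training data for the next loop

    print(sum(accuracies) / len(accuracies))       # out-of-sample accuracy, averaged over two centuries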

From 1816 until 2015, the algorithm correctly predicted 70.2% of the court’s 28,000 decisions and 71.9% of the justices’ 240,000 votes, the authors report in PLOS ONE. That bests the popular betting strategy of “always guess reverse,” since the court has reversed lower court decisions in 63% of cases over the last 35 terms. It’s also better than another strategy that uses rulings from the previous 10 years to automatically go with a “reverse” or an “affirm” prediction. Even knowledgeable legal experts are only about 66% accurate at predicting cases, the 2004 study found. “Every time we’ve kept score, it hasn’t been a terribly pretty picture for humans,” says the study’s lead author, Daniel Katz, a law professor at Illinois Institute of Technology in Chicago.

Roger Guimerà, a physicist at Rovira i Virgili University in Tarragona, Spain, and lead author of the 2011 study, says the new algorithm “is rigorous and well done.” Andrew Martin, a political scientist at the University of Michigan in Ann Arbor and an author of the 2004 study, commends the new team for producing an algorithm that works well over 2 centuries. “They’re curating really large data sets and using state-of-the-art methods,” he says. “That’s scientifically really important.”

Outside the lab, bankers and lawyers might put the new algorithm to practical use. Investors could bet on companies that might benefit from a likely ruling. And appellants could decide whether to take a case to the Supreme Court based on their chances of winning. “The lawyers who typically argue these cases are not exactly bargain basement priced,” Katz says.

Attorneys might also plug different variables into the model to forge their best path to a Supreme Court victory, including which lower court circuits are likely to rule in their favor, or the best type of plaintiff for a case. Michael Bommarito, a researcher at Chicago-Kent College of Law and study co-author, offers a real example in National Federation of Independent Business v. Sebelius, in which the Affordable Care Act was on the line: “One of the things that made that really interesting was: Was it about free speech, was it about taxation, was it about some kind of health rights issues?” The algorithm might have helped the plaintiffs decide which issue to highlight.

Future extensions of the algorithm could include the full text of oral arguments or even expert predictions. According to Katz: “We believe the blend of experts, crowds, and algorithms is the secret sauce for the whole thing.”

Source: Science Magazine

Why Big Data Will Revolutionize B2B Marketing Strategies

B2B, or business-to-business marketing, involves selling a company’s services or products to another company. Consumer marketing and B2B marketing are really not that different. Basically, B2B uses the same principles to market its product, but the execution is a little different. B2B buyers make their purchases based mainly on price and profit potential, while consumers make their purchases based on emotional triggers, status, popularity, and price. B2B is a large industry.

The fact that more than 50 percent of all economic activity in the United States is made up of purchases by institutions, government agencies, and businesses gives you a sense of the size of this industry. Technological advancements and the internet have given B2Bs new ways to make sense of their big data, learn about prospects, and improve their conversion rates. Innovations such as marketing automation platforms and marketing technology — sometimes referred to as ‘martech’ — will revolutionize the way B2B companies market their products. They will be able to deliver mass personalization and nurture leads through the buyer’s journey.

In the next few years, these firms will be spending 73% more on marketing analytics. What does this mean for B2B marketing? The effects of new technology on B2B marketing will be more pronounced in some key areas. These are:

Lead Generation

In the old days, businesses had to spend fortunes on industry reports and market research to find out how, and to whom, to market their products. They had to build their marketing efforts based on what their existing customer base seemed to like. However, growing access to technology and analytics has made revenue attribution and lead nurturing a predictable, measurable, and more structured process. While demand generation is an abstraction or a form of art (depending largely on who you ask), lead generation is a repeatable, scientific process. This means less guesswork and more revenue.

Small Businesses

Thanks to the SaaS (software-as-a-service) revolution, technologies once only available to elite firms — revenue reporting, real-time web analytics, and marketing automation — are now accessible and affordable to businesses of all sizes. Instead of attempting to build economies of scale, smaller businesses are using the power of these innovations to give their bigger competitors a run for their money. With SaaS, small businesses can now narrow their approaches and zero in on key accounts.

In the context of business to business marketing, this means that instead of trying to attract unqualified, uncommitted top-tier leads, these companies will go after matched stakeholders and accounts and earn their loyalty by providing exceptional customer experiences.

Data Analytics

A few years ago, data was the most underutilized asset in the hands of a marketer. That has since changed. Marketers are quickly coming to the realization that when it comes to their trade, big data is now more valuable than ever — for measuring results, targeting prospects, and improving campaigns — and are in search of more ways to exploit it. B2B marketing is laden with new tools that capitalize on data points. These firms use data scraping techniques and tools to customize their sites for their target audiences. Businesses can even use predictive lead scoring to gauge how leads will perform in the future. Apache Kafka, for example, provides a distributed streaming platform for building real-time data pipelines and streaming applications.

Revenue

It has always been hard for firms to calculate their return on marketing investment (ROMI). The integration of marketing automation and CRM, however, has made it easier for B2Bs to track and measure marketing campaign efforts through revenue marketing.

Technological advancements are opening exciting opportunities across the B2B industry. To exploit this technology and gain a competitive edge, companies have to stay up to date. The risk involved is minimal, so these firms have little to worry about.

Source: Innovation Management

The battle to build chips for the AI boom is about to get serious


This month, MIT Technology Review analyzes the AI boom. As machine learning has blossomed, the technique has become the hot ticket for businesses keen to innovate (or at least to sound like they plan to). That’s proven to be good news for anyone building hardware that runs AI software — and until now, that has really meant Nvidia, which happily found that the graphics processors it had been making for years were surprisingly well-suited to crunching AI problems. But our own Tom Simonite explains that Nvidia’s dominance may be about to slide.