Apple’s AR platform: These demos show what ARKit can do in iOS 11

 


Apple sees a lot of potential in augmented reality.

Ever since Pokemon Go exploded in popularity last summer and subsequently revived interest in both Apple’s App Store and mobile gaming, Apple has said several times that it is embracing the technology, which is commonly called AR, especially now that it offers the ARKit platform. Here’s everything you need to know about ARKit, including what it can do and examples of its power in action.

What is AR?

Augmented reality isn’t a new technology, but Apple is now jumping into AR, so everyone’s been talking about it. While virtual reality immerses you in a space, essentially replacing everything you see in the physical world, AR takes the world around you and adds virtual objects to it. You can look through your phone’s camera, for instance, and see a Pokémon standing in your living room.

What is Apple ARKit?

With iOS 11, which debuted at WWDC 2017, Apple is officially acknowledging AR. It has introduced the ARKit development platform, allowing app developers to quickly and easily build AR experiences into their apps and games. It will launch alongside iOS 11 this autumn. When it’s finally live, it’ll use your iOS device’s camera, processors, and motion sensors to create some immersive interactions.

It also uses a technique called visual-inertial odometry to track the world around your iPad or iPhone, which lets the device sense how it moves through a room. ARKit uses that data to analyse a room’s layout, detect horizontal planes such as tables and floors, and place virtual objects on those surfaces in your physical room.
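
To make the idea concrete, here is a toy sketch in Python of how a horizontal plane might be inferred from a cloud of tracked feature points by clustering their heights. It is purely illustrative, with synthetic data and an invented helper function; it is not Apple’s algorithm, and real ARKit development happens in Swift or Objective-C against Apple’s APIs.

```python
# Toy illustration (not Apple's algorithm): finding a dominant horizontal
# plane from a cloud of tracked 3-D feature points, the rough idea behind
# horizontal-plane detection.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic feature points: a tabletop at height 0.75 m plus background clutter.
table = np.column_stack([rng.uniform(-0.5, 0.5, 300),
                         np.full(300, 0.75) + rng.normal(0, 0.005, 300),
                         rng.uniform(-0.5, 0.5, 300)])
clutter = rng.uniform(-1, 2, (200, 3))
points = np.vstack([table, clutter])

def dominant_horizontal_plane(points, bin_size=0.02):
    """Return the height (y) at which the most feature points cluster."""
    heights = points[:, 1]
    bins = np.arange(heights.min(), heights.max() + bin_size, bin_size)
    counts, edges = np.histogram(heights, bins=bins)
    best = np.argmax(counts)
    return (edges[best] + edges[best + 1]) / 2

print(f"Estimated plane height: {dominant_horizontal_plane(points):.2f} m")
```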

What’s the point of ARKit?

Developers are free to create all kinds of experiences using ARKit, some of which are already being shown off on Twitter. IKEA has even announced that it is developing an AR app built on ARKit that will let customers preview IKEA products in their own homes before making a purchase. IKEA said that Apple’s new platform will allow AR to “play a key role” in new product lines.

That last bit is key. For Apple, ARKit opens up an entirely new category of apps that would run on every iPhone and iPad. It essentially wants to recreate and multiply the success of Pokemon Go. Plus, it opens up many long-term possibilities. The company is rumoured to be working on an AR headset, for instance. Imagine wearing Apple AR glasses capable of augmenting your world every day.

Does ARKit face any competition?

Let’s also not forget that ARKit allows Apple to compete with Microsoft’s HoloLens and Google’s Tango AR platform. But while HoloLens and Tango are designed to be aware of entire physical spaces and all of the shapes contained within them, ARKit is more about detecting flat surfaces and drawing on them. In other words, it’s more limited, but we’re still in early-days territory right now.

We actually think ARKit’s capabilities, as of July 2017, remind us of the AR effects found inside Snapchat or even the Facebook Camera app. Apple’s AR platform will likely improve as we move closer to the launch of iOS 11, however.

Which iOS devices can handle ARKit apps?

Any iPhone or iPad capable of running iOS 11 will be able to install ARKit apps. However, we’re assuming newer devices will handle the apps better. For instance, the new 10.5-inch and 12.9-inch iPad Pro tablets that debuted during WWDC 2017 have bumped-up display refresh rates of 120Hz, which means what you see through the camera should look much smoother and more impressive on those devices.

How do you get started with ARKit?

If you’re interested in building ARKit apps for iOS 11, go to the Apple Developer site, which has forums for building AR apps and beta downloads. If you’re a consumer who is just excited to play, you can go get the new iPad Pro and install the iOS 11 public beta to try out some of the early demos for AR. Otherwise, wait for iOS 11 to officially release alongside new AR apps in the App Store.

Source: pocket-lint.com


3 Technologies You Need To Start Paying Attention To Right Now


At any given time, a technology or two captures the zeitgeist. A few years ago it was social media and mobile that everybody was talking about. These days it’s machine learning and blockchain. Everywhere you look, consulting firms are issuing reports, conferences are being held and new “experts” are being anointed.

In a sense, there’s nothing wrong with that. Social media and mobile computing really did change the world and, clearly, the impact of artificial intelligence and distributed database architectures will be substantial. Every enterprise needs to understand these technologies and how they will impact its business.

Still, we need to remember that we always get disrupted by what we can’t see. The truth is that the next big thing always starts out looking like nothing at all. That’s why it’s so disruptive: if we saw it coming, it wouldn’t be. So here are three technologies you may not have heard of, but that you should start paying attention to. The fate of your business may depend on it.

1. New Computing Architectures

In the April 19, 1965, issue of Electronics, Intel co-founder Gordon Moore published an article observing that the number of transistors on a silicon chip was doubling roughly every two years. Over the past half century, that consistent doubling of computing power, now known as Moore’s Law, has driven the digital revolution.
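
As a rough back-of-the-envelope illustration of what that doubling rate implies (the starting chip and the exact two-year period below are simplifying assumptions, not figures from Moore’s article):

```python
# Rough illustration of Moore's Law: transistor count doubling every two years.
# The starting count and the exact two-year period are simplifying assumptions.
def transistors(start_count, years, doubling_period_years=2):
    return start_count * 2 ** (years / doubling_period_years)

# A chip with roughly 2,300 transistors (about the Intel 4004 of 1971),
# doubled every two years for 50 years:
print(f"{transistors(2_300, 50):,.0f} transistors")  # roughly 77 billion
```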

Today, however, that process has slowed, and it will soon come to a complete halt. There are only so many transistors you can cram onto a silicon wafer before subatomic effects come into play and make it impossible for the technology to function. Experts disagree on exactly when this will happen, but it’s pretty clear that it will be sometime within the next five years.

There are, of course, a number of ways to improve chip performance other than increasing the number of transistors, such as FPGAs, ASICs and 3D stacking. Yet those are merely stopgaps and are unlikely to take us more than a decade or so into the future. To continue to advance technology over the next 50 years, we need fundamentally new architectures like quantum computing and neuromorphic chips.

The good news is that these architectures are very advanced in their development and we should start seeing a commercial impact within 5-10 years. The bad news is that, being fundamentally new architectures, nobody really knows how to use them yet. We are, in a sense, back to the early days of computing, with tons of potential but little idea how to actualize it.

2. Genetic Engineering

While computer scientists have been developing software languages over the past 50 years, biologists have been trying to understand a far more pervasive kind of code: the genetic code. For the most part, things have gone slowly. Although there has been significant scientific progress, the impact of that advancement has been relatively paltry.

That began to change in 2003 with the completion of the Human Genome Project. For the first time, we began to truly understand how DNA interacts with our biology, which led to other efforts, such as the Cancer Genome Atlas, as well as tangible advancements in agriculture. Genomics became more than mere scientific inquiry; it became a source of new applications.

Now a new technology called CRISPR is allowing scientists to edit genes at will. In fact, because the technology is simple enough for even amateur biologists to use, we can expect genetic engineering to become much more widespread across industries. Early applications include liquid fuels from sunshine and genomic vaccines.

“CRISPR is accelerating everything we do with genomics,” Megan Hochstrasser of the Innovative Genomics Initiative at Cal Berkeley told me, “from cancer research to engineering disease resistant crops and many other applications that haven’t yet come to the fore. Probably the most exciting aspect is that CRISPR is so cheap and easy to use, it will have a democratizing effect, where more can be done with less. We’re really just getting started.”

3. Materials Science

Traditionally, the way you improved a material to build a product has been a process of trial and error. You changed the ingredients or the process by which you made it and saw what happened. For example, at some point a medieval blacksmith figured out that annealing iron would make better swords.

Today, coming up with better materials is a multi-billion-dollar business. Consider the challenges that Boeing faced when designing its new Dreamliner. How do you significantly increase the performance of an airplane, a decades-old technology? Yet by discovering new composite materials, the company was able to reduce weight by 40,000 pounds and cut fuel use by 20%.

With this in mind, the Materials Genome Initiative is building databases of material properties such as strength and density, along with computer models that predict which processes will achieve the qualities a manufacturer is looking for. As a government program, it is also able to make the data widely available to anyone who wants to use it, not just billion-dollar companies like Boeing.

“Our goal is to speed up the development of new materials by making clear the relationship between materials, how they are processed and what properties are likely to result,” Jim Warren, Director of the Materials Genome program told me. “My hope is that the Materials Genome will accelerate innovation in just about every industry America competes in.”

It’s Better To Prepare Than Adapt

For the past few decades, great emphasis has been put on agility and adaptation. When a new technology, like social media, mobile computing or artificial intelligence, begins to disrupt the marketplace, firms rush to figure out what it means and adapt their strategies accordingly. If they could do that a bit faster than the competition, they would win.

Today, however, we’re entering a new era of innovation that will look much more like the 50s and 60s than it will the 90s and aughts. The central challenge will no longer be to dream up new applications based on improved versions of old technologies, but to understand fundamentally new paradigms.

That’s why over the next few decades, it will be more important to prepare than adapt. How will you work with new computing architectures? How will fast, cheap genetic engineering affect your industry? What should you be doing to explore new materials that can significantly increase performance and lower costs? These are just some of the questions we will grapple with.

Not all who wander are lost. The challenge is to wander with purpose.

Source: Digital Tonto

AI Models For Investing: Lessons From Petroleum (And Edgar Allan Poe)


A decade ago, at a New York conference, an analyst put up slides showing his model of the short-term oil price (variables like inventories, production and demand trends, and so forth). I turned to the colleague next to me and said, “I just want to ask him, ‘How old are you?’” I had worked on a computer model of the world oil market beginning in 1977, when the model was run from a remote terminal and the output had to be picked up on the other side of campus. (Yes, by dinosaurs.) Although I haven’t done formal modeling in recent years, my experiences might provide some insight into the current fashion for using computer models in investing (among other things).

About two centuries ago, Baron von Maelzel toured the U.S. with an amazing clockwork automaton (invented by Baron Kempelen): a chess-playing “Turk” in the form of a mannequin at a desk with a chess board. The mannequin was dressed up as a Turk, reflecting perceptions at the time of Turks’ superior wisdom. The automaton could not only play chess very well but also solve problems presented to it that experts found difficult. Viewers were amazed, given the complexity of chess, and its level of play would not be matched by computers for nearly two centuries. None of the Turk’s observers could initially explain the mechanism by which such feats were performed.

This is reminiscent of the 1970s, when Uri Geller claimed to have paranormal abilities, which physicists from SRI found they could not explain. That was because he was not performing acts of physics but sleight of hand, as demonstrated by the Amazing Randi, who was not a scientist but rather an expert in that craft. (Similarly, peak oil advocates are often amazed by techniques performed by scientists that are actually statistical in nature—and done wrong.)

Edgar Allan Poe considered the case and proved to be the Amazing Randi of his day. The chess-playing Turk was the result of “the wonderful mechanical genius of Baron Kempelen [that] could invent the necessary means for shutting a door or slipping aside a panel with a human agent too at his service…” in Poe’s words. He noted that the Baron would open one panel on the desk, show no one behind it, close it and open the other, again revealing no human agent; but this is just a standard magician’s trick, where the subject simply moves from one side to the other. Indeed, others claimed to have seen a chess player exit the desk after the audience had left.

Computer models often fall into this category. No matter how scientific and objective they appear, there is always a human agent behind them. In oil market modeling in the 1970s, this took the form of the price mechanism. NYU Professor Dermot Gately had suggested that prices moved according to capacity utilization in OPEC, as in a figure later used by the Energy Information Administration, among many others: if utilization was above 80%, prices would rise sharply; below 80%, they would taper off.
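
A minimal sketch of that kind of price-response rule, using an invented functional form and made-up coefficients rather than Gately’s actual estimated equation:

```python
# Illustrative capacity-utilization price rule (made-up coefficients, not
# Gately's estimated equation): prices drift when there is spare capacity
# and rise sharply once utilization climbs above roughly 80%.
def annual_price_change(utilization, threshold=0.80):
    """Fractional oil-price change as a function of OPEC capacity utilization."""
    if utilization <= threshold:
        return -0.05 * (threshold - utilization)      # gentle easing when slack
    return 30 * (utilization - threshold) ** 2        # sharp rise when tight

for u in (0.70, 0.80, 0.90, 0.95):
    print(f"utilization {u:.0%}: price change {annual_price_change(u):+.1%}")
```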

This made sense, given that many industries use a similar conceptual model to predict inflation: high utilization in the steel industry results in higher steel prices, and so on. And the model certainly seemed to fit the existing data.

At least until 1986. After 1985, the data points no longer fit the curve; for the last two years the model was well off, and the EIA stopped publishing the figure after 1987, although it continued to use the formula for some time to come.

What had become obscured by the supposed success of the formula was that it was intended to explain short-term price changes. High steel capacity utilization would mean higher steel prices, but would also lead to investment and more capacity, so that prices would stabilize and even drop.

But oil models couldn’t capture this, because much of the capacity was in OPEC and it was assumed that OPEC would not necessarily invest in response to higher prices. Instead, the programmer had to choose numbers for OPEC’s future capacity and input them into the machine, meaning the programmer had control over the price forecast by simply modifying the capacity numbers. Despite the ‘scientific’ appearance of the computer model, there really was a man in the machine making the moves.

People have long sought to reduce the influence of fallible humans, whether by replacing workers with machines or by putting control of our nuclear weapons in the hands of Colossus, a giant computer that would avoid an accidental nuclear war (in the 1970 movie Colossus: The Forbin Project, fourteen years before Terminator’s Skynet). This ignores the fact that there is always a human element, even if only in the design.

Without any expertise in the field of artificial intelligence, it nonetheless seems to me that AI trading programs might learn, but won’t they learn what they are taught to do? Won’t this simply be an extension of the algorithms already used by others in the financial world, whose core is simply a comparison of current data with historical data and trends?

And this, after all, is what led to the financial meltdown described so aptly in When Genius Failed, the story of Long Term Capital Management and the way it nearly crashed the world economy. Recognizing patterns of behavior preceding an OPEC meeting, such as the way prices move in response to comments by member country ministers, can be useful, but will novel cases such as the SARS epidemic or the 2008 financial crisis catch the programs flat-footed, possibly triggering massive losses?

The answer, as it often does, comes down to gearing. LTCM’s model failed, but the real problem was the huge amount of money it had at risk, far exceeding its capital. For a few small traders to use AI programs, or for an investment bank to risk a fraction of its commodities funds, would not be a concern. But if such programs become widespread, and they are all drawing the same conclusions from historical data, could there be a huge amount of money making the same bet?
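
A back-of-the-envelope illustration of why the gearing, rather than the model itself, is the danger; the capital, leverage ratios and price move below are hypothetical numbers, not LTCM’s actual figures:

```python
# Hypothetical numbers illustrating how leverage turns a small adverse move
# into a wipe-out; these are not LTCM's actual figures.
def loss_as_share_of_capital(capital, leverage, adverse_move):
    position = capital * leverage
    return position * adverse_move / capital

capital = 5_000_000_000          # $5bn of equity (hypothetical)
for leverage in (1, 5, 25):
    loss = loss_as_share_of_capital(capital, leverage, adverse_move=0.04)
    print(f"{leverage:>2}x leverage, 4% adverse move -> {loss:.0%} of capital lost")
```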

For individuals, of course, the answer is to diversify, one of the first investing lessons. I wonder how many AI programs will practice the same.

Source: Forbes

Want to understand your DNA? There’s an app for that

Helix will sequence your genes for $80 and lure app developers to sell you access to different parts of it.

A Silicon Valley startup called Helix is betting on the notion that not only do people want to learn more about their DNA, but they’ll also pay to keep interacting with it.

Today the company, which was founded in 2015 with $100 million from genomics giant Illumina, is launching its much-anticipated online hub where people can digitally explore their genetic code by downloading different applications on their computers or mobile devices. Think of it as an app store for your genome.

Personalized genetic information has become an affordable commodity. The early success of leaders like 23andMe and AncestryDNA, which sell DNA testing kits for $200 or less, has ushered in a wave of new companies offering direct-to-consumer genetic tests for everything from ancestry to the wine you should drink based on your DNA.

Most of these genetic testing kits are one-time deals. You spit in a tube, and your saliva is sent off to a lab to be analyzed. A few weeks later you get a long, detailed report of your genetic makeup. Helix CEO Robin Thurston says all that information can be daunting, and most people don’t come back to the data again and again.

With Helix, people will be able to choose the things about their genome they want to learn about. For an initial $80, Helix sequences the most important part of the genome—about 20,000 genes plus some other bits—called the exome. That information is digitized and stored by Helix, which doles out pieces of the information to companies selling other apps through Helix. “It’s our goal that someone will have a lifelong relationship with their DNA data,” Thurston says.

Other direct-to-consumer testing companies like 23andMe and AncestryDNA use a technology called genotyping to analyze a customer’s genes. Helix uses a more detailed method known as DNA sequencing, which yields about 100 times more information. So far, most people who have gotten exome sequencing, which can cost several hundred to more than a thousand dollars elsewhere, have been patients with rare or unknown medical conditions who hope their genes can provide more answers. Exome sequencing for healthy people is a new, untapped market.

From the consumer side, people will have to get their genes sequenced only once; after that, they can choose from different apps in categories like ancestry, fitness, health, and nutrition and pay as they go. About a dozen companies are debuting apps on Helix today, and each app is designed to tell you something different about your genome. Some are more medically relevant, like those that estimate risk for inherited cholesterol and heart problems, test for food sensitivity, or check to see if you could pass a serious genetic condition on to your child. Only the apps people buy will have access to their personal information.

One company, Exploragen, says it can tell you about your sleep patterns—like whether you’re a morning person or a night owl—just by looking at your DNA (in case you needed help knowing that one). Another company, Dot One, will examine the tiny portion of your genes that makes you different from everyone else and print that unique code onto a customized fabric scarf (because, why not?).

A third company, Insitome, has an app that will determine what percentage of your DNA you inherited from Neanderthals and what traits you inherited from them. Insitome CEO Spencer Wells says this initial app will cost $30.

Wells, who previously led the National Geographic Society’s Genographic Project, which mapped human migration throughout history by analyzing people’s DNA samples, says he likes the idea of Helix’s platform because it means that companies can develop additional apps as new scientific discoveries are made about the human genome.

Helix has also managed to attract major medical institutions like the Mayo Clinic and Mount Sinai Health System to develop apps for its store. Eventually, Thurston wants to offer hundreds of apps. He estimates the average customer will buy three to five apps each.

But having access to all these DNA apps might not be a good thing for consumers. Daniel MacArthur, a scientist at Massachusetts General Hospital and Harvard Medical School who studies the human genome, says there’s a danger associated with mixing medically serious tests, such as disease carrier testing, with a range of lifestyle, nutrition, and wellness tests that have little scientific evidence to support them.

“Promoting tests with little or no scientific backing runs the risk of inflating customer expectations and ultimately undermining consumer confidence in genuinely clinically useful genetic tests,” he says.

Direct-to-consumer genetic tests, including ones that claim to predict disease risk, are loosely regulated in the U.S. That worries Stephen Montgomery, a geneticist at Stanford University, who says the Helix platform creates a bigger opportunity for companies to develop products that don’t provide much value to people.

“Helix will have to think very carefully about what apps to allow on the platform,” he says. The average customer probably can’t discern which products are based on sound science from those that aren’t, so he hopes Helix will have some way of evaluating the quality of information the apps provide.

Source: MIT Technology Review

Machine Learning and Prediction in Medicine — Beyond the Peak of Inflated Expectations

Big data, we have all heard, promise to transform health care with the widespread capture of electronic health records and high-volume data streams from sources ranging from insurance claims and registries to personal genomics and biosensors. Artificial-intelligence and machine-learning predictive algorithms, which can already automatically drive cars, recognize spoken language, and detect credit card fraud, are the keys to unlocking the data that can precisely inform real-time decisions. But in the “hype cycle” of emerging technologies, machine learning now rides atop the “peak of inflated expectations.”

Prediction is not new to medicine. From risk scores to guide anticoagulation (CHADS2) and the use of cholesterol medications (ASCVD) to risk stratification of patients in the intensive care unit (APACHE), data-driven clinical predictions are routine in medical practice. In combination with modern machine learning, clinical data sources enable us to rapidly generate prediction models for thousands of similar clinical questions. From early-warning systems for sepsis to superhuman imaging diagnostics, the potential applicability of these approaches is substantial.
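
For readers unfamiliar with how simple such scores can be, here is a minimal sketch of an additive risk score in the spirit of CHADS2 (one point each for congestive heart failure, hypertension, age 75 or older, and diabetes, and two points for prior stroke or TIA). The function is illustrative only and is not a clinical tool.

```python
# Minimal sketch of an additive clinical risk score in the spirit of CHADS2.
# Illustrative only; not a clinical tool.
def chads2(chf: bool, hypertension: bool, age: int,
           diabetes: bool, prior_stroke_or_tia: bool) -> int:
    score = 0
    score += 1 if chf else 0
    score += 1 if hypertension else 0
    score += 1 if age >= 75 else 0
    score += 1 if diabetes else 0
    score += 2 if prior_stroke_or_tia else 0
    return score

# Example: a hypertensive 78-year-old with a prior TIA scores 4.
print(chads2(chf=False, hypertension=True, age=78,
             diabetes=False, prior_stroke_or_tia=True))
```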

Yet there are problems with real-world data sources. Whereas conventional approaches are largely based on data from cohorts that are carefully constructed to mitigate bias, emerging data sources are typically less structured, since they were designed to serve a different purpose (e.g., clinical care and billing). Issues ranging from patient self-selection to confounding by indication to inconsistent availability of outcome data can result in inadvertent bias, and even racial profiling, in machine predictions. Awareness of such challenges may keep the hype from outpacing the hope for how data analytics can improve medical decision making.

Machine-learning methods are particularly suited to predictions based on existing data, but precise predictions about the distant future are often fundamentally impossible. Prognostic models for HER2-positive breast cancer had to be inverted in the face of targeted therapies, and the predicted efficacy of influenza vaccination varies with disease prevalence and community immunization rates. Given that the practice of medicine is constantly evolving in response to new technology, epidemiology, and social phenomena, we will always be chasing a moving target.

The rise and fall of Google Flu Trends reminds us that forecasting an annual event on the basis of 1 year of data is effectively using only a single data point and thus runs into fundamental time-series problems. And if the future will not necessarily resemble the past, simply accumulating masses of data over time has diminishing returns. Research into decision-support algorithms that automatically learn inpatient medical practice patterns from electronic health records reveals that accumulating multiple years of historical data is worse than simply using the most recent year of data. When our goal is learning how medicine should be practiced in the future, the relevance of clinical data decays with an effective “half-life” of about 4 months. To assess the usefulness of prediction models, we must evaluate them not on their ability to recapitulate historical trends, but on their accuracy in predicting future events.
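
To see what a four-month half-life implies, here is an illustrative weighting of training data by age; the exponential form and the weighting scheme are assumptions made for illustration, not the method used in the cited research:

```python
# Illustrative weighting of clinical training data by age, using the ~4-month
# "half-life" of relevance mentioned above. The exponential form is an
# assumption for illustration only.
def relevance_weight(age_in_months: float, half_life_months: float = 4.0) -> float:
    return 0.5 ** (age_in_months / half_life_months)

for age in (0, 4, 12, 24, 36):
    print(f"{age:>2}-month-old data: weight {relevance_weight(age):.3f}")
```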

Although machine-learning algorithms can improve the accuracy of prediction over the use of conventional regression models by capturing complex, nonlinear relationships in the data, no amount of algorithmic finesse or computing power can squeeze out information that is not present. That’s why clinical data alone have relatively limited predictive power for hospital readmissions that may have more to do with social determinants of health.

The apparent solution is to pile on greater varieties of data, including anything from sociodemographics to personal genomics to mobile-sensor readouts to a patient’s credit history and Web-browsing logs. Incorporating the correct data stream can substantially improve predictions, but even with a deterministic (nonrandom) process, chaos theory explains why even simple nonlinear systems cannot be precisely predicted into the distant future. The so-called butterfly effect refers to the future’s extreme sensitivity to initial conditions. Tiny variations, which seem dismissible as trivial rounding errors in measurements, can accumulate into massively different future events. Identical twins with the same observable demographic characteristics, lifestyle, medical care, and genetics necessarily generate the same predictions — but can still end up with completely different real outcomes.
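
The butterfly effect is easy to demonstrate with a toy chaotic system. The sketch below uses the logistic map, a standard textbook example unrelated to any clinical model, to show how two starting points differing by one part in a million diverge completely within a few dozen steps:

```python
# Tiny demonstration of sensitivity to initial conditions using the logistic
# map, a standard chaotic toy system (not a model of any clinical process).
def logistic_trajectory(x0, r=3.9, steps=30):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.500000)
b = logistic_trajectory(0.500001)   # differs by one part in a million

for step in (0, 10, 20, 30):
    print(f"step {step:>2}: {a[step]:.6f} vs {b[step]:.6f}")
```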

Though no method can precisely predict the date you will die, for example, that level of precision is generally not necessary for predictions to be useful. By reframing complex phenomena in terms of limited multiple-choice questions (e.g., Will you have a heart attack within 10 years? Are you more or less likely than average to end up back in the hospital within 30 days?), predictive algorithms can operate as diagnostic screening tests to stratify patient populations by risk and inform discrete decision making.

Research continues to improve the accuracy of clinical predictions, but even a perfectly calibrated prediction model may not translate into better clinical care. An accurate prediction of a patient outcome does not tell us what to do if we want to change that outcome — in fact, we cannot even assume that it’s possible to change the predicted outcomes.

Machine-learning approaches are powered by identification of strong, but theory-free, associations in the data. Confounding makes it a substantial leap in causal inference to identify modifiable factors that will actually alter outcomes. It is true, for instance, that palliative care consults and norepinephrine infusions are highly predictive of patient death, but it would be irrational to conclude that stopping either will reduce mortality. Models accurately predict that a patient with heart failure, coronary artery disease, and renal failure is at high risk for postsurgical complications, but they offer no opportunity for reducing that risk (other than forgoing the surgery). Moreover, many such predictions are “highly accurate” mainly for cases whose likely outcome is already obvious to practicing clinicians. The last mile of clinical implementation thus ends up being the far more critical task of predicting events early enough for a relevant intervention to influence care decisions and outcomes.

With machine learning situated at the peak of inflated expectations, we can soften a subsequent crash into a “trough of disillusionment” by fostering a stronger appreciation of the technology’s capabilities and limitations. Before we hold computerized systems (or humans) up against an idealized and unrealizable standard of perfection, let our benchmark be the real-world standards of care whereby doctors grossly misestimate the positive predictive value of screening tests for rare diagnoses, routinely overestimate patient life expectancy by a factor of 3, and deliver care of widely varied intensity in the last 6 months of life.

Although predictive algorithms cannot eliminate medical uncertainty, they already improve allocation of scarce health care resources, helping to avert hospitalization for patients with low-risk pulmonary embolisms (PESI) and fairly prioritizing patients for liver transplantation by means of MELD scores. Early-warning systems that once would have taken years to create can now be rapidly developed and optimized from real-world data, just as deep-learning neural networks routinely yield state-of-the-art image-recognition capabilities previously thought to be impossible.

Whether such artificial-intelligence systems are “smarter” than human practitioners makes for a stimulating debate — but is largely irrelevant. Combining machine-learning software with the best human clinician “hardware” will permit delivery of care that outperforms what either can do alone. Let’s move past the hype cycle and on to the “slope of enlightenment,” where we use every information and data resource to consistently improve our collective health.

Source: The New England Journal of Medicine

As Robots Take Over We Will Need More Innovators


The Hadrian X robot is made by Fastbrick Robotics of Australia. It can lay 1,000 house bricks in an hour; the average human bricklayer lays around 500 bricks a day. We will soon see robots doing much of the standard work in building assembly, with a small number of skilled craftsmen supervising them, applying finishing touches or completing tricky tasks. McDonald’s is trialing a “Create Your Taste” kiosk, an automatic system that lets customers order and collect their own configuration of burger meal with no assistant needed.

But it is not just manual labour that will be affected by the inexorable roll-out of robots, automation and artificial intelligence. The impact will be felt widely across skilled middle-class jobs, including lawyers, accountants, analysts and technicians. In many financial trading centres, traders have already been replaced by algorithms. The world’s first ‘robot lawyer’ is now available in all 50 US states.

The World Economic Forum predicts that robotic automation will result in the net loss of more than 5m jobs across 15 developed nations by 2020. Many think the numbers will be much higher. A report by the consultancy firm PwC found that 30% of jobs were potentially under threat from breakthroughs in artificial intelligence. In some sectors half the jobs could go.

The rise of the robots will lead to an increase in demand for those with the skills to program, maintain and supervise the machines. Most companies will have a Chief Robotics Officer and a department dedicated to automation. However, the human jobs created will be a small fraction of the jobs that the robots replace.

Any job that involves the use of knowledge, analysis and systematic decision making is at risk. Robots can not only absorb a large body of knowledge and rules; they can also adapt and learn on the job.

Where does that leave the displaced humans? The standard answer is education. Policy makers advise that people should retrain into higher-skilled professions. The problem is that most training simply provides more knowledge and skills, which can also be replaced by automation.

“So what jobs can robots not do? Einstein said, ‘Imagination is more important than knowledge.’ It is in the application of imagination that humans have the clear advantage.”

Here are some things which robots do not do well:
1. Ask searching questions.
2. Challenge assumptions about how things are done.
3. Conceive new business models and approaches.
4. Understand and appeal to people’s feelings and emotions.
5. Design humorous, provocative or eye-catching marketing campaigns.
6. Deliberately break the rules.
7. Inspire and motivate people.
8. Set a novel strategy or direction.
9. Do anything spontaneous, entertaining or unexpected.
10. Anticipate future trends and needs.
11. Approach problems from entirely new directions.
12. Imagine a better future.

Let’s leave the routine knowledge jobs to the robots and focus on developing our creative skills. The most successful organisations will be those that combine automation efficiency with ingenious and appealing new initiatives. We will need more imaginative theorists, more lateral thinkers, more people who can question and challenge. We will need more innovators.

Source: innovationexcellence.com

The meaning of life in a world without work

As technology renders jobs obsolete, what will keep us busy? Sapiens author Yuval Noah Harari examines ‘the useless class’ and a new quest for purpose.


Most jobs that exist today might disappear within decades. As artificial intelligence outperforms humans in more and more tasks, it will replace humans in more and more jobs. Many new professions are likely to appear: virtual-world designers, for example. But such professions will probably require more creativity and flexibility, and it is unclear whether 40-year-old unemployed taxi drivers or insurance agents will be able to reinvent themselves as virtual-world designers (try to imagine a virtual world created by an insurance agent!). And even if the ex-insurance agent somehow makes the transition into a virtual-world designer, the pace of progress is such that within another decade he might have to reinvent himself yet again.

The crucial problem isn’t creating new jobs. The crucial problem is creating new jobs that humans perform better than algorithms. Consequently, by 2050 a new class of people might emerge – the useless class. People who are not just unemployed, but unemployable.

The same technology that renders humans useless might also make it feasible to feed and support the unemployable masses through some scheme of universal basic income. The real problem will then be to keep the masses occupied and content. People must engage in purposeful activities, or they go crazy. So what will the useless class do all day?

One answer might be computer games. Economically redundant people might spend increasing amounts of time within 3D virtual reality worlds, which would provide them with far more excitement and emotional engagement than the “real world” outside. This, in fact, is a very old solution. For thousands of years, billions of people have found meaning in playing virtual reality games. In the past, we have called these virtual reality games “religions”.

What is a religion if not a big virtual reality game played by millions of people together? Religions such as Islam and Christianity invent imaginary laws, such as “don’t eat pork”, “repeat the same prayers a set number of times each day”, “don’t have sex with somebody from your own gender” and so forth. These laws exist only in the human imagination. No natural law requires the repetition of magical formulas, and no natural law forbids homosexuality or eating pork. Muslims and Christians go through life trying to gain points in their favorite virtual reality game. If you pray every day, you get points. If you forget to pray, you lose points. If by the end of your life you gain enough points, then after you die you go to the next level of the game (aka heaven).

As religions show us, the virtual reality need not be encased inside an isolated box. Rather, it can be superimposed on the physical reality. In the past this was done with the human imagination and with sacred books, and in the 21st century it can be done with smartphones.

Some time ago I went with my six-year-old nephew Matan to hunt for Pokémon. As we walked down the street, Matan kept looking at his smartphone, which enabled him to spot Pokémon all around us. I didn’t see any Pokémon at all, because I didn’t carry a smartphone. Then we saw two other kids on the street who were hunting the same Pokémon, and we almost got into a fight with them. It struck me how similar the situation was to the conflict between Jews and Muslims over the holy city of Jerusalem. When you look at the objective reality of Jerusalem, all you see are stones and buildings. There is no holiness anywhere. But when you look through the medium of smart books (such as the Bible and the Qur’an), you see holy places and angels everywhere.

The idea of finding meaning in life by playing virtual reality games is of course common not just to religions, but also to secular ideologies and lifestyles. Consumerism too is a virtual reality game. You gain points by acquiring new cars, buying expensive brands and taking vacations abroad, and if you have more points than everybody else, you tell yourself you won the game.

You might object that people really enjoy their cars and vacations. That’s certainly true. But the religious really enjoy praying and performing ceremonies, and my nephew really enjoys hunting Pokémon. In the end, the real action always takes place inside the human brain. Does it matter whether the neurons are stimulated by observing pixels on a computer screen, by looking outside the windows of a Caribbean resort, or by seeing heaven in our mind’s eyes? In all cases, the meaning we ascribe to what we see is generated by our own minds. It is not really “out there”. To the best of our scientific knowledge, human life has no meaning. The meaning of life is always a fictional story created by us humans.

In his groundbreaking essay, Deep Play: Notes on the Balinese Cockfight (1973), the anthropologist Clifford Geertz describes how on the island of Bali, people spent much time and money betting on cockfights. The betting and the fights involved elaborate rituals, and the outcomes had substantial impact on the social, economic and political standing of both players and spectators.

The cockfights were so important to the Balinese that when the Indonesian government declared the practice illegal, people ignored the law and risked arrest and hefty fines. For the Balinese, cockfights were “deep play” – a made-up game that is invested with so much meaning that it becomes reality. A Balinese anthropologist could arguably have written similar essays on football in Argentina or Judaism in Israel.

Indeed, one particularly interesting section of Israeli society provides a unique laboratory for how to live a contented life in a post-work world. In Israel, a significant percentage of ultra-orthodox Jewish men never work. They spend their entire lives studying holy scriptures and performing religious rituals. They and their families don’t starve to death partly because the wives often work, and partly because the government provides them with generous subsidies. Though they usually live in poverty, government support means that they never lack the basic necessities of life.

That’s universal basic income in action. Though they are poor and never work, in survey after survey these ultra-orthodox Jewish men report higher levels of life-satisfaction than any other section of Israeli society. In global surveys of life satisfaction, Israel is almost always at the very top, thanks in part to the contribution of these unemployed deep players.

You don’t need to go all the way to Israel to see the world of post-work. If you have at home a teenage son who likes computer games, you can conduct your own experiment. Provide him with a minimum subsidy of Coke and pizza, and then remove all demands for work and all parental supervision. The likely outcome is that he will remain in his room for days, glued to the screen. He won’t do any homework or housework, will skip school, skip meals and even skip showers and sleep. Yet he is unlikely to suffer from boredom or a sense of purposelessness. At least not in the short term.

Hence virtual realities are likely to be key to providing meaning to the useless class of the post-work world. Maybe these virtual realities will be generated inside computers. Maybe they will be generated outside computers, in the shape of new religions and ideologies. Maybe it will be a combination of the two. The possibilities are endless, and nobody knows for sure what kind of deep plays will engage us in 2050.

In any case, the end of work will not necessarily mean the end of meaning, because meaning is generated by imagining rather than by working. Work is essential for meaning only according to some ideologies and lifestyles. Eighteenth-century English country squires, present-day ultra-orthodox Jews, and children in all cultures and eras have found a lot of interest and meaning in life even without working. People in 2050 will probably be able to play deeper games and to construct more complex virtual worlds than in any previous time in history.

But what about truth? What about reality? Do we really want to live in a world in which billions of people are immersed in fantasies, pursuing make-believe goals and obeying imaginary laws? Well, like it or not, that’s the world we have been living in for thousands of years already.

Source: The Guardian