5 Questions to Assess Digital Transformation at the Enterprise Level


Digital transformation is still one of the business buzzwords of the year. It is estimated that 89% of organizations have digital transformation as a business priority. But if you feel like you’ve come to a standstill in your digital transformation efforts, you are not alone. As many as 84% of digital transformation efforts fail to achieve desired results. And that statistic would likely be higher if we examined only the larger, enterprise-level efforts.

What exactly is digital transformation? According to researchers at MIT Sloan, digital transformation occurs when businesses are focused on integrating digital technologies, such as social, mobile, analytics and cloud, in the service of transforming how their businesses work. The preoccupation with digital transformation makes sense given the pace of change. Richard Foster, at the Yale School of Management, found that the average lifespan of an S&P company dropped from 67 years in the 1920s to 15 years today.

Creating digital products receives a lot of press. For example, the 2017 Ford GT supercar has been advertised as having the dashboard of the future, featuring a state-of-the-art 10-inch digital instrument display that helps reduce driver distraction. Yet, Ford’s share price is down nearly 30% over the past 3 years. On the other hand, the design of the Airbus A380 aircraft had some exciting digital innovations, but Airbus also leveraged big data to improve customer experience, with very positive results on the company’s share price over the past 3 years. GE is another example of a company that has pursued digital transformation to reinvent its own industrial operations through digital technology, and then leveraged those learnings to help its customers do likewise. While the product innovations are sometimes impressive, more than purely product-related innovations are needed for digital transformation at the enterprise level.

There’s no doubt that digital tools, which include social, mobile, analytics and cloud (sometimes referred to by the acronym “SMAC”), create value – but digital transformation at the enterprise level must go beyond just the tools.

Having a transformative purpose or vision and a process-based view is recognized as important. In “Leading Digital,” the authors found that firms with a strong vision and mature processes for digital transformation were more profitable on average, had higher revenues, and achieved a bigger market valuation than competitors without a strong vision. Yet more reason to emphasize that while technology is integral to digital transformation – it can’t just be about technology. If we go back to the early days of the research on digital transformation, it was proposed that true digital transformation at the enterprise level needs to embrace fundamental change in three areas: customer experience, operational processes, and business models.

Focusing on customer experience is central to success. According to the Altimeter Group in 2014, around 88% of companies reported undergoing digital transformation – yet only 25% of respondents indicated that they had mapped the customer journey. The 2016 update to this research, based on survey data from 528 leaders, found that the number of companies that had mapped the customer journey had risen to 54% – indicating a positive trend, but still with a way to go.

Improving the organization’s ability to manage end-to-end business processes is also needed for success with digital transformation. Where does your organization stand in terms of its process maturity? Are you just beginning the process improvement and management journey, or is the organization well on the way to modeling, improving, measuring and managing its key business processes to achieve business goals? If there is room to improve your people’s skills in areas such as BPM, customer experience and change management, then you may wish to explore the training programs offered on these topics at: http://www.bpminstitute.org/learning-paths.

Further, the answers to the following questions may provide you with additional insight on your organization’s situation on its enterprise digital transformation journey:

  1. To what extent is your company strategy driving the digital transformation program?
  2. To what extent are you actively challenging the elements of your business model (i.e. value proposition, delivery channels, etc.)?
  3. To what extent are you exploring new digital business and digitally modified businesses?
  4. To what extent do your leaders have a shared understanding of the entire customer journey?
  5. To what extent are you deploying digital to redesign end to end business processes?

Recall the power of the one-page principle. This involves having a high-level schematic – just one page for your customer journey map, one page for your business model, and one page for your process relationship map. That’s what drives discussion, collaboration and storytelling. Of course, some of these high-level schematics need to be developed at a more granular level of detail – but the one-page view is what captures attention and drives dialogue.

The vast majority of digital transformation efforts at the enterprise level are led from the top. Leading by example is part of the success formula, as is defining clear priorities and managing the cross-functional interdependencies that many digital solutions involve. Chances for success are amplified when employees believe that their leaders have the skills to lead the digital strategy and understand the major digital trends – and when that belief is reinforced with stories.

How can you get started on the journey? The following were some of the tips presented by Gartner at the Program & Portfolio Management Summit (PPM) in Orlando:

• Assess your organization’s appetite for risk taking
• Be introspective
• Introduce innovation into every project
• Find a project that can be monetized with digital
• Engage in experiments and communicate lessons learned

One of the keynotes at the 2017 Gartner PPM also emphasized that digital business is an entirely new game, the rules of which are not yet written. Whatever road you choose for your digital transformation journey, it will be important to take into account the central role of customer experience, the power of process management, and the importance of having clear priorities.

Source: BPM Institute


Apple’s AR platform: These demos show what ARKit can do in iOS 11

 


Apple sees a lot of potential in augmented reality.

Ever since Pokemon Go exploded in popularity last summer and subsequently revived interest in both Apple’s App Store and mobile gaming, Apple has said several times that it is embracing the technology, which is commonly called AR, especially now that it offers the ARKit platform. Here’s everything you need to know about ARKit, including what it can do and examples of its power in action.

What is AR?

Augmented reality isn’t a new technology. But Apple is now jumping into AR, so everyone’s been talking about it. While virtual reality immerses you in a space, essentially replacing everything you see in the physical world, AR takes the world around you and adds virtual objects to it. You can look through your phone, for instance, and see a Pokemon standing in your living room.

What is Apple ARKit?

With iOS 11, which debuted at WWDC 2017, Apple is officially acknowledging AR. It has introduced the ARKit development platform, allowing app developers to quickly and easily build AR experiences into their apps and games. It will launch alongside iOS 11 this autumn. When it’s finally live, it’ll use your iOS device’s camera, processors, and motion sensors to create some immersive interactions.

It also uses a technology called Visual Inertial Odometry in order to track the world around your iPad or iPhone. This functionality allows your iOS device to sense how it moves in a room. ARKit will use that data to not only analyse a room’s layout, but also detect horizontal planes like tables and floors and serve up virtual objects to be placed upon those surfaces in your physical room.

What’s the point of ARKit?

Developers are free to create all kinds of experiences using ARKit, some of which are already being shown off on Twitter. IKEA even announced it is developing a new AR app built on ARKit that will let customers preview IKEA products in their own homes before making a purchase. IKEA said that Apple’s new platform will allow AR to “play a key role” in new product lines.

That last bit is key. For Apple, ARKit opens up an entirely new category of apps that would run on every iPhone and iPad. It essentially wants to recreate and multiply the success of Pokemon Go. Plus, it opens up so many long-term possibilities. The company is rumoured to be working on an AR headset, for instance. Imagine wearing Apple AR glasses capable of augmenting your world every day.

Does ARKit face any competition?

Let’s also not forget that ARKit allows Apple to compete with Microsoft’s HoloLens and Google’s Tango AR platform. But while HoloLens and Tango are designed to be aware of multiple physical spaces and all of the shapes contained within, ARKit is more about detecting flat surfaces and drawing on those flat surfaces. In other words, it’s more limited, but we’re still in early-days territory right now.

We actually think ARKit’s capabilities, as of July 2017, remind us of the AR effects found inside Snapchat or even the Facebook Camera app. The potential of Apple’s AR platform will likely improve as we move closer to the launch of iOS 11, however.

Which iOS devices can handle ARKit apps?

Any iPhone or iPad capable of running iOS 11 will be able to install ARKit apps. However, we’re assuming newer devices will handle the apps better. For instance, the new 10.5-inch and 12.9-inch iPad Pro tablets that debuted during WWDC 2017 have bumped-up display refresh rates of 120Hz, which means what you see through the camera should seem much more impressive on those devices.

How do you get started with ARKit?

If you’re interested in building ARKit apps for iOS 11, go to the Apple Developer site, which has forums for building AR apps and beta downloads. If you’re a consumer who is just excited to play, you can go get the new iPad Pro and install the iOS 11 public beta to try out some of the early demos for AR. Otherwise, wait for iOS 11 to officially release alongside new AR apps in the App Store.

Source: pocket-lint.com

3 Technologies You Need To Start Paying Attention To Right Now


At any given time, a technology or two captures the zeitgeist. A few years ago it was social media and mobile that everybody was talking about. These days it’s machine learning and blockchain. Everywhere you look, consulting firms are issuing reports, conferences are being held and new “experts” are being anointed.

In a sense, there’s nothing wrong with that. Social media and mobile computing really did change the world and, clearly, the impact of artificial intelligence and distributed database architectures will be substantial. Every enterprise needs to understand these technologies and how they will impact its business.

Still, we need to remember that we always get disrupted by what we can’t see. The truth is that the next big thing always starts out looking like nothing at all. That’s why it’s so disruptive. If we saw it coming, it wouldn’t be. So here are three technologies you may not have heard of, but should start paying attention to. The fate of your business may depend on it.

1. New Computing Architectures

In the April 19, 1965, issue of Electronics, Intel co-founder Gordon Moore published an article observing that the number of transistors on a silicon chip was doubling roughly every two years. Over the past half century, that consistent doubling of computing power, now known as Moore’s Law, has driven the digital revolution.
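To get a feel for what that doubling implies, here is a quick back-of-the-envelope calculation. The 50-year span and two-year doubling period are simply the figures quoted above; nothing else is implied.

```python
# Rough illustration of how Moore's Law compounds: one doubling every two
# years over roughly half a century (both figures taken from the text above).
years = 50
doublings = years / 2
factor = 2 ** doublings
print(f"{doublings:.0f} doublings -> roughly {factor:,.0f}x more transistors per chip")
# 25 doublings -> roughly 33,554,432x more transistors per chip
```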

Today, however, that process has slowed, and it will soon come to a complete halt. There are only so many transistors you can cram onto a silicon wafer before subatomic effects come into play and make it impossible for the technology to function. Experts disagree on exactly when this will happen, but it’s pretty clear that it will be sometime within the next five years.

There are, of course, a number of ways to improve chip performance other than increasing the number of transistors, such as FPGAs, ASICs and 3D stacking. Yet those are merely stopgaps and are unlikely to take us more than a decade or so into the future. To continue to advance technology over the next 50 years, we need fundamentally new architectures like quantum computing and neuromorphic chips.

The good news is that these architectures are very advanced in their development and we should start seeing a commercial impact within 5-10 years. The bad news is that, being fundamentally new architectures, nobody really knows how to use them yet. We are, in a sense, back to the early days of computing, with tons of potential but little idea how to actualize it.

2. Genetic Engineering

While computer scientists have been developing software languages over the past 50 years, biologists have been trying to understand a far more pervasive kind of code, the genetic code. For the most part, things have gone slowly. Although there has been significant scientific progress, the impact of that advancement has been relatively paltry.

That began to change in 2003 with the completion of the Human Genome Project. For the first time, we began to truly understand how DNA interacts with our biology, which led to other efforts, such as the Cancer Genome Atlas, as well as tangible advancements in agriculture. Genomics became more than mere scientific inquiry; it became a source of new applications.

Now a new technology called CRISPR is allowing scientists to edit genes at will. In fact, because the technology is simple enough for even amateur biologists to use, we can expect genetic engineering to become much more widespread across industries. Early applications include liquid fuels from sunshine and genomic vaccines.

“CRISPR is accelerating everything we do with genomics,” Megan Hochstrasser of the Innovative Genomics Initiative at Cal Berkeley told me, “from cancer research to engineering disease resistant crops and many other applications that haven’t yet come to the fore. Probably the most exciting aspect is that CRISPR is so cheap and easy to use, it will have a democratizing effect, where more can be done with less. We’re really just getting started.”

3. Materials Science

Traditionally, improving a material to build a better product has been a process of trial and error. You changed the ingredients or the process by which you made it and saw what happened. For example, at some point a medieval blacksmith figured out that annealing iron would make better swords.

Today, coming up with better materials is a multi-billion-dollar business. Consider the challenges that Boeing faced when designing its new Dreamliner. How do you significantly increase the performance of an airplane, a decades-old technology? Yet by discovering new composite materials, the company was able to reduce weight by 40,000 pounds and fuel use by 20%.

With this in mind, the Materials Genome Initiative is building databases of material properties such as strength and density, along with computer models to predict which processes will achieve the qualities a manufacturer is looking for. As a government program, it is also able to make the data widely available to anyone who wants to use it, not just billion-dollar companies like Boeing.

“Our goal is to speed up the development of new materials by making clear the relationship between materials, how they are processed and what properties are likely to result,” Jim Warren, Director of the Materials Genome program told me. “My hope is that the Materials Genome will accelerate innovation in just about every industry America competes in.”

It’s Better To Prepare Than Adapt

For the past few decades, great emphasis has been put on agility and adaptation. When a new technology, like social media, mobile computing or artificial intelligence, begins to disrupt the marketplace, firms rush to figure out what it means and adapt their strategies accordingly. If they could do that a bit faster than the competition, they would win.

Today, however, we’re entering a new era of innovation that will look much more like the 50s and 60s than the 90s and aughts. The central challenge will no longer be to dream up new applications based on improved versions of old technologies, but to understand fundamentally new paradigms.

That’s why over the next few decades, it will be more important to prepare than adapt. How will you work with new computing architectures? How will fast, cheap genetic engineering affect your industry? What should you be doing to explore new materials that can significantly increase performance and lower costs? These are just some of the questions we will grapple with.

Not all who wander are lost. The challenge is to wander with purpose.

Source: Digital Tonto

AI Models For Investing: Lessons From Petroleum (And Edgar Allan Poe)


A decade ago, at a NY conference, an analyst put up slides showing his model of the short-term oil price (variables like inventories, production and demand trends, and so forth). I turned to the colleague next to me and said, “I just want to ask him, ‘How old are you?’” I worked on a computer model of the world oil market from 1977, when the model was run from a remote terminal and the output had to be picked up on the other side of campus. (Yes, by dinosaurs.) Although I haven’t done formal modeling in recent years, my experiences might provide some insight into the current fashion for using computer models in investing (among other things).

About two centuries ago, Baron von Maelzel toured the U.S. with an amazing clockwork automaton (invented by Baron Kempelen), a chess-playing “Turk” in the form of a mannequin at a desk with a chess board. The mannequin was dressed as a Turk, reflecting the period’s perception of Turks’ superior wisdom. The automaton could not only play chess very well but also solve problems presented to it that experts found difficult. Viewers were amazed, given the complexity of chess, and the level of play was not matched by modern computers for nearly two centuries. None of the Turk’s observers could initially explain the mechanism by which such feats were performed.

This is reminiscent of the 1970s, when Uri Geller claimed to have paranormal abilities that physicists from SRI found they could not explain. That was because he wasn’t performing acts of physics but sleight of hand, as demonstrated by the Amazing Randi, who was not a scientist but an expert in that craft. (Similarly, peak oil advocates are often amazed by techniques used by scientists that are actually statistical in nature—and done wrong.)

Edgar Allan Poe considered the case and proved to be the Amazing Randi of his day. The chess-playing Turk was the result of “the wonderful mechanical genius of Baron Kempelen [that] could invent the necessary means for shutting a door or slipping aside a panel with a human agent too at his service…” in Poe’s words. He noted that the Baron would open one panel on the desk, show no one behind it, close it and open the other, again revealing no human agent; but this is just a standard magician’s trick, where the subject simply moves from one side to the other. Indeed, others claimed to have seen a chess player exit the desk after the audience had left.

Computer models often fall into this category. No matter how scientific and objective they appear, there is always a human agent behind them. In oil market modeling in the 1970s, this took the form of the price mechanism. NYU Professor Dermot Gately had suggested that prices moved according to capacity utilization in OPEC, in a figure later used by the Energy Information Administration, among many others. If utilization was above 80%, prices would rise sharply; below 80%, they would taper off.

This made sense, given that many industries use a similar conceptual model to predict inflation: high utilization in the steel industry results in higher steel prices, etc. And the model certainly seemed to fit the existing data.

At least until 1986. After 1985, the data points no longer fit the curve; for the last two years the model was well off. The EIA stopped publishing the figure after 1987, although it continued to use the formula for some time.

What had become obscured by the supposed success of the formula was that it was intended to describe short-term price changes. High steel capacity utilization would mean higher steel prices, but it would also lead to investment and more capacity, so that prices would stabilize and even drop.

But oil models couldn’t capture this, because much of the capacity was in OPEC and it was assumed that OPEC would not necessarily invest in response to higher prices. Instead, the programmer had to choose numbers for OPEC’s future capacity and input them into the machine, meaning the programmer had control over the price forecast by simply modifying the capacity numbers. Despite the ‘scientific’ appearance of the computer model, there really was a man in the machine making the moves.
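To make the point concrete, here is a minimal sketch of that kind of model. The 80% threshold comes from the description above; the slopes, the demand figure and the capacity scenarios are invented purely for illustration, to show how the person typing in the capacity assumption effectively sets the forecast.

```python
# Toy version of a capacity-utilization price rule. The 80% threshold is from
# the text above; the slopes and the demand/capacity numbers are illustrative.

def price_change(utilization: float) -> float:
    """Approximate annual % change in oil price as a function of OPEC capacity utilization."""
    if utilization > 0.80:
        return 50.0 * (utilization - 0.80)   # prices rise sharply above 80%
    return -10.0 * (0.80 - utilization)      # and taper off below it

demand = 28.0  # hypothetical call on OPEC, million barrels per day

# The "man in the machine": the forecast follows from whatever future capacity
# the modeler chooses to type in.
for assumed_capacity in (30.0, 33.0, 36.0):
    u = demand / assumed_capacity
    print(f"assumed capacity {assumed_capacity:.0f} mb/d -> "
          f"utilization {u:.0%} -> price change {price_change(u):+.1f}% per year")
```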

People have long sought to reduce the influence of fallible humans, whether by replacing workers with machines or by putting control of our nuclear weapons in the hands of Colossus, a giant computer that would avoid an accidental nuclear war (in the 1970 movie Colossus: The Forbin Project, fourteen years before Terminator’s Skynet). This ignores the fact that there is always a human element, even if only in the design.

I have no expertise in the field of artificial intelligence, but it seems to me that while AI trading programs might learn, won’t they learn what they are taught to do? Will this not simply be an extension of the algorithms already used by others in the financial world, whose core is simply a comparison of current with historical data and trends?

And this, after all, is what led to the financial meltdown described so aptly in When Genius Failed, the story of Long Term Capital Management and the way it nearly crashed the world economy. Recognizing patterns of behavior preceding an OPEC meeting, such as the way prices move in response to comments by member country ministers, can be useful, but will novel cases such as the SARS epidemic or the 2008 financial crisis catch the programs flat-footed, possibly triggering massive losses?

The answer, as it often does, comes down to gearing. LTCM’s model failed, but the problem was the huge amount of money it had at risk, far exceeding its capital. For a few small traders to use AI programs, or for an investment bank to risk a fraction of its commodity funds, would not be a concern. But if such programs become widespread, and all of them are drawing the same conclusions from historical data, could there be a huge amount of money making the same bet?

For individuals, of course, the answer is to diversify, one of the first investing lessons. I wonder how many AI programs will do the same.

Source: Forbes

Want to understand your DNA? There’s an app for that

Helix will sequence your genes for $80 and lure app developers to sell you access to different parts of your genome.

A Silicon Valley startup called Helix is betting on the notion that not only do people want to learn more about their DNA, but they’ll also pay to keep interacting with it.

Today the company, which was founded in 2015 with $100 million from genomics giant Illumina, is launching its much-anticipated online hub where people can digitally explore their genetic code by downloading different applications on their computers or mobile devices. Think of it as an app store for your genome.

Personalized genetic information has become an affordable commodity. The early success of leaders like 23andMe and AncestryDNA, which sell DNA testing kits for $200 or less, has ushered in a wave of new companies offering direct-to-consumer genetic tests for everything from ancestry to the wine you should drink based on your DNA.

Most of these genetic testing kits are one-time deals. You spit in a tube, and your saliva is sent off to a lab to be analyzed. A few weeks later you get a long, detailed report of your genetic makeup. Helix CEO Robin Thurston says all that information can be daunting, and most people don’t come back to the data again and again.

With Helix, people will be able to choose the things about their genome they want to learn about. For an initial $80, Helix sequences the most important part of the genome—about 20,000 genes plus some other bits—called the exome. That information is digitized and stored by Helix, which doles out pieces of the information to companies selling other apps through Helix. “It’s our goal that someone will have a lifelong relationship with their DNA data,” Thurston says.

Other direct-to-consumer testing companies like 23andMe and AncestryDNA use a technology called genotyping to analyze a customer’s genes. Helix uses a more detailed method known as DNA sequencing, which yields about 100 times more information. So far, most people who have gotten exome sequencing, which can cost several hundred to more than a thousand dollars elsewhere, have been patients with rare or unknown medical conditions who hope their genes can provide more answers. Exome sequencing for healthy people is a new, untapped market.

From the consumer side, people will have to get their genes sequenced only once, then they can choose from different apps in categories like ancestry, fitness, health, and nutrition and pay as they go. About a dozen companies are debuting apps on Helix today, and each app is designed to tell you something different about your genome. Some are more medically relevant, like those that estimate risk for inherited cholesterol and heart problems, test for food sensitivity, or check to see if you could pass a serious genetic condition on to your child. Only the apps people buy will have access to their personal information.

One company, Exploragen, says it can tell you about your sleep patterns—like whether you’re a morning person or a night owl—just by looking at your DNA (in case you needed help knowing that one). Another company, Dot One, will examine the tiny portion of your genes that makes you different from everyone else and print that unique code onto a customized fabric scarf (because, why not?).

A third company, Insitome, has an app that will determine what percentage of your DNA you inherited from Neanderthals and what traits you inherited from them. Insitome CEO Spencer Wells says this initial app will cost $30.

Wells, who previously led the National Geographic Society’s Genographic Project, which mapped human migration throughout history by analyzing people’s DNA samples, says he likes the idea of Helix’s platform because it means that companies can develop additional apps as new scientific discoveries are made about the human genome.

Helix has also managed to attract major medical institutions like the Mayo Clinic and Mount Sinai Health System to develop apps for its store. Eventually, Thurston wants to offer hundreds of apps. He estimates the average customer will buy three to five apps.

But having access to all these DNA apps might not be a good thing for consumers. Daniel MacArthur, a scientist at Massachusetts General Hospital and Harvard Medical School who studies the human genome, says there’s a danger associated with mixing medically serious tests, such as disease carrier testing, with a range of lifestyle, nutrition, and wellness tests that have little scientific evidence to support them.

“Promoting tests with little or no scientific backing runs the risk of inflating customer expectations and ultimately undermining consumer confidence in genuinely clinically useful genetic tests,” he says.

Direct-to-consumer genetic tests, including ones that claim to predict disease risk, are loosely regulated in the U.S. That worries Stephen Montgomery, a geneticist at Stanford University, who says the Helix platform creates a bigger opportunity for companies to develop products that don’t provide much value to people.

“Helix will have to think very carefully about what apps to allow on the platform,” he says. The average customer probably can’t distinguish products that are based on sound science from those that aren’t, so he hopes Helix will have some way of evaluating the quality of information the apps provide.

Source: MIT Technology Review

Machine Learning and Prediction in Medicine — Beyond the Peak of Inflated Expectations

Big data, we have all heard, promise to transform health care with the widespread capture of electronic health records and high-volume data streams from sources ranging from insurance claims and registries to personal genomics and biosensors. Artificial-intelligence and machine-learning predictive algorithms, which can already automatically drive cars, recognize spoken language, and detect credit card fraud, are the keys to unlocking the data that can precisely inform real-time decisions. But in the “hype cycle” of emerging technologies, machine learning now rides atop the “peak of inflated expectations.”

Prediction is not new to medicine. From risk scores to guide anticoagulation (CHADS) and the use of cholesterol medications (ASCVD) to risk stratification of patients in the intensive care unit (APACHE), data-driven clinical predictions are routine in medical practice. In combination with modern machine learning, clinical data sources enable us to rapidly generate prediction models for thousands of similar clinical questions. From early-warning systems for sepsis to superhuman imaging diagnostics, the potential applicability of these approaches is substantial.

Yet there are problems with real-world data sources. Whereas conventional approaches are largely based on data from cohorts that are carefully constructed to mitigate bias, emerging data sources are typically less structured, since they were designed to serve a different purpose (e.g., clinical care and billing). Issues ranging from patient self-selection to confounding by indication to inconsistent availability of outcome data can result in inadvertent bias, and even racial profiling, in machine predictions. Awareness of such challenges may keep the hype from outpacing the hope for how data analytics can improve medical decision making.

Machine-learning methods are particularly suited to predictions based on existing data, but precise predictions about the distant future are often fundamentally impossible. Prognosis models for HER2-positive breast cancer had to be inverted in the face of targeted therapies, and the predicted efficacy of influenza vaccination varies with disease prevalence and community immunization rates. Given that the practice of medicine is constantly evolving in response to new technology, epidemiology, and social phenomena, we will always be chasing a moving target.

The rise and fall of Google Flu remind us that forecasting an annual event on the basis of 1 year of data is effectively using only a single data point and thus runs into fundamental time-series problems. And since the future will not necessarily resemble the past, simply accumulating massive amounts of data over time has diminishing returns. Research into decision-support algorithms that automatically learn inpatient medical practice patterns from electronic health records reveals that accumulating multiple years of historical data is worse than simply using the most recent year of data. When our goal is learning how medicine should be practiced in the future, the relevance of clinical data decays with an effective “half-life” of about 4 months. To assess the usefulness of prediction models, we must evaluate them not on their ability to recapitulate historical trends, but instead on their accuracy in predicting future events.
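One simple way to act on that observation is to down-weight older records rather than pool them all equally. The sketch below uses the 4-month half-life quoted above; the exponential weighting scheme, and the idea of splitting training and test data by time, are just one plausible way to apply it, not something prescribed by the research.

```python
import numpy as np

# Down-weight training examples by age, using the 4-month half-life quoted
# above. The exponential form is an assumption about how to apply that figure.
HALF_LIFE_MONTHS = 4.0

def recency_weight(age_months: np.ndarray) -> np.ndarray:
    """A record 4 months old counts half as much as one written today."""
    return 0.5 ** (age_months / HALF_LIFE_MONTHS)

ages = np.array([0, 4, 12, 24, 36])  # months since each record was written
for age, w in zip(ages, recency_weight(ages)):
    print(f"{age:2d} months old -> weight {w:.3f}")

# The evaluation point made above: judge a model on data that post-dates
# everything it was trained on (a time-based split), not on a random split
# that lets it "predict" its own past.
```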

Although machine-learning algorithms can improve the accuracy of prediction over the use of conventional regression models by capturing complex, nonlinear relationships in the data, no amount of algorithmic finesse or computing power can squeeze out information that is not present. That’s why clinical data alone have relatively limited predictive power for hospital readmissions that may have more to do with social determinants of health.
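A toy experiment makes both halves of that point visible. Everything below is illustrative rather than drawn from the article: a deliberately nonlinear synthetic dataset, a plain logistic regression, and a gradient-boosted model. The nonlinear learner wins, but both are capped by however much signal the features actually carry.

```python
# Illustrative comparison: linear vs. nonlinear model on a nonlinear problem.
from sklearn.datasets import make_moons
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_moons(n_samples=2000, noise=0.3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for model in (LogisticRegression(), GradientBoostingClassifier(random_state=0)):
    model.fit(X_train, y_train)
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"{type(model).__name__:>26}: test AUC = {auc:.3f}")

# No amount of model sophistication pushes the AUC past the ceiling set by the
# information in the features; add noise or remove features and both drop.
```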

The apparent solution is to pile on greater varieties of data, including anything from sociodemographics to personal genomics to mobile-sensor readouts to a patient’s credit history and Web-browsing logs. Incorporating the correct data stream can substantially improve predictions, but even with a deterministic (nonrandom) process, chaos theory explains why even simple nonlinear systems cannot be precisely predicted into the distant future. The so-called butterfly effect refers to the future’s extreme sensitivity to initial conditions. Tiny variations, which seem dismissible as trivial rounding errors in measurements, can accumulate into massively different future events. Identical twins with the same observable demographic characteristics, lifestyle, medical care, and genetics necessarily generate the same predictions — but can still end up with completely different real outcomes.
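That sensitivity is easy to demonstrate with even a one-line deterministic system. The logistic map below is a textbook chaos example (our illustration, not the article's): two starting values differing by roughly one part in a billion diverge completely within a few dozen steps.

```python
# Classic chaos demo: the logistic map x_{t+1} = r * x_t * (1 - x_t) with r = 4.
def trajectory(x0: float, steps: int, r: float = 4.0) -> list:
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = trajectory(0.200000000, 40)
b = trajectory(0.200000001, 40)   # initial difference of about 1e-9

for t in (0, 10, 20, 30, 40):
    print(f"t = {t:2d}   |difference| = {abs(a[t] - b[t]):.9f}")

# The deterministic rule never changes, yet the tiny initial gap grows until
# the two futures have nothing to do with each other.
```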

Though no method can precisely predict the date you will die, for example, that level of precision is generally not necessary for predictions to be useful. By reframing complex phenomena in terms of limited multiple-choice questions (e.g., Will you have a heart attack within 10 years? Are you more or less likely than average to end up back in the hospital within 30 days?), predictive algorithms can operate as diagnostic screening tests to stratify patient populations by risk and inform discrete decision making.
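In practice, that reframing is often just a matter of putting thresholds on a predicted probability. A minimal sketch, with cutoffs and suggested actions invented purely for illustration:

```python
# Turning a predicted probability into a discrete, decision-ready answer.
# The thresholds and suggested actions below are invented for illustration.
def readmission_tier(p_30_day: float) -> str:
    if p_30_day >= 0.30:
        return "high risk: consider transitional-care program"
    if p_30_day >= 0.10:
        return "moderate risk: schedule early follow-up"
    return "lower risk: usual care"

for p in (0.04, 0.18, 0.42):
    print(f"predicted 30-day readmission risk {p:.0%} -> {readmission_tier(p)}")
```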

Research continues to improve the accuracy of clinical predictions, but even a perfectly calibrated prediction model may not translate into better clinical care. An accurate prediction of a patient outcome does not tell us what to do if we want to change that outcome — in fact, we cannot even assume that it’s possible to change the predicted outcomes.

Machine-learning approaches are powered by identification of strong, but theory-free, associations in the data. Confounding makes it a substantial leap in causal inference to identify modifiable factors that will actually alter outcomes. It is true, for instance, that palliative care consults and norepinephrine infusions are highly predictive of patient death, but it would be irrational to conclude that stopping either will reduce mortality. Models accurately predict that a patient with heart failure, coronary artery disease, and renal failure is at high risk for postsurgical complications, but they offer no opportunity for reducing that risk (other than forgoing the surgery). Moreover, many such predictions are “highly accurate” mainly for cases whose likely outcome is already obvious to practicing clinicians. The last mile of clinical implementation thus ends up being the far more critical task of predicting events early enough for a relevant intervention to influence care decisions and outcomes.

With machine learning situated at the peak of inflated expectations, we can soften a subsequent crash into a “trough of disillusionment” by fostering a stronger appreciation of the technology’s capabilities and limitations. Before we hold computerized systems (or humans) up against an idealized and unrealizable standard of perfection, let our benchmark be the real-world standards of care whereby doctors grossly misestimate the positive predictive value of screening tests for rare diagnoses, routinely overestimate patient life expectancy by a factor of 3, and deliver care of widely varied intensity in the last 6 months of life.
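The point about positive predictive value is easy to make concrete with Bayes’ rule. The numbers below are illustrative, not from the article: even a test with 99% sensitivity and 95% specificity, applied to a condition affecting 1 in 1,000 people, yields a positive predictive value of only about 2%.

```python
# Worked example of why PPV is so easy to overestimate for rare conditions.
# Prevalence, sensitivity and specificity are illustrative assumptions.
prevalence  = 0.001   # 1 in 1,000 people have the condition
sensitivity = 0.99    # P(positive test | disease)
specificity = 0.95    # P(negative test | no disease)

true_positives  = prevalence * sensitivity
false_positives = (1 - prevalence) * (1 - specificity)
ppv = true_positives / (true_positives + false_positives)

print(f"P(disease | positive test) = {ppv:.1%}")   # about 1.9%
```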

Although predictive algorithms cannot eliminate medical uncertainty, they already improve allocation of scarce health care resources, helping to avert hospitalization for patients with low-risk pulmonary embolisms (PESI) and fairly prioritizing patients for liver transplantation by means of MELD scores. Early-warning systems that once would have taken years to create can now be rapidly developed and optimized from real-world data, just as deep-learning neural networks routinely yield state-of-the-art image-recognition capabilities previously thought to be impossible.

Whether such artificial-intelligence systems are “smarter” than human practitioners makes for a stimulating debate — but is largely irrelevant. Combining machine-learning software with the best human clinician “hardware” will permit delivery of care that outperforms what either can do alone. Let’s move past the hype cycle and on to the “slope of enlightenment,” where we use every information and data resource to consistently improve our collective health.

Source: The New England Journal of Medicine

As Robots Take Over We Will Need More Innovators


The Hadrian X robot is made by Fastbrick Robotics from Australia. It can lay 1000 house bricks in an hour. The average bricklayer lays around 500 bricks a day. We will soon see robots doing much of the standard work in building assembly, with a small number of skilled craftsmen supervising them, applying finishing touches or completing tricky tasks. McDonald’s is trialing a “Create Your Taste” kiosk – an automatic system that lets customers order and collect their own configuration of burger meal with no assistant needed.
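On rough assumptions, the arithmetic behind that comparison looks like this (the eight-hour shift is our assumption, and in practice a robot can run far longer than a human shift):

```python
# Back-of-the-envelope productivity comparison; the shift length is an assumption.
robot_bricks_per_hour = 1000
human_bricks_per_day = 500
shift_hours = 8

robot_bricks_per_day = robot_bricks_per_hour * shift_hours
ratio = robot_bricks_per_day / human_bricks_per_day
print(f"robot: {robot_bricks_per_day} bricks per shift, about {ratio:.0f}x a bricklayer's day")
# robot: 8000 bricks per shift, about 16x a bricklayer's day
```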

But it is not just manual labour that will be affected by the inexorable rollout of robots, automation and artificial intelligence. The impact will be felt widely across skilled middle-class jobs, including lawyers, accountants, analysts and technicians. In many financial trading centres, traders have already been replaced by algorithms. The world’s first ‘robot lawyer’ is now available in 50 states.

The World Economic Forum predicts that robotic automation will result in the net loss of more than 5m jobs across 15 developed nations by 2020. Many think the numbers will be much higher. A report by the consultancy firm PwC found that 30% of jobs were potentially under threat from breakthroughs in artificial intelligence. In some sectors half the jobs could go.

The rise of the robots will lead to an increase in demand for those with the skills to program, maintain and supervise the machines. Most companies will have a Chief Robotics Officer and a department dedicated to automation. However, the human jobs created will be a small fraction of the jobs that the robots replace.

Any job that involves the use of knowledge, analysis and systematic decision making is at risk. Robots can not only absorb a large body of knowledge and rules; they can also adapt and learn on the job.

Where does that leave the displaced humans? The standard answer is education. Policy makers advise that people should retrain into higher-skilled professions. The problem is that most training simply provides more knowledge and skills, which can also be replaced by automation.

“So what jobs can robots not do? Einstein said, ‘Imagination is more important than knowledge.’ It is in the application of imagination that humans have the clear advantage.”

Here are some things which robots do not do well:
1. Ask searching questions.
2. Challenge assumptions about how things are done.
3. Conceive new business models and approaches.
4. Understand and appeal to people’s feelings and emotions.
5. Design humorous, provocative or eye-catching marketing campaigns.
6. Deliberately break the rules.
7. Inspire and motivate people.
8. Set a novel strategy or direction.
9. Do anything spontaneous, entertaining or unexpected.
10. Anticipate future trends and needs.
11. Approach problems from entirely new directions.
12. Imagine a better future.

Let’s leave the routine knowledge jobs to the robots and focus on developing our creative skills. The most successful organisations will be those that combine automation efficiency with ingenious and appealing new initiatives. We will need more imaginative theorists, more lateral thinkers, more people who can question and challenge. We will need more innovators.

Source: innovationexcellence.com