AI-powered diagnostic device takes out Tricorder XPrize

OnTheGo

Launched in 2012, the Qualcomm Tricorder XPrize tasked competing teams with developing a portable and versatile medical diagnostics machine that would give people “unprecedented access” to information about their health. The contest has now been run and won, with an AI-powered device awarded top honors and US$2.5 million for its trouble.

This particular XPrize – a series of competitions aimed at solving global issues – was created to encourage the development of a device that mimicked the iconic Tricorder from Star Trek. More specifically, this meant the ability to diagnose 13 conditions including anemia, diabetes, sleep apnea and urinary tract infections, along with the ability to detect three of five additional diseases: HIV, hypertension, melanoma, shingles and strep throat.

The competition was whittled down to ten finalists in 2014, and then again to two in December last year. The Taiwan-based Dynamical Biomarkers Group took second place with its prototype for a smartphone-based diagnostics device, but was beaten out by Final Frontier Medical Devices from Pennsylvania.

The winning machine is called DxtER and uses artificial intelligence to teach itself to diagnose medical conditions. It gathers data through a set of non-invasive sensors that check vital signs, body chemistry and biological functions, and draws on data from clinical emergency medicine and actual patients. All of this is then synthesized by the AI engine, and the device spits out a “quick and accurate assessment.”
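
The article doesn’t disclose DxtER’s internals, but the flow it describes (sensor readings feed an AI engine, which returns an assessment) can be sketched in the abstract. Below is a toy Python illustration of that pipeline only; the sensor fields, thresholds, and condition list are invented for this sketch and are not Final Frontier’s actual design.

```python
# Hypothetical sketch of a DxtER-style pipeline: sensor readings in,
# ranked condition assessments out. Names and thresholds are invented.
from dataclasses import dataclass

@dataclass
class Vitals:
    heart_rate: float      # beats per minute
    spo2: float            # blood oxygen saturation, percent
    glucose: float         # mg/dL
    temperature: float     # degrees Celsius

def assess(vitals: Vitals) -> list[tuple[str, float]]:
    """Return (condition, score) pairs sorted by likelihood.

    A real device would feed sensor streams into a model trained on
    clinical data; toy rules here show only the data flow.
    """
    scores = {
        "diabetes": max(0.0, (vitals.glucose - 126) / 100),
        "sleep apnea": max(0.0, (94 - vitals.spo2) / 10),
        "fever/infection": max(0.0, (vitals.temperature - 37.5) / 2),
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(assess(Vitals(heart_rate=88, spo2=92, glucose=160, temperature=38.2)))
```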

In addition to the $2.5 million, the Final Frontier and Dynamical Biomarkers Group teams (which received a not-too-shabby $1 million for second place) will benefit from ongoing support and funding from XPrize and its partners. This includes R&D partnerships with the US Food and Drug Administration and the University of California San Diego. Meanwhile, Lowe’s Home Improvement has committed to distributing a consumer-ready version of the device, while the General Hospital of Maputo in Mozambique will provide it to its doctors, nurses and patients.

“We could not be more pleased with the quality of innovation and performance of the teams who competed, particularly with teams Final Frontier and Dynamical Biomarkers Group,” said Marcus Shingles, CEO of the XPrize Foundation. “Although this XPrize competition phase has ended, XPrize, Qualcomm Foundation, and a network of strategic partners are committed and excited to now be entering a new phase which will support these teams in their attempt to scale impact and the continued evolution of the Tricorder device through a series of new post-competition initiatives.”

Source: Newatlas.com

Our Fear of Artificial Intelligence

A true AI might ruin the world—but that assumes it’s possible at all.

OnTheGo

Years ago I had coffee with a friend who ran a startup. He had just turned 40. His father was ill, his back was sore, and he found himself overwhelmed by life. “Don’t laugh at me,” he said, “but I was counting on the singularity.”

My friend worked in technology; he’d seen the changes that faster microprocessors and networks had wrought. It wasn’t that much of a step for him to believe that before he was beset by middle age, the intelligence of machines would exceed that of humans—a moment that futurists call the singularity. A benevolent superintelligence might analyze the human genetic code at great speed and unlock the secret to eternal youth. At the very least, it might know how to fix your back.

But what if it wasn’t so benevolent? Nick Bostrom, a philosopher who directs the Future of Humanity Institute at the University of Oxford, describes the following scenario in his book Superintelligence, which has prompted a great deal of debate about the future of artificial intelligence. Imagine a machine that we might call a “paper-clip maximizer”—that is, a machine programmed to make as many paper clips as possible. Now imagine that this machine somehow became incredibly intelligent. Given its goals, it might then decide to create new, more efficient paper-clip-manufacturing machines—until, King Midas style, it had converted essentially everything to paper clips.

No worries, you might say: you could just program it to make exactly a million paper clips and halt. But what if it makes the paper clips and then decides to check its work? Has it counted correctly? It needs to become smarter to be sure. The superintelligent machine manufactures some as-yet-uninvented raw-computing material (call it “computronium”) and uses that to check each doubt. But each new doubt yields further digital doubts, and so on, until the entire earth is converted to computronium. Except for the million paper clips.

Bostrom does not believe that the paper-clip maximizer will come to be, exactly; it’s a thought experiment, one designed to show how even careful system design can fail to restrain extreme machine intelligence. But he does believe that superintelligence could emerge, and while it could be great, he thinks it could also decide it doesn’t need humans around. Or do any number of other things that destroy the world. The title of chapter 8 is: “Is the default outcome doom?”

If this sounds absurd to you, you’re not alone. Critics such as the robotics pioneer Rodney Brooks say that people who fear a runaway AI misunderstand what computers are doing when we say they’re thinking or getting smart. From this perspective, the putative superintelligence Bostrom describes is far in the future and perhaps impossible.

Yet a lot of smart, thoughtful people agree with Bostrom and are worried now. Why?

Volition
The question “Can a machine think?” has shadowed computer science from its beginnings. Alan Turing proposed in 1950 that a machine could be taught like a child; John McCarthy, inventor of the programming language LISP, coined the term “artificial intelligence” in 1955. As AI researchers in the 1960s and 1970s began to use computers to recognize images, translate between languages, and understand instructions in normal language and not just code, the idea that computers would eventually develop the ability to speak and think—and thus to do evil—bubbled into mainstream culture. Even beyond the oft-referenced HAL from 2001: A Space Odyssey, the 1970 movie Colossus: The Forbin Project featured a large blinking mainframe computer that brings the world to the brink of nuclear destruction; a similar theme was explored 13 years later in WarGames. The androids of 1973’s Westworld went crazy and started killing.

When AI research fell far short of its lofty goals, funding slowed to a trickle, beginning long “AI winters.” Even so, the torch of the intelligent machine was carried forth in the 1980s and ’90s by sci-fi authors like Vernor Vinge, who popularized the concept of the singularity; researchers like the roboticist Hans Moravec, an expert in computer vision; and the engineer/entrepreneur Ray Kurzweil, author of the 1999 book The Age of Spiritual Machines. Whereas Turing had posited a humanlike intelligence, Vinge, Moravec, and Kurzweil were thinking bigger: when a computer became capable of independently devising ways to achieve goals, it would very likely be capable of introspection—and thus able to modify its software and make itself more intelligent. In short order, such a computer would be able to design its own hardware.

As Kurzweil described it, this would begin a beautiful new era. Such machines would have the insight and patience (measured in picoseconds) to solve the outstanding problems of nanotechnology and spaceflight; they would improve the human condition and let us upload our consciousness into an immortal digital form. Intelligence would spread throughout the cosmos.

You can also find the exact opposite of such sunny optimism. Stephen Hawking has warned that because people would be unable to compete with an advanced AI, it “could spell the end of the human race.” Upon reading Superintelligence, the entrepreneur Elon Musk tweeted: “Hope we’re not just the biological boot loader for digital superintelligence. Unfortunately, that is increasingly probable.” Musk then followed with a $10 million grant to the Future of Life Institute. Not to be confused with Bostrom’s center, this is an organization that says it is “working to mitigate existential risks facing humanity,” the ones that could arise “from the development of human-level artificial intelligence.”

No one is suggesting that anything like superintelligence exists now. In fact, we still have nothing approaching a general-purpose artificial intelligence or even a clear path to how it could be achieved. Recent advances in AI, from automated assistants such as Apple’s Siri to Google’s driverless cars, also reveal the technology’s severe limitations; both can be thrown off by situations that they haven’t encountered before. Artificial neural networks can learn for themselves to recognize cats in photos. But they must be shown hundreds of thousands of examples and still end up much less accurate at spotting cats than a child.
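To make concrete what “shown hundreds of thousands of examples” means, here is a minimal supervised-learning loop in PyTorch. Random tensors stand in for the labeled cat photos, so this is a sketch of the mechanism, not a working cat detector; the tiny network and all hyperparameters are arbitrary choices for illustration.

```python
# Minimal sketch of the supervised-learning loop described above,
# with random tensors standing in for labeled photos.
import torch
import torch.nn as nn

model = nn.Sequential(              # tiny stand-in for a real CNN
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 2),                # two classes: cat / not-cat
)
opt = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# The network improves only by grinding through labeled examples;
# real systems need hundreds of thousands of them.
for step in range(100):
    images = torch.randn(32, 3, 64, 64)     # fake "photos"
    labels = torch.randint(0, 2, (32,))     # fake "cat?" labels
    loss = loss_fn(model(images), labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
```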

This is where skeptics such as Brooks, a founder of iRobot and Rethink Robotics, come in. Even if it’s impressive—relative to what earlier computers could manage—for a computer to recognize a picture of a cat, the machine has no volition, no sense of what cat-ness is or what else is happening in the picture, and none of the countless other insights that humans have. In this view, AI could possibly lead to intelligent machines, but it would take much more work than people like Bostrom imagine. And even if it could happen, intelligence will not necessarily lead to sentience. Extrapolating from the state of AI today to suggest that superintelligence is looming is “comparable to seeing more efficient internal combustion engines appearing and jumping to the conclusion that warp drives are just around the corner,” Brooks wrote recently on Edge.org. “Malevolent AI” is nothing to worry about, he says, for a few hundred years at least.

Insurance policy
Even if the odds of a superintelligence arising are very long, perhaps it’s irresponsible to take the chance. One person who shares Bostrom’s concerns is Stuart J. Russell, a professor of computer science at the University of California, Berkeley. Russell is the author, with Peter Norvig (a peer of Kurzweil’s at Google), of Artificial Intelligence: A Modern Approach, which has been the standard AI textbook for two decades.

“There are a lot of supposedly smart public intellectuals who just haven’t a clue,” Russell told me. He pointed out that AI has advanced tremendously in the last decade, and that while the public might understand progress in terms of Moore’s Law (faster computers are doing more), in fact recent AI work has been fundamental, with techniques like deep learning laying the groundwork for computers that can automatically increase their understanding of the world around them.

Because Google, Facebook, and other companies are actively looking to create an intelligent, “learning” machine, he reasons, “I would say that one of the things we ought not to do is to press full steam ahead on building superintelligence without giving thought to the potential risks. It just seems a bit daft.” Russell made an analogy: “It’s like fusion research. If you ask a fusion researcher what they do, they say they work on containment. If you want unlimited energy you’d better contain the fusion reaction.” Similarly, he says, if you want unlimited intelligence, you’d better figure out how to align computers with human needs.

Bostrom’s book is a research proposal for doing so. A superintelligence would be godlike, but would it be animated by wrath or by love? It’s up to us (that is, the engineers). Like any parent, we must give our child a set of values. And not just any values, but those that are in the best interest of humanity. We’re basically telling a god how we’d like to be treated. How to proceed?

Bostrom draws heavily on an idea from a thinker named Eliezer Yudkowsky, who talks about “coherent extrapolated volition”—the consensus-derived “best self” of all people. AI would, we hope, wish to give us rich, happy, fulfilling lives: fix our sore backs and show us how to get to Mars. And since humans will never fully agree on anything, we’ll sometimes need it to decide for us—to make the best decisions for humanity as a whole. How, then, do we program those values into our (potential) superintelligences? What sort of mathematics can define them? These are the problems, Bostrom believes, that researchers should be solving now. Bostrom says it is “the essential task of our age.”
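
Bostrom leaves the mathematics open, but one family of proposals frames value alignment as decision-making under uncertainty about the true human utility function: instead of hard-coding a single objective, the machine keeps a weighted set of candidate utility functions and picks the action that does best in expectation. A toy sketch, with utilities and weights invented purely for illustration:

```python
# Toy illustration of choosing actions under uncertainty about human
# values. Candidate utility functions and their weights are invented.
candidate_utilities = {
    # each maps an action to how much a hypothesized "human value
    # system" would approve of it
    "values_leisure":  {"cure_backs": 0.9, "mars_mission": 0.4, "paperclips": 0.0},
    "values_frontier": {"cure_backs": 0.3, "mars_mission": 0.9, "paperclips": 0.0},
}
belief = {"values_leisure": 0.6, "values_frontier": 0.4}  # P(this is right)

def expected_utility(action: str) -> float:
    return sum(belief[u] * candidate_utilities[u][action] for u in belief)

actions = ["cure_backs", "mars_mission", "paperclips"]
print(max(actions, key=expected_utility))  # -> "cure_backs"
```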

For the civilian, there’s no reason to lose sleep over scary robots. We have no technology that is remotely close to superintelligence. Then again, many of the largest corporations in the world are deeply invested in making their computers more intelligent; a true AI would give any one of these companies an unbelievable advantage. They should also be attuned to its potential downsides and figure out how to avoid them.

This somewhat more nuanced suggestion—without any claims of a looming AI-mageddon—is the basis of an open letter on the website of the Future of Life Institute, the group that got Musk’s donation. Rather than warning of existential disaster, the letter calls for more research into reaping the benefits of AI “while avoiding potential pitfalls.” This letter is signed not just by AI outsiders such as Hawking, Musk, and Bostrom but also by prominent computer scientists (including Demis Hassabis, a top AI researcher). You can see where they’re coming from. After all, if they develop an artificial intelligence that doesn’t share the best human values, it will mean they weren’t smart enough to control their own creations.

Source: MIT Technology Review

A Robot Took My Job – Was It a Robot or AI?

The argument in the popular press about robots taking our jobs fails in the most fundamental way to differentiate between robots and AI. Here we try to identify how each contributes to job loss and what the future of AI-enhanced robots means for employment.

OnTheGo

There’s been a lot of contradictory opinion in the press recently about future job loss from robotics and AI. Opinions range from Bill Gates’ hand-wringing assertion that we should slow things down by taxing robots to Treasury Secretary Steve Mnuchin’s seemingly Luddite observation: “In terms of artificial intelligence taking over the jobs, I think we’re so far away from that that it’s not even on my radar screen. I think it’s 50 or 100 more years.”

But these extreme end points of the conversation aren’t the worst of it.  There is essentially no effort in the popular press to differentiate ‘robot’ from ‘AI’.

AI-enhanced robots are much in the press and much on the mind of data scientists. So is it principally advances in deep learning (AI) that are the main cause of future job loss, or robots as they currently exist?

Automation Always Meant Shifting Job Needs

Job loss, or more correctly the redistribution of tasks and skills due to automation, has been going on at least since water wheels eliminated human labor in ancient civilizations.

Ned Ludd was the English apprentice who smashed two automated stocking frames in 1779 to protest the loss of manual work, giving rise to the political movement and the term “Luddite.” (Note that the automated stocking frame was invented in 1589, so almost 200 years elapsed before the social protest.)

Then there is John Maynard Keynes’s frequently cited prediction of widespread technological unemployment “due to our discovery of means of economising the use of labour outrunning the pace at which we can find new uses for labour” (1933).

What has not changed is that the skills we learned in our youth may not be valued labor throughout our working lives. As this cycle speeds up, it creates risk for the employed. In the last 20 or 30 years it has caused a redistribution of employment into higher-pay, high-skill jobs and an emptying out of repetitive manufacturing tasks into largely lower-paying service jobs.

Several recent academic studies have put the percentage of US jobs at risk from automation in the range of 35% to 45%. Keep in mind that these projections reflect the jobs available in the economy at roughly a 2010 baseline and don’t take into account how workers or industries will adjust their employment needs in a fast-changing environment. Jobs available in 2010 may simply not be needed (automation or not) in 2020, so the percentages are necessarily somewhat overstated.

There are a number of ways to spot the vulnerable jobs and we discussed several techniques used by VCs in How to Put AI to Work.

Is the Cause Robotics, AI, or Both?

The popular perception of robots is that they are getting smarter and much more capable, fast. Within data science we know that the hockey-stick curve in capabilities we are currently experiencing is largely a function of deep learning techniques, which are popularly grouped together as AI.

We also know that commercial applications of deep learning have been successfully adopted only since about 2015. We pick that threshold date as the year when image classification by deep learning finally passed the human accuracy threshold (95% in the annual ImageNet competition), and pretty much the same time that speech and text processing hit about 99% accuracy, enabling the huge proliferation of chatbots.

Another candidate year might be 2011, marking IBM Watson’s victory on Jeopardy! However, important as Watson is to AI, it is only beginning to gain commercial traction today. Other deep learning technologies, like reinforcement learning and generative adversarial networks, are even further from general commercial adoption.

The studies published about job loss speak in generalities about AI and ML but don’t attempt to show that adoption of AI-enabled robots will accelerate job replacement.

It’s About the Robots – and They’re Not That Smart

Robots have been taking human jobs for decades. The earliest industrial robot was installed by General Motors (GM) in 1961. By 1968 robots were available with 12 joints and could lift a person. By 1974 robots were sufficiently sophisticated that they could assemble an automotive water pump. Although these operated without human intervention and could be reprogrammed for other tasks, their capabilities did not advance significantly for another 20 years.

One of the most instructive stories about robotics concerns GM’s all-in adoption of industrial robots in the ’80s. GM fully committed to creating a “lights-out” factory, meaning one that would operate completely without humans. The experiment was a failure, and the economic loss was considerable. The reason: GM got ahead of what robots could actually deliver. Although robots could be programmed to place, weld, paint, and screw, they could not yet adapt to small variations in the positioning of materials relative to the car on the assembly line. Fit and finish, and the resultant rework, were not up to the standards of human workers.

What changed in the ’90s was the arrival of machine vision. Not the sophisticated image processing we have today, but the ability of a robot to take in camera signals and adjust the physical positioning of an item ever so slightly to achieve repeatably precise alignment of parts.
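
The correction loop described here is conceptually simple: measure the part’s offset in the camera frame, move the tool a fraction of that error, and repeat until the error is within tolerance. Below is a minimal sketch of such a loop; the gain, tolerance, and positions are invented, and the camera function is a stand-in rather than any real vision API.

```python
# Minimal sketch of a vision-based correction loop: measure the part's
# offset in the camera image, nudge the tool a fraction of the error,
# and repeat until aligned. All values are invented for illustration.

def camera_offset(tool_xy, part_xy):
    """Stand-in for machine vision: offset of part from tool, in mm."""
    return (part_xy[0] - tool_xy[0], part_xy[1] - tool_xy[1])

tool = [0.0, 0.0]          # tool position, mm
part = (3.2, -1.7)         # where the part actually sits this cycle
GAIN = 0.5                 # proportional gain: move half the error

for step in range(10):
    dx, dy = camera_offset(tool, part)
    if abs(dx) < 0.05 and abs(dy) < 0.05:   # within 50 microns: aligned
        break
    tool[0] += GAIN * dx
    tool[1] += GAIN * dy
print(f"aligned after {step} steps at {tool}")
```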

In the late ’90s robots also gained sophistication in grasping, with force-feedback sensors allowing more delicate operations.

The significance is that none of this rises to the level of data science. Controls based on predictive analytics, which allow industrial robots to make more sophisticated decisions using multiple inputs to decide on multiple outputs, did not become common until the 2000s with early IoT. Even so, the economic demand for this level of capability covers only a small fraction of total industrial robots.

What has happened is that the cost of industrial robots has been decreasing about 10% a year, so that between 1990 and 2004 the price of robots fell 75%, and it continues to fall today. If you are repetitively placing 10 or 20 fasteners in a manufactured device, it no longer makes economic sense to pay an employee $17 to $25 per hour to do that. Either use a low-cost robot or outsource the labor to a country where the average manufacturing wage is $3 per hour.
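
The arithmetic here is easy to check. A 10% annual price decline compounded over 1990–2004 matches the quoted 75% drop, and a rough payback calculation shows why a repetitive fastening task tips toward the robot. The robot price and annual hours below are illustrative assumptions, not figures from the article.

```python
# Back-of-the-envelope check of the robot economics described above.
# A ~10% annual price decline compounds to roughly the quoted 75% drop.
decline = 1 - 0.90 ** (2004 - 1990)
print(f"1990-2004 price drop: {decline:.0%}")    # prints ~77%

# Break-even vs. a $17-25/hour worker. Robot price and hours are
# illustrative assumptions, not from the article.
robot_price = 50_000            # assumed installed cost, dollars
wage = 20                       # dollars/hour, midpoint of $17-25
hours_per_year = 2_000 * 2      # assumed two shifts
print(f"payback: {robot_price / (wage * hours_per_year):.1f} years")
```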

Job disruption is a function of the continued adoption of mostly dumb industrial robots and not AI or even predictive analytics.

Will AI-Enhanced Robots Accelerate Job Loss?

Won’t AI-enhanced robots make a difference in the future? Yes, they will. However, thinking about this requires splitting the question in several new ways. The first split is over what we consider to be a robot.

Physical versus Virtual Robots

Robots by definition must operate without human intervention and be reprogrammable for different tasks. The most important requirement, however, is that they be able to perform human-like tasks, replacing the need for the human.

“Robot” has always meant a physical entity, like a giant mechanical arm or Rosie the Robot from The Jetsons. But increasingly, virtual robots are also able to act in the world and replace humans. In particular, these virtual robots rely on deep learning for their capabilities.

  • Alexa, Siri, Google Assistant and other chatbots can act on the physical world to operate lights, thermostats, locks, or even order goods over the internet.
  • Watson, a question-answering machine, inputs images, text, or speech and outputs images, text, or speech. Its current high-value targets are medical diagnoses like interpreting radiology images.
  • Programmatic advertising recognizes and profiles millions of page views in real time and automatically places advertising on that page in a matter of milliseconds. It fulfills the economic contract between advertiser and media without human intervention.
  • Image-processing-enabled automatic error or damage detection.
  • Fully automated Customer Service Representatives reduce costs 60% to 80% compared to outsourced call centers.
  • Systems that automatically take action for fraud detection, condition monitoring, or anomaly detection are increasingly common (a minimal detect-and-act sketch follows this list).
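
As a concrete illustration of that last item, a “virtual robot” can be as small as a loop that scores incoming events against a baseline and acts with no human in the middle. The following is a minimal sketch; the window size, threshold, and the “hold transaction” action are invented for illustration, not drawn from any particular product.

```python
# Toy detect-and-act loop: score each reading against a rolling
# baseline and act automatically on outliers. All numbers invented.
from collections import deque
from statistics import mean, stdev

def monitor(readings, window=20, threshold=3.0):
    history = deque(maxlen=window)
    for value in readings:
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) > threshold * sigma:
                # the "robot" part: act with no human in the loop
                print(f"ALERT: {value:.1f} anomalous -> hold transaction")
        history.append(value)

# 100 unremarkable readings, then an outlier the loop should flag
monitor([10.0 + 0.1 * (i % 5) for i in range(100)] + [25.0])
```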

In the physical world, the AI enhancement of robots has mostly to do with mobile robotics. Industrial robots are almost universally bolted to the floor. Mobile robots, however, have uses in both wage-replacing and personal applications.

  • Self-driving personal use or shared service cars.
  • Self-driving trucks like Otto currently on the roads in Nevada delivering goods.
  • Mobile warehouse pick-and-pack robots.
  • Unmanned aerial vehicles, drones, and ships.
  • Robotic exoskeletons for the disabled, or to allow humans to perform with extra strength or stamina.
  • Even your Roomba is smart enough to fall in this category.

Increasingly, this conversation requires us to expand our thinking about robots, particularly to include AI-enhanced virtual robots.

Labor-Replacing Versus Personal-Use Robotics

The interesting thing about the examples above is that, based on units deployed, far more AI-enhanced robots, particularly in the virtual category, are designed to enhance our personal lives by creating free time and leisure, not to take our jobs.

We recently did a review of 30 ways in which Watson was being used by startups to create new applications. Watson isn’t the whole of AI’s application to virtual robots, but it’s interesting that 57% of those uses were aimed solely at consumers, while those that could conceivably be used by businesses to reduce direct labor costs represented only 27%.

As entrepreneurs look at how to utilize AI-enhanced virtual and physical robots, they’ve approached these business opportunities with the classic mindset of identifying pain points. What are the things in our lives that compete for our time and make performing gainful labor more difficult? Child care, elder care, health issues, home maintenance, commuting/driving, shopping, and more. Automation of non-routine tasks is increasingly feasible with deep learning, but these tasks are principally personal, not industrial or wage-replacing.

It appears that the new phenomenon of AI-enhanced robots will largely target making life easier and freeing up time. There may be some applications where, like current industrial robots, they automate human tasks and cause job displacement, but it does not appear that AI enhancement will accelerate the phenomenon.

Source: datasciencecentral.com

Adobe developing AI that turns selfies into self-portraits

OnTheGo

We all know the trademarks of a bad selfie: Unflattering angles, front-camera distortion, distracting backgrounds. Adobe is testing artificial intelligence tools to automatically correct these common issues and infuse selfies with more pleasing portraiture effects.

Adobe shared some of its research on selfie photography in the video below, and the potential results are compelling. That warped-face effect from the too-close front-facing camera? With a few taps, the typical selfie transforms into an image that appears to have been taken from farther away, as it would be in a more traditional portrait.

There’s also an automatic masking tool, which could add bokeh and depth-of-field effects similar to the iPhone 7 Plus’ Portrait Mode, blurring out distracting backgrounds for more emphasis on the subject, without the need for a dual-lens camera or other specific hardware.

In addition, Adobe hinted at the potential for automatic styling effects, similar to those seen recently in some of its other AI-driven photo retouching research, only selfie-specific.

Adobe has not confirmed if and when these tools will hit its lineup of smartphone apps, but if they are as easy and effective as this behind-the-scenes look suggests, they could be helping amateur photographers achieve more professional-looking results sometime in the not-so-distant future.

Source: New Atlas magazine

Facebook testing AI for suicide prevention tools

OnTheGo

Facebook is expanding its suicide prevention tools and rolling them out to its Facebook Live and Messenger platforms. It’s also testing AI for detecting posts that indicate suicidal or self-injurious behavior.

The social media giant has had some form of suicide prevention measures in place for over a decade. If a Facebook user posts something that invokes concern for their well-being, their friends can reach out to the person directly or report the post to Facebook. According to the company’s blog, Facebook has a 24/7 team dedicated to reviewing high-priority reports like these, who can reach out to the user with support options.

Similar functionality is being rolled out to Facebook Live, the company’s live video broadcasting platform. People watching a video will now have options to reach out to the person directly or report the video to Facebook. The person broadcasting, in turn, will see a set of resources and tips on their end.

Live support for individuals struggling with suicidal thoughts will also be coming to Messenger. These services are offered by Facebook in conjunction with its partner organizations, which include the Crisis Text Line, the National Eating Disorders Association and the National Suicide Prevention Lifeline.

And, in an effort to streamline reporting and get the person in danger access to self-help tools more quickly, Facebook is putting artificial intelligence to work in detecting content that indicates potentially suicidal behavior. It is testing pattern recognition tools to automatically detect posts that are likely to indicate thoughts of suicide. If it works correctly, it could streamline the user reporting process or bypass it altogether.
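
Facebook hasn’t published the details of these pattern-recognition tools, but the general shape of the problem is standard text classification: train a model on posts similar to those previously reported, score new posts, and route high-scoring ones to the human review team. The sketch below uses scikit-learn with invented placeholder data and an invented threshold; it reflects nothing about Facebook’s actual models, features, or training data, and in any real deployment the decision would go to trained humans, not the model.

```python
# Generic text-classification sketch of the kind of pattern recognition
# described above. Posts, labels, and threshold are invented placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = ["had a great day at the park", "excited for the weekend",
         "i can't do this anymore",    "nobody would miss me"]
labels = [0, 0, 1, 1]   # 1 = resembles posts previously reported

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

new_post = "i really can't do this anymore"
risk = model.predict_proba([new_post])[0][1]   # P(resembles a report)
if risk > 0.5:                                 # placeholder threshold
    print("route to 24/7 review team")         # humans decide, not the model
```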

Of course, these tools are not a substitute for direct action in times of crisis. If you encounter a direct threat of suicide or worry that someone is truly in danger, contact the authorities – not Facebook – immediately.

Source: Facebook