Innovation Through Crowdsourcing and AI

If artificial intelligence (AI) is the future, the future is now, and it’s all around us. Despite what science fiction and futuristic fantasy may have you believe, AI isn’t all about recreating human consciousness. Rather, it’s a practical, efficient way to help business technology get smarter as a product gains traction. AI allows companies to use insights from a large community of users to continually improve upon their products.

However, AI isn’t all games and robots. It takes a cross-sectional skill set to successfully implement good AI, and in order to do so, companies need to both understand their consumers’ motivations and capitalize on them using the right tools.

AI and Crowdsourcing: Better Together
Plenty of businesses rely on data from the usual suspects — business analytics, internal data, information gathered by employees — but few understand how to actively manage data contributed by users. Alexa and Siri are prime examples of how AI can leverage this crowdsourced information to improve the customer-company relationship.

Using crowdsourcing to gather human-contributed information and funneling that information through AI technology is the simplest path toward more meaningful insights. This method allows business owners to stop hunting down insights one at a time and to instead receive targeted data to inform smarter business decisions. This collaboration produces results that are greater than the sum of the parts.

The value lies in asking the right questions at the right time using AI and reporting the findings to the people who could benefit from the information. Collectively, crowdsourcing and AI produce truly intelligent market research.
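
As a purely illustrative sketch of that funnel (the sample answers and the TF-IDF "consensus" scoring below are assumptions of ours, not any vendor's actual method), the idea is that crowd-contributed answers come in and a simple model surfaces the ones that best represent the whole group:

```python
# Minimal sketch: rank crowdsourced answers to a question by how well each one
# represents the crowd, using TF-IDF similarity as a stand-in for a real AI model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

answers = [  # toy, invented answers
    "Customers mostly complain about slow checkout on mobile.",
    "Mobile checkout is slow and the coupon field is hard to find.",
    "Shipping costs surprise people at the last step.",
    "The checkout flow on phones takes too many taps.",
]

vectors = TfidfVectorizer(stop_words="english").fit_transform(answers)
# An answer that resembles many other answers is treated as a consensus signal.
consensus = cosine_similarity(vectors).mean(axis=1)

for score, text in sorted(zip(consensus, answers), reverse=True):
    print(f"{score:.2f}  {text}")
```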

Teamwork Makes the Dream Work
Companies must integrate crowdsourcing and AI to produce a scalable, intelligent model capable of handling their needs indefinitely. Many implement AI without really identifying how it can help them. Other, more tech-focused companies often find themselves trying to lay crowdsourcing on top of existing technology without properly understanding how to motivate their crowds.

Crowdsourcing matches people to questions that the community needs answered. Those with information to share can provide feedback on their fields of expertise according to the needs of those searching for that information. Meanwhile, AI technology filters out the answers and extracts meaningful intelligence from them, creating a powerful advantage over companies that fail to combine these tools to their full potential. What advantages, you ask? Here are a few:

1. A streamlined end-user experience. Alexa, Siri, Waze, and Skype Translator are all embodiments of the improved end-user experience thanks to crowdsourced AI insights. In the early stages, using these tools can be frustrating as they continue to gather data.

Waze took traffic navigation — something very few people like — and improved it with real-time updates, personalized vocal guides, and other features.

A wealth of information is the foundation. AI and crowdsourcing can build on that base to create a valuable, magical experience.

2. It brings outsiders into the fold. Waze began by gathering information on the patterns of power commuters, eventually building up enough data via crowdsourcing and expert consultations to create optimal route maps for its users. Thanks to the app, people new to an area can have the same driving experience as someone who has lived there for 10 years.

The powerful crowdsourcing-AI combination has the capability to bring any outsider into any inner circle. The more comfortable a user is with the information — especially if he provides it — the more likely he is to become a repeat visitor.

3. Lots of intelligence, all in one spot. Currently, business leaders must track down information in silos. For example, only a specific department can answer specific questions, and help is often stalled while a department waits for approval from another division.

When crowdsourcing and AI pull that intelligence into one place, the streamlined process even allows leaders to see connections they otherwise might have overlooked.

While some technologies introduce only complications to established processes, the power of combining crowdsourcing with AI is worth the disruption. If your company is looking for better insights and new advantages, consider the benefits of this powerful merger.

Source: innovationexcellence.com

How Data Analytics Make Airlines Fly

Look up at the sky and think in numbers: Each day, nearly 7,000 commercial aircraft take off on 24,000 flights, according to the Federal Aviation Administration. With the expert guidance of 14,000 air traffic controllers, they fly about 2.2 million passengers every 24 hours.

To do that, airlines and controllers coordinate airplanes’ movements through internal dispatch offices and 476 control towers. Add to the mix some 200,000 general aviation aircraft traveling between more than 19,000 airports, and you can envision the logistical challenges involved in moving 719 million passengers around the U.S. each year.

It doesn’t take much to imagine the complex logistics involved in making a system of such scale work properly. While “data analytics” is a relatively new term to the mainstream, airlines have been applying such principles to the business for years, often under the phrase “operations modeling,” said Doug Gray, director of enterprise data analytics for Dallas-based Southwest Airlines.

At first, the data was used to guide decisions about fueling, crew schedules and flight itineraries. Today, analytics are used across the organization, in functions from marketing to operations, explained Gray, who’s also a member of the International Institute for Analytics’ Expert Network. “They have a significant impact on costs and profitability,” he said.

Indeed, calling real-time analytics “integral” to any airline’s success isn’t going too far. While specialists in fueling, crew scheduling, flight scheduling and other areas may have their plans laid out perfectly, their work is never immune from unexpected developments.

“Daily operations is where stuff hits the fan,” Gray observed. “You can’t predict a heart attack in-flight, or a control-tower fire.” Schedulers have only so much advance warning of bad weather. Even on the best of days, it seems an airline’s plans can only be regarded as tentative.

Real-Time Analytics to Face Real-Time Challenges
To help things move as smoothly as possible under almost any circumstances, Southwest launched “the Irregular Operations Recovery Optimizer,” a system that takes an incident’s consequences, factors them into the airline’s operations at that moment, and “rejiggers events on the fly,” Gray said. The first of its kind in the industry, the Optimizer’s ability to crunch data and quickly propose solutions may give Southwest “a unique competitive advantage.”
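
To make the idea concrete, here is a toy sketch of the kind of recovery problem such a system solves. This is not Southwest's actual Optimizer, and the flights, tail numbers, and delay estimates are invented; it simply uses a standard assignment solver to re-pair stranded flights with available aircraft so total delay is minimized.

```python
# Toy disruption-recovery sketch (illustrative only, not Southwest's system):
# after an incident, assign available aircraft to stranded flights so that the
# total estimated delay is as small as possible.
import numpy as np
from scipy.optimize import linear_sum_assignment

flights = ["DAL-HOU 0800", "DAL-AUS 0830", "DAL-MSY 0900"]   # hypothetical flights
aircraft = ["N401XX", "N517XX", "N720XX"]                     # hypothetical tail numbers

# delay_minutes[i][j]: estimated delay if aircraft j covers flight i.
# In reality this would reflect positioning time, crew legality, curfews, etc.
delay_minutes = np.array([
    [ 45, 120,  30],
    [ 90,  20, 150],
    [ 60,  75,  40],
])

rows, cols = linear_sum_assignment(delay_minutes)
for i, j in zip(rows, cols):
    print(f"{flights[i]} -> {aircraft[j]} ({delay_minutes[i, j]} min delay)")
print("total delay:", delay_minutes[rows, cols].sum(), "minutes")
```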

With such systems in place, and the use of data only expected to grow, it’s no surprise that Gray predicts a constant demand for the right technical talent within the industry. But as he noted, analytics systems rely on more than data specialists to make them work. While he oversees an organization of some 200 employees, only about 10 percent hold a data scientist role or similar positions. The others are experts in areas such as data warehousing and ETL (Extract, Transform, Load), and still others have expertise in some of the airline’s nuts-and-bolts functions, such as the fuel supply chain.

“We look for hardcore operations research people who can work closely with domain experts,” Gray said. Though Southwest recruits from the industry, it also seeks newly minted masters of science in data analytics from nearby schools such as the University of Texas at Austin and Southern Methodist University.

Industry Expertise Helps
Of course, airlines need people who can implement and maintain analytics systems—software engineers and developers who need to have at least some expertise with data, Gray said.

For his part, Gray is particularly interested in tech pros familiar with Oracle, Teradata and Amazon Cloud Services. Also important are database skills such as NoSQL and MongoDB. Although he sees the company experimenting more with Hadoop (“It’s better in the cloud”), Gray said that “unless something better comes along,” the company will continue to rely heavily on R and Alteryx, a “data science self-service desktop” that combines ETL, R and visualization in one GUI-driven application.

Like many employers, Southwest will hire “the right person and train them” on needed skills, especially if they’re just out of school, Gray said. And, he believes, industry experience can give candidates an advantage. As more people pursue careers using data science, analytics and operations research, “more departments will have their own [data and analytics] experts, joined together by a center of excellence.”

“Our biggest constraint isn’t data,” he added. “It’s teaming up the right data people with the right subject-matter experts. There’s going to be a battle for talent, and we need people with a passion for the airline business.”

Source: insights.dice.com

How disinformation spreads in a network

Disinformation is kind of a problem these days, yeah? Fatih Erikli uses a simulation that works like a disaster spread model applied to social networks to give an idea of how disinformation spreads.

“I tried to visualize how a disinformation becomes a post-truth by the people who subscribed in a network. We can think this network as a social media such as Facebook or Twitter. The nodes (points) in the map represent individuals and the edges (lines) shows the relationships between them in the community. The disinformation will be forwarded to their audience by the unconscious internet (community) members.”

Set the “consciousness” parameter and select a node to run.
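
For readers who want to poke at the idea themselves, here is a minimal sketch of the spread model as described above. It is our own reading of it, not Erikli's actual code: disinformation starts at one node and is forwarded onward by every "unconscious" member who receives it.

```python
# Minimal sketch of the described spread model (illustrative, not the original code).
import random
import networkx as nx

random.seed(1)
G = nx.erdos_renyi_graph(n=200, p=0.03, seed=1)   # stand-in for a social network
consciousness = 0.4                               # tunable parameter, as in the demo
conscious = {v: random.random() < consciousness for v in G}

seed_node = 0          # the selected node where the disinformation starts
reached = {seed_node}
frontier = [seed_node]
while frontier:
    next_frontier = []
    for v in frontier:
        for u in G.neighbors(v):
            if u not in reached:
                reached.add(u)
                if not conscious[u]:              # only unconscious members forward it
                    next_frontier.append(u)
    frontier = next_frontier

print(f"{len(reached)} of {G.number_of_nodes()} nodes saw the disinformation")
```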

Source: flowingdata.com

AI-powered diagnostic device takes out Tricorder XPrize

Launched in 2012, the Qualcomm Tricorder XPrize tasked competing teams with developing a portable and versatile medical diagnostics machine that would give people “unprecedented access” to information about their health. The contest has now been run and won, with an AI-powered device awarded top honors and US$2.5 million for its trouble.

This particular XPrize – a series of competitions aimed at solving global issues – was created to encourage the development of a device that mimicked the iconic Tricorder from Star Trek. More specifically, this meant the ability to diagnose 13 conditions including anemia, diabetes, sleep apnea and urinary tract infections, along with the ability to detect three of five additional diseases: HIV, hypertension, melanoma, shingles and strep throat.

The competition was whittled down to ten finalists in 2014, and then again to two in December last year. The Taiwan-based Dynamical Biomarkers Group took second place with its prototype for a smartphone-based diagnostics device, but was beaten out by Final Frontier Medical Devices from Pennsylvania.

The winning machine is called DxtER and uses artificial intelligence to teach itself to diagnose medical conditions. It does this by using a set of non-invasive sensors to check vital signs, body chemistry and biological functions, and by drawing on data from clinical emergency medicine and actual patients. All this data is then synthesized by the AI engine, and the device spits out a “quick and accurate assessment.”
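
As a rough illustration of that kind of pipeline (synthetic readings, invented feature ranges, and a generic classifier; this is not DxtER's actual algorithm or data), the basic idea is to map a handful of non-invasive sensor measurements to a diagnostic label:

```python
# Purely illustrative: a supervised classifier over simulated sensor readings.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Assumed feature columns: heart rate, SpO2 %, temperature C, fasting glucose mg/dL
healthy  = rng.normal([70, 98, 36.8,  90], [8, 1, 0.3, 8],  size=(200, 4))
diabetic = rng.normal([75, 97, 36.8, 160], [8, 1, 0.3, 20], size=(200, 4))
anemic   = rng.normal([95, 96, 36.9,  90], [10, 1, 0.3, 8], size=(200, 4))

X = np.vstack([healthy, diabetic, anemic])
y = ["healthy"] * 200 + ["diabetes"] * 200 + ["anemia"] * 200

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(model.predict([[72, 98, 36.7, 155]]))   # a new, unseen set of readings
```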

In addition to the $2.5 million, the Final Frontier and Dynamical Biomarkers Group teams (which received a not-too-shabby $1 million for second place) will benefit from ongoing support and funding from XPrize and its partners. This includes R&D partnerships with the US Food and Drug Administration and the University of California San Diego. Meanwhile, Lowe’s Home Improvements has committed to distributing a consumer-ready version of the device, while the General Hospital of Maputo in Mozambique will provide it to its doctors, nurses and patients.

“We could not be more pleased with the quality of innovation and performance of the teams who competed, particularly with teams Final Frontier and Dynamical Biomarkers Group,” said Marcus Shingles, CEO of the XPrize Foundation. “Although this XPrize competition phase has ended, XPrize, Qualcomm Foundation, and a network of strategic partners are committed and excited to now be entering a new phase which will support these teams in their attempt to scale impact and the continued evolution of the Tricorder device through a series of new post-competition initiatives.”

Source: Newatlas.com

Machine Learning Provides Competitive Edge in Retail

A simple concept underlies machine learning: software can access a dataset and generate results from it. That concept also serves as the most crucial element in providing meaningful personalized service for customers.

In marshaling its resources, Amazon has begun to school retailers and search engines on how crucial machine learning is to staying competitive.

Several Amazon advertising services are starting to rival Google in one of Google’s most significant business models: online advertising. Amazon has long offered Product Display Ads that feature product images and text related to people’s searches. It just launched a few advanced advertising services, such as a cloud-based header bidding service, according to MarTech.

More to the point of machine learning, Amazon is now beefing up services related to this technology. The company has announced a new program that will allow developers to build and host most Alexa skills using Amazon Web Services for free. It also introduced three new AI services — Amazon Rekognition, which can perform image recognition, categorization, and facial analysis; Amazon Polly, a deep learning-driven text-to-speech (TTS) service; and Amazon Lex, a natural language and speech recognition program. The initiatives will bolster Amazon Web Services (AWS) against Microsoft and Google.
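
For developers, these services are exposed through ordinary AWS SDK calls. As a small example (the image file name and region below are placeholders), Amazon Rekognition can label the contents of a product photo in a few lines of Python:

```python
# Label an image with Amazon Rekognition via boto3 (file name and region are examples).
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

with open("shelf_photo.jpg", "rb") as f:          # placeholder image file
    response = rekognition.detect_labels(
        Image={"Bytes": f.read()},
        MaxLabels=10,
        MinConfidence=80,
    )

for label in response["Labels"]:
    print(label["Name"], round(label["Confidence"], 1))
```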

These product milestones for Amazon come as the retail industry — the most frequent user of personalized ads — confronts a complex puzzle of tech and trends. Retailers face a massive distribution transformation and are shifting away from their traditional locations. For instance, mid-level malls are losing stores such as JCPenney, Sears, and Macy’s. Other retailers are experimenting with smaller stores and kiosks in an effort to adjust their floor space. Even once online-only retailers such as Warby Parker and — you guessed it — Amazon have added small brick-and-mortar stores to establish a cohesive consumer experience.

Changing Consumer Behaviors
Changing consumer digital behaviors are adding to the challenge for retailers. Behaviors such as “webrooming” and “showrooming” have become more popular over the last five holiday shopping seasons and are now standard. Showrooming is when shoppers visit physical stores but use their smartphones to comparison shop, check competitors’ prices, and even place orders with a store’s competitor; webrooming is the reverse, with shoppers researching online before buying in store. The adoption of these behaviors meant retailers had to improve their mobile sites, launch apps, examine beacons, and consider virtual reality to create a customer experience that supports the brand and retains sales.

All of this has raised the bar for correlating a wide variety of data to spot trends — new sources, new contexts, and new intentions, all at different times. Managers who had just converted to the church of analytics now must listen to a new measurement sermon: where does machine learning fit within their business? And because of Amazon, retail managers feel an urgency to learn machine learning and to plan how to execute strategy in a world increasingly dominated by a giant competitor.

Through its operational prowess, scale of services, and inroads into IoT devices and cloud solutions, Amazon has positioned itself to make a myriad of correlations between business metrics and technical metrics. I mentioned in recent posts that nascent search activity from Amazon site visitors is rivaling search engines as a consumer starting point for researching products and services. Amazon can now take significant advantage with machine learning. Much of machine learning relies on data preparation — addressing data quality issues such as handling missing values. Amazon has an opportunity to provide better context for the searches conducted on its site.
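
As a minimal sketch of what that data-preparation step looks like in practice (toy order data and standard pandas/scikit-learn tools; not Amazon's pipeline), the job is simply to fill the gaps before any model sees the data:

```python
# Illustrative data preparation: impute missing values in a small, made-up order table.
import pandas as pd
from sklearn.impute import SimpleImputer

orders = pd.DataFrame({
    "price":        [19.99, None, 34.50, 12.00],
    "review_score": [4.5,   3.0,  None,  4.8],
    "category":     ["home", "toys", None, "home"],
})

numeric_cols = ["price", "review_score"]
# Fill numeric gaps with the column median, categorical gaps with the most common value.
orders[numeric_cols] = SimpleImputer(strategy="median").fit_transform(orders[numeric_cols])
orders["category"] = orders["category"].fillna(orders["category"].mode()[0])

print(orders)
```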

Amazon can then leverage its discoveries into meaningful customer and business value. A potential example is implementing tactics influenced by BizDevOps, a blend of front-end software development with business development and operations tactics. Its purpose is to align app development to customer and business value in upfront planning. That alignment has become critical as analytics has shifted from singular inferences from website activity into a central measurement of various activity across digital media and IoT devices. If you do a Google search, you’ll find more than a few posts on the topic of BizDevOps mentioning Amazon as a model example.

Retail’s Machine Learning Future
Amazon’s potential with machine learning is a long way from the early years, when Wall Street analysts criticized the once-books-only retailer over its quarterly losses. Amazon’s machine learning potential also has far-reaching implications.

Amazon’s interest in personalized ads and its growing machine learning prowess are tantalizing to supporters of programmatic advertising, which automates the targeting of messages to highly valued audiences. Marketers can better predict how ad creative, products, and services can be combined to appeal to customers at different stages of the customer experience or a sale. Amazon can ultimately play a central role with platform partners who want to better understand how their products are received.

If this Amazon news makes your strategic team feel behind the curve, take heart. The good news is that machine learning is still in its early stages, with retailers only beginning to find ways to integrate their data and the devices that produce it. Retailers still turn to Google for search and paid ads because Google covers a large number of industries, so Amazon will remain a retail niche for now.

But if business managers want to find potential success like Amazon has found, they must look internally with technology teams to see how machine learning techniques can be the operational glue between business resources and personalized experience for customers.

Source: allanalytics.com

Machine learning creates living atlas of the planet

Machine learning, combined with satellite imagery and Cloud computing, is enabling understanding of the world and making the food supply chain more efficient.

There are more than 7 billion people on Earth now, and roughly one in eight people do not have enough to eat. According to the World Bank, the human population will hit an astounding 9 billion by 2050. With rapidly increasing population, the growing need for food is becoming a grave concern.

The burden is now on technology to head off the looming food crises of the coming decades. Fortunately, there is no shortage of ideas, and innovative minds are seeking solutions to the problem.

Machine learning to the rescue
Descartes Labs, a Los Alamos, New Mexico-based start-up, is using machine learning to analyze satellite imagery and predict food supplies months in advance of the methods currently employed by the US government — a technique that could help predict food crises before they happen.

Descartes Labs pulls images from public databases like NASA’s Landsat and MODIS, ESA’s Sentinel missions, and other private satellite imagery providers, including Planet. It also monitors public datasets on Google Earth and Amazon Web Services. This continuously updated imagery is referred to as the ‘Living Atlas of the Planet’.

The commercial atlas, designed to provide real-time forecasts of commodity agriculture, uses decades of remotely sensed images stored on the Cloud to offer land use and land change analysis.

Descartes Labs cross-references the satellite information with other relevant data such as weather forecasts and prices of agricultural products. This data is then fed into the machine learning software, which tracks and forecasts future food supplies with striking accuracy. By processing these images and data with its machine learning algorithms, Descartes Labs extracts remarkably in-depth information: it can distinguish individual crop fields and determine a specific field’s crop by analyzing how sunlight reflects off its surface. Once the type of crop has been established, the machine learning program then monitors the field’s production levels.
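
As an illustrative sketch of that crop-typing step (synthetic reflectance values and a generic classifier; not Descartes Labs’ actual models), the core idea is to label each pixel or field from how strongly it reflects in a few spectral bands:

```python
# Toy crop classification from spectral reflectance (illustrative only).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(7)
# Assumed feature columns: red, near-infrared, shortwave-infrared reflectance (0-1)
corn     = rng.normal([0.08, 0.45, 0.25], 0.03, size=(300, 3))
soybeans = rng.normal([0.06, 0.55, 0.20], 0.03, size=(300, 3))
fallow   = rng.normal([0.20, 0.25, 0.35], 0.03, size=(300, 3))

X = np.vstack([corn, soybeans, fallow])
y = ["corn"] * 300 + ["soy"] * 300 + ["fallow"] * 300

clf = GradientBoostingClassifier().fit(X, y)
print(clf.predict([[0.07, 0.52, 0.21]]))   # one unseen pixel, likely labeled "soy"
```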

“With machine learning techniques, we look at tons of pixels from satellites, and that tells us what’s growing,” says Mark Johnson, CEO and Co-founder, Descartes Labs.

How to tackle a data deluge
The total database includes approximately a petabyte — or 10^15 bytes — of data. Descartes has reprocessed the entire 40-year archive, starting with the first Landsat satellite imagery, to offer a completely cloud-free view of land use and land change and create this ‘Living Atlas of the Planet’.

The data platform is said to have analyzed over 2.8 quadrillion multispectral pixels to build it. It processes data at rates of petabytes per day, using multi-source data to produce calibrated, georeferenced imagery stacks at desired points in time and space that can be used for pixel-level or global-scale analysis, or for visualizing and measuring changes such as floods or shifts in the condition of crops. “The platform is built for analysis. It is not built to store the data. This is a vastly different philosophy than traditional data platforms,” says Daniela Moody, Remote Sensing and Machine Learning Specialist, Descartes Labs.

The platform churns out imagery for specific locations and times at different wavelengths, offering unique insights into land cover changes over broad swaths of land. For instance, the NDVI (normalized difference vegetation index) reveals live green vegetation using a combination of red and near-infrared spectral bands. Combining NDVI with visible spectral bands allows a user to examine the landscape through many lenses. The platform offers both Web and API interfaces: the Web interface provides options for visualizing data, while the API allows the user to interact directly with the data for specific analyses. The platform’s scalable Cloud infrastructure quickly ingests, analyzes, and creates predictions from the imagery.
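
For reference, the NDVI calculation itself is simple. A quick sketch with toy reflectance values:

```python
# NDVI = (NIR - Red) / (NIR + Red), computed per pixel (toy values below).
import numpy as np

red = np.array([[0.10, 0.20], [0.08, 0.25]])   # red-band reflectance
nir = np.array([[0.50, 0.30], [0.55, 0.26]])   # near-infrared reflectance

ndvi = (nir - red) / (nir + red)
print(ndvi.round(2))   # values near 1 indicate dense, healthy vegetation
```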

Change is the only constant
The ability to have such fine-grained data on agricultural production will help in making the food supply chain more efficient. As Descartes Labs adds more geospatial data to its already robust database of earth imagery, these models will get even more accurate. Cloud computing and storage, combined with recent advances in machine learning and open software, are enabling understanding of the world at an unprecedented scale and detail.

Earth is not a static place, and researchers who study it need tools that keep up with the constant change. “We designed this platform to answer the problems of commodity agriculture,” Moody adds, “and in doing so we created a platform that is incredible and allows us to have a living atlas of the world.”

Source: geospatialworld.net

Our Fear of Artificial Intelligence

A true AI might ruin the world—but that assumes it’s possible at all.

Years ago I had coffee with a friend who ran a startup. He had just turned 40. His father was ill, his back was sore, and he found himself overwhelmed by life. “Don’t laugh at me,” he said, “but I was counting on the singularity.”

My friend worked in technology; he’d seen the changes that faster microprocessors and networks had wrought. It wasn’t that much of a step for him to believe that before he was beset by middle age, the intelligence of machines would exceed that of humans—a moment that futurists call the singularity. A benevolent superintelligence might analyze the human genetic code at great speed and unlock the secret to eternal youth. At the very least, it might know how to fix your back.

But what if it wasn’t so benevolent? Nick Bostrom, a philosopher who directs the Future of Humanity Institute at the University of Oxford, describes the following scenario in his book Superintelligence, which has prompted a great deal of debate about the future of artificial intelligence. Imagine a machine that we might call a “paper-clip maximizer”—that is, a machine programmed to make as many paper clips as possible. Now imagine that this machine somehow became incredibly intelligent. Given its goals, it might then decide to create new, more efficient paper-clip-manufacturing machines—until, King Midas style, it had converted essentially everything to paper clips.

No worries, you might say: you could just program it to make exactly a million paper clips and halt. But what if it makes the paper clips and then decides to check its work? Has it counted correctly? It needs to become smarter to be sure. The superintelligent machine manufactures some as-yet-uninvented raw-computing material (call it “computronium”) and uses that to check each doubt. But each new doubt yields further digital doubts, and so on, until the entire earth is converted to computronium. Except for the million paper clips.

Bostrom does not believe that the paper-clip maximizer will come to be, exactly; it’s a thought experiment, one designed to show how even careful system design can fail to restrain extreme machine intelligence. But he does believe that superintelligence could emerge, and while it could be great, he thinks it could also decide it doesn’t need humans around. Or do any number of other things that destroy the world. The title of chapter 8 is: “Is the default outcome doom?”

If this sounds absurd to you, you’re not alone. Critics such as the robotics pioneer Rodney Brooks say that people who fear a runaway AI misunderstand what computers are doing when we say they’re thinking or getting smart. From this perspective, the putative superintelligence Bostrom describes is far in the future and perhaps impossible.

Yet a lot of smart, thoughtful people agree with Bostrom and are worried now. Why?

Volition
The question “Can a machine think?” has shadowed computer science from its beginnings. Alan Turing proposed in 1950 that a machine could be taught like a child; John McCarthy, inventor of the programming language LISP, coined the term “artificial intelligence” in 1955. As AI researchers in the 1960s and 1970s began to use computers to recognize images, translate between languages, and understand instructions in normal language and not just code, the idea that computers would eventually develop the ability to speak and think—and thus to do evil—bubbled into mainstream culture. Even beyond the oft-referenced HAL from 2001: A Space Odyssey, the 1970 movie Colossus: The Forbin Project featured a large blinking mainframe computer that brings the world to the brink of nuclear destruction; a similar theme was explored 13 years later in WarGames. The androids of 1973’s Westworld went crazy and started killing.

When AI research fell far short of its lofty goals, funding dried up to a trickle, beginning long “AI winters.” Even so, the torch of the intelligent machine was carried forth in the 1980s and ’90s by sci-fi authors like Vernor Vinge, who popularized the concept of the singularity; researchers like the roboticist Hans Moravec, an expert in computer vision; and the engineer/entrepreneur Ray Kurzweil, author of the 1999 book The Age of Spiritual Machines. Whereas Turing had posited a humanlike intelligence, Vinge, Moravec, and Kurzweil were thinking bigger: when a computer became capable of independently devising ways to achieve goals, it would very likely be capable of introspection—and thus able to modify its software and make itself more intelligent. In short order, such a computer would be able to design its own hardware.

As Kurzweil described it, this would begin a beautiful new era. Such machines would have the insight and patience (measured in picoseconds) to solve the outstanding problems of nanotechnology and spaceflight; they would improve the human condition and let us upload our consciousness into an immortal digital form. Intelligence would spread throughout the cosmos.

You can also find the exact opposite of such sunny optimism. Stephen Hawking has warned that because people would be unable to compete with an advanced AI, it “could spell the end of the human race.” Upon reading Superintelligence, the entrepreneur Elon Musk tweeted: “Hope we’re not just the biological boot loader for digital superintelligence. Unfortunately, that is increasingly probable.” Musk then followed with a $10 million grant to the Future of Life Institute. Not to be confused with Bostrom’s center, this is an organization that says it is “working to mitigate existential risks facing humanity,” the ones that could arise “from the development of human-level artificial intelligence.”

No one is suggesting that anything like superintelligence exists now. In fact, we still have nothing approaching a general-purpose artificial intelligence or even a clear path to how it could be achieved. Recent advances in AI, from automated assistants such as Apple’s Siri to Google’s driverless cars, also reveal the technology’s severe limitations; both can be thrown off by situations that they haven’t encountered before. Artificial neural networks can learn for themselves to recognize cats in photos. But they must be shown hundreds of thousands of examples and still end up much less accurate at spotting cats than a child.

This is where skeptics such as Brooks, a founder of iRobot and Rethink Robotics, come in. Even if it’s impressive—relative to what earlier computers could manage—for a computer to recognize a picture of a cat, the machine has no volition, no sense of what cat-ness is or what else is happening in the picture, and none of the countless other insights that humans have. In this view, AI could possibly lead to intelligent machines, but it would take much more work than people like Bostrom imagine. And even if it could happen, intelligence will not necessarily lead to sentience. Extrapolating from the state of AI today to suggest that superintelligence is looming is “comparable to seeing more efficient internal combustion engines appearing and jumping to the conclusion that warp drives are just around the corner,” Brooks wrote recently on Edge.org. “Malevolent AI” is nothing to worry about, he says, for a few hundred years at least.

Insurance policy
Even if the odds of a superintelligence arising are very long, perhaps it’s irresponsible to take the chance. One person who shares Bostrom’s concerns is Stuart J. Russell, a professor of computer science at the University of California, Berkeley. Russell is the author, with Peter Norvig (a peer of Kurzweil’s at Google), of Artificial Intelligence: A Modern Approach, which has been the standard AI textbook for two decades.

“There are a lot of supposedly smart public intellectuals who just haven’t a clue,” Russell told me. He pointed out that AI has advanced tremendously in the last decade, and that while the public might understand progress in terms of Moore’s Law (faster computers are doing more), in fact recent AI work has been fundamental, with techniques like deep learning laying the groundwork for computers that can automatically increase their understanding of the world around them.

Because Google, Facebook, and other companies are actively looking to create an intelligent, “learning” machine, he reasons, “I would say that one of the things we ought not to do is to press full steam ahead on building superintelligence without giving thought to the potential risks. It just seems a bit daft.” Russell made an analogy: “It’s like fusion research. If you ask a fusion researcher what they do, they say they work on containment. If you want unlimited energy you’d better contain the fusion reaction.” Similarly, he says, if you want unlimited intelligence, you’d better figure out how to align computers with human needs.

Bostrom’s book is a research proposal for doing so. A superintelligence would be godlike, but would it be animated by wrath or by love? It’s up to us (that is, the engineers). Like any parent, we must give our child a set of values. And not just any values, but those that are in the best interest of humanity. We’re basically telling a god how we’d like to be treated. How to proceed?

Bostrom draws heavily on an idea from a thinker named Eliezer Yudkowsky, who talks about “coherent extrapolated volition”—the consensus-derived “best self” of all people. AI would, we hope, wish to give us rich, happy, fulfilling lives: fix our sore backs and show us how to get to Mars. And since humans will never fully agree on anything, we’ll sometimes need it to decide for us—to make the best decisions for humanity as a whole. How, then, do we program those values into our (potential) superintelligences? What sort of mathematics can define them? These are the problems, Bostrom believes, that researchers should be solving now. Bostrom says it is “the essential task of our age.”

For the civilian, there’s no reason to lose sleep over scary robots. We have no technology that is remotely close to superintelligence. Then again, many of the largest corporations in the world are deeply invested in making their computers more intelligent; a true AI would give any one of these companies an unbelievable advantage. They also should be attuned to its potential downsides and figuring out how to avoid them.

This somewhat more nuanced suggestion—without any claims of a looming AI-mageddon—is the basis of an open letter on the website of the Future of Life Institute, the group that got Musk’s donation. Rather than warning of existential disaster, the letter calls for more research into reaping the benefits of AI “while avoiding potential pitfalls.” This letter is signed not just by AI outsiders such as Hawking, Musk, and Bostrom but also by prominent computer scientists (including Demis Hassabis, a top AI researcher). You can see where they’re coming from. After all, if they develop an artificial intelligence that doesn’t share the best human values, it will mean they weren’t smart enough to control their own creations.

Source: MIT Technology Review