AI predictions for 2019 from Yann LeCun, Hilary Mason, Andrew Ng, and Rumman Chowdhury

Artificial intelligence is cast all at once as the technology that will save the world and end it.

To cut through the noise and hype, VentureBeat spoke with luminaries whose views on the right way to do AI have been informed by years of working with some of the biggest tech and industry companies on the planet.

Below find insights from Google Brain cofounder Andrew Ng, Cloudera general manager of ML and Fast Forward Labs founder Hilary Mason, Facebook AI Research founder Yann LeCun, and Accenture’s responsible AI global lead Dr. Rumman Chowdhury. We wanted to get a sense of what they saw as the key milestones of 2018 and hear what they think is in store for 2019.

Amid a recap of the year and predictions for the future, some said they were encouraged to be hearing fewer Terminator AI apocalypse scenarios, as more people understand what AI can and cannot do. But these experts also stressed a continued need for computer and data scientists in the field to adopt responsible ethics as they advance artificial intelligence.

Dr. Rumman Chowdhury

Dr. Rumman Chowdhury is managing director of the Applied Intelligence division at Accenture and global lead of its Responsible AI initiative, and was named to BBC’s 100 Women list in 2017. Last year, I had the honor of sharing the stage with her in Boston at Affectiva’s conference to discuss matters of trust surrounding artificial intelligence. She regularly speaks to audiences around the world on the topic.

For the sake of time, she responded to questions about AI predictions for 2019 via email. All responses from the other people in this article were shared in phone interviews.

Chowdhury said in 2018 she was happy to see growth in public understanding of the capabilities and limits of AI and to hear a more balanced discussion of the threats AI poses — beyond fears of a global takeover by intelligent machines as in The Terminator. “With that comes increasing awareness and questions about privacy and security, and the role AI may play in shaping us and future generations,” she said.

Public awareness of AI still isn’t where she thinks it needs to be, however, and in the year ahead Chowdhury hopes to see more people take advantage of educational resources to understand AI systems and be able to intelligently question AI decisions.

She has been pleasantly surprised by the speed with which tech companies and people in the AI ecosystem have begun to consider the ethical implications of their work. But she wants to see the AI community do more to “move beyond virtue signaling to real action.”

“As for the ethics and AI field — beyond the trolley problem — I’d like to see us digging into the difficult questions AI will raise, the ones that have no clear answer. What is the ‘right’ balance of AI- and IoT-enabled monitoring that allows for security but resists a punitive surveillance state that reinforces existing racial discrimination? How should we shape the redistribution of gains from advanced technology so we are not further increasing the divide between the haves and have-nots? What level of exposure to children allows them to be ‘AI natives’ but not manipulated or homogenized? How do we scale and automate education using AI but still enable creativity and independent thought to flourish?” she asked.

In the year ahead, Chowdhury expects to see more government scrutiny and regulation of tech around the world.

“AI and the power that is wielded by the global tech giants raises a lot of questions about how to regulate the industry and the technology,” she said. “In 2019, we will have to start coming up with the answers to these questions — how do you regulate a technology when it is a multipurpose tool with context-specific outcomes? How do you create regulation that doesn’t stifle innovation or favor large companies (who can absorb the cost of compliance) over small startups? At what level do we regulate? International? National? Local?”

She also expects to see the continued evolution of AI’s role in geopolitical matters.

“This is more than a technology, it is an economy- and society-shaper. We reflect, scale, and enforce our values in this technology, and our industry needs to be less naive about the implications of what we build and how we build it,” she said. For this to happen, she believes people need to move beyond the idea common in the AI industry that if we don’t build it, China will, as if creation alone is where power lies.

“I hope regulators, technologists, and researchers realize that our AI race is about more than just compute power and technical acumen, just like the Cold War was about more than nuclear capabilities,” she said. “We hold the responsibility of recreating the world in a way that is more just, more fair, and more equitable while we have the rare opportunity to do so. This moment in time is fleeting; let’s not squander it.”

On a consumer level, she believes 2019 will see more use of AI in the home. Many people have become much more accustomed to using smart speakers like Google Home and Amazon Echo, as well as a host of smart devices. On this front, she’s curious to see if anything especially interesting emerges from the Consumer Electronics Show — set to kick off in Las Vegas in the second week of January — that might further integrate artificial intelligence into people’s daily lives.

“I think we’re all waiting for a robot butler,” she said.

Andrew Ng

I always laugh more than I expect to when I hear Andrew Ng deliver a whiteboard session at a conference or in an online course. Perhaps because it’s easy to laugh with someone who is both passionate and having a good time.

Ng is an adjunct computer science professor at Stanford University whose name is well known in AI circles for a number of different reasons.

He’s the cofounder of Google Brain, an initiative to spread AI throughout Google’s many products, and the founder of Landing AI, a company that helps businesses integrate AI into their operations.

He’s also the instructor of some of the most popular machine learning courses on YouTube and Coursera, the online learning company he cofounded, and he founded deeplearning.ai and wrote the book Machine Learning Yearning.

He also spent more than three years as chief AI scientist at Baidu, another tech giant he helped transform into an AI company, before leaving that post in 2017.

Finally, he’s also part of the $175 million AI Fund and on the board of driverless car company Drive.ai.

Ng spoke with VentureBeat earlier this month when he released the AI Transformation Playbook, a short read about how companies can unlock the positive impacts of artificial intelligence for their own companies.

One major area of progress or change he expects to see in 2019 is AI being used in applications outside of tech or software companies. The biggest untapped opportunities in AI lie beyond the software industry, he said, citing use cases from a McKinsey report that found that AI will generate $13 trillion in GDP by 2030.

“I think a lot of the stories to be told next year [2019] will be in AI applications outside the software industry. As an industry, we’ve done a decent job helping companies like Google and Baidu but also Facebook and Microsoft — which I have nothing to do with — but even companies like Square and Airbnb, Pinterest, are starting to use some AI capabilities. I think the next massive wave of value creation will be when you can get a manufacturing company or agriculture devices company or a health care company to develop dozens of AI solutions to help their businesses.”

Like Chowdhury, Ng was surprised by the growth in understanding of what AI can and cannot do in 2018, and pleased that conversations can take place without focusing on the killer robot scenario or fears of artificial general intelligence.

Ng said he intentionally responded to my questions with answers he didn’t expect many others to have.

“I’m trying to cite deliberately a couple of areas which I think are really important for practical applications. I think there are barriers to practical applications of AI, and I think there’s promising progress in some places on these problems,” he said.

In the year ahead, Ng is excited to see progress in two specific areas of AI/ML research that help advance the field as a whole. One is AI that can arrive at accurate conclusions with less data, something some in the field call “few-shot learning.”

“I think the first wave of deep learning progress was mainly big companies with a ton of data training very large neural networks, right? So if you want to build a speech recognition system, train it on 100,000 hours of data. Want to train a machine translation system? Train it on a gazillion pairs of sentences of parallel corpora, and that creates a lot of breakthrough results,” Ng said. “Increasingly I’m seeing results on small data where you want to try to take in results even if you have 1,000 images.”
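
To make the “small data” idea concrete, here is a minimal, illustrative sketch (not drawn from Ng’s own work): each class is represented by the average of just a handful of labeled examples, and new samples are assigned to the nearest class prototype. It uses scikit-learn’s bundled digits dataset purely for demonstration.

```python
# Minimal illustration of learning from very little data: a nearest-class-mean
# ("prototype") classifier fit on just a few labeled examples per digit class.
# Nothing here is tied to any production system mentioned in the article.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split

digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, train_size=0.5, random_state=0, stratify=digits.target
)

shots = 5  # labeled examples per class -- the "few" in few-shot
rng = np.random.default_rng(0)
prototypes = []
for label in np.unique(y_train):
    idx = rng.choice(np.flatnonzero(y_train == label), size=shots, replace=False)
    prototypes.append(X_train[idx].mean(axis=0))  # class prototype = mean of its few examples
prototypes = np.stack(prototypes)

# Classify each test image by its nearest prototype (Euclidean distance).
dists = np.linalg.norm(X_test[:, None, :] - prototypes[None, :, :], axis=-1)
preds = dists.argmin(axis=1)
print(f"Accuracy with {shots} labels per class: {(preds == y_test).mean():.2%}")
```

Even this crude approach gets respectable accuracy from a handful of labels per class, which is the spirit of the small-data research Ng is pointing to.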

The other is advances in what the computer vision field refers to as “generalizability.” A computer vision system might work great when trained on pristine images from a high-end X-ray machine at Stanford University, and many advanced companies and researchers in the field have created systems that outperform a human radiologist on that kind of data, but those systems aren’t very nimble.

“But if you take your trained model and you apply it to an X-ray taken from a lower-end X-ray machine or taken from a different hospital, where the images are a bit blurrier and maybe the X-ray technician has the patient slightly turned to their right so the angle’s a little bit off, it turns out that human radiologists are much better at generalizing to this new context than today’s learning algorithms. And so I think interesting research [is on] trying to improve the generalizability of learning algorithms in new domains,” he said.
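
A rough way to see this generalization gap in code, under the obvious simplification of using handwritten digits instead of X-rays, is to train on clean images and evaluate on a blurred copy of the test set. The setup below is a toy sketch, not a radiology benchmark.

```python
# Illustrative only: train on "pristine" digit images, then test on a blurred copy
# of the test set to mimic the domain shift Ng describes (a lower-quality scanner,
# a different hospital). The accuracy gap is a crude measure of generalizability.
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.3, random_state=0
)

clf = LogisticRegression(max_iter=2000).fit(X_train, y_train)

# Simulate the "other hospital": blur each 8x8 test image slightly.
X_shifted = np.stack([
    gaussian_filter(img.reshape(8, 8), sigma=1.0).ravel() for img in X_test
])

print(f"in-domain accuracy:      {clf.score(X_test, y_test):.2%}")
print(f"shifted-domain accuracy: {clf.score(X_shifted, y_test):.2%}")
```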

Yann LeCun

Yann LeCun is a professor at New York University, Facebook chief AI scientist, and founding director of Facebook AI Research (FAIR), a division of the company that created PyTorch 1.0 and Caffe2, as well as a number of AI systems — like the text translation AI tools Facebook uses billions of times a day or advanced reinforcement learning systems that play Go.

LeCun believes the open source policy FAIR adopts for its research and tools has helped nudge other large tech companies to do the same, something he believes has moved the AI field forward as a whole. LeCun spoke with VentureBeat last month ahead of the NeurIPS conference and the fifth anniversary of FAIR, an organization he describes as interested in the “technical, mathematical underbelly of machine learning that makes it all work.”

“It gets the entire field moving forward faster when more people communicate about the research, and that’s actually a pretty big impact,” he said. “The speed of progress you’re seeing today in AI is largely because of the fact that more people are communicating faster and more efficiently and doing more open research than they were in the past.”

On the ethics front, LeCun is happy to see progress in simply considering the ethical implications of work and the dangers of biased decision-making.

“The fact that this is seen as a problem that people should pay attention to is now well established. This was not the case two or three years ago,” he said.

LeCun said he does not believe ethics and bias in AI have yet become a major problem requiring immediate action, but he believes people should be ready for that.

“I don’t think there are … huge life and death issues yet that need to be urgently solved, but they will come and we need to … understand those issues and prevent those issues before they occur,” he said.

Like Ng, LeCun wants to see more AI systems capable of the kind of flexibility that leads to robust systems, ones that do not require pristine input data or exact conditions to produce accurate output.

LeCun said researchers can already manage perception rather well with deep learning but that a missing piece is an understanding of the overall architecture of a complete AI system.

He said that teaching machines to learn through observation of the world will require self-supervised learning, or model-based reinforcement learning.

“Different people give it different names, but essentially human babies and animals learn how the world works by observing and figure out this huge amount of background information about it, and we don’t know how to do this with machines yet, but that’s one of the big challenges,” he said. “The prize for that is essentially making real progress in AI, as well as machines, to have a bit of common sense and virtual assistants that are not frustrating to talk to and have a wider range of topics and discussions.”
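
As a toy illustration of the self-supervised idea, the supervisory signal in the sketch below comes from the data itself: pixels are randomly masked and a small network is trained to reconstruct them, with no human labels involved. This is a minimal sketch for intuition, not a description of FAIR’s methods.

```python
# A toy self-supervised task: the "labels" are just the original pixels, and the
# model learns to fill in values that have been masked out. No human annotation
# is involved -- the supervisory signal comes from the data itself.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.neural_network import MLPRegressor

X = load_digits().data / 16.0          # scale pixel values to [0, 1]
rng = np.random.default_rng(0)

mask = rng.random(X.shape) < 0.4       # hide ~40% of each image's pixels
X_masked = np.where(mask, 0.0, X)      # corrupted input
targets = X                            # reconstruction target = uncorrupted data

model = MLPRegressor(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(X_masked, targets)

recon = model.predict(X_masked)
print(f"masked-reconstruction MSE: {np.mean((recon - targets) ** 2):.4f}")
# The representation learned this way could then be reused for a downstream task
# that has only a few labels.
```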

For applications that will help internally at Facebook, LeCun said significant progress toward self-supervised learning will be important, as well as AI that requires less data to return accurate results.

“On the way to solving that problem, we’re hoping to find ways to reduce the amount of data that’s necessary for any particular task like machine translation or image recognition or things like this, and we’re already making progress in that direction; we’re already making an impact on the services that are used by Facebook by using weakly supervised or self-supervised learning for translation and image recognition. So those are things that are actually not just long term, they also have very short term consequences,” he said.

In the future, LeCun wants to see progress made toward AI that can establish causal relationships between events. That’s the ability to not just learn by observation, but to have the practical understanding, for example, that if people are using umbrellas, it’s probably raining.

“That would be very important, because if you want a machine to learn models of the world by observation, it has to be able to know what it can influence to change the state of the world and that there are things you can’t do,” he said. “You know if you are in a room and a table is in front of you and there is an object on top of it like a water bottle, you know you can push the water bottle and it’s going to move, but you can’t move the table because it’s big and heavy — things like this related to causality.”

Hilary Mason

After Cloudera acquired Fast Forward Labs in 2017, Hilary Mason became Cloudera’s general manager of machine learning. Fast Forward Labs, while absorbed into Cloudera, is still in operation, producing applied machine learning reports and advising customers to help them see six months to two years into the future.

One advancement in AI that surprised Mason in 2018 was related to multitask learning, which can train a single neural network to apply multiple kinds of labels when inferring, for example, objects seen in an image.
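
A minimal sketch of the multitask idea, assuming PyTorch is available: one shared trunk feeds two output heads, so a single forward pass produces two kinds of labels for the same image. The head names and sizes here are invented for illustration, not taken from any system Mason described.

```python
# A minimal multitask network: one shared trunk, two hypothetical heads
# (an object label and a coarse scene tag), trained with a combined loss.
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    def __init__(self, n_objects=10, n_scenes=4):
        super().__init__()
        self.trunk = nn.Sequential(                    # shared feature extractor
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.object_head = nn.Linear(16, n_objects)    # task 1: object label
        self.scene_head = nn.Linear(16, n_scenes)      # task 2: scene label

    def forward(self, x):
        features = self.trunk(x)
        return self.object_head(features), self.scene_head(features)

net = MultiTaskNet()
images = torch.randn(8, 3, 32, 32)                     # dummy batch for illustration
object_logits, scene_logits = net(images)

# Training sums one loss per task, so both heads shape the shared trunk.
loss = nn.functional.cross_entropy(object_logits, torch.randint(0, 10, (8,))) \
     + nn.functional.cross_entropy(scene_logits, torch.randint(0, 4, (8,)))
loss.backward()
print(object_logits.shape, scene_logits.shape)
```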

Fast Forward Labs has also been advising customers on the ethical implications of AI systems. Mason sees wider awareness of the necessity of putting some kind of ethical framework in place.

“This is something that since we founded Fast Forward — so, five years ago — we’ve been writing about ethics in every report but this year [2018] people have really started to pick up and pay attention, and I think next year we’ll start to see the consequences or some accountability in the space for companies and for people who pay no attention to this,” Mason said. “What I’m not saying very clearly is that I hope that the practice of data science and AI evolve as such that it becomes the default expectation that both technical folks and business leaders creating products with AI will be accounting for ethics and issues of bias and the development of those products, whereas today it is not the default that anyone thinks about those things.”

As more AI systems become part of business operations in the year ahead, Mason expects that product managers and product leaders will begin to make more contributions on the AI front because they’re in the best position to do so.

“I think it’s clearly the people who have the idea of the whole product in mind and understand the business understand what would be valuable and not valuable, who are in the best position to make these decisions about where they should invest,” she said. “So if you want my prediction, I think in the same way we expect all of those people to be minimally competent using something like spreadsheets to do simple modeling, we will soon expect them to be minimally competent in recognizing where AI opportunities in their own products are.”

The democratization of AI, or its expansion to corners of a company beyond data science teams, is something several companies have emphasized, from Google Cloud AI products like Kubeflow Pipelines and AI Hub to advice from the CI&T consultancy on ensuring AI systems are actually utilized within a company.

Mason also thinks more and more businesses will need to form structures to manage multiple AI systems.

Borrowing an analogy sometimes used to describe challenges faced by people working in DevOps, Mason said managing a single system can be done with hand-deployed custom scripts, and cron jobs can manage a few dozen. But when you’re managing tens or hundreds of systems in an enterprise with security, governance, and risk requirements, you need professional, robust tooling.

Businesses are shifting from having pockets of competency or even brilliance to having a systematic way to pursue machine learning and AI opportunities, she said.

The emphasis on containers for deploying AI makes sense to Mason, since Cloudera recently launched its own container-based machine learning platform. She believes this trend will continue in years ahead so companies can choose between on-premise AI or AI deployed in the cloud.

Finally, Mason believes the business of AI will continue to evolve, with common practices across the industry, not just within individual companies.

“I think we will see a continuing evolution of the professional practice of AI,” she said. “Right now, if you’re a data scientist or an ML engineer at one company and you move to another company, your job will be completely different: different tooling, different expectations, different reporting structures. I think we’ll see consistency there.”

Source: venturebeat.com

Top 10 Technology Trends for 2018: IEEE Computer Society Predicts the Future of Tech

Tech experts at the IEEE Computer Society (IEEE-CS) annually predict the “Future of Tech” and have revealed what they believe will be the biggest trends in technology for 2018. The forecast by the world’s premier organization of computing professionals is among its most anticipated announcements.

“The Computer Society’s predictions, based on a deep-dive analysis by a team of leading technology experts, identify top-trending technologies that hold extensive disruptive potential for 2018,” said Jean-Luc Gaudiot, IEEE Computer Society President. “The vast computing community depends on the Computer Society as the provider for relevant technology news and information, and our predictions directly align with our commitment to keeping our community well-informed and prepared for the changing technological landscape of the future.”

Dejan Milojicic, Hewlett Packard Enterprise Distinguished Technologist and IEEE Computer Society past president, said “The following year we will witness some of the most intriguing dilemmas in the future of technology. Will deep learning and AI indeed expand deployment domains or remain within the realms of neural networks? Will cryptocurrency technologies keep their extraordinary evolution or experience a bubble burst? Will new computing and memory technologies finally disrupt the extended life of Moore’s law? We’ve made our bets on our 2018 predictions.”

The top 10 technology trends predicted to reach adoption in 2018 are:

1. Deep learning (DL). Machine learning (ML) and more specifically DL are already on the cusp of revolution. They are widely adopted in datacenters (Amazon making graphical processing units [GPUs] available for DL, Google running DL on tensor processing units [TPUs], Microsoft using field programmable gate arrays [FPGAs], etc.), and DL is being explored at the edge of the network to reduce the amount of data propagated back to datacenters. Applications such as image, video, and audio recognition are already being deployed for a variety of verticals. DL heavily depends on accelerators (see #9 below) and is used for a variety of assistive functions (#s 6, 7, and 10).

2. Digital currencies. Bitcoin, Ethereum, and newcomers Litecoin, Dash, and Ripple have become commonly traded currencies. They will continue to become a more widely adopted means of trading. This will trigger improved cybersecurity (see #10) because the stakes will be ever higher as their values rise. In addition, digital currencies will continue to enable and be enabled by other technologies, such as storage (see #3), cloud computing (see B in the list of already adopted technologies), the Internet of Things (IoT), edge computing, and more.

3. Blockchain. The use of Bitcoin and the revitalization of peer-to-peer computing have been essential for the adoption of blockchain technology in a broader sense. We predict increased expansion of companies delivering blockchain products and even IT heavyweights entering the market and consolidating the products.

4. Industrial IoT. Empowered by DL at the edge, industrial IoT continues to be the most widely adopted use case for edge computing. It is driven by real needs and requirements. We anticipate that it will continue to be adopted with a broader set of technical offerings enabled by DL, as well as other uses of IoT (see C and E).

5. Robotics. Even though robotics research has been performed for many decades, robotics adoption has not flourished. However, the past few years have seen increased market availability of consumer robots, as well as more sophisticated military and industrial robots. We predict that this will trigger wider adoption of robotics in the medical space for caregiving and other healthcare uses. Combined with DL (#1) and AI (#10), robotics will further advance in 2018. Robotics will also motivate further evolution of ethics (see #8).

6. Assisted transportation. While the promise of fully autonomous vehicles has slowed down due to numerous obstacles, a limited use of automated assistance has continued to grow, such as parking assistance, video recognition, and alerts for leaving the lane or identifying sudden obstacles. We anticipate that vehicle assistance will develop further as automation and ML/DL are deployed in the automotive industry.

7. Assisted reality and virtual reality (AR/VR). Gaming and AR/VR gadgets have grown in adoption in the past year. We anticipate that this trend will grow with modern user interfaces such as 3D projections and movement detection. This will allow for associating individuals with metadata that can be viewed subject to privacy configurations, which will continue to drive international policies for cybersecurity and privacy (see #10).

8. Ethics, laws, and policies for privacy, security, and liability. With the increasing advancement of DL (#1), robotics (#5), technological assistance (#s 6 and 7), and applications of AI (#10), technology has moved beyond society’s ability to control it easily. Mandatory guidance has already been deeply analyzed and rolled out in various aspects of design (see the IEEE standards association document), and it is further being applied to autonomous and intelligent systems and in cybersecurity. But adoption of ethical considerations will speed up in many vertical industries and horizontal technologies.

9. Accelerators and 3D. With the end of power scaling and Moore’s law and the shift to 3D, accelerators are emerging as a way to continue improving hardware performance and energy efficiency and to reduce costs. There are a number of existing technologies (FPGAs and ASICs) and new ones (such as memristor-based DPE) that hold a lot of promise for accelerating application domains (such as matrix multiplication for the use of DL algorithms). We predict wider diversity and broader applicability of accelerators, leading to more widespread use in 2018.

10. Cybersecurity and AI. Cybersecurity is becoming essential to everyday life and business, yet it is increasingly hard to manage. Exploits have become extremely sophisticated and it is hard for IT to keep up. Pure automation no longer suffices and AI is required to enhance data analytics and automated scripts. It is expected that humans will still be in the loop of taking actions; hence, the relationship to ethics (#8). But AI itself is not immune to cyberattacks. We will need to make AI/DL techniques more robust in the presence of adversarial traffic in any application area.

Existing Technologies: We did not include the following technologies in our top 10 list as we assume that they have already experienced broad adoption:
A. Data science
B. “Cloudification”
C. Smart cities
D. Sustainability
E. IoT/edge computing

Source: computer.org

3 Technologies You Need To Start Paying Attention To Right Now

At any given time, a technology or two captures the zeitgeist. A few years ago it was social media and mobile that everybody was talking about. These days it’s machine learning and blockchain. Everywhere you look, consulting firms are issuing reports, conferences are being held, and new “experts” are being anointed.

In a sense, there’s nothing wrong with that. Social media and mobile computing really did change the world and, clearly, the impact of artificial intelligence and distributed database architectures will be substantial. Every enterprise needs to understand these technologies and how they will impact its business.

Still, we need to remember that we always get disrupted by what we can’t see. The truth is that the next big thing always starts out looking like nothing at all. That’s why it’s so disruptive: if we saw it coming, it wouldn’t be. So here are three technologies you may not have heard of, but that you should start paying attention to. The fate of your business may depend on it.

1. New Computing Architectures

In the April 19, 1965 issue of Electronics, Intel co-founder Gordon Moore published an article observing that the number of transistors on a silicon chip was doubling roughly every two years. Over the past half century, that consistent doubling of computing power, now known as Moore’s Law, has driven the digital revolution.
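
A back-of-the-envelope way to see what that rule implies: a count that doubles every two years grows by a factor of 2^(N/2) after N years. The short Python sketch below just makes the arithmetic explicit; the starting count is arbitrary.

```python
# Illustrative arithmetic only: a quantity that doubles every two years grows
# by a factor of 2**(years / 2). The starting count is arbitrary.
def projected_count(start_count, years, doubling_period=2):
    """Project a count forward under a fixed doubling period."""
    return start_count * 2 ** (years / doubling_period)

for years in (10, 20, 50):
    factor = projected_count(1, years)
    print(f"after {years} years: {factor:,.0f}x the starting count")
```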

Today, however, that process has slowed, and it will soon come to a complete halt. There are only so many transistors you can cram onto a silicon wafer before subatomic effects come into play and make it impossible for the technology to function. Experts disagree on exactly when this will happen, but it’s pretty clear that it will be sometime within the next five years.

There are, of course, a number of ways to improve chip performance other than increasing the number of transistors, such as FPGA, ASIC and 3D stacking. Yet those are merely stopgaps and are unlikely to take us more than a decade or so into the future. To continue to advance technology over the next 50 years, we need fundamentally new architectures like quantum computing and neuromorphic chips.

The good news is that these architectures are very advanced in their development and we should start seeing a commercial impact within 5-10 years. The bad news is that, being fundamentally new architectures, nobody really knows how to use them yet. We are, in a sense, back to the early days of computing, with tons of potential but little idea how to actualize it.

2. Genetic Engineering

While computer scientists have been developing software languages over the past 50 years, biologists have been trying to understand a far more pervasive kind of code: the genetic code. For the most part, things have gone slowly. Although there has been significant scientific progress, the impact of that advancement has been relatively paltry.

That began to change in 2003 with the completion of the Human Genome Project. For the first time, we began to truly understand how DNA interacts with our biology, which led to other efforts, such as the Cancer Genome Atlas, as well as tangible advancements in agriculture. Genomics became more than mere scientific inquiry; it became a source of new applications.

Now a new technology called CRISPR is allowing scientists to edit genes at will. In fact, because the technology is simple enough for even amateur biologists to use, we can expect genetic engineering to become much more widespread across industries. Early applications include liquid fuels from sunshine and genomic vaccines.

“CRISPR is accelerating everything we do with genomics,” Megan Hochstrasser of the Innovative Genomics Initiative at Cal Berkeley told me, “from cancer research to engineering disease resistant crops and many other applications that haven’t yet come to the fore. Probably the most exciting aspect is that CRISPR is so cheap and easy to use, it will have a democratizing effect, where more can be done with less. We’re really just getting started.”

3. Materials Science

Traditionally, improving a material used to build a product has been a process of trial and error. You changed the ingredients or the process by which you made it and saw what happened. For example, at some point a medieval blacksmith figured out that annealing iron would make better swords.

Today, coming up with better materials is a multi-billion-dollar business. Consider the challenges Boeing faced when designing its new Dreamliner. How do you significantly increase the performance of an airplane, a decades-old technology? Yet by discovering new composite materials, the company was able to reduce weight by 40,000 pounds and fuel use by 20%.

With this in mind, the Materials Genome Initiative is building databases of material properties such as strength and density, along with computer models that predict which processes will achieve the qualities a manufacturer is looking for. As a government program, it is also able to make the data widely available to anyone who wants to use it, not just billion-dollar companies like Boeing.

“Our goal is to speed up the development of new materials by making clear the relationship between materials, how they are processed and what properties are likely to result,” Jim Warren, Director of the Materials Genome program told me. “My hope is that the Materials Genome will accelerate innovation in just about every industry America competes in.”

It’s Better To Prepare Than Adapt

For the past few decades, great emphasis has been put on agility and adaptation. When a new technology, like social media, mobile computing or artificial intelligence begins to disrupt the marketplace, firms rush to figure out what it means and adapt their strategies accordingly. If they could do that a bit faster than the competition, they would win.

Today, however, we’re entering a new era of innovation that will look much more like the 50s and 60s than it will the 90s and aughts. The central challenge will no longer be to dream up new applications based on improved versions of old technologies, but to understand fundamentally new paradigms.

That’s why over the next few decades, it will be more important to prepare than adapt. How will you work with new computing architectures? How will fast, cheap genetic engineering affect your industry? What should you be doing to explore new materials that can significantly increase performance and lower costs? These are just some of the questions we will grapple with.

Not all who wander are lost. The challenge is to wander with purpose.

Source: Digital Tonto

The meaning of life in a world without work

As technology renders jobs obsolete, what will keep us busy? Sapiens author Yuval Noah Harari examines ‘the useless class’ and a new quest for purpose.

Most jobs that exist today might disappear within decades. As artificial intelligence outperforms humans in more and more tasks, it will replace humans in more and more jobs. Many new professions are likely to appear: virtual-world designers, for example. But such professions will probably require more creativity and flexibility, and it is unclear whether 40-year-old unemployed taxi drivers or insurance agents will be able to reinvent themselves as virtual-world designers (try to imagine a virtual world created by an insurance agent!). And even if the ex-insurance agent somehow makes the transition into a virtual-world designer, the pace of progress is such that within another decade he might have to reinvent himself yet again.

The crucial problem isn’t creating new jobs. The crucial problem is creating new jobs that humans perform better than algorithms. Consequently, by 2050 a new class of people might emerge – the useless class. People who are not just unemployed, but unemployable.

The same technology that renders humans useless might also make it feasible to feed and support the unemployable masses through some scheme of universal basic income. The real problem will then be to keep the masses occupied and content. People must engage in purposeful activities, or they go crazy. So what will the useless class do all day?

One answer might be computer games. Economically redundant people might spend increasing amounts of time within 3D virtual reality worlds, which would provide them with far more excitement and emotional engagement than the “real world” outside. This, in fact, is a very old solution. For thousands of years, billions of people have found meaning in playing virtual reality games. In the past, we have called these virtual reality games “religions”.

What is a religion if not a big virtual reality game played by millions of people together? Religions such as Islam and Christianity invent imaginary laws, such as “don’t eat pork”, “repeat the same prayers a set number of times each day”, “don’t have sex with somebody from your own gender” and so forth. These laws exist only in the human imagination. No natural law requires the repetition of magical formulas, and no natural law forbids homosexuality or eating pork. Muslims and Christians go through life trying to gain points in their favorite virtual reality game. If you pray every day, you get points. If you forget to pray, you lose points. If by the end of your life you gain enough points, then after you die you go to the next level of the game (aka heaven).

As religions show us, the virtual reality need not be encased inside an isolated box. Rather, it can be superimposed on the physical reality. In the past this was done with the human imagination and with sacred books, and in the 21st century it can be done with smartphones.

Some time ago I went with my six-year-old nephew Matan to hunt for Pokémon. As we walked down the street, Matan kept looking at his smartphone, which enabled him to spot Pokémon all around us. I didn’t see any Pokémon at all, because I didn’t carry a smartphone. Then we saw two other kids on the street who were hunting the same Pokémon, and we almost got into a fight with them. It struck me how similar the situation was to the conflict between Jews and Muslims about the holy city of Jerusalem. When you look at the objective reality of Jerusalem, all you see are stones and buildings. There is no holiness anywhere. But when you look through the medium of smart books (such as the Bible and the Qur’an), you see holy places and angels everywhere.

The idea of finding meaning in life by playing virtual reality games is of course common not just to religions, but also to secular ideologies and lifestyles. Consumerism too is a virtual reality game. You gain points by acquiring new cars, buying expensive brands and taking vacations abroad, and if you have more points than everybody else, you tell yourself you won the game.

You might object that people really enjoy their cars and vacations. That’s certainly true. But the religious really enjoy praying and performing ceremonies, and my nephew really enjoys hunting Pokémon. In the end, the real action always takes place inside the human brain. Does it matter whether the neurons are stimulated by observing pixels on a computer screen, by looking outside the windows of a Caribbean resort, or by seeing heaven in our mind’s eyes? In all cases, the meaning we ascribe to what we see is generated by our own minds. It is not really “out there”. To the best of our scientific knowledge, human life has no meaning. The meaning of life is always a fictional story created by us humans.

In his groundbreaking essay, Deep Play: Notes on the Balinese Cockfight (1973), the anthropologist Clifford Geertz describes how on the island of Bali, people spent much time and money betting on cockfights. The betting and the fights involved elaborate rituals, and the outcomes had substantial impact on the social, economic and political standing of both players and spectators.

The cockfights were so important to the Balinese that when the Indonesian government declared the practice illegal, people ignored the law and risked arrest and hefty fines. For the Balinese, cockfights were “deep play” – a made-up game that is invested with so much meaning that it becomes reality. A Balinese anthropologist could arguably have written similar essays on football in Argentina or Judaism in Israel.

Indeed, one particularly interesting section of Israeli society provides a unique laboratory for how to live a contented life in a post-work world. In Israel, a significant percentage of ultra-orthodox Jewish men never work. They spend their entire lives studying holy scriptures and performing religious rituals. They and their families don’t starve to death partly because the wives often work, and partly because the government provides them with generous subsidies. Though they usually live in poverty, government support means that they never lack for the basic necessities of life.

That’s universal basic income in action. Though they are poor and never work, in survey after survey these ultra-orthodox Jewish men report higher levels of life-satisfaction than any other section of Israeli society. In global surveys of life satisfaction, Israel is almost always at the very top, thanks in part to the contribution of these unemployed deep players.

You don’t need to go all the way to Israel to see the world of post-work. If you have at home a teenage son who likes computer games, you can conduct your own experiment. Provide him with a minimum subsidy of Coke and pizza, and then remove all demands for work and all parental supervision. The likely outcome is that he will remain in his room for days, glued to the screen. He won’t do any homework or housework, will skip school, skip meals and even skip showers and sleep. Yet he is unlikely to suffer from boredom or a sense of purposelessness. At least not in the short term.

Hence virtual realities are likely to be key to providing meaning to the useless class of the post-work world. Maybe these virtual realities will be generated inside computers. Maybe they will be generated outside computers, in the shape of new religions and ideologies. Maybe it will be a combination of the two. The possibilities are endless, and nobody knows for sure what kind of deep plays will engage us in 2050.

In any case, the end of work will not necessarily mean the end of meaning, because meaning is generated by imagining rather than by working. Work is essential for meaning only according to some ideologies and lifestyles. Eighteenth-century English country squires, present-day ultra-orthodox Jews, and children in all cultures and eras have found a lot of interest and meaning in life even without working. People in 2050 will probably be able to play deeper games and to construct more complex virtual worlds than in any previous time in history.

But what about truth? What about reality? Do we really want to live in a world in which billions of people are immersed in fantasies, pursuing make-believe goals and obeying imaginary laws? Well, like it or not, that’s the world we have been living in for thousands of years already.

Source: The Guardian

Past, Present and Future of AI / Machine Learning (Google I/O ’17)

 

We are in the middle of a major shift in computing that’s transitioning us from a mobile-first world into one that’s AI-first. AI will touch every industry and transform the products and services we use daily. Breakthroughs in machine learning have enabled dramatic improvements in the quality of Google Translate, made your photos easier to organize with Google Photos, and enabled improvements in Search, Maps, YouTube, and more.

 

Shiny vs Useful: Which trends in the analytics market are business ready?

Business analytics continues to be a hot segment in the enterprise software market and a core component of digital transformation for every organization. But there are many specific advances that are at differing points along the continuum of market readiness for actual use.

It is critical that technology leaders recognize the difference between mature trends that can be applied to real-world business scenarios today versus those that are still taking shape but make for awe-inspiring vendor demos. These trends fall into categories ranked from least to most mature in the market: artificial intelligence (AI), natural language processing (NLP), and embedded analytics.

Artificial augments actual human intelligence

The hype and excitement surrounding AI, which encompasses machine learning (ML) and deep learning, has surpassed that of big data in today’s market. The notion of AI completely replacing and automating the manual analytical tasks done by humans today is far from applicable to most real-world use cases. In fact, full automation of analytical workflows should not even be considered the final goal, now or in the future.

The term assistive intelligence is a more appropriate phrase for the AI acronym, and is far more palatable for analysts who view automation as a threat. This concept of assistive intelligence, where analyst or business user skills are augmented by embedded advanced analytic capabilities and machine learning algorithms, is being adopted by a growing number of organizations in the market today. The utility of these types of smart capabilities has proven useful in assisting with data preparation and integration, as well as analytical processes such as the detection of patterns, correlations, outliers and anomalies in data.
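
As one concrete, deliberately simple example of the kind of assistive capability described above, the sketch below uses scikit-learn’s IsolationForest to flag anomalous rows in a synthetic table for an analyst to review, rather than making any decision on its own. The data and thresholds are invented for illustration.

```python
# Assistive, not autonomous: flag outlier rows for a human analyst to review.
# The dataset here is synthetic and exists only to demonstrate the idea.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
normal = rng.normal(loc=100.0, scale=5.0, size=(500, 2))       # typical records
odd = np.array([[100.0, 160.0], [40.0, 98.0], [170.0, 30.0]])  # injected anomalies
data = np.vstack([normal, odd])

detector = IsolationForest(contamination=0.01, random_state=0).fit(data)
flags = detector.predict(data)            # -1 means "looks anomalous"

print("rows flagged for human review:", np.flatnonzero(flags == -1))
```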

Natural interactions improve accessibility of analytics

Natural Language Processing (NLP) and Natural Language Generation (NLG) are often used interchangeably but serve completely different purposes. While both enable natural interactions with analytics platforms, NLP can be thought of as the question-asking part of the equation, whereas NLG is used to render findings and insights in natural language to the user.

Of the two, NLP is more recognizable in the mainstream market as natural language interfaces increasingly become more commonplace in our personal lives through Siri, Cortana, Alexa, Google Home, etc. Analytics vendors are adding NLP functionality into their product offerings to capitalize on this consumer trend and reach a broader range of business users who may find a natural language interface less intimidating than traditional means of analysis. It is inevitable that NLP will become a widely used core component of an analytics platform but it is not currently being utilized across a broad enough range of users or use cases to be considered mainstream in today’s market.

On the other hand, NLG has been in the market for several years but only recently has it been incorporated into mainstream analytics tools to augment the visual representation of data. Many text-based summaries of sporting events, player statistics, mutual fund performance, etc., are created automatically using NLG technology. Increasingly, NLG capabilities are also being used as the delivery mechanism to make AI-based output more consumable to mainstream users.
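
Real NLG products are far more sophisticated, but a deliberately crude, template-based sketch shows the basic idea of rendering a finding as a sentence. The metric names and figures below are invented for illustration.

```python
# A toy template-based NLG function: turn a metric comparison into a sentence
# a business user can read alongside a chart.
def narrate(metric, current, previous, period="quarter"):
    """Render a metric comparison as a short natural-language insight."""
    change = (current - previous) / previous * 100
    direction = "rose" if change > 0 else "fell" if change < 0 else "held steady at"
    return (f"{metric} {direction} {abs(change):.1f}% versus the prior {period}, "
            f"reaching {current:,.0f}.")

print(narrate("Website sessions", current=128_400, previous=112_950))
print(narrate("Support tickets", current=2_310, previous=2_780, period="month"))
```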

Recently, analytics vendors have been forging partnerships with NLG vendors to leverage their expertise in adding another dimension to data visualization, where key insights are automatically identified and expressed in a natural language narrative to accompany the visualization. While the combination of business analytics and NLG is relatively new, it is gaining awareness and traction in the market and has opened the door to new use cases for organizations to explore.

Embedded analytics brings insights closer to action

The true value of analytics is realized when insights can inform decision-making to improve business outcomes. By embedding analytics into applications and systems, where decision-makers conduct normal business, a barrier to adoption is removed and insights are delivered directly to the person who can take immediate action.

Modern analytics platform vendors have made it incredibly easy for organizations to adopt an embedded strategy to proliferate analytic content to line-of-business users previously unreachable by traditional means. And organizations are now extending similar capabilities to customers, partners, suppliers, and others in an effort to increase competitive differentiation and, in some cases, create new revenue streams through the monetization of data assets and analytic applications.

These innovations present technology leaders with a unique opportunity to lead their organizations into an era where data analysis is the foundation for all business decisions. Every organization will embark on this journey at its own pace. Some will be early adopters of new innovations and some will only adopt when the majority of the market has successfully implemented.

Ultimately, organizational readiness to adopt any new technology will be determined by end users and their ability and willingness to adopt new innovations and embrace process change.

Source: Tableau

Best Tablets For 2017: Android, iOS & Windows 10

One of the reasons Apple’s iPad was so successful was that, like many Apple products, it captured the public’s imagination – commercially, at least, there hadn’t been anything quite like it aimed at consumers, and it promised a bright sci-fi-like experience full of exciting possibilities.

The iPad introduced the idea of tablets to an unsuspecting mass market. What wasn’t so predictable was the steady decline of tablets thereafter. Following the inevitable boom, when everyone rushed to cash in on the sudden interest in tablets, sales have gradually dropped off year-on-year. And it’s not just competitor models this is happening to, either: Apple itself is struggling to shift iPads in anywhere near the quantities it expected to, or used to.

The catch, it seems, is that while users will happily replace their contract-tethered smartphone every year or two, buying a new tablet that regularly is a big no-no; consumers seem to treat these larger devices the way they treat laptops and PCs, as a rare, carefully considered, and long-lasting purchase.

But that doesn’t mean tablets are useless. Indeed, they can be great content consumption–and even creation–devices. And the tablets on the market today are better than they have been during any time in the past.

Global market research firm TrendForce estimates that 2016 tablet sales numbered around 154.5 million units, a decline of 8.3% from the year earlier. It also estimates that global tablet shipments for 2017 are likely to fall by 5.3% to about 146.4 million units. In other words, tablet sales are still decreasing, but not by as much.
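
For what it’s worth, those figures hang together arithmetically; a quick check in Python:

```python
# Sanity check on the TrendForce figures quoted above.
shipments_2016 = 154.5  # million units
print(f"implied 2017 shipments: {shipments_2016 * (1 - 0.053):.1f} million")
# -> roughly 146.3 million, in line with the cited ~146.4 million units
```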

“Most tablet brands will be more conservative in committing their resources during 2017,” TrendForce notebook analyst Anita Wang pointed out. “Amazon and Huawei on the contrary have ambitions to increase their tablet shipments by many folds. The two brands are expected to expand their offerings in the near future. Additionally, Microsoft will be releasing Surface Pro 5 in the first quarter of 2017. Generally speaking, tablet shipments will drop next year but the decline will be fairly limited.”

The number of Android tablets in circulation has dropped off at a rather alarming rate during the past 18 months.

Not so long ago you couldn’t go a week without an Android tablet launching, and now they’re fast becoming as rare as hen’s teeth.

A lot of this is to do with Apple’s iPad; it dominates the space almost entirely, just as the iPod did in the MP3 player space.

However, all is not lost – things are starting to change. And we have Microsoft to thank for that. Windows 10 and the hybrid machines it gave birth to are growing in popularity through their ability to bridge the gap between traditional laptop and tablet.

What the Android space REALLY needs is a decent ChromeOS dual-boot slate; a tablet that runs Android, but features all the cool attributes of ChromeOS.

Google is doing more cross-over stuff with Android and ChromeOS, but progress is painfully slow.

I would 1000% buy an Android tablet that could dual-boot ChromeOS. Hell, I’m tempted to start a Kickstarter campaign to make it happen!

Budget tablets and hybrids like the current Surface Pro 4 and upcoming Surface Pro 5, and also the iPad Pro, are expected to be the driving forces behind 2017’s tablet space.

Here are our favorite tablets for 2017 so far.

iPad Pro 12.9in

The iPad Pro was the newest tablet of 2016–and it’s a monster. It’s got a massive 12.9-inch display with a 2732 x 2048 resolution at 264 ppi. But beneath that gorgeous display is a powerhouse of productivity. Inside you’ll find an INSANELY fast A9X chip–it’s actually faster than the Intel chips found in some MacBooks. Add to that the 4GB of RAM and four-speaker audio and it’s no wonder this thing was labeled “Pro”.

You can actually edit 4K video on it without any lag. That’s not even to mention the optional Apple Pencil, dubbed by many as the best stylus ever made. The Pencil and the Pro work so well together, some artists are even saying it’s the first tablet that’s as good as a real pencil and paper.

Samsung Galaxy TabPro S

Samsung’s hybrid Windows 10 machine has an amazing screen, decent specs, and it looks really smart. The battery life is pretty decent as well. Combine this with all the benefits you get from running Windows 10 and you have one hell of a productivity machine that is great for working on the move and consuming media while riding planes and trains (or sitting on your sofa).

The Galaxy TabPro S comes with a keyboard, but if you want to take advantage of Windows Ink, you will need to pony up for a stylus. Why Samsung didn’t include one from the get-go is a mystery. Ink is an awesome feature that lets you add notes to applications and web pages. You can then get Cortana to store these notes for a later date.

Who’s this for? Anyone that wants a portable, powerful Windows 10 machine with tablet properties and a truly STUNNING display.

Samsung Galaxy TabPro S Specs

  •     Windows 10
  •     12in Super AMOLED display (2160 x 1440)
  •     6th Gen Intel Core M processor (dual-core 2.2GHz)
  •     4GB RAM
  •     128GB SSD
  •     Wi-Fi 802.11 a/b/g/n/ac MIMO
  •     Wi-Fi Direct
  •     NFC

iPad Air 2

While the iPad Pro is probably too much for most people, the iPad Air 2 is designed for everyone. Surprisingly, the Air 2 didn’t receive an update last year–it’s the exact same model as the year before. Given that it’s still one of the best tablets on the market, that goes to show how far ahead of its time it was at its 2014 release.

The iPad Air 2 features a 9.7-inch display with a 2048 x 1536 pixel resolution at 264 ppi. Though its A8X chip can’t compete with the A9X found in the iPad Pro, it’s no slouch either. The iPad Air 2 is not only great for browsing the web and sending email, but for getting major productivity tasks–such as video and photo editing–done.

iPad mini 4

Though the iPad mini 4 hasn’t seen an update recently, it’s still probably the best small-sized tablet on the market. Its 7.9in 1536 x 2048 display isn’t too big or too small. It features a dual-core 1.5GHz processor with 2GB of RAM and comes in 16GB, 64GB, and 128GB options.

Samsung Galaxy Tab S2 8

The Galaxy Tab S2 8 doesn’t have the best design. It’s got a rubber body, which makes it look rather clunky. But what it lacks in sex appeal it makes up for in specs. It features an 8-inch 2048 x 1536 resolution AMOLED display at 320 ppi. Inside you’ll find a powerful Exynos 7 Octa Core processor and 3GB of RAM. Combine all that with Samsung’s excellent craftsmanship and a built-in fingerprint scanner and the Galaxy Tab S2 8 is one of the best all-around Android tablets on the market.

Microsoft Surface Pro 4

It’s almost hard to think of the Microsoft Surface Pro 4 as a true tablet. That’s because it does an amazing job doubling as a laptop (which is good, considering Microsoft bills the Surface as a hybrid). The Surface Pro 4 packs a 12.3-inch 2736 x 1824 pixel display at 267 pixels per inch and comes in 128GB, 256GB, or 512GB storage options–far more than any other tablet on this list.

It also features Intel Skylake Core M3, Core i5, or Core i7 processors and 4GB, 8GB, or 16GB of RAM. Oh, and it runs the full version of Windows 10 so it can run any desktop app you own. And as with the Apple Pencil and the iPad Pro, the Surface Pro 4 has gotten high marks for its stylus, which is included (unlike with the iPad Pro).

Asus ZenPad 3S 10

The Asus ZenPad 3S 10 is perfectly proportioned for on-the-go usage. It has a 9.7in display and built-in support for high-resolution audio, meaning your tunes sound truly epic when fired through its built-in speakers or sent to your headphones.

It is also one of the cheaper tablets on this list, making it an ideal choice for those after value for money. This is more of a traditional tablet compared to the likes of the iPad Pro or Surface Pro 4. But for those that want a large-screen media and browsing experience, it simply cannot be beaten.

Even more so when Google REFUSES to update its Nexus 7 slate.

iPad Pro 9.7

Overall, however, the best tablet on the market has to be the 9.7in iPad Pro. It’s the perfect size for lots of people (let’s face it: the larger iPad Pro is just too big for most). Its beautiful 1536 x 2048 display is accompanied by an A9X processor with 2GB of RAM, and it comes in 32GB, 128GB, or a massive 256GB option. Oh, and add in Apple Pencil and keyboard support and this is one of the best tablets ever made.

Source: knowyourmobile.com