Apple’s AR platform: These demos show what ARKit can do in iOS 11


Apple sees a lot of potential in augmented reality.

Ever since Pokemon Go exploded in popularity last summer and subsequently revived interest in both Apple’s App Store and mobile gaming, Apple has said several times that it is embracing the technology, which is commonly called AR, especially now that it offers the ARKit platform. Here’s everything you need to know about ARKit, including what it can do and examples of its power in action.

What is AR?

Augmented reality isn’t a new technology. But Apple is now jumping into AR, so everyone’s been talking about it. You see, while virtual reality immerses you in a virtual space, essentially replacing everything you see in the physical world, AR takes the world around you and adds virtual objects to it. Look through your phone’s camera, for instance, and you could see a Pokemon standing in your living room.

What is Apple ARKit?

With iOS 11, which debuted at WWDC 2017, Apple is officially acknowledging AR. It has introduced the ARKit development platform, allowing app developers to quickly and easily build AR experiences into their apps and games. It will launch alongside iOS 11 this autumn. When it’s finally live, it’ll use your iOS device’s camera, processors, and motion sensors to create some immersive interactions.

It also uses a technology called Visual Inertial Odometry to track the world around your iPad or iPhone. This functionality allows your iOS device to sense how it moves through a room. ARKit uses that data not only to analyse a room’s layout, but also to detect horizontal planes like tables and floors, so virtual objects can be placed on those surfaces in your physical room.
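
To make that concrete, here is a minimal sketch of plane detection using the ARKit and SceneKit APIs Apple has documented for iOS 11 (ARWorldTrackingConfiguration, ARSCNView and the ARSCNViewDelegate callback). Treat it as illustrative rather than production code; a real app also needs a camera-usage entry in its Info.plist.

```swift
import UIKit
import ARKit
import SceneKit

class PlaneDetectionViewController: UIViewController, ARSCNViewDelegate {
    let sceneView = ARSCNView()

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        sceneView.delegate = self
        view.addSubview(sceneView)
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        // World tracking combines the camera feed with motion-sensor data
        // (visual inertial odometry) and can report horizontal planes.
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = .horizontal
        sceneView.session.run(configuration)
    }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        sceneView.session.pause()
    }

    // Called whenever ARKit adds an anchor; plane anchors describe detected
    // surfaces such as tables and floors.
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard let planeAnchor = anchor as? ARPlaneAnchor else { return }
        print("Found a horizontal plane roughly \(planeAnchor.extent.x) x \(planeAnchor.extent.z) metres")
    }
}
```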

What’s the point of ARKit?

Developers are free to create all kinds of experiences using ARKit, some of which are already being shown off on Twitter. IKEA even announced it is developing a new AR app built on ARKit that will let customers preview IKEA products in their own homes before making a purchase. IKEA said that Apple’s new platform will allow AR to “play a key role” in new product lines.
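
An IKEA-style preview boils down to placing a 3D model on a surface ARKit has detected. The sketch below is one plausible way to do that with the documented hit-testing API, using a plain box as a stand-in for a real furniture model; the sizes and gesture handling are illustrative assumptions.

```swift
import UIKit
import ARKit
import SceneKit

class FurniturePreviewViewController: UIViewController {
    let sceneView = ARSCNView()

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        view.addSubview(sceneView)
        let tap = UITapGestureRecognizer(target: self, action: #selector(placeItem(_:)))
        sceneView.addGestureRecognizer(tap)
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = .horizontal
        sceneView.session.run(configuration)
    }

    @objc func placeItem(_ gesture: UITapGestureRecognizer) {
        let point = gesture.location(in: sceneView)
        // Hit-test against detected planes to find where the tap lands in the room.
        guard let result = sceneView.hitTest(point, types: .existingPlaneUsingExtent).first else { return }
        // A half-metre box stands in for a real product model.
        let box = SCNBox(width: 0.5, height: 0.5, length: 0.5, chamferRadius: 0)
        let node = SCNNode(geometry: box)
        let transform = result.worldTransform
        node.position = SCNVector3(x: transform.columns.3.x,
                                   y: transform.columns.3.y + 0.25, // rest on the surface
                                   z: transform.columns.3.z)
        sceneView.scene.rootNode.addChildNode(node)
    }
}
```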

That last bit is key. For Apple, ARKit opens up an entirely new category of apps that would run on every iPhone and iPad. It essentially wants to recreate and multiply the success of Pokemon Go. Plus, it opens up so many long-term possibilities. The company is rumoured to be working on an AR headset, for instance. Imagine wearing Apple AR glasses capable of augmenting your world every day.

Does ARKit face any competition?

Let’s also not forget that ARKit allows Apple to compete with Microsoft’s HoloLens and Google’s Tango AR platform. But while HoloLens and Tango are designed to be aware of multiple physical spaces and all of the shapes contained within, ARKit is more about detecting flat surfaces and drawing on those flat surfaces. In other words, it’s more limited, but we’re still in early-days territory right now.

ARKit’s capabilities, as of July 2017, actually remind us of the AR effects found inside Snapchat or even the Facebook Camera app. Apple’s AR platform will likely improve as we move closer to the launch of iOS 11, however.

Which iOS devices can handle ARKit apps?

Any iPhone or iPad capable of running iOS 11 will be able to install ARKit apps. However, we’re assuming newer devices will handle the apps better. For instance, the new 10.5-inch and 12.9-inch iPad Pro tablets that debuted during WWDC 2017 have bumped-up display refresh rates of 120Hz, which means what you see through the camera should seem much more impressive on those devices.

How do you get started with ARKit?

If you’re interested in building ARKit apps for iOS 11, go to the Apple Developer site, which has forums for building AR apps and beta downloads. If you’re a consumer who is just excited to play, you can go get the new iPad Pro and install the iOS 11 public beta to try out some of the early demos for AR. Otherwise, wait for iOS 11 to officially release alongside new AR apps in the App Store.

Source: pocket-lint.com


3 Technologies You Need To Start Paying Attention To Right Now


At any given time, a technology or two captures the zeitgeist. A few years ago it was social media and mobile that everybody was talking about. These days it’s machine learning and blockchain. Everywhere you look, consulting firms are issuing reports, conferences are being held and new “experts” are being anointed.

In a sense, there’s nothing wrong with that. Social media and mobile computing really did change the world and, clearly, the impact of artificial intelligence and distributed database architectures will be substantial. Every enterprise needs to understand these technologies and how they will impact its business.

Still, we need to remember that we always get disrupted by what we can’t see. The truth is that the next big thing always starts out looking like nothing at all. That’s why it’s so disruptive. If we saw it coming, it wouldn’t be. So here are three technologies you may not have heard of, but you should start paying attention to. The fate of your business may depend on it.

1. New Computing Architectures

In the April 19, 1965 issue of Electronics, Intel co-founder Gordon Moore published an article observing that the number of transistors on a silicon chip was doubling roughly every two years. Over the past half century, that consistent doubling of computing power, now known as Moore’s Law, has driven the digital revolution.
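
As a back-of-the-envelope illustration of what doubling every two years means (the starting year and transistor count below are illustrative assumptions, not figures from the article), fifty years at that pace works out to 2^25, roughly a 33-million-fold increase:

```swift
import Foundation

// Moore's Law as simple arithmetic: doubling roughly every two years.
// The starting year and transistor count are illustrative assumptions.
let startYear = 1965
let startTransistors = 64.0
let doublingPeriodYears = 2.0

func projectedTransistors(in year: Int) -> Double {
    let doublings = Double(year - startYear) / doublingPeriodYears
    return startTransistors * pow(2.0, doublings)
}

// Fifty years of doublings: 2^25, about a 33-million-fold increase.
print(projectedTransistors(in: 2015) / startTransistors)   // 33554432.0
```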

Today, however, that process has slowed, and it will soon come to a complete halt. There are only so many transistors you can cram onto a silicon wafer before subatomic effects come into play and make it impossible for the technology to function. Experts disagree on exactly when this will happen, but it’s pretty clear that it will be sometime within the next five years.

There are, of course, a number of ways to improve chip performance other than increasing the number of transistors, such as FPGAs, ASICs and 3D stacking. Yet those are merely stopgaps and are unlikely to take us more than a decade or so into the future. To continue to advance technology over the next 50 years, we need fundamentally new architectures like quantum computing and neuromorphic chips.

The good news is that these architectures are very advanced in their development and we should start seeing a commercial impact within 5-10 years. The bad news is that, being fundamentally new architectures, nobody really knows how to use them yet. We are, in a sense, back to the early days of computing, with tons of potential but little idea how to actualize it.

2. Genetic Engineering

While computer scientists have been developing software languages over the past 50 years, biologists have been trying to understand a far more pervasive kind of code, the genetic code. For the most part, things have gone slowly. Although there has been significant scientific progress, the impact of that advancement has been relatively paltry.

That began to change in 2003 with the completion of the Human Genome Project. For the first time, we began to truly understand how DNA interacts with our biology, which led to other efforts, such as the Cancer Genome Atlas, as well as tangible advancements in agriculture. Genomics became more than mere scientific inquiry; it became a source of new applications.

Now a new technology called CRISPR is allowing scientists to edit genes at will. In fact, because the technology is simple enough for even amateur biologists to use, we can expect genetic engineering to become much more widespread across industries. Early applications include liquid fuels from sunshine and genomic vaccines.

“CRISPR is accelerating everything we do with genomics,” Megan Hochstrasser of the Innovative Genomics Initiative at Cal Berkeley told me, “from cancer research to engineering disease resistant crops and many other applications that haven’t yet come to the fore. Probably the most exciting aspect is that CRISPR is so cheap and easy to use, it will have a democratizing effect, where more can be done with less. We’re really just getting started.”

3. Materials Science

Traditionally, improving a material to build a better product has been a process of trial and error. You changed the ingredients or the process by which you made it and saw what happened. For example, at some point a medieval blacksmith figured out that annealing iron would make better swords.

Today, coming up with better materials is a multi-billion-dollar business. Consider the challenges Boeing faced when designing its new Dreamliner. How do you significantly increase the performance of an airplane, a decades-old technology? By discovering new composite materials, the company was able to reduce weight by 40,000 pounds and cut fuel use by 20%.

With this in mind, the Materials Genome Initiative is building databases of material properties like strength and density, along with computer models that predict which processes will achieve the qualities a manufacturer is looking for. As a government program, it is also able to make the data widely available to anyone who wants to use it, not just billion-dollar companies like Boeing.

“Our goal is to speed up the development of new materials by making clear the relationship between materials, how they are processed and what properties are likely to result,” Jim Warren, Director of the Materials Genome program told me. “My hope is that the Materials Genome will accelerate innovation in just about every industry America competes in.”

It’s Better To Prepare Than Adapt

For the past few decades, great emphasis has been put on agility and adaptation. When a new technology, like social media, mobile computing or artificial intelligence, begins to disrupt the marketplace, firms rush to figure out what it means and adapt their strategies accordingly. If they could do that a bit faster than the competition, they would win.

Today, however, we’re entering a new era of innovation that will look much more like the ’50s and ’60s than the ’90s and aughts. The central challenge will no longer be to dream up new applications based on improved versions of old technologies, but to understand fundamentally new paradigms.

That’s why over the next few decades, it will be more important to prepare than adapt. How will you work with new computing architectures? How will fast, cheap genetic engineering affect your industry? What should you be doing to explore new materials that can significantly increase performance and lower costs? These are just some of the questions we will grapple with.

Not all who wander are lost. The challenge is to wander with purpose.

Source: Digital Tonto

AI Models For Investing: Lessons From Petroleum (And Edgar Allan Poe)


A decade ago, at a NY conference, an analyst put up slides showing his model of the short-term oil price (variables like inventories, production and demand trends, and so forth). I turned to the colleague next to me and said, “I just want to ask him, ‘How old are you?’” I worked on a computer model of the world oil market from 1977, when the model was run from a remote terminal and the output had to be picked up on the other side of campus. (Yes, by dinosaurs.) Although I haven’t done formal modeling in recent years, my experiences might provide some insight into the current fashion for using computer models in investing (among other things).

About two centuries ago, Baron von Maelzel toured the U.S. with an amazing clockwork automaton (invented by Baron Kempelen): a chess-playing “Turk” in the form of a mannequin at a desk with a chess board. The mannequin was dressed up as a Turk, a nod to perceptions at the time of Turks’ superior wisdom. The automaton could not only play chess very well, but solve problems presented to it that experts found difficult. Viewers were amazed, given the complexity of chess, and the level of play was not matched by computers for nearly two centuries. None of the Turk’s observers could initially explain the mechanism by which such feats were performed.

This is reminiscent of the 1970s, when Uri Geller claimed to have paranormal abilities that physicists from SRI found they could not explain. That was because he wasn’t performing feats of physics but sleight of hand, as demonstrated by the Amazing Randi, who was not a scientist but rather an expert in the latter craft. (Similarly, peak oil advocates are often amazed by techniques from scientists that are actually statistical in nature, and done wrong.)

Edgar Allan Poe considered the case and proved to be the Amazing Randi of his day. The chess-playing Turk was the result of “the wonderful mechanical genius of Baron Kempelen [that] could invent the necessary means for shutting a door or slipping aside a panel with a human agent too at his service…” in Poe’s words. He noted that the Baron would open one panel on the desk, show no one behind it, close it and open the other, again revealing no human agent; but this is just a standard magician’s trick, where the subject simply moves from one side to the other. Indeed, others claimed to have seen a chess player exit the desk after the audience had left.

Computer models often fall into this category. No matter how scientific and objective they appear, there is always a human agent behind them. In oil market modeling in the 1970s, this took the form of the price mechanism. NYU Professor Dermot Gately had suggested that prices moved according to capacity utilization in OPEC, a relationship charted in a figure later used by the Energy Information Administration, among many others. If utilization was above 80%, prices would rise sharply; below 80%, they would taper off.
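
The article doesn’t give Gately’s actual functional form, but the qualitative shape, gentle price movements below roughly 80% utilization and sharply rising prices above it, can be sketched as a simple piecewise rule. Only the 80% threshold comes from the text; the slopes and numbers below are invented purely for illustration.

```swift
import Foundation

// Illustrative sketch of a Gately-style rule linking OPEC capacity utilization
// to short-term oil price changes. Only the 80% threshold comes from the article;
// the slopes and functional form are invented for illustration.
func annualPriceChange(utilization: Double) -> Double {
    let threshold = 0.80
    if utilization <= threshold {
        // Below the threshold, prices drift between roughly -5% and +5%.
        return -0.05 + 0.10 * (utilization / threshold)
    } else {
        // Above the threshold, price increases accelerate sharply.
        let excess = (utilization - threshold) / (1.0 - threshold)
        return 0.05 + 0.60 * excess * excess
    }
}

for u in stride(from: 0.6, through: 1.0, by: 0.1) {
    let change = annualPriceChange(utilization: u)
    print(String(format: "utilization %3.0f%% -> price change %+5.1f%%", u * 100, change * 100))
}
```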

This made sense, given that many industries use a similar conceptual model to predict inflation: high utilization in the steel industry results in higher steel prices, and so on. And the model certainly seemed to fit the existing data.

At least until 1986. After 1985, the data points no longer fit the curve; for those last two years the model was well off, and the EIA stopped publishing the figure after 1987, although it continued to use the formula for some time to come.

What had become obscured by the supposed success of the formula was that it was intended to explain short-term price changes. High steel capacity utilization would mean higher steel prices, but it would also lead to investment and more capacity, so that prices would stabilize and even drop.

But oil models couldn’t capture this, because much of the capacity was in OPEC and it was assumed that OPEC would not necessarily invest in response to higher prices. Instead, the programmer had to choose numbers for OPEC’s future capacity and input them into the machine, meaning the programmer had control over the price forecast by simply modifying the capacity numbers. Despite the ‘scientific’ appearance of the computer model, there really was a man in the machine making the moves.
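
To make the “man in the machine” point concrete, here is a toy sketch (every number in it is invented) showing how the modeler’s assumed OPEC capacity path, an exogenous input, steers the price forecast even when the demand path and the pricing rule are held fixed:

```swift
import Foundation

// Toy illustration of how exogenous capacity assumptions steer a price forecast.
// The demand path, capacity scenarios and pricing rule are all invented.
let demand = [28.0, 29.0, 30.0, 31.0, 32.0]                // million barrels/day
let capacityIfOpecInvests = [36.0, 37.5, 39.0, 40.5, 42.0]
let capacityIfOpecStandsPat = [36.0, 36.0, 36.0, 36.0, 36.0]

func forecastPrices(startPrice: Double, demand: [Double], capacity: [Double]) -> [Double] {
    var price = startPrice
    var path: [Double] = []
    for (d, c) in zip(demand, capacity) {
        let utilization = d / c
        // Same crude rule in both scenarios: flat below 80% utilization,
        // sharply rising above it.
        let change = utilization <= 0.80 ? 0.0 : 0.50 * (utilization - 0.80) / 0.20
        price *= 1.0 + change
        path.append(price)
    }
    return path
}

print(forecastPrices(startPrice: 20.0, demand: demand, capacity: capacityIfOpecInvests))
print(forecastPrices(startPrice: 20.0, demand: demand, capacity: capacityIfOpecStandsPat))
// Identical demand and identical rule; only the capacity assumption differs,
// yet the two forecasts diverge sharply.
```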

People have long sought to reduce the influence of fallible humans, whether by replacing workers with machines or by putting control of our nuclear weapons in the hands of Colossus, a giant computer that would avoid an accidental nuclear war (in the 1970 movie Colossus: The Forbin Project, fourteen years before Terminator’s Skynet). This ignores the fact that there is always a human element, even if only in the design.

I have no expertise in the field of artificial intelligence, but it nonetheless seems to me that AI trading programs might learn, yet won’t they learn what they are taught to do? Will this not simply be an extension of the algorithms already used elsewhere in the financial world, at whose core is simply a comparison of current with historical data and trends?

And this, after all, is what led to the financial meltdown described so aptly in When Genius Failed, the story of Long Term Capital Management and the way it nearly crashed the world economy. Recognizing patterns of behavior preceding an OPEC meeting, such as the way prices move in response to comments by member country ministers, can be useful, but will novel cases such as the SARS epidemic or the 2008 financial crisis catch the programs flat-footed, possibly triggering massive losses?

The answer, as it often does, comes down to gearing. LTCM’s model failed, but the real problem was the huge amount of money the firm had at risk, far exceeding its capital. A few small traders using AI programs, or an investment bank risking a fraction of its commodities funds, would not be a concern. But if such programs become widespread, and they are all drawing the same conclusions from historical data, could there be a huge amount of money making the same bet?

For individuals, of course, the answer is to diversify, one of the first lessons of investing. I wonder how many AI programs will practice the same.

Source: Forbes