AI Models For Investing: Lessons From Petroleum (And Edgar Allan Poe)

AnalyticsAnywhere

A decade ago, at a NY conference, an analyst put up slides showing his model of the short-term oil price (variables like inventories, production and demand trends, and so forth). I turned to the colleague next to me and said, “I just want to ask him, ‘How old are you?’” I worked on a computer model of the world oil market from 1977, when the model was run from a remote terminal and the output had to be picked up on the other side of campus. (Yes, by dinosaurs.) Although I haven’t done formal modeling in recent years, my experiences might provide some insight into the current fashion for using computer models in investing (among other things).

About two centuries ago, Baron von Maelzel toured the U.S. with an amazing clockwork automaton (invented by Baron Kempelen): a chess-playing "Turk" in the form of a mannequin seated at a desk with a chess board. The mannequin was dressed as a Turk, reflecting perceptions at the time of Turks' superior wisdom. The automaton could not only play chess very well, but also solve problems presented to it that experts found difficult. Viewers were amazed, given the complexity of chess, and the level of play was not matched by actual computers for nearly two centuries. None of the Turk's observers could initially explain how such feats were performed.

This is reminiscent of the 1970s, when Uri Geller claimed to have paranormal abilities that physicists from SRI found they could not explain. They could not explain them because Geller wasn't performing acts of physics but sleight of hand, as demonstrated by the Amazing Randi, who was not a scientist but an expert in that craft. (Similarly, peak oil advocates are often amazed by techniques performed by scientists that are actually statistical in nature, and done wrong.)

Edgar Allan Poe considered the case and proved to be the Amazing Randi of his day. The chess-playing Turk was the result of “the wonderful mechanical genius of Baron Kempelen [that] could invent the necessary means for shutting a door or slipping aside a panel with a human agent too at his service…” in Poe’s words. He noted that the Baron would open one panel on the desk, show no one behind it, close it and open the other, again revealing no human agent; but this is just a standard magician’s trick, where the subject simply moves from one side to the other. Indeed, others claimed to have seen a chess player exit the desk after the audience had left.

Computer models often fall into this category. No matter how scientific and objective they appear, there is always a human agent behind them. In oil market modeling in the 1970s, this took the form of the price mechanism. NYU Professor Dermot Gately had suggested that prices moved according to capacity utilization in OPEC, as in the following figure (later used by the Energy Information Administration, among many others). If utilization was above 80%, prices would rise sharply; below 80%, they would taper off.
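The shape of such a curve can be sketched as a toy piecewise rule. The coefficients, threshold response, and $30 starting price below are purely illustrative assumptions, not Gately's or the EIA's actual parameters:

```python
# Toy sketch of a Gately-style price rule (illustrative numbers only):
# prices rise sharply when OPEC capacity utilization is above a threshold,
# and soften when it is below.

THRESHOLD = 0.80  # the ~80% utilization level described above

def annual_price_change(utilization: float) -> float:
    """Return an illustrative fractional price change for one year."""
    gap = utilization - THRESHOLD
    if gap > 0:
        return 3.0 * gap   # steep response above the threshold
    return 0.5 * gap       # gentle tapering below it

price = 30.0  # hypothetical starting price, $/barrel
for u in (0.75, 0.85, 0.95):
    change = annual_price_change(u)
    price *= 1 + change
    print(f"utilization {u:.0%}: change {change:+.1%}, new price ${price:.2f}")
```

The asymmetry (a slope of 3.0 above the threshold versus 0.5 below it) is what makes the curve kink at 80%, matching the figure's shape.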

This made sense, given that many industries use a similar conceptual model to predict inflation: high utilization in the steel industry results in higher steel prices, and so on. And the model certainly seemed to fit the existing data.

At least until 1986. After 1985, the data points no longer fit the curve, and for the next two years the model was well off. The EIA stopped publishing the figure after 1987, although it continued to use the formula for some time.

What had become obscured by the supposed success of the formula was that it was intended to explain short-term price changes. High steel capacity utilization would mean higher steel prices in the near term, but would also spur investment and more capacity, so that prices would stabilize and even drop.

But oil models couldn’t capture this, because much of the capacity was in OPEC and it was assumed that OPEC would not necessarily invest in response to higher prices. Instead, the programmer had to choose numbers for OPEC’s future capacity and input them into the machine, meaning the programmer had control over the price forecast by simply modifying the capacity numbers. Despite the ‘scientific’ appearance of the computer model, there really was a man in the machine making the moves.

People have long sought to reduce the influence of fallible humans, whether by replacing workers with machines or by putting control of our nuclear weapons in the hands of Colossus, a giant computer meant to avoid an accidental nuclear war (in the 1970 movie Colossus: The Forbin Project, fourteen years before Terminator's Skynet). This ignores that there is always a human element, even if only in the design.

Without any expertise in the field of artificial intelligence, it nonetheless seems to me that AI trading programs might learn, but won't they learn what they are taught to do? Will this not simply be an extension of the algorithms already used by others in the financial world, at whose core is simply a comparison of current with historical data and trends?

And this, after all, is what led to the financial meltdown described so aptly in When Genius Failed, the story of Long Term Capital Management and the way it nearly crashed the world economy. Recognizing patterns of behavior preceding an OPEC meeting, such as the way prices move in response to comments by member country ministers, can be useful, but will novel cases such as the SARS epidemic or the 2008 financial crisis catch the programs flat-footed, possibly triggering massive losses?

The answer, as it often does, comes down to gearing. LTCM's model failed, but the real problem was the huge amount of money it had at risk, far exceeding its capital. A few small traders using AI programs, or an investment bank risking a fraction of its commodity funds, would not be a concern. But if such programs become widespread, and all of them draw the same conclusions from historical data, could there be a huge amount of money making the same bet?
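The gearing arithmetic can be made concrete with a hypothetical example; the figures below are invented for illustration, not LTCM's actual book:

```python
# Hypothetical illustration of gearing: the same small adverse move wipes
# out a highly leveraged fund while barely denting an unleveraged one.

capital = 5.0                    # $5bn of the fund's own capital (invented)
leverage = 25                    # positions worth 25x capital
positions = capital * leverage   # $125bn at risk

adverse_move = 0.05              # a mere 5% move against the positions
loss = positions * adverse_move  # $6.25bn, more than the fund's capital

print(f"Loss: ${loss:.2f}bn on ${capital:.0f}bn of capital")
print("Insolvent" if loss > capital else "Survivable")
```

Without leverage, the same 5% move would have cost only $0.25bn of the $5bn; it is the gearing, not the model error, that turns a bad week into a collapse.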

For individuals, of course, the answer is to diversify, one of the first investing lessons. I wonder how many AI programs will practice the same.

Source: Forbes


The meaning of life in a world without work

As technology renders jobs obsolete, what will keep us busy? Sapiens author Yuval Noah Harari examines ‘the useless class’ and a new quest for purpose.


Most jobs that exist today might disappear within decades. As artificial intelligence outperforms humans in more and more tasks, it will replace humans in more and more jobs. Many new professions are likely to appear: virtual-world designers, for example. But such professions will probably require more creativity and flexibility, and it is unclear whether 40-year-old unemployed taxi drivers or insurance agents will be able to reinvent themselves as virtual-world designers (try to imagine a virtual world created by an insurance agent!). And even if the ex-insurance agent somehow makes the transition into a virtual-world designer, the pace of progress is such that within another decade he might have to reinvent himself yet again.

The crucial problem isn’t creating new jobs. The crucial problem is creating new jobs that humans perform better than algorithms. Consequently, by 2050 a new class of people might emerge – the useless class. People who are not just unemployed, but unemployable.

The same technology that renders humans useless might also make it feasible to feed and support the unemployable masses through some scheme of universal basic income. The real problem will then be to keep the masses occupied and content. People must engage in purposeful activities, or they go crazy. So what will the useless class do all day?

One answer might be computer games. Economically redundant people might spend increasing amounts of time within 3D virtual reality worlds, which would provide them with far more excitement and emotional engagement than the “real world” outside. This, in fact, is a very old solution. For thousands of years, billions of people have found meaning in playing virtual reality games. In the past, we have called these virtual reality games “religions”.

What is a religion if not a big virtual reality game played by millions of people together? Religions such as Islam and Christianity invent imaginary laws, such as “don’t eat pork”, “repeat the same prayers a set number of times each day”, “don’t have sex with somebody from your own gender” and so forth. These laws exist only in the human imagination. No natural law requires the repetition of magical formulas, and no natural law forbids homosexuality or eating pork. Muslims and Christians go through life trying to gain points in their favorite virtual reality game. If you pray every day, you get points. If you forget to pray, you lose points. If by the end of your life you gain enough points, then after you die you go to the next level of the game (aka heaven).

As religions show us, the virtual reality need not be encased inside an isolated box. Rather, it can be superimposed on the physical reality. In the past this was done with the human imagination and with sacred books, and in the 21st century it can be done with smartphones.

Some time ago I went with my six-year-old nephew Matan to hunt for Pokémon. As we walked down the street, Matan kept looking at his smartphone, which enabled him to spot Pokémon all around us. I didn't see any Pokémon at all, because I didn't carry a smartphone. Then we saw two other kids on the street who were hunting the same Pokémon, and we almost got into a fight with them. It struck me how similar the situation was to the conflict between Jews and Muslims about the holy city of Jerusalem. When you look at the objective reality of Jerusalem, all you see are stones and buildings. There is no holiness anywhere. But when you look through the medium of smart books (such as the Bible and the Qur'an), you see holy places and angels everywhere.

The idea of finding meaning in life by playing virtual reality games is of course common not just to religions, but also to secular ideologies and lifestyles. Consumerism too is a virtual reality game. You gain points by acquiring new cars, buying expensive brands and taking vacations abroad, and if you have more points than everybody else, you tell yourself you won the game.

You might object that people really enjoy their cars and vacations. That’s certainly true. But the religious really enjoy praying and performing ceremonies, and my nephew really enjoys hunting Pokémon. In the end, the real action always takes place inside the human brain. Does it matter whether the neurons are stimulated by observing pixels on a computer screen, by looking outside the windows of a Caribbean resort, or by seeing heaven in our mind’s eyes? In all cases, the meaning we ascribe to what we see is generated by our own minds. It is not really “out there”. To the best of our scientific knowledge, human life has no meaning. The meaning of life is always a fictional story created by us humans.

In his groundbreaking essay, Deep Play: Notes on the Balinese Cockfight (1973), the anthropologist Clifford Geertz describes how on the island of Bali, people spent much time and money betting on cockfights. The betting and the fights involved elaborate rituals, and the outcomes had substantial impact on the social, economic and political standing of both players and spectators.

The cockfights were so important to the Balinese that when the Indonesian government declared the practice illegal, people ignored the law and risked arrest and hefty fines. For the Balinese, cockfights were “deep play” – a made-up game that is invested with so much meaning that it becomes reality. A Balinese anthropologist could arguably have written similar essays on football in Argentina or Judaism in Israel.

Indeed, one particularly interesting section of Israeli society provides a unique laboratory for how to live a contented life in a post-work world. In Israel, a significant percentage of ultra-orthodox Jewish men never work. They spend their entire lives studying holy scriptures and performing religious rituals. They and their families don't starve to death partly because the wives often work, and partly because the government provides them with generous subsidies. Though they usually live in poverty, government support means that they never lack the basic necessities of life.

That’s universal basic income in action. Though they are poor and never work, in survey after survey these ultra-orthodox Jewish men report higher levels of life-satisfaction than any other section of Israeli society. In global surveys of life satisfaction, Israel is almost always at the very top, thanks in part to the contribution of these unemployed deep players.

You don’t need to go all the way to Israel to see the world of post-work. If you have at home a teenage son who likes computer games, you can conduct your own experiment. Provide him with a minimum subsidy of Coke and pizza, and then remove all demands for work and all parental supervision. The likely outcome is that he will remain in his room for days, glued to the screen. He won’t do any homework or housework, will skip school, skip meals and even skip showers and sleep. Yet he is unlikely to suffer from boredom or a sense of purposelessness. At least not in the short term.

Hence virtual realities are likely to be key to providing meaning to the useless class of the post-work world. Maybe these virtual realities will be generated inside computers. Maybe they will be generated outside computers, in the shape of new religions and ideologies. Maybe it will be a combination of the two. The possibilities are endless, and nobody knows for sure what kind of deep plays will engage us in 2050.

In any case, the end of work will not necessarily mean the end of meaning, because meaning is generated by imagining rather than by working. Work is essential for meaning only according to some ideologies and lifestyles. Eighteenth-century English country squires, present-day ultra-orthodox Jews, and children in all cultures and eras have found a lot of interest and meaning in life even without working. People in 2050 will probably be able to play deeper games and to construct more complex virtual worlds than in any previous time in history.

But what about truth? What about reality? Do we really want to live in a world in which billions of people are immersed in fantasies, pursuing make-believe goals and obeying imaginary laws? Well, like it or not, that’s the world we have been living in for thousands of years already.

Source: The Guardian

Self-driving AI clinic reimagines healthcare for the 21st century


Seattle-based design firm Artefact Group has revealed a comprehensive concept that would make the future of healthcare mobile. Integrating passive monitoring technologies in the home, a smartphone app, AI diagnostics and a self-driving clinic, the system combines a variety of innovations for a new spin on healthcare.

While many sectors of society are being dramatically disrupted by rapidly evolving digital innovations, healthcare seems to be responding more slowly, with many hospitals still largely relying on paper to record patient data. Earlier in the year we saw a gadget-filled, subscription-based medical clinic open in San Francisco, and several fascinating advances are occurring in the field of artificial intelligence diagnostics. But the Aim concept envisions a fundamentally different healthcare approach from what we have been used to for the past 100 years.

The system begins with a series of active testing and passive monitoring devices in the home, capturing data from several sources, such as the bathroom scale, toilet and medicine cabinet. The goal is to create an interconnected set of devices, including health-monitoring wearables, that can create a unified, patient-owned health record.

A constantly learning AI would then monitor a person's health data and flag unusual results. When needed, a self-driving mini clinic could navigate to your location for more comprehensive diagnostics, such as thermography, breath analysis, and respiration or cardiac rhythm monitoring.
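One simple way such a monitoring AI could flag "unusual" results is a statistical outlier test against the patient's own baseline. This is a hedged sketch of the general idea, since Artefact has not published how Aim would actually do it; the heart-rate figures and the z-score threshold are invented:

```python
import statistics

def flag_unusual(history, reading, z_threshold=3.0):
    """Flag a new reading that sits far outside the patient's own baseline."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return reading != mean
    z = abs(reading - mean) / stdev
    return z > z_threshold

# Hypothetical resting heart-rate history (beats per minute)
history = [62, 64, 63, 61, 65, 63, 62, 64]
print(flag_unusual(history, 64))   # a typical reading
print(flag_unusual(history, 95))   # well outside the baseline
```

A production system would need far more nuance (trends, context, multiple signals), but the pattern of comparing a new reading to the patient's own history is the core of low-risk passive monitoring.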

Inside this mobile clinic, an AI could offer its diagnosis, and even deliver common pharmaceuticals such as antibiotics or contraceptives. If a health condition is flagged as serious or escalating, the Aim system would then connect the patient to an on-call specialist or even transport them directly to a hospital emergency room.

“The mission of Aim is to close the data, experience and logistical gaps between home and clinical environments,” the designers say.

The concept may be slightly pie-in-the-sky right now, but rapid advances in personal health monitoring and AI mean it's not necessarily that far from being feasible, and much of the Aim system feels like it could be pragmatically integrated into our current healthcare processes without too much trouble. With the current burden on patients to get to doctors' clinics, which can sometimes be quite far away, an integrated monitoring system such as this could lighten the load on overworked healthcare workers.

AI-driven diagnostic tools are also set to inevitably become increasingly useful for low-risk patient monitoring, and a mobile autonomous clinic could significantly reduce the drain on current hospital resources by catching conditions early before they become serious enough to require a hospital admission.

Cost is of course a major consideration here, and developing such a sophisticated system wouldn't be cheap, but as the costs of healthcare continue to skyrocket, maybe some outside-the-box thinking such as this should be encouraged. Much like the San Francisco Forward clinic, a cost-effective subscription-based system could offer many who currently can't afford big health insurance premiums greater access to medical care.

Source: newatlas.com

5 Facebook Bots To Support Your Health


HealthTap — A Doctor Bot

HealthTap is a large health company that decided to democratize its wealth of health information by creating a chatbot.

You can ask all your burning medical questions here and get resources from HealthTap’s large database, as well as personalized responses from doctors! No more waiting rooms, the chatbot will see you now.

Atlas — A Fitness Bot

Atlas knows how hard it can be to keep up with a regular workout routine when your days get swamped.
Aimed not at pro athletes but at fitness enthusiasts of all kinds, Atlas is free and has more than a few tricks up its sleeve to keep you engaged with your workouts. The bot is currently in beta and sends personalized workout reminders on a schedule you provide, along with motivational quotes (#justdoit). It's a very promising concept, and the Atlas makers plan to expand into workout plans and fitness tips very soon. Stay tuned.

Woebot — A Mood Bot

Woebot is a mood-tracking bot with personality and a conversational design that feels like talking to a bot therapist.
Backed by scientific research, Woebot can help reduce depression, shares CBT resources, and learns from your conversations over time. The Woebot makers offer scalable pricing for individuals, and the first 14 sessions are totally free.

Forksy — A Nutrition Bot

Forksy keeps track of your meals so you don’t have to. Whether you had three slices of pizza or a bagel with a little too much cream cheese, Forksy knows the dirty secrets of your diet. If you’re trying to be more health conscious, Forksy is a great option. The NLP capabilities are great and it feels as if you can just type in any food combination and get an instant result.

Izzy — A Period Tracker Bot

Izzy helps women track their periods and sends reminders to take birth control pills. This chatbot has a fun personality and tries to turn a not-so-fun topic into something more friendly and manageable.
It would be great to see clever NLP for topics unrelated to menstruation but Izzy takes on a great use case to bring chatbots closer to women.

Why Chatbots & Health? — Key Takeaway

Health apps and wearable devices took the world by storm, supporting users throughout their daily activities. It seemed normal in the era of the app store to download and try a couple of new apps on a weekly basis, but app downloads have been steadily decreasing in recent years.

Chatbots are booming and bot developers are finding more use cases in different health industries. Some of the bots mentioned above are not as in-depth and far-reaching as their app competitors but chatbots in general seem to be a great solution for simple activities and quick feedback.

Why? There’s no need to require a Facebook user to leave Facebook and open another application to support a simple task. Chatbots don’t require that.

Brief explainer: I’m referring to Facebook’s current challenge to retain its users on the Facebook application and not lose them to other apps such as a fitness application when a user wants to get workout suggestions.

Implementing chatbots on Facebook creates a retention ecosystem, something that Facebook’s Chinese competitor WeChat has mastered. Users don’t have to leave Facebook anymore.

With such seamless yet effective interactions, chatbots are here to make our lives easier in different ways. The list above shows us that chatbots can help us stay on top of our fitness routine, track periods, track moods, provide us with dietary feedback, connect us with doctors and a lot more.

It seems inevitable that we will see a wave of chatbots disrupting the health space (and other industries), with users finding more ways to support their daily activities within the platforms where they spend most of their time, such as Facebook.

Source: chatbotsmagazine.com

The Human Army Using Phones to Teach AI to Drive


As her fellow patients read dog-eared magazines or swipe through Instagram, Shari Forrest opens an app on her phone and gets busy training artificial intelligence.

Forrest isn’t an engineer or programmer. She writes textbooks for a living. But when the 54-year-old from suburban St. Louis needs a break or has a free moment, she logs on to Mighty AI, and whiles away her time identifying pedestrians and trash cans and other things you don’t want driverless cars running into. “If I am sitting waiting for a doctor’s appointment and I can make a few pennies, that’s not a bad deal,” she says.

The work is a pleasant distraction for Forrest, but absolutely essential to the coming age of driverless cars. The volume of data needed to train the AI underpinning those vehicles staggers the imagination. The Googles and GMs of the world rarely mention it, but their shiny machines and humming data centers rely on a growing, global army of people like Forrest to help provide it.

You’ve probably heard by now that almost everyone expects AI to revolutionize almost everything. Automakers in particular love this idea, because robocars promise to increase safety, reduce congestion, and generally make life easier. “The automotive space is one of the hottest and most advanced fields applying machine learning,” says Matt Bencke, CEO of Mighty AI. He won’t name names, but claims his company is working with at least 10 automakers.

The challenge lies in teaching a computer how to drive. The DMV rule book provides a good place to start, because it covers rudimentary things like “Yield to pedestrians.” Ah, but what does a pedestrian look like? Well, a pedestrian usually has two legs. But a skirt can make two legs look like one. What about a fellow in a wheelchair, or a mother pushing a stroller? Is that a small child, or a large dog? Or a trash can? Any artificial intelligence controlling a two-ton chunk of steel must learn how to identify such things, and make sense of an often confusing world. This is second nature for humans, but utterly foreign to a computer.

Cue Forrest and 200,000 other Mighty AI users around the world.

The onboard cameras helping prototype robocars navigate the world photograph almost every environment and circumstance you can imagine. Automakers and tech companies send those photos by the millions to an outfit like Mighty AI, which makes a game of identifying everything in those photos. It sounds tedious, but Mighty AI makes it a 10-minute task with points, skills, and level-ups to keep it engaging. "It's more like Candy Crush than a labor farm," says Bencke. The monetary rewards, although small, help, too.

Forrest carefully draws a box around every person in each picture, then around every approaching car, and then around the tires on each car. That done, she zooms in and, working pixel by pixel, meticulously outlines things like trees. Click, click, click. She selects a different color pointer and highlights traffic lights, a telegraph pole, a safety cone. When she's finished, the scene is annotated in language a computer understands. Engineers call it a "semantic segmentation mask".
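In code, a semantic segmentation mask is just a per-pixel grid of class labels. Here is a minimal sketch with invented class IDs and a toy 4x6 "image" standing in for a real photo:

```python
from collections import Counter

# A semantic segmentation mask assigns every pixel a class ID.
# These class IDs and the tiny grid are invented for illustration.
CLASSES = {0: "road", 1: "pedestrian", 2: "car", 3: "traffic light"}

mask = [
    [3, 3, 0, 0, 2, 2],
    [0, 1, 0, 0, 2, 2],
    [0, 1, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0],
]

# Count how many pixels belong to each class.
counts = Counter(label for row in mask for label in row)
for class_id, n in sorted(counts.items()):
    print(f"{CLASSES[class_id]}: {n} pixels")
```

A real mask has the same structure at full image resolution, which is why annotating one photo pixel by pixel is such painstaking work.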

The need for accuracy makes for painstaking work, but Forrest, who makes a few cents per picture, enjoys it. "It's like why some adults color," she says. "It's become a relaxing task."

Those millions of annotated photos help an AI identify the patterns that define, say, what a human looks like. Eventually the AI grows smart enough to draw boxes around pedestrians. People like Forrest will help double-check the AI's work. Over time, AI will grow smart enough to reliably identify, say, kangaroos.

Relying on an army of amateurs might seem odd, but it remains the most efficient way of training AI. "It's pretty much the only way," says Premkumar Natarajan, who specializes in computer vision at the USC Information Sciences Institute. He's been working in the field for more than two decades. There has been some promising research into so-called unsupervised learning, where computers learn with minimal input, but for now the intelligence in artificial intelligence depends on the quality of the data it's trained on.

Bencke says his platform uses its own machine learning to determine what each member of the Mighty AI community is best at, then assigns them those jobs. No one is getting rich doing this essential work, but for Forrest, that's beside the point.

She says she made about $300 last year, money she put toward online shopping. She’s never seen an autonomous vehicle, much less ridden in one. But knowing that she’s helping make them smarter will make her more likely to trust the technology when she finally does.

Source: Wired

Using Artificial Intelligence to Reduce the Risk of Nonadherence in Patients on Anticoagulation Therapy

Past, Present and Future of AI / Machine Learning (Google I/O ’17)


We are in the middle of a major shift in computing that’s transitioning us from a mobile-first world into one that’s AI-first. AI will touch every industry and transform the products and services we use daily. Breakthroughs in machine learning have enabled dramatic improvements in the quality of Google Translate, made your photos easier to organize with Google Photos, and enabled improvements in Search, Maps, YouTube, and more.