Machine learning creates living atlas of the planet

Machine learning, combined with satellite imagery and Cloud computing, is enabling a new understanding of the world and making the food supply chain more efficient.


There are more than 7 billion people on Earth now, and roughly one in eight of them do not have enough to eat. According to the World Bank, the human population will hit an astounding 9 billion by 2050. With the population rising that rapidly, the growing need for food is becoming a grave concern.

The burden now falls on technology to avert the looming food crises of the coming decades. Fortunately, there is no shortage of ideas, and innovative minds are seeking solutions to the problem.

Machine learning to the rescue
Descartes Labs, a Los Alamos, New Mexico-based start-up, is using machine learning to analyze satellite imagery and predict food supplies months in advance of the current methods employed by the US government, a technique that could help predict food crises before they happen.

Descartes Labs pulls images from public databases such as NASA’s Landsat and MODIS and ESA’s Sentinel missions, as well as from private satellite imagery providers, including Planet. It also draws on the public datasets hosted by Google Earth and Amazon Web Services. This continuously updated imagery is referred to as the ‘Living Atlas of the Planet’.

The commercial atlas, designed to provide real-time forecasts of commodity agriculture, uses decades of remotely sensed images stored on the Cloud to offer land use and land change analysis.

Descartes Labs cross-references the satellite information with other relevant data, such as weather forecasts and prices of agricultural products. This data is then fed into machine learning software that tracks and forecasts future food supplies with remarkable accuracy. By processing these images and data with its machine learning algorithms, Descartes Labs extracts strikingly detailed information: it can distinguish individual crop fields and determine what a specific field is growing by analyzing how sunlight reflects off its surface. Once the type of crop has been established, the software then monitors the field’s production levels.
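In outline, this is a classic supervised-learning pipeline: combine per-field satellite features with weather and market data, then fit a model to historical production. A toy sketch with scikit-learn, in which every feature, value, and relationship is invented for illustration (Descartes Labs’ actual models are not public):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic training set: one row per field-season.
# Features: mean NDVI over the season, total rainfall (mm), mean temperature (C).
n = 500
ndvi_mean = rng.uniform(0.2, 0.9, n)
rainfall = rng.uniform(200, 800, n)
temperature = rng.uniform(15, 30, n)
X = np.column_stack([ndvi_mean, rainfall, temperature])

# Invented "ground truth": yield rises with greenness and rainfall, plus noise.
y = 2.0 * ndvi_mean + 0.002 * rainfall + rng.normal(0, 0.1, n)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Forecast for an unseen, very green, well-watered field.
print(model.predict([[0.85, 700.0, 22.0]]))
```

The point of the sketch is the shape of the problem, not the model choice: once historical imagery and weather are reduced to per-field features, forecasting supply becomes ordinary regression.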

“With machine learning techniques, we look at tons of pixels from satellites, and that tells us what’s growing,” says Mark Johnson, CEO and Co-founder, Descartes Labs.

How to tackle a data deluge
The total database includes approximately a petabyte — or 10¹⁵ bytes — of data. To create this ‘Living Atlas of the Planet’, Descartes has reprocessed the entire 40-year archive, starting with the first Landsat satellite imagery, to offer a completely cloud-free view of land use and land change.

The data platform is said to have analyzed over 2.8 quadrillion multispectral pixels to achieve this. It processes data at rates of petabytes per day, using multi-source data to produce calibrated, georeferenced imagery stacks at desired points in time and space. These stacks can be used for pixel-level or global-scale analysis, or for visualizing and measuring changes such as floods or shifts in the condition of crops. “The platform is built for analysis. It is not built to store the data. This is a vastly different philosophy than traditional data platforms,” says Daniela Moody, Remote Sensing and Machine Learning Specialist, Descartes Labs.

The platform churns out imagery of specific locations at specific times and at different wavelengths, offering unique insights into land cover change over broad swaths of land. For instance, the NDVI (normalized difference vegetation index) reveals live green vegetation using a combination of red and near-infrared spectral bands (Figure 2). Combining NDVI with visible spectral bands allows a user to examine the landscape through many lenses. The platform offers both Web and API interfaces: the Web interface provides options for visualizing data, while the API allows the user to interact directly with the data for specific analyses. The platform’s scalable Cloud infrastructure quickly ingests, analyzes, and creates predictions from the imagery.
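The NDVI mentioned above has a simple closed form: (NIR − Red) / (NIR + Red). A minimal sketch in NumPy, with made-up reflectance values standing in for real satellite bands:

```python
import numpy as np

def ndvi(red, nir, eps=1e-9):
    # NDVI = (NIR - Red) / (NIR + Red); values near +1 suggest dense live
    # vegetation, values near 0 bare soil, and negative values water or cloud.
    red = np.asarray(red, dtype=np.float64)
    nir = np.asarray(nir, dtype=np.float64)
    return (nir - red) / (nir + red + eps)  # eps avoids division by zero

# Toy 2x2 "scene": the left column reflects strongly in near-infrared,
# as healthy vegetation does; the right column does not.
red = np.array([[0.10, 0.40],
                [0.12, 0.35]])
nir = np.array([[0.50, 0.42],
                [0.60, 0.36]])
print(np.round(ndvi(red, nir), 2))
```

Run per pixel over an imagery stack, the same arithmetic yields the vegetation maps described here.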

Change is the only constant
The ability to have such fine-grained data on agricultural production will help in making the food supply chain more efficient. As Descartes Labs adds more geospatial data to its already robust database of earth imagery, these models will get even more accurate. Cloud computing and storage, combined with recent advances in machine learning and open software, are enabling understanding of the world at an unprecedented scale and detail.

Earth is not a static place, and researchers who study it need tools that keep up with the constant change. “We designed this platform to answer the problems of commodity agriculture,” Moody adds, “and in doing so we created a platform that is incredible and allows us to have a living atlas of the world.”


Deep Learning Is Going to Teach Us All the Lesson of Our Lives: Jobs Are for Machines


On December 2nd, 1942, a team of scientists led by Enrico Fermi came back from lunch and watched as humanity created the first self-sustaining nuclear reaction inside a pile of bricks and wood underneath a football field at the University of Chicago. Known to history as Chicago Pile-1, it was celebrated in silence with a single bottle of Chianti, for those who were there understood exactly what it meant for humankind, without any need for words.

Now, something new has occurred that, again, quietly changed the world forever. Like a whispered word in a foreign language, it was quiet in that you may have heard it, but its full meaning may not have been comprehended. However, it’s vital we understand this new language, and what it’s increasingly telling us, for the ramifications are set to alter everything we take for granted about the way our globalized economy functions, and the ways in which we as humans exist within it.

The language is a new class of machine learning known as deep learning, and the “whispered word” was a computer’s use of it to, seemingly out of nowhere, defeat three-time European Go champion Fan Hui not once but five times in a row. Many who read this news considered it impressive, but in no way comparable to a match against Lee Se-dol, whom many consider to be one of the world’s best living Go players, if not the best. Imagining such a grand duel of man versus machine, China’s top Go player predicted that Lee would not lose a single game, and Lee himself confidently expected to lose one at the most.

What actually ended up happening when they faced off? Lee went on to lose all but one of the match’s five games. An AI named AlphaGo is now a better Go player than any human and has been granted the “divine” rank of 9 dan. In other words, its level of play borders on godlike. Go has officially fallen to machines, just as Jeopardy did before it to Watson, and chess before that to Deep Blue.

So, what is Go? Very simply, think of Go as Super Ultra Mega Chess. This may still sound like a small accomplishment, another feather in the cap of machines as they continue to prove themselves superior in the fun games we play, but it is no small accomplishment, and what’s happening is no game.

AlphaGo’s historic victory is a clear signal that we’ve gone from linear to parabolic. Advances in technology are now so visibly exponential in nature that we can expect to see many more milestones crossed long before we would otherwise expect. These exponential advances, most notably in forms of artificial intelligence limited to specific tasks, are something we are entirely unprepared for as long as we continue to insist upon employment as our primary source of income.

This may all sound like exaggeration, so let’s step back a few decades and look at what computer technology has been doing to human employment so far:

Let the above chart sink in. Do not be fooled into thinking this conversation about the automation of labor is set in the future. It’s already here. Computer technology is already eating jobs and has been since 1990.

Routine Work
All work can be divided into four types: routine and nonroutine, cognitive and manual. Routine work is the same stuff day in and day out, while nonroutine work varies. Within these two varieties is the work that requires mostly our brains (cognitive) and the work that requires mostly our bodies (manual). Where once all four types saw growth, routine work stagnated back in 1990. This happened because routine labor is the easiest for technology to shoulder: rules can be written for work that doesn’t change, and that work can be better handled by machines.

Distressingly, it’s exactly routine work that once formed the basis of the American middle class. It’s routine manual work that Henry Ford transformed by paying people middle class wages to perform, and it’s routine cognitive work that once filled US office spaces. Such jobs are now increasingly unavailable, leaving only two kinds of jobs with rosy outlooks: jobs that require so little thought, we pay people little to do them, and jobs that require so much thought, we pay people well to do them.

Imagine our economy as a plane with four engines. It can still fly on only two of them, as long as they both keep roaring, so we can avoid worrying about a crash. But what happens when our two remaining engines also fail? That’s what the advancing fields of robotics and AI represent to those final two engines, because for the first time, we are successfully teaching machines to learn.

Neural Networks
I’m a writer at heart, but my educational background happens to be in psychology and physics. I’m fascinated by both, so my undergraduate focus ended up being the physics of the human brain, otherwise known as cognitive neuroscience. I think once you start to look into how the human brain works, how our mass of interconnected neurons somehow results in what we describe as the mind, everything changes. At least it did for me.

As a quick primer in the way our brains function, they’re a giant network of interconnected cells. Some of these connections are short, and some are long. Some cells are only connected to one other, and some are connected to many. Electrical signals then pass through these connections, at various rates, and subsequent neural firings happen in turn. It’s all kind of like falling dominoes, but far faster, larger, and more complex. The result amazingly is us, and what we’ve been learning about how we work, we’ve now begun applying to the way machines work.
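The falling-dominoes picture maps almost directly onto how an artificial neural network computes: each unit sums its weighted inputs and “fires” through a nonlinearity, triggering the next layer in turn. A minimal forward pass in NumPy (the weights here are random placeholders, not a trained network):

```python
import numpy as np

rng = np.random.default_rng(42)

def forward(x, weights, biases):
    # Each layer: a weighted sum of incoming signals, then a nonlinearity --
    # the artificial analogue of a neuron deciding whether to fire.
    for W, b in zip(weights, biases):
        x = np.maximum(0.0, x @ W + b)  # ReLU activation
    return x

# A tiny network: 4 inputs -> 5 hidden units -> 2 outputs.
weights = [rng.normal(size=(4, 5)), rng.normal(size=(5, 2))]
biases = [np.zeros(5), np.zeros(2)]

signal = rng.normal(size=(1, 4))  # one input "stimulus"
print(forward(signal, weights, biases))
```

Real networks stack many more layers and learn the weights from data, but the signal cascade is exactly this.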

One of these applications is the creation of deep neural networks – kind of like pared-down virtual brains. They provide an avenue to machine learning that has made incredible leaps previously thought to be much further down the road, if possible at all. How? It’s not just the obvious growing capability of our computers and our expanding knowledge in the neurosciences, but also the vastly growing expanse of our collective data, aka big data.

Big Data
Big data isn’t just some buzzword. It’s information, and when it comes to information, we’re creating more and more of it every day. In fact we’re creating so much that a 2013 report by SINTEF estimated that 90% of all information in the world had been created in the prior two years. This incredible rate of data creation is even doubling every 1.5 years thanks to the Internet, where in 2015 every minute we were liking 4.2 million things on Facebook, uploading 300 hours of video to YouTube, and sending 350,000 tweets. Everything we do is generating data like never before, and lots of data is exactly what machines need in order to learn to learn. Why?

Imagine programming a computer to recognize a chair. You’d need to enter a ton of instructions, and the result would still be a program that detects chairs that aren’t chairs and misses chairs that are. So how did we learn to detect chairs? Our parents pointed at a chair and said, “chair.” Then we thought we had that whole chair thing figured out, so we pointed at a table and said “chair,” which is when our parents told us that was a “table.” This is called reinforcement learning. The label “chair” gets connected to every chair we see, such that certain neural pathways are weighted and others aren’t. For “chair” to fire in our brains, what we perceive has to be close enough to our previous chair encounters. Essentially, our lives are big data filtered through our brains.

Deep Learning
The power of deep learning is that it’s a way of using massive amounts of data to get machines to operate more like we do without giving them explicit instructions. Instead of describing “chairness” to a computer, we instead just plug it into the Internet and feed it millions of pictures of chairs. It can then have a general idea of “chairness.” Next we test it with even more images. Where it’s wrong, we correct it, which further improves its “chairness” detection. Repetition of this process results in a computer that knows what a chair is when it sees it, for the most part as well as we can. The important difference though is that unlike us, it can then sort through millions of images within a matter of seconds.
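That correct-and-repeat loop is, at its core, supervised training by gradient descent. A deliberately tiny sketch in NumPy, with two invented numeric “features” standing in for pictures of chairs (the labelling rule is made up for the demo):

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented features per example: [has_flat_seat, has_legs], each scored 0..1.
# The made-up rule: both scores high means "chair" (label 1).
X = rng.uniform(0.0, 1.0, size=(200, 2))
y = ((X[:, 0] > 0.5) & (X[:, 1] > 0.5)).astype(float)

# Logistic regression trained by gradient descent: guess, get corrected, repeat.
w, b = np.zeros(2), 0.0
lr = 0.5
for _ in range(8000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # current guesses for "chairness"
    grad = p - y                            # how wrong each guess was
    w -= lr * X.T @ grad / len(y)           # the correction step
    b -= lr * grad.mean()

p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
print("training accuracy:", ((p > 0.5) == (y > 0.5)).mean())
```

A deep network replaces the two hand-picked features with raw pixels and many learned layers, but the learn-from-correction loop is the same.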

This combination of deep learning and big data has resulted in astounding accomplishments just in the past year. Aside from the incredible accomplishment of AlphaGo, Google’s DeepMind AI learned how to read and comprehend what it read through hundreds of thousands of annotated news articles. DeepMind also taught itself to play dozens of Atari 2600 video games better than humans, just by looking at the screen and its score, and playing games repeatedly. An AI named Giraffe taught itself how to play chess in a similar manner using a dataset of 175 million chess positions, attaining International Master level status in just 72 hours by repeatedly playing itself. In 2015, an AI even passed a visual Turing test by learning to learn in a way that enabled it to be shown an unknown character in a fictional alphabet, then instantly reproduce that letter in a way that was entirely indistinguishable from a human given the same task. These are all major milestones in AI.

However, despite all these milestones, when experts were asked to estimate when a computer would defeat a prominent Go player, the answer, even just months before Google announced AlphaGo’s victory, was essentially, “Maybe in another ten years.” A decade was considered a fair guess because Go is a game so complex that I’ll just let Ken Jennings of Jeopardy fame, another former champion defeated by AI, describe it:

Go is famously a more complex game than chess, with its larger board, longer games, and many more pieces. Google’s DeepMind artificial intelligence team likes to say that there are more possible Go boards than atoms in the known universe, but that vastly understates the computational problem. There are about 10¹⁷⁰ board positions in Go, and only 10⁸⁰ atoms in the universe. That means that if there were as many parallel universes as there are atoms in our universe (!), then the total number of atoms in all those universes combined would be close to the possibilities on a single Go board.

Such confounding complexity makes impossible any brute-force approach to scan every possible move to determine the next best move. But deep neural networks get around that barrier in the same way our own minds do, by learning to estimate what feels like the best move. We do this through observation and practice, and so did AlphaGo, by analyzing millions of professional games and playing itself millions of times. So the answer to when the game of Go would fall to machines wasn’t even close to ten years. The correct answer ended up being, “Any time now.”

Nonroutine Automation
Any time now. That’s the new go-to response in the 21st century for any question involving something new machines can do better than humans, and we need to try to wrap our heads around it.

We need to recognize what it means for exponential technological change to be entering the labor market space for nonroutine jobs for the first time ever. Machines that can learn mean nothing humans do as a job is uniquely safe anymore. From hamburgers to healthcare, machines can be created to successfully perform such tasks with less or no need for humans, and at lower costs than humans.

Amelia is just one AI being beta-tested in companies right now. Created by IPsoft over the past 16 years, she’s learned how to perform the work of call center employees. She can learn in seconds what takes us months, and she can do it in 20 languages. Because she’s able to learn, she’s able to do more over time. In one company putting her through her paces, she successfully handled one of every ten calls in the first week, and by the end of the second month she could resolve six of ten calls. Because of this, it’s been estimated that she could put 250 million people worldwide out of a job.

Viv is an AI coming soon from the creators of Siri who’ll be our own personal assistant. She’ll perform tasks online for us, and even function as a Facebook News Feed on steroids by suggesting we consume the media she’ll know we’ll like best. In doing all of this for us, we’ll see far fewer ads, and that means the entire advertising industry — that industry the entire Internet is built upon — stands to be hugely disrupted.

A world with Amelia and Viv — and the countless other AI counterparts coming online soon — in combination with robots like Boston Dynamics’ next-generation Atlas, is a world where machines can do all four types of jobs, and that means serious societal reconsiderations. If a machine can do a job instead of a human, should any human be forced, at the threat of destitution, to perform that job? Should income itself remain coupled to employment, such that having a job is the only way to obtain income, when jobs for many are entirely unobtainable? If machines are performing an increasing percentage of our jobs for us, and not getting paid to do them, where does that money go instead? And what does it no longer buy? Is it even possible that many of the jobs we’re creating don’t need to exist at all, and only do because of the incomes they provide? These are questions we need to start asking, and fast.

Decoupling Income From Work
Fortunately, people are beginning to ask these questions, and there’s an answer that’s building momentum. The idea is to put machines to work for us, but empower ourselves to seek out the forms of remaining work we as humans find most valuable, by simply providing everyone a monthly paycheck independent of work. This paycheck would be granted to all citizens unconditionally, and its name is universal basic income. By adopting UBI, aside from immunizing ourselves against the negative effects of automation, we’d also be decreasing the risks inherent in entrepreneurship and the sizes of the bureaucracies necessary to boost incomes. It’s for these reasons that it has cross-partisan support, and it is even now in the beginning stages of possible implementation in countries like Switzerland, Finland, the Netherlands, and Canada.

The future is a place of accelerating change. It seems unwise to keep looking at the future as if it were the past, assuming that just because new jobs have historically appeared, they always will. The WEF started 2016 off by estimating the creation of 2 million new jobs by 2020 alongside the elimination of 7 million. That’s a net loss of 5 million jobs. In a frequently cited paper, an Oxford study estimated the automation of about half of all existing jobs by 2033. Meanwhile self-driving vehicles, again thanks to machine learning, have the capability of drastically impacting all economies — especially the US economy, as I wrote last year about automating truck driving — by eliminating millions of jobs within a short span of time.

And now even the White House, in a stunning report to Congress, has put the probability at 83 percent that a worker making less than $20 an hour in 2010 will eventually lose their job to a machine. Even workers making as much as $40 an hour face odds of 31 percent. To ignore odds like these is tantamount to our now laughable “duck and cover” strategies for avoiding nuclear blasts during the Cold War.

All of this is why it’s those most knowledgeable in the AI field who are now actively sounding the alarm for basic income. During a panel discussion at the end of 2015 at Singularity University, prominent data scientist Jeremy Howard asked, “Do you want half of people to starve because they literally can’t add economic value, or not?” before going on to suggest, “If the answer is not, then the smartest way to distribute the wealth is by implementing a universal basic income.”

AI pioneer Chris Eliasmith, director of the Centre for Theoretical Neuroscience, warned about the immediate impacts of AI on society in an interview with Futurism, “AI is already having a big impact on our economies… My suspicion is that more countries will have to follow Finland’s lead in exploring basic income guarantees for people.”

Moshe Vardi expressed the same sentiment after speaking at the 2016 annual meeting of the American Association for the Advancement of Science about the emergence of intelligent machines, “we need to rethink the very basic structure of our economic system… we may have to consider instituting a basic income guarantee.”

Even Baidu’s chief scientist and founder of Google’s “Google Brain” deep learning project, Andrew Ng, during an onstage interview at this year’s Deep Learning Summit, expressed the shared notion that basic income must be “seriously considered” by governments, citing “a high chance that AI will create massive labor displacement.”

When those building the tools begin warning about the implications of their use, shouldn’t those wishing to use those tools listen with the utmost attention, especially when it’s the very livelihoods of millions of people at stake? If not then, what about when Nobel prize winning economists begin agreeing with them in increasing numbers?

No nation is yet ready for the changes ahead. High labor force non-participation leads to social instability, and a lack of consumers within consumer economies leads to economic instability. So let’s ask ourselves, what’s the purpose of the technologies we’re creating? What’s the purpose of a car that can drive for us, or artificial intelligence that can shoulder 60% of our workload? Is it to allow us to work more hours for even less pay? Or is it to enable us to choose how we work, and to decline any pay/hours we deem insufficient because we’re already earning the incomes that machines aren’t?

What’s the big lesson to learn, in a century when machines can learn?

I offer it’s that jobs are for machines, and life is for people.


How Machine Learning May Help Tackle Depression

By detecting trends that humans are unable to spot, researchers hope to treat the disorder more effectively.


Depression is a simple-sounding condition with complex origins that aren’t fully understood. Now, machine learning may enable scientists to unpick some of its mysteries in order to provide better treatment.

For patients to be diagnosed with Major Depressive Disorder, which is thought to be the result of a blend of genetic, environmental, and psychological factors, they have to display several of a long list of symptoms, such as fatigue or lack of concentration. Once diagnosed, they may receive cognitive behavioral therapy or medication to help ease their condition. But not every treatment works for every patient, as symptoms can vary widely.

Recently, many artificial intelligence researchers have begun to develop ways to apply machine learning to medical situations. Such approaches are able to spot trends and details across huge data sets that humans would never be able to, teasing out results that can be used to diagnose other patients. The New Yorker recently ran a particularly interesting essay about using the technique to make diagnoses from medical scans.

Similar approaches are being used to shed light on depression. A study published in Psychiatry Research earlier this year showed that MRI scans can be analyzed by machine-learning algorithms to establish the likelihood of someone suffering from the condition. By identifying subtle differences in scans of people who were and were not sufferers, the team found it could identify which unseen patients were suffering from major depressive disorder with roughly 75 percent accuracy.
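The study’s setup can be caricatured in a few lines: extract numeric features from each scan, train a classifier on labelled patients, and measure accuracy on patients the model has never seen. A hypothetical sketch with scikit-learn on synthetic “scan features” (the real study’s features and data are not reproduced here):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)

# 200 synthetic "patients", 10 invented scan-derived features each.
# Patients with the condition (label 1) differ subtly in the first feature.
n = 200
y = rng.integers(0, 2, n)
X = rng.normal(size=(n, 10))
X[:, 0] += 1.5 * y

# Hold out a quarter of the patients as "unseen".
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

clf = LogisticRegression().fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```

Accuracy well above chance on held-out patients is exactly the kind of evidence such studies report; the hard part in practice is extracting trustworthy features from the scans, not the classifier.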

Perhaps more interestingly, Vox reports that researchers from Weill Cornell Medical College are following a similar tack to identify different types of depression. By having machine-learning algorithms interrogate data captured when the brain is in a resting state, the scientists have been able to categorize four different subtypes of the condition that manifest as different mixtures of anxiety and lack of pleasure.

Not all attempts to infer such fine-grained diagnoses from MRI scans have been successful in the past, of course. But the use of AI does provide much better odds of spotting a signal than when individual doctors pore over scans. At the very least, the experiments lend weight to the notion that there are different types of depression.

The approach could be just one part of a broader effort to use machine learning to spot subtle clues related to the condition. Researchers at New York University’s Langone Medical Center, for instance, are using machine-learning techniques to pick out vocal patterns that are particular to people with depression, as well as conditions like PTSD.

And the idea that there may be many types of depression could prove useful, according to Vox. It notes another recent study carried out by researchers at Emory University that found that machine learning was able to identify different patterns of brain activity in fMRI scans that correlated with the effectiveness of different forms of treatment.

In other words, it may be possible not just to use AI to identify unique types of depression, but also to establish how best to treat them. Such approaches are still a long way from providing clinically relevant results, but they do show that it may be possible to identify better ways to help sufferers in the future.

In the meantime, some researchers are also trying to develop AIs to ensure that depression doesn’t lead to tragic outcomes like self-harm or suicide. Last month, for instance, Wired reported that scientists at Florida State University had developed machine-learning software that analyzes patterns in health records to flag patients that may be at risk of suicidal thoughts. And Facebook claims it can do something similar by analyzing user content—but it remains to be seen how effective its interventions might be.

Source: MIT Technology Review

Machine Learning and AI @ Facebook


Machine Learning (ML) and AI powering “Systems that Learn at scale” are at the bleeding edge of data science, deep learning, and predictive search today.

Everyone is jumping on this AI enabled engagement (“ambient experience and convenience”) trend in retail, banking and even healthcare.

Salesforce CEO Marc Benioff said at a recent conference: “This is a huge shift going forward, which is that everybody wants systems that are smarter, everybody wants systems that are more predictive, everybody wants everything scored, everybody wants to understand what’s the next best offer, next best opportunity, how to make things a little bit more efficient.”

Facebook is a case study of where AI/ML are being used to transform user engagement and experiences. I am starting to see many leading firms investing in ML Accelerators and Platforms as part of their data science strategy.

According to Facebook software engineer Jeffrey Dunn, “Many of the experiences and interactions people have on Facebook today are made possible with AI. When you log in to Facebook, we use the power of machine learning to provide you with unique, personalized experiences. ML models are part of ranking and personalizing News Feed stories, filtering out offensive content, highlighting trending topics, ranking search results, and much more.”

Take, for instance, photo display. Collectively, people will take 1 trillion photos this year with their devices. Most of these end up on Facebook, which has become the album of our everyday lives. For its 1.65 billion monthly active users, FB is data mining to surface the right photo at the right moment (a birthday, an anniversary, a past vacation, etc.). Talk about a scalable, data-driven digital engagement platform.

To ensure that other experiences on Facebook could also benefit from ML models, Facebook in late 2014 set out to redefine its ML/AI platforms from the ground up and to put state-of-the-art algorithms at the fingertips of every Facebook engineer. Until then, it was a chore for engineers without a strong ML/AI background to take advantage of the data and algorithms.

FBLearner Flow, as the software is known, is filled with algorithms (e.g., sparse matrix, neural networks, deep learning, metric learning, kernel learning, compositional models, non-linear structured prediction) developed by Facebook’s AI/ML experts that can be accessed by more general engineers across the company to build different products.

CoreML is a dedicated Facebook team established to work on state-of-the-art infrastructure and applied research that bridges the gap between research and product. CoreML has been working on FBLearner Flow since late 2014 to enable things like improved search ranking, text/sentiment classification, collaborative filtering and recommendation, payment-fraud detection, click-through-rate prediction, click-fraud detection, and spam detection.

Today, more than 25% of Facebook engineers are using these APIs to help them leverage artificial intelligence (AI) and machine learning (ML).

“FBLearner Flow [is] capable of easily reusing algorithms in different products, scaling to run thousands of simultaneous custom experiments, and managing experiments with ease,” wrote Facebook software engineer Jeffrey Dunn, in a blog post titled “Introducing FBLearner Flow: Facebook’s AI backbone.”

“FBLearner Flow is used by more than 25% of Facebook’s engineering team,” wrote Dunn. “Since its inception, more than a million models have been trained, and our prediction service has grown to make more than 6 million predictions per second.”

The FBLearner Flow platform is similar to Microsoft’s Azure Machine Learning service and Airbnb’s open source Airflow platform, according to VentureBeat. Google already leverages ML extensively. Take, for instance, Google Maps. When you ask about a location, you don’t just want to know how to get from point A to point B. Depending on the context, you may want to know what time is best to avoid the crowds, whether the store you’re looking for is open right now, or what the best things to do are in a destination you’re visiting for the first time.

Contextual AI is the most important technology anyone in the world is working on today, according to Dave Coplin, Microsoft’s chief envisioning officer, so it’s not all that surprising Facebook wants to put the technology into the hands of developers.


The Secret To A Perfectly Tailored Suit? For The Black Tux, It’s Machine Learning

Andrew Blackmon, cofounder and co-CEO of the menswear rental company, explains how a new algorithm is perfecting sizing and tailoring from afar.

When Andrew Blackmon and Patrick Coyne launched The Black Tux in 2013, they did so with the intent of helping men suit up without setting foot in a Men’s Wearhouse. The goal was to upgrade suit and tuxedo rentals by introducing higher-quality garments and placing an emphasis on fit, qualities that Blackmon says the market was severely — and surprisingly — lacking. The Black Tux promised to do all this as an online retailer, which meant men could rent formalwear with a few clicks. The company charges about $95 to $145 for suit and tuxedo rentals, depending on the design, while rentals of complete outfits — which include additional items like a dress shirt, leather shoes, cuff links, and a tie or bowtie — run between $150 and $215.
Recently, The Black Tux unveiled a new and improved fit algorithm that claims to accurately predict how a suit should be tailored, based solely on the measurements customers provide online. The secret sauce: machine learning, built on a robust data set accrued over the past few years. As Black Tux feeds the algorithm more data on how customers’ measurements have historically correlated with suit sizes and alterations, it gets better at predicting which cuts will work best for future customers.
I spoke to Blackmon about The Black Tux’s unique approach to fit and garment quality, how human intuition factors into the fitting process, and why the company appeals to men looking for a little more flair.
Black Tux isn’t the only company catering to men searching for affordable, well-designed tuxedos. What sets Black Tux apart from the likes of Generation Tux or Combatant Gentlemen?
Andrew Blackmon: A few major things separate us from them. Number one is quality of garments and supply chain. The majority of rental garments in the U.S. are made from one or two manufacturers. When my cofounder and I set out to start this business, one of the things we were sort of upset with was the actual garment, which felt more like your dad’s old suit than something you wanted to get married in or go to prom in.
When we researched, we realized that the manufacturers making all the rental suits in the U.S. are also making uniforms—so they’re making really thick, stiff [suits that] feel almost like cardboard sometimes. We decided we needed to disrupt the supply chain, and make something that is closer to what you would buy in high-end retail stores for $1,000 than what you would see on somebody in a hotel uniform. So we source all of our wool from Italy, from one of the best wool manufacturers. We’ve developed our own wool with them, and then we send our suits to some of the finest suit makers in the world that are making suits for brands like Ralph Lauren and Burberry.

The second way is leveraging e-commerce to give people a convenient experience. We stand behind our quality so much that we’ll allow our users to book a home try-on for free anytime before their event. We are the only rental company that sends the suits two weeks before a customer’s event. Most companies send it one week before. We feel like the customer wants a little bit more time with the garment. We’re also the only ones that have retail stores. Right now, we have two, and we’re opening a bunch more in 2017, for the customer or the bride or groom that really want the hands-on experience, to see the fabric before they rent.
Obviously a suit has a number of moving parts. How do you take the measurements that your customers enter online and accurately translate those into a jacket, pants, and shirt?
We have a machine learning algorithm that trains itself as we get more data and gets better over time. But when we first started the business, we basically built an algorithm in Excel. We used a bunch of data that we had sourced from the government, actually. The government had a large database of men who were measured for Army uniforms. It was really interesting: We were able to access this data and build our algorithm around that. But what that doesn’t take into account is that the army may not be representative of our customers, so over time, we’ve gotten a lot better at refining what we need to collect from users and how we can create this machine learning system.
The other thing I would add is that the way we look at data is not necessarily as an end-all. We like to have a personal touch. What the algorithm does for us is take the person’s height, weight, and body type from the questions they answer (or a person’s self-submitted measurements combined with their height and weight) and then say: based on the measurements of our garments and our historical performance and fit, this is what this person is most likely to wear, with X% confidence.
Sometimes, for people who are in between sizes, we want to ship something that we’re 99% confident is going to fit. If the confidence is lower, we have a team of fit specialists who will eyeball it and then make calls to customers, in cases where it looks like the customer submitted the wrong information or needed a little help with their own measurements. We like to use data not to just spit out something and say, ‘This is what this person is going to rent,’ but to allow us to make the best prediction on our end.
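The Black Tux hasn’t published how its fit algorithm works, but the workflow Blackmon describes (predict a size from historical customer data, attach a confidence score, and route low-confidence cases to a human fit specialist) can be sketched with a simple nearest-neighbors approach. Everything below is illustrative: the data, the 5-neighbor vote, and the 80% threshold are assumptions, not the company’s actual method.

```python
from collections import Counter
import math

# Hypothetical historical fit data: (height_in, weight_lb) -> jacket size
# that ultimately fit. Purely illustrative, not The Black Tux's real data.
HISTORY = [
    ((68, 150), "38R"), ((69, 155), "38R"), ((70, 160), "40R"),
    ((70, 165), "40R"), ((71, 170), "40R"), ((72, 180), "42R"),
    ((72, 185), "42R"), ((73, 190), "42L"), ((74, 200), "44L"),
    ((75, 210), "44L"), ((68, 175), "40S"), ((70, 175), "40R"),
]

def predict_size(height, weight, k=5):
    """Return (size, confidence): the majority size among the k nearest
    historical customers, and the fraction of those neighbors that agree."""
    neighbors = sorted(
        HISTORY,
        # Scale weight so 5 lb counts roughly like 1 inch of height.
        key=lambda rec: math.hypot(rec[0][0] - height, (rec[0][1] - weight) / 5),
    )[:k]
    votes = Counter(size for _, size in neighbors)
    size, count = votes.most_common(1)[0]
    return size, count / k

def route(height, weight, threshold=0.8):
    """Ship automatically when confidence clears the threshold; otherwise
    flag the order for a human fit specialist, as the interview describes."""
    size, conf = predict_size(height, weight)
    return (size, "auto-ship") if conf >= threshold else (size, "specialist review")
```

A customer squarely inside one size cluster gets shipped automatically, while someone between sizes gets routed to a specialist, mirroring the human-in-the-loop step Blackmon outlines.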

Height is just one of the factors that feeds into The Black Tux’s fit algorithm.
What happens when somebody gets their outfit, and they’re not happy with the fit? You’ve said most of your customers order their outfits at least a month in advance, but you also accept rush orders up to four days prior to a customer’s event. How do you figure out what the issue is without being there in person, and make sure they have an outfit that fits in time for their event?
This is also where our team of fit specialists will come in. The tuxedo is due to arrive two weeks before the person’s event, so we have enough time to get them something else. Less than 8% of garments actually don’t fit on the first try, so we have a pretty high accuracy rate. But when they don’t, the customer calls our fit specialist team. They can do a phone or a video consultation, where they’ll actually show the fit specialist, ‘Hey here’s what doesn’t fit, what do you recommend?’ The customer actually keeps everything that they originally had, and we just send them—for free—a replacement item, and then they send everything back together after their event. So it’s really convenient for the customer. Most of the time, it’s just, ‘My shirt neck is too small,’ or ‘My jacket sleeves are too long.’ The other option we have is if the customer is in a bind, and doesn’t have time to do this—say, the customer got their shipment 14 days before their event, but didn’t open the box until 12 or 13 days before—we’ll allow them to take it to a local tailor, and we’ll refund their bill.

Looks from The Black Tux
It sounds like you adjust these garments pretty regularly to fit different body types. What does your inventory look like? Do your suits tend to have shorter lifespans?
We carry a very large bell curve of sizes. So where most rental outfits would probably just carry a 42 regular and a 42 long, we would carry a 42 extra short, short, long, extra long, extra-extra long. Our goal, actually, is that we wouldn’t need to tailor garments—that because we have so much variability in each size and such a sophisticated algorithm, we would be able to grab something off the rack and send it to the person. But of course there are certain body types that require tailoring, and we have to tailor, I think, somewhere between 5% to 10% of our garments before they go out. What happens is it goes out to the customer, comes back to us, and will be flagged in our system as “this garment was tailored.” And then we’ll reverse tailor it to what the size actually was. There are actually a lot of tailoring techniques that do not damage the garment—say, certain stitching machines that can still preserve the life of the garment and make adjustments. If you ever need to cut the fabric when you’re tailoring something, certainly that damages it, but we try not to do that.
One recent development is that some grooms have taken a page out of the bridal playbook and have started doing outfit changes between the wedding and reception. What are other changes you’ve noticed in terms of what grooms now want, and how are you trying to address them through Black Tux’s offerings?
We launched this company three and a half years ago. In that time, the awareness of grooms has greatly increased, and the desire of grooms to customize their wedding looks has increased. The tone is more, ‘Hey, here’s exactly what I want. I want to wear this for the ceremony and maybe this for the reception,’ or, ‘I want to wear a peak lapel jacket with a diamond-shaped bowtie.’ They’re coming to us knowing what they want. What we’ve decided to do because of that is produce smaller runs of collections that will fit these types of events. So we produce several different light dinner jackets—some with a white lapel, some with a black shawl collar. We’re never producing a huge amount of different styles, but we’re curating ones that we believe to be fashionable, and these are inspired by men’s shows that our designers are taking part in. So we’re basically just saying: Okay, we see that the guy has more discerning tastes right now. Let’s become the experts on formal wear and offer him things that he may not have known existed and things that he may be looking for that you can’t currently rent.
For a lot of brands, renting a suit is a pathway to buying the suit. Is a purchase option something you are considering?
I think you’re absolutely right: If you look at Men’s Wearhouse, their business is essentially built on their rental business. They have these rental customers and convert a pretty high percentage of them to buy something. We will experiment with that. We have very high demand; a lot of our customers call us and say, ‘Hey can I just keep this suit? This is a great suit—first time I’ve worn something that’s high quality, and I want to buy it from you.’ Right now, we don’t do it because we have a lot of demand for our rental service, and we need to get the suits back. But it is something that we’ve been thinking about.

So what do you think the future of suit buying will look like?
I think there will always be a market for rentals and sales. I think the market for rentals is expanding because people like us and others are offering a better experience, better quality, better assortment. If you own a tuxedo or suit, you’re going to wear that same thing to every event. Say you spend $1,000 on a suit. You wear that same one to 10 events—or you could spend, with us, $1,000 and wear 10 different suits to 10 different events. I think guys are more aware of that, and they really enjoy that ability.
Where I see the market going is definitely more online, but the reason we’re doing things like home try-on and stores is that a completely online experience isn’t possible. I think this market will always be a hybrid of online and offline. A lot of people consider this a big purchase, and they want an experience with it—either via home try-on or via store—before they rent. We’re aware of that, and that’s why we’re opening our own retail locations.

Pavithra Mohan