Neural networks made easy

OnTheGo

If you’ve dug into any articles on artificial intelligence, you’ve almost certainly run into the term “neural network.” Modeled loosely on the human brain, artificial neural networks enable computers to learn from being fed data.

The efficacy of this powerful branch of machine learning, more than anything else, has been responsible for ushering in a new era of artificial intelligence, ending a long-lived “AI Winter.” Simply put, the neural network may well be one of the most fundamentally disruptive technologies in existence today.

This guide to neural networks aims to give you a conversational level of understanding of deep learning. To this end, we’ll avoid delving into the math and instead rely as much as possible on analogies and animations.

Thinking by brute force
One of the early schools of AI taught that if you load up as much information as possible into a powerful computer and give it as many directions as possible to understand that data, it ought to be able to “think.” This was the idea behind chess computers like IBM’s famous Deep Blue: by programming the rules of chess and a library of known strategies into a computer, and then giving it enough processing power to search moves exhaustively, IBM programmers created a machine that, in theory, could calculate every possible move and outcome far enough into the future to outplay its opponent. This actually works, as chess masters learned in 1997.

With this sort of computing, the machine relies on fixed rules that have been painstakingly pre-programmed by engineers — if this happens, then that happens; if this happens, do this — and so it isn’t human-style flexible learning as we know it at all. It’s powerful supercomputing, for sure, but not “thinking” per se.

Teaching machines to learn
Over the past decade, scientists have resurrected an old concept that doesn’t rely on a massive encyclopedic memory bank, but instead on a simple and systematic way of analyzing input data that’s loosely modeled after human thinking. Known as deep learning, or neural networks, this technology has been around since the 1940s, but because of today’s exponential proliferation of data — images, videos, voice searches, browsing habits and more — along with supercharged and affordable processors, it is at last able to begin to fulfill its true potential.

Machines — they’re just like us!
An artificial (as opposed to human) neural network (ANN) is an algorithmic construct that enables machines to learn everything from voice commands and playlist curation to music composition and image recognition. The typical ANN consists of thousands of interconnected artificial neurons, which are stacked sequentially in rows known as layers, forming millions of connections. In many cases, layers are only interconnected with the layer of neurons before and after them via inputs and outputs. (This is quite different from neurons in a human brain, which are interconnected every which way.)

This layered ANN is one of the main ways to go about machine learning today, and feeding it vast amounts of labeled data enables it to learn how to interpret that data like (and sometimes better than) a human.

Take, for example, image recognition, which relies on a particular type of neural network known as the convolutional neural network (CNN) — so called because it uses a mathematical process known as convolution to be able to analyze images in non-literal ways, such as identifying a partially obscured object or one that is viewable only from certain angles. (There are other types of neural networks, including recurrent neural networks and feed-forward neural networks, but these are less useful for identifying things like images, which is the example we’re going to use below.)

All aboard the network training
So how do neural networks learn? Let’s look at a very simple, yet effective, procedure called supervised learning. Here, we feed the neural network vast amounts of training data, labeled by humans so that a neural network can essentially fact-check itself as it’s learning.

Let’s say this labeled data consists of pictures of apples and oranges. The pictures are the data; “apple” and “orange” are the labels, depending on the picture. As pictures are fed in, the network breaks them down into their most basic components, such as edges, textures and shapes. As a picture propagates through the network, these basic components are combined to form more abstract concepts, such as curves and different colors which, when combined further, start to look like a stem, an entire orange, or both green and red apples.

At the end of this process, the network attempts to make a prediction as to what’s in the picture. At first, these predictions will appear as random guesses, as no real learning has taken place yet. If the input image is an apple, but “orange” is predicted, the network’s inner layers will need to be adjusted.

The adjustments are carried out through a process called backpropagation to increase the likelihood of predicting “apple” for that same image the next time around. This happens over and over until the predictions are more or less accurate and don’t seem to be improving. Just as when parents teach their kids to identify apples and oranges in real life, for computers too, practice makes perfect. If, in your head, you just thought “hey, that sounds like learning,” then you may have a career in AI.
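A rough sketch of what “fact-checking itself” means, in Python (our illustration, not code from the article): each training picture comes paired with a human-provided label, and the network’s guess is simply compared against that label.

    # Hypothetical labeled training data: each example pairs a picture with its label.
    training_data = [
        ("photo_001.jpg", "apple"),
        ("photo_002.jpg", "orange"),
        ("photo_003.jpg", "apple"),
    ]

    def predict(picture):
        """Stand-in for the network's guess; an untrained network guesses almost at random."""
        return "orange"

    # Supervised learning: compare every guess against the human-provided label.
    correct = sum(1 for picture, label in training_data if predict(picture) == label)
    print(f"accuracy: {correct / len(training_data):.0%}")   # 33% here; training should push this up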

So many layers…

Typically, a convolutional neural network has four essential layers of neurons besides the input and output layers:
■ Convolution
■ Activation
■ Pooling
■ Fully connected
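To make that stack concrete, here is a minimal sketch in Python using PyTorch (our choice of library; nothing in the article prescribes it): one convolution layer, one activation, one pooling layer, and a fully connected layer that ends in two outputs, one for “apple” and one for “orange.”

    import torch
    import torch.nn as nn

    model = nn.Sequential(
        nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3),  # convolution: 16 learnable filters
        nn.ReLU(),                                                  # activation: highlight useful responses
        nn.MaxPool2d(kernel_size=2),                                # pooling: shrink the feature maps
        nn.Flatten(),
        nn.Linear(16 * 15 * 15, 2),                                 # fully connected: "apple" vs. "orange"
    )

    fake_image = torch.randn(1, 3, 32, 32)   # one 32x32 RGB stand-in picture
    scores = model(fake_image)               # two raw scores, one per class
    print(scores.shape)                      # torch.Size([1, 2])

A real network would stack several of these convolution, activation and pooling groups before the fully connected layer, as the sections below describe.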

Convolution
In the initial convolution layer or layers, thousands of neurons act as the first set of filters, scouring every part and pixel in the image, looking for patterns. As more and more images are processed, each neuron gradually learns to filter for specific features, which improves accuracy.

In the case of apples, one filter might be focused on finding the color red, while another might be looking for rounded edges and yet another might be identifying thin, stick-like stems. If you’ve ever had to clean out a cluttered basement to prepare for a garage sale or a big move — or worked with a professional organizer — then you know what it is to go through everything and sort it into different-themed piles (books, toys, electronics, objets d’art, clothes). That’s sort of what a convolutional layer does with an image by breaking it down into different features.

What’s particularly powerful — and one of the neural network’s main claims to fame — is that unlike earlier AI methods (Deep Blue and its ilk), these filters aren’t hand designed; they learn and refine themselves purely by looking at data.

The convolution layer essentially creates maps — different, broken-down versions of the picture, each dedicated to a different filtered feature — that indicate where its neurons see an instance (however partial) of the color red, stems, curves and the various other elements of, in this case, an apple. But because the convolution layer is fairly liberal in its identifying of features, it needs an extra set of eyes to make sure nothing of value is missed as a picture moves through the network.
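For the curious, the “map” produced by a single filter can be sketched in a few lines of Python with NumPy. The 3x3 filter below is hand-made for illustration (it responds to vertical edges); in a real convolution layer the filter values are learned from data, as noted above.

    import numpy as np

    image = np.random.rand(8, 8)          # a tiny stand-in for a grayscale picture
    kernel = np.array([[-1, 0, 1],        # a hand-made filter that responds to vertical edges
                       [-1, 0, 1],
                       [-1, 0, 1]])

    # Slide the filter across the image and record its response at every position.
    feature_map = np.zeros((6, 6))
    for i in range(6):
        for j in range(6):
            patch = image[i:i+3, j:j+3]
            feature_map[i, j] = np.sum(patch * kernel)

    print(feature_map.round(2))           # one "map" showing where the edge pattern appears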

Activation
One advantage of neural networks is that they are capable of learning in a nonlinear way, which, in mathless terms, means they are able to spot features in images that aren’t quite as obvious — pictures of apples on trees, some of them under direct sunlight and others in the shade, or piled into a bowl on a kitchen counter. This is all thanks to the activation layer, which serves to more or less highlight the valuable stuff — both the straightforward and harder-to-spot varieties.
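One very common activation function (the article doesn’t name one, so this is our assumption) is the rectified linear unit, or ReLU, which keeps positive filter responses and zeroes out the rest. A minimal NumPy sketch:

    import numpy as np

    def relu(feature_map):
        """Keep positive filter responses, zero out everything else."""
        return np.maximum(0, feature_map)

    responses = np.array([[-0.8,  0.2],
                          [ 1.5, -0.1]])
    print(relu(responses))   # keeps 0.2 and 1.5, zeroes the two negative responses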

In the world of our garage-sale organizer or clutter consultant, imagine that from each of those separated piles of things we’ve cherry-picked a few items — a handful of rare books, some classic t-shirts from our college days to wear ironically — that we might want to keep. We stick these “maybe” items on top of their respective category piles for another consideration later.

Pooling
All this “convolving” across an entire image generates a lot of information, and this can quickly become a computational nightmare. Enter the pooling layer, which shrinks it all into a more general and digestible form. There are many ways to go about this, but one of the most popular is “max pooling,” which edits down each feature map into a Reader’s Digest version of itself, so that only the best examples of redness, stem-ness or curviness are featured.
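A rough NumPy sketch of max pooling (our illustration): each non-overlapping 2x2 patch of a feature map is replaced by its single largest value, so only the strongest evidence of redness or stem-ness survives.

    import numpy as np

    def max_pool_2x2(feature_map):
        """Replace each non-overlapping 2x2 patch with its largest value."""
        h, w = feature_map.shape
        trimmed = feature_map[:h - h % 2, :w - w % 2]       # drop an odd row/column if present
        blocks = trimmed.reshape(trimmed.shape[0] // 2, 2, trimmed.shape[1] // 2, 2)
        return blocks.max(axis=(1, 3))

    fm = np.array([[0.1, 0.9, 0.2, 0.0],
                   [0.3, 0.4, 0.8, 0.1],
                   [0.0, 0.2, 0.5, 0.7],
                   [0.6, 0.1, 0.3, 0.2]])
    print(max_pool_2x2(fm))   # [[0.9 0.8]
                              #  [0.6 0.7]]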

In the garage spring cleaning example, if we were using famed Japanese clutter consultant Marie Kondo’s principles, our pack rat would have to choose only the things that “spark joy” from the smaller assortment of favorites in each category pile, and sell or toss everything else. So now we still have all our piles categorized by type of item, but only consisting of the items we actually want to keep; everything else gets sold. (And this, by the way, ends our de-cluttering analogy to help describe the filtering and downsizing that goes on inside a neural network.)

At this point, a neural network designer can stack subsequent layered configurations of this sort — convolution, activation, pooling — and continue to filter down images to get higher-level information. In the case of identifying an apple in pictures, the images get filtered down over and over, with initial layers showing just barely discernable parts of an edge, a blip of red or just the tip of a stem, while subsequent, more filtered layers will show entire apples. Either way, when it’s time to start getting results, the fully connected layer comes into play.

Fully connected
Now it’s time to start getting answers. In the fully connected layer, each reduced, or “pooled,” feature map is “fully connected” to output nodes (neurons) that represent the items the neural network is learning to identify. If the network is tasked with learning how to spot cats, dogs, guinea pigs and gerbils, then it’ll have four output nodes. In the case of the neural network we’ve been describing, it’ll just have two output nodes: one for “apples” and one for “oranges.”

If the picture that has been fed through the network is of an apple, and the network has already undergone some training and is getting better with its predictions, then it’s likely that a good chunk of the feature maps contain quality instances of apple features. This is where these final output nodes start to fulfill their destiny, with a reverse election of sorts.

The job (which they’ve learned “on the job”) of both the apple and orange nodes is essentially to “vote” for the feature maps that contain their respective fruits. So, the more the “apple” node thinks a particular feature map contains “apple” features, the more votes it sends to that feature map. Both nodes have to vote on every single feature map, regardless of what it contains. So in this case, the “orange” node won’t send many votes to any of the feature maps, because they don’t really contain any “orange” features. In the end, the node that has sent the most votes out — in this example, the “apple” node — can be considered the network’s “answer,” though it’s not quite that simple.

Because the same network is looking for two different things — apples and oranges — the final output of the network is expressed as percentages. In this case, we’re assuming that the network is already a bit down the road in its training, so the predictions here might be, say, 75 percent “apple” and 25 percent “orange.” Or, if it’s earlier in the training, it might be more inaccurate and determine that it’s 20 percent “apple” and 80 percent “orange.” Oops.
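Those percentages typically come from a softmax step applied to the output nodes’ raw “vote” totals; the numbers below are made up to match the example above, and the softmax itself is our assumption about how such a network would usually be finished off.

    import numpy as np

    def softmax(scores):
        """Turn raw output-node scores into percentages that sum to 100%."""
        exps = np.exp(scores - np.max(scores))   # subtract the max for numerical stability
        return exps / exps.sum()

    raw_votes = np.array([2.0, 0.9])             # made-up totals for the "apple" and "orange" nodes
    probs = softmax(raw_votes)
    print(f"apple: {probs[0]:.0%}, orange: {probs[1]:.0%}")   # apple: 75%, orange: 25%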

If at first you don’t succeed, try, try, try again
So, in its early stages, the neural network spits out a bunch of wrong answers in the form of percentages. The 20 percent “apple” and 80 percent “orange” prediction is clearly wrong, but since this is supervised learning with labeled training data, the network is able to figure out where and how that error occurred through a system of checks and balances known as backpropagation.

Now, this is a mathless explanation, so suffice it to say that backpropagation sends feedback to the previous layer’s nodes about just how far off the answers were. That layer then sends the feedback to the previous layer, and on and on like a game of telephone until it’s back at convolution. Tweaks and adjustments are made to help each neuron better identify the data at every level when subsequent images go through the network.
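In code, those “tweaks and adjustments” are small changes to the network’s weights, nudged in whatever direction reduces a measured error. A minimal sketch of one training step in PyTorch (our illustration; the tiny model here stands in for the CNN described above):

    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 2))   # stand-in for the full CNN
    loss_fn = nn.CrossEntropyLoss()                          # measures how far off the prediction was
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

    image = torch.randn(1, 3, 32, 32)     # stand-in picture
    label = torch.tensor([0])             # 0 = "apple", 1 = "orange" (our convention)

    scores = model(image)                 # forward pass: make a prediction
    loss = loss_fn(scores, label)         # compare it to the human-provided label
    loss.backward()                       # backpropagation: send error feedback back through the layers
    optimizer.step()                      # tweak every weight a little
    optimizer.zero_grad()                 # reset the feedback before the next picture

Repeating this step over many labeled pictures is, in essence, the “over and over” described next.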

This process is repeated over and over until the neural network is identifying apples and oranges in images with increasing accuracy, ideally approaching 100 percent correct predictions, though many engineers consider 85 percent to be acceptable. And when that happens, the neural network is ready for prime time and can start identifying apples in pictures professionally.

Source: techcrunch.com

Big Data Is Filling Gender Data Gaps—And Pushing Us Closer to Gender Equality

OnTheGo

Imagine you are a government official in Nairobi, working to deploy resources to close educational achievement gaps throughout Kenya. You believe that the literacy rate varies widely in your country, but the available survey data for Kenya doesn’t include enough data about the country’s northern regions. You want to know where to direct programmatic resources, and you know you need detailed information to drive your decisions.

But you face a major challenge—the information does not exist.

Decision-makers want to use good data to inform policy and programs, but in many scenarios, quality, complete data is not available. And though this is true for large swaths of people around the world, this lack of information acutely impacts girls and women, who are often overlooked in data collection even when traditional surveys count their households. If we do not increase the availability and use of gender data, policymakers will not be able to make headway on national and global development agendas.

Gender data gaps are multiple and intersectional, and although some are closing, many persist despite the simultaneous explosion of new data sources emerging from new technologies. So, what if there was a way to utilize these new data sources to count those women and girls, and men and boys, who are left out by traditional surveys and other conventional data collection methods?

Big Data Meets Gender Data
“Big data” refers to large amounts of data collected passively from digital interactions, with great variety and at high velocity. Cell phone use, credit card transactions, and social media posts all generate big data, as does satellite imagery, which captures geospatial data.

In recent years, researchers have been examining the potential of big data to complement traditional data sources, but Data2X entered this space in 2014 because we observed that no one was investigating how big data could help increase the scope, scale, and quality of data about the lives of women and girls.

Data2X is a collaborative technical and advocacy platform that works with UN agencies, governments, civil society, academics, and the private sector to close gender data gaps, promote expanded and unbiased gender data collection, and use gender data to improve policies, strategies, and decision-making. We host partnerships which draw upon technical expertise, in-country knowledge, and advocacy insight to tackle and rectify gender data gaps. Across partnerships, this work necessitates experimental approaches.

And so, with this experimental approach in hand, and with support from our funders, the William and Flora Hewlett Foundation and the Bill & Melinda Gates Foundation, Data2X launched four research pilots to build the evidence base for big data’s possible contributions to filling gender data gaps.

Think back to the hypothetical government official in Kenya trying to determine literacy rates in northern Kenya. This time, a researcher tells her that it is possible: by using satellite imagery to identify correlations between geospatial elements and well-being outcomes, the researcher can map the literacy rate for women across the entire country.

This is precisely what Flowminder Foundation, one of the four partner organizations in Data2X’s pilot research, was able to do. Researchers harnessed satellite imagery to fill data gaps, finding correlations between geospatial elements (such as accessibility, elevation, or distance to roads) and social and health outcomes for girls and women as reported in traditional surveys (such as literacy, access to contraception, and child stunting rates). Flowminder then mapped these phenomena, producing continuous landscapes of gender inequality that can give policymakers timely information on the regions with the greatest inequality of outcomes and the highest need for resources.
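The report does not publish code, but the general shape of the approach, fitting a model that relates geospatial covariates to a survey-measured outcome and then predicting that outcome wherever the covariates are available, can be sketched roughly in Python. Everything below (the covariates, the values, and the simple linear model) is hypothetical; Flowminder’s actual models are more sophisticated.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Hypothetical survey clusters: geospatial covariates plus a survey-measured outcome.
    elevation = np.array([120.0, 300.0, 80.0, 450.0, 200.0])   # metres
    dist_road = np.array([2.0, 15.0, 1.0, 30.0, 8.0])          # km to nearest road
    literacy  = np.array([0.85, 0.55, 0.90, 0.35, 0.65])       # women's literacy rate per cluster

    X = np.column_stack([elevation, dist_road])
    model = LinearRegression().fit(X, literacy)

    # Predict literacy for a grid cell that no survey covered (made-up covariates):
    print(model.predict([[350.0, 22.0]]))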

This finding and many others are outlined in a new Data2X report, “Big Data and the Well-Being of Women and Girls,” which for the first time showcases how big data sources can fill gender data gaps and inform policy on girls’ and women’s lives. In addition to the individual pilot research findings outlined in the report, there are four high-level takeaways from this first phase of our work:

Country Context is Key: The report affirms that in developing and implementing approaches to filling gender gaps, country context is paramount – and demands flexible experimentation. In the satellite imagery project, researchers’ success with models varied by country: models for modern contraceptive use performed strongly in Tanzania and Nigeria, whereas models for girls’ stunting rates were inadequate for all but one pilot country.

To Be Useful, Data Must Be Actionable: Even with effective data collection tools in place, data must be demand-driven and actionable for policymakers and in-country partners. Collaborating with National Statistics Offices, policymakers must articulate what information they need to make decisions and deploy resources to resolve gender inequalities, as well as their capacity to act on highly detailed data.

One Size Doesn’t Fit All: In filling gender data gaps, there is no one-size-fits-all solution. Researchers may find that in one setting, a combination of official census data and datasets made available through mobile operators sufficiently fills data gaps and provides information which meets policymakers’ needs. In another context, satellite imagery may be most effective at highlighting under-captured dimensions of girls’ and women’s lives in under-surveyed or resource-poor areas.

Ground Truth: Big data cannot stand alone. Researchers must “ground truth” their findings, using conventional data sources to ensure that digital data enhances, rather than replaces, information gathered from household surveys or official censuses. We can never rely solely on data sources that are implicitly biased toward women and girls who face fewer barriers to using technology and have higher rates of literacy, because doing so leaves out populations with fewer resources.

Big data offers great promise to complement information captured in conventional data sources and provide new insights into potentially overlooked populations. There is significant potential for future, inventive applications of these data sources, opening up opportunities for researchers and data practitioners to apply big data to pressing gender-focused challenges.

When actionable, context-specific, and used in tandem with existing data, big data can strengthen policymakers’ evidence base for action, fill gender data gaps, and advance efforts to improve outcomes for girls and women.

Source: cfr.org

A Robot Took My Job – Was It a Robot or AI?

The argument in the popular press about robots taking our jobs fails in the most fundamental way: it does not differentiate between robots and AI. Here we try to identify how each contributes to job loss and what the future of AI-enhanced robots means for employment.

OnTheGo

There’s been a lot of contradictory opinion in the press recently about future job loss from robotics and AI. Opinions range from Bill Gates’ hand-wringing assertion that we should slow this down by taxing robots to Treasury Secretary Steve Mnuchin’s seemingly Luddite observation: “In terms of artificial intelligence taking over the jobs, I think we’re so far away from that that it’s not even on my radar screen. I think it’s 50 or 100 more years.”

But these extreme end points of the conversation aren’t the worst of it.  There is essentially no effort in the popular press to differentiate ‘robot’ from ‘AI’.

AI-enhanced robots are much in the press and much on the minds of data scientists. So is future job loss driven principally by advances in deep learning (AI), or by robots as they currently exist?

Automation Always Meant Shifting Job Needs

Job loss, or more correctly the redistribution of tasks and skills due to automation, has been going on at least since water wheels replaced human labor in ancient civilizations.

Ned Ludd was the English apprentice who smashed two automated stocking frames in 1779 to protest the loss of manual work, giving rise to the political movement and the term Luddite. (Note that the automated stocking frame was invented in 1589, so almost 200 years elapsed before the social protest.)

Then there is John Maynard Keynes’s frequently cited prediction of widespread technological unemployment “due to our discovery of means of economising the use of labour outrunning the pace at which we can find new uses for labour” (1933).

What has not changed is that the skills we learned in our youth may not be valued labor throughout our working lives. As this change speeds up, it creates risk for the employed. In the last 20 or 30 years it has driven a redistribution of employment into higher-pay, high-skill jobs and an emptying out of repetitive manufacturing tasks into largely lower-paying service jobs.

Several recent academic studies have put the percentage of US jobs at risk from automation in the range of 35% to 45%. Keep in mind that these projections are based on the jobs available in the economy as of roughly a 2010 baseline and don’t take into account how new or existing workers or industries will adjust their employment needs in a fast-changing environment. Jobs available in 2010 may simply not be needed (automation or not) in 2020, so the percentages are likely somewhat overstated.

There are a number of ways to spot the vulnerable jobs and we discussed several techniques used by VCs in How to Put AI to Work.

Is the Cause Robotics, AI, or Both

The popular perception of robots is that they are quickly getting smarter and much more capable. Within data science we know that the hockey-stick curve in capabilities we are currently experiencing is largely a function of deep learning techniques, which are popularly grouped together as AI.

We also know that commercial applications of deep learning have been successfully adopted only since about 2015. We pick that threshold date as the year when image classification by deep learning finally passed the human accuracy threshold (95% in the annual ImageNet competition), and pretty much the same time that speech and text processing hit about 99% accuracy, enabling the huge proliferation of chatbots.

Another candidate year might be 2011, marking IBM Watson’s victory on Jeopardy. However, important as Watson is in AI, it’s just beginning to gain commercial traction today. Other deep learning technologies, like reinforcement learning and generative adversarial networks, are even further from general commercial adoption.

The studies published about job loss speak in generalities about AI and ML but don’t attempt to show that adoption of AI enabled robots will accelerate job replacement.

It’s About the Robots – and they’re Not that Smart

Robots have been taking human jobs for decades. The earliest industrial robot, the Unimate, was installed on a General Motors (GM) assembly line in 1961. By 1968 robots were available with 12 joints and could lift a person. By 1974 robots were sophisticated enough to assemble an automotive water pump. Although these operated without human intervention and could be reprogrammed for other tasks, their capabilities did not advance significantly for another 20 years.

One of the most interesting and cautionary stories in robotics is GM’s all-in adoption of industrial robots in the 1980s. GM fully committed to creating a ‘lights-out’ factory, meaning one that would operate completely without humans. GM’s experiment was a failure and the economic loss was considerable. The reason: the company got ahead of what robots could actually deliver. Although robots could be programmed to place, weld, paint, and screw, they could not yet adapt to small variations in positioning between the incoming parts and the car on the assembly line. Fit and finish, and the resultant rework, were not up to the standards of human workers.

What changed in the 90’s was the arrival of machine vision. Not the sophisticated image processing that we have today, but the ability of the robot to take in camera signals and adjust the physical positioning of the item ever so slightly to achieve consistently precise alignment of parts.

In the late 90’s robots also gained sophistication in grasping with force feedback sensors allowing more delicate operations.

The significance is that none of this rises to the level of data science. Controls based on predictive analytics, which allow industrial robots to make more sophisticated decisions by weighing multiple inputs to decide on multiple outputs, did not become common until the 2000s with early IoT. Even so, the economic demand for this level of capability covers only a small fraction of total industrial robots.

What has happened is that the cost of industrial robots has been decreasing by about 10% a year, so that between 1990 and 2004 the price of robots fell 75%, and it continues to fall today. If you are repetitively placing 10 or 20 fasteners in a manufactured device, it no longer makes economic sense to pay an employee $17 to $25 per hour to do that. Either use a low-cost robot or outsource the labor to a country where the average manufacturing wage is $3/hour.
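Those two figures are consistent: a price falling about 10 percent a year over the 14 years from 1990 to 2004 ends up at roughly a quarter of its starting level. A quick check in Python:

    # Compounding check: ~10% annual price decline over 1990-2004 (14 years).
    remaining = 0.90 ** 14
    print(f"price remaining: {remaining:.0%}, cumulative drop: {1 - remaining:.0%}")
    # price remaining: 23%, cumulative drop: 77% -- close to the cited 75%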

Job disruption is a function of the continued adoption of mostly dumb industrial robots and not AI or even predictive analytics.

Will AI Enhanced Robots Accelerate Job Loss

Won’t AI-enhanced robots make a difference in the future? Yes, they will. However, thinking about this requires splitting the question in several new ways. The first split is over what we consider to be a robot.

Physical versus Virtual Robots

Robots by definition must operate without human intervention and be reprogrammable for different tasks. The most important requirement, however, is that they be able to perform human-like tasks, replacing the need for a human.

Robots have traditionally meant physical entities like giant mechanical arms or Rosie the Robot from The Jetsons. But increasingly, virtual robots are also able to act in the world and replace humans. In particular, these virtual robots rely on deep learning for their capabilities.

  • Alexa, Siri, Google Assistant and other chatbots can act on the physical world to operate lights, thermostats and locks, or even order goods over the internet.
  • Watson, a question-answering machine, inputs images, text, or speech and outputs images, text, or speech. Its current high-value targets are medical diagnoses, such as interpreting radiology images.
  • Programmatic advertising recognizes and profiles millions of page views in real time and automatically places advertising on the page in a matter of milliseconds. It fulfills the economic contract between advertiser and media without human intervention.
  • Image processing enables automatic error and damage detection.
  • Fully automated Customer Service Representatives reduce costs 60% to 80% compared to outsourced call centers.
  • Systems that automatically take action for fraud detection, condition monitoring, or anomaly detection are increasingly common.

In the physical world, the AI enhancement of robots has mostly to do with mobile robotics.  Industrial robots are almost universally bolted to the floor.  Mobile robotics however have use in both wage-replacing and personal use applications.

  • Self-driving personal use or shared service cars.
  • Self-driving trucks like Otto currently on the roads in Nevada delivering goods.
  • Mobile warehouse pick-and-pack robots.
  • Unmanned aerial vehicles, drones, and ships.
  • Robotic exoskeletons for the disabled or to allow humans to perform with extra strength or stamina.
  • Even your Roomba is smart enough to fall in this category.

Increasingly, this conversation requires us to expand our thinking about robots, particularly to include AI-enhanced virtual robots.

Labor Replacing Versus Personal Use Enhanced Robotics

The interesting thing about the examples above is that, based on units deployed, far more AI-enhanced robots, particularly in the virtual category, are designed to enhance our personal lives by creating free time or leisure, not to take our jobs.

We recently did a review of 30 ways in which Watson was being used by startups to create new applications. Watson isn’t the whole of AI’s application to virtual robots, but it’s interesting that 57% of those applications were aimed solely at consumers, while those that could conceivably be used by businesses to reduce direct labor costs represented only 27%.

As entrepreneurs look at how to utilize AI-enhanced virtual and physical robots, they’ve approached these business opportunities with the classical mindset of identifying the pain points. What are the things in our lives that compete for our time and make performing gainful labor more difficult? Child care, elder care, health issues, home maintenance, commuting and driving, shopping, and more. Automation of such non-routine tasks is increasingly feasible with deep learning, but these tasks are principally personal rather than industrial or wage-replacing.

It appears that the new phenomenon of AI-enhanced robots will largely target making life easier and freeing up time. There may be some applications where, like current industrial robots, they automate human tasks and cause job displacement, but it does not appear that AI enhancement will accelerate this phenomenon.

Source: datasciencecentral.com

How Machine Learning May Help Tackle Depression

By detecting trends that humans are unable to spot, researchers hope to treat the disorder more effectively.

OnTheGo

Depression is a simple-sounding condition with complex origins that aren’t fully understood. Now, machine learning may enable scientists to unpick some of its mysteries in order to provide better treatment.

For patients to be diagnosed with Major Depressive Disorder, which is thought to be the result of a blend of genetic, environmental, and psychological factors, they have to display several of a long list of symptoms, such as fatigue or lack of concentration. Once diagnosed, they may receive cognitive behavioral therapy or medication to help ease their condition. But not every treatment works for every patient, as symptoms can vary widely.

Recently, many artificial intelligence researchers have begun to develop ways to apply machine learning to medical situations. Such approaches are able to spot trends and details across huge data sets that humans would never be able to, teasing out results that can be used to diagnose other patients. The New Yorker recently ran a particularly interesting essay about using the technique to make diagnoses from medical scans.

Similar approaches are being used to shed light on depression. A study published in Psychiatry Research earlier this year showed that MRI scans can be analyzed by machine-learning algorithms to establish the likelihood of someone suffering from the condition. By learning subtle differences between scans of people who were and were not sufferers, the team was able to identify which unseen patients were suffering from major depressive disorder with roughly 75 percent accuracy.

Perhaps more interestingly, Vox reports that researchers from Weill Cornell Medical College are following a similar tack to identify different types of depression. By having machine-learning algorithms interrogate data captured when the brain is in a resting state, the scientists have been able to categorize four different subtypes of the condition that manifest as different mixtures of anxiety and lack of pleasure.

Not all attempts to infer such fine-grained diagnoses from MRI scans have been successful in the past, of course. But the use of AI does provide much better odds of spotting a signal than when individual doctors pore over scans. At the very least, the experiments lend weight to the notion that there are different types of depression.

The approach could be just one part of a broader effort to use machine learning to spot subtle clues related to the condition. Researchers at New York University’s Langone Medical Center, for instance, are using machine-learning techniques to pick out vocal patterns that are particular to people with depression, as well as conditions like PTSD.

And the idea that there may be many types of depression could prove useful, according to Vox. It notes another recent study carried out by researchers at Emory University that found that machine learning was able to identify different patterns of brain activity in fMRI scans that correlated with the effectiveness of different forms of treatment.

In other words, it may be possible not just to use AI to identify unique types of depression, but also to establish how best to treat them. Such approaches are still a long way from providing clinically relevant results, but they do show that it may be possible to identify better ways to help sufferers in the future.

In the meantime, some researchers are also trying to develop AIs to ensure that depression doesn’t lead to tragic outcomes like self-harm or suicide. Last month, for instance, Wired reported that scientists at Florida State University had developed machine-learning software that analyzes patterns in health records to flag patients that may be at risk of suicidal thoughts. And Facebook claims it can do something similar by analyzing user content—but it remains to be seen how effective its interventions might be.

Source: MIT Technology Review

How to build a high performing analytics team?

OnTheGo

Having BI capabilities has proven to help organizations improve efficiency and stay ahead in overall competitiveness. With digital transformation taking over entire sectors, businesses continue to evolve into data-driven models. Each day more businesses are focusing on insights from data that drive better decisions and strengthen customer relationships.

Self-service tools such as Tableau, Alteryx, QlikView and Power BI have introduced a dash of ease into the ways businesses convert their data into insights, making business intelligence (BI) initiatives go mainstream.

Choosing analytics providers and structuring an effective BI ecosystem is the easy part. The difficulty that decision-makers face is building a good-quality BI team. In light of this talent crisis, a growing number of companies are motivated to hire any resource they can get, but that approach usually turns dysfunctional. It not only imposes an excessive cost burden on businesses but also creates inefficiency, leading to the failure of BI endeavors.

Some challenges enterprises face while building BI teams:
•Lack of efficient analytics workforce makes way for inevitable competitive lag.
•It is tough for organizations to fulfil skill development needs of their large teams.
•Technical resources are masters of tools but lack the art of business storytelling with analytics.
•Hiring specific SMEs for varied functions of the BI lifecycle drains budget and keeps ROIs in doubt.
•Businesses waste time over manpower fulfilment when they could be leveraging a BI solution for business gains.
•Longer learning curves because of the diversity of tools and the strict need for accuracy in BI ecosystems.
•No accountability for the inevitable technical or team related roadblocks that can appear amidst a BI lifecycle.

The options
Every business has its unique demands when it comes to building a data workforce. A different approach is needed for each instance, and there are several ways to begin. The first step is gauging the magnitude of the BI initiative and the expectations for it. Once that’s in place, businesses can decide whether to hire, outsource or augment.

Outsource – If it is right for you
Organizations leveraging data only for certain projects or for improvements to business processes need not build a permanent in-house capability. If BI is a support initiative, partnering with third parties to execute analytics initiatives is the best bet.

Having a full-time in-house analytics team is an attractive proposition, but it is expensive to manage, especially when companies are new to BI and unsure of ROIs. Outsourcing is an easier, faster and cheaper way for such businesses to jumpstart analytics endeavors.

In-house teams – build and improve over time
Large enterprises that have data and analytics at the core of their business strategy need to put all BI components (people, processes, and platforms) in place. Certain business models also cannot allow data to seep out of the organization. Beyond those reasons, if businesses know that BI initiatives, once rolled out, will not be rolled back, they can go ahead and start hiring specialists to build in-house BI teams.

In-house data teams can determine and control data lineage, but how do businesses ensure a team that is capable of translating data into success? The answer: if candidates have an aptitude for analytics and know one BI tool well, they can evolve as the business’s BI needs do, and they make great hires that cannot be easily poached. The major takeaway is to hire people who are masters of their tool and to constantly provide opportunities to learn and grow, so that the team is always up to date.

Extended data teams – The best model so far
Finding a qualified match for any role in the IT industry is not an easy job, but the diversity of tools, processes, and constant evolution involved in BI presents some unique roadblocks to hiring the right talent for BI endeavors.

There’s a lot of maturation time in hiring an internal team and training them to attain optimum results from a BI initiative. An extended team can help chart a course through BI endeavors, with the flexibility to be involved at any stage of the BI lifecycle.

Extended teams came into the picture to free businesses of unruly time and monetary investments. They also eliminate the worries of hiring, training, and the fear of losing seasoned BI experts. The best part is that, as a customer, you can call off the engagement the minute a project starts to derail, saving cost and time.

Source: useready.com

Stanford sociologists encourage researchers to study human behavior with help of existing online communities, big data

A group of Stanford experts are encouraging more researchers who study social interaction to conduct studies that examine online environments and use big data.

OnTheGo

The internet dominates our world and each one of us is leaving a larger digital footprint as more time passes. Those footprints are ripe for studying, experts say.

A new paper urges sociologists and social psychologists to focus on developing online research studies with the help of big data to advance theories of social interaction and structure. (Image credit: pixelfit / Getty Images)

In a recently published paper, a group of Stanford sociology experts encourage other sociologists and social psychologists to focus on developing online research studies with the help of big data in order to advance the theories of social interaction and structure.

Companies have long used information they gather about their online customers to get insights into the performance of their products, a process called A/B testing.

But the standard for many experiments on social interactions remains limited to face-to-face laboratory studies, said Paolo Parigi, a lead author of the study, titled “Online Field Experiments: Studying Social Interactions in Context.”

Parigi, along with co-authors Karen Cook, a professor of sociology, and Jessica Santana, a graduate student in sociology, are urging more sociology researchers to take advantage of the internet.

“What I think is exciting is that we now have data on interactions to a level of precision that was unthinkable 20 years ago,” said Parigi, who is also an adjunct professor in the Department of Civil and Environmental Engineering.

Online field experiments
In the new study, the researchers make a case for “online field experiments” that could be embedded within the structure of existing communities on the internet.

The researchers differentiate online field experiments from online lab experiments, which create a controlled online situation instead of using preexisting environments that have engaged participants.

“The internet is not just another mechanism for recruiting more subjects,” Parigi said. “There is now space for what we call computational social sciences that lies at the intersection of sociology, psychology, computer science and other technical sciences, through which we can try to understand human behavior as it is shaped and illuminated by online platforms.”

As part of this type of experiment, researchers would utilize online platforms to take advantage of big data and predictive algorithms. Recruiting and retaining participants for such field studies is therefore more challenging and time-consuming because of the need for a close partnership with the platforms.

But online field experiments allow researchers to gain an enhanced look at certain human behaviors that cannot be replicated in a laboratory environment, the researchers said.

For example, theories about how and why people trust each other can be better examined in the online environments, the researchers said, because the context of different complex social relationships is recorded. In laboratory experiments, researchers can only isolate the type of trust that occurs between strangers, which is called “thin” trust.

Most recently, Cook and Parigi have used the field experiment design to research the development of trust in online sharing communities, such as Airbnb, a home and room rental service. The results of the study are scheduled to be published later this year. More information about that experiment is available at stanfordexchange.org.

“It’s a new social world out there,” Cook said, “and it keeps expanding.”

Ethics of studying internet behavior
Using big data does come with a greater need for ethical responsibility. In order for online studies of social interactions to be as accurate as possible, researchers require access to private information about their participants.

One solution that protects participants’ privacy is linking their information, such as names or email addresses, to unique identifiers, which could be a set of letters or numbers assigned to each research subject. The administrators of the platform would then provide those identifiers to researchers without compromising privacy.
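A minimal Python sketch of that identifier scheme (hypothetical; a real platform would use its own vetted pseudonymization pipeline): the platform keeps the lookup table, and researchers only ever see the random identifiers.

    import uuid

    # Held by the platform administrators only: the mapping from identities to identifiers.
    participant_emails = ["ada@example.org", "grace@example.org"]
    lookup = {email: uuid.uuid4().hex for email in participant_emails}

    # What researchers receive: opaque identifiers attached to behavioral data, never the emails.
    shared_with_researchers = [{"participant_id": pid} for pid in lookup.values()]
    print(shared_with_researchers)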

It’s also important to make sure researchers acquire the permission of the online platforms’ participants. Transparency is key in those situations, Cook said.

The research was funded by the National Science Foundation.

Source: Stanford News

Adobe developing AI that turns selfies into self-portraits

OnTheGo

We all know the trademarks of a bad selfie: Unflattering angles, front-camera distortion, distracting backgrounds. Adobe is testing artificial intelligence tools to automatically correct these common issues and infuse selfies with more pleasing portraiture effects.

Adobe shared some of its research on selfie photography in the video below, and the potential results are compelling. That warped-face effect from the too-close front-facing camera? With a few taps, the typical selfie transforms into an image that appears to have been taken from a further distance, as would be seen in a more traditional portrait.

There’s also an automatic masking tool, which could add bokeh and depth-of-field effects similar to the iPhone 7 Plus’ Portrait Mode, blurring out distracting backgrounds for more emphasis on the subject, without the need for a dual-lens camera or other specific hardware.

In addition, Adobe hinted at the potential for automatic styling effects, similar to those seen recently in some of its other AI-driven photo retouching research, only selfie-specific.

Adobe has not confirmed if and when these tools will hit its lineup of smartphone apps, but if they are as easy and effective as this behind-the-scenes look suggests, they could be helping amateur photographers achieve more professional-looking results sometime in the not-so-distant future.

Source: New Atlas magazine