Using Artificial Intelligence to Reduce the Risk of Nonadherence in Patients on Anticoagulation Therapy

Researchers are using VR to make dentist visits less painful

Patients in a study reported less pain, as long as they viewed nature scenes.

Like airlines, dentists understand that the more they can distract you from what they’re doing, the better off everyone will be. UK researchers wanted to see whether virtual reality can ease patient pain and anxiety, so they enlisted 79 people who needed a tooth pulled or a cavity filled. Patients were divided into three groups: one that viewed a VR coastal scene, one that viewed a VR city, and one that used no virtual reality at all.

The result? People who viewed the coastal VR experienced “significantly less pain” than the other two groups, showing the technique’s therapeutic potential for stressful events. Furthermore, follow-ups showed that the coastal VR patients also reported less “recalled pain” after the fact.

Notably, the city VR was no more effective at reducing patient pain and stress than no VR at all, so the trick seems to depend on using calming scenes. While that may seem obvious, the psychologists had suspected VR might simply be distracting patients from all the drilling and poking, much as a TV does; that proved not to be the case. “Our findings are in line with literature, showing that contact with nature, even indirect contact through windows, can influence physical and mental well-being,” the paper explains.

The researchers note that in previous studies, VR has been shown to reduce patient dependence on pain medication. “Our research supports the previous positive findings of VR distraction in acute pain management, and suggests that VR nature can be used in combination with traditional [medication].” The next step, they suggest, would be to vary the content of the natural environments (using a forest instead of a coastal scene, for instance) to see if they can determine exactly how nature scenes reduce pain. We’d recommend they check out the zen content out there, and avoid any games.

Source: engadget.com

The Big (Unstructured) Data Problem


The face of data breaches changed last year. For me, the breach that marked that change was the one involving former Secretary of State Colin Powell’s Gmail account. In an attack aimed at the Hillary Clinton campaign, Powell’s emails were posted on DCLeaks.com for everyone to read. One of them had an attachment listing Salesforce’s acquisition targets and the details of its M&A strategy. Powell, a member of Salesforce’s board, had access, through his personal email account, to sensitive information. When his personal email was hacked, all of that sensitive information was exposed and blasted out in the headlines.

Corporations are trying to lock down sensitive information, most of it in structured systems and in data centers with a variety of security solutions. As it is getting harder for hackers to get to the data they want, they are finding the weakest path to that data and evolving their attack vector. Unstructured data is that new attack vector.

Most enterprises do not understand how much sensitive data they have, and when we consider how much unstructured data (emails, PDFs and other documents) a typical enterprise has under management, the red flags are clear and present. Analysts at Gartner estimate that upward of 80% of enterprise data today is unstructured. This is a big data problem, to say the least. As the level of unstructured data rises and hackers shift their focus to it, unstructured data is an issue that can no longer be placed on the enterprise IT back burner.

What Exactly Is Unstructured Data?

Unstructured data is any data that resides in emails, files, PDFs or documents. Sensitive unstructured data is usually data that was first created in a protected structured system, such as SAP Financials, and then exported into an Excel spreadsheet for easier consumption by audiences who are not SAP users.

Let me give you a very common example in any public company: Every quarter, a PR department receives the final quarterly financial numbers via email ahead of the earnings announcement in order to prepare a press release. The draft release is then shared via email among a select group within the company before being approved and distributed on the news wires. The moment that financial information is pulled from the ERP system (a system that usually lives behind the corporate firewall, with strong security and identity controls in place and with business owners who govern access to the systems and data within), that formerly safe data is being shared freely by email as an Excel file.

A hacker could easily try to hack the credentials of a key employee rather than break into the network and tediously make his or her way to the ERP system. The path to getting the coveted earnings data can be easily shortened by focusing on its unstructured form shared via email or stored in files with limited security.

Right now, enterprises are woefully unprepared. Nearly 80% of enterprises have very little visibility into what’s happening across their unstructured data, let alone how to manage it. Enterprises are simply not ready to protect data in this form because they don’t understand just how much of it they have. Worse yet, they don’t even know what lies within those unstructured data files or who owns them. Based on a recent survey conducted by my company, as many as 71% of enterprises are struggling with how to manage and protect unstructured data.

This is especially concerning when we consider the looming General Data Protection Regulation (GDPR) deadline. When that regulation takes effect in May 2018, any consumer data living in these unmanaged files that is exposed during a breach will immediately open the organization up to incredibly steep penalties. While regulations like GDPR put fear into companies, it may be a while before they start to take action. Many companies are struggling to strike the right balance between reacting to security threats and proactively managing the broader risk picture for their company.

The Path Forward

Enterprises simply cannot afford to ignore the big unstructured data problem any longer. They need an actionable plan, one that starts with this four-step process (a rough code sketch of the first steps follows the list):

• Find your unstructured data. Sensitive data is most likely spread out across both structured systems (e.g., your ERP application) and unstructured data (e.g., an Excel spreadsheet with data exported from your ERP app) that lives in a file share or in the numerous cloud storage systems companies use today for easier cross-company sharing and collaboration.
• Classify and assign an owner to that data. Not all data has value, but even some stale data may still be sensitive in nature. Take the time to review all data and classify it to help you focus only on the most sensitive areas. Then assign owners to the classified unstructured data. If you do not know who owns it, ask the many consumers of that data; they usually point in the same direction: its natural owner.
• Understand who has access to your data. It’s extremely important to understand who has access to all sensitive company information, so access controls need to be placed on both structured and unstructured data.
• Put parameters around your data. Sensitive data should be accessed on a “need to know” basis, meaning only a select few in the company should have regular access to your most sensitive files, the ones that could have serious consequences if they ended up in the wrong hands.
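
To make the first steps a bit more concrete, here is a minimal Python sketch that walks a file share, flags common unstructured file types, applies a crude keyword-based sensitivity label, and reports who can read each file. The share path, keyword list and use of file ownership as a stand-in for a real access review are all assumptions for illustration; a production data access governance tool would go considerably further.

```python
import stat
from pathlib import Path

# Hypothetical settings for illustration only.
SHARE_ROOT = Path("/mnt/fileshare")  # assumed file-share mount point
UNSTRUCTURED_EXTS = {".xlsx", ".xls", ".csv", ".pdf", ".docx", ".pptx", ".msg"}
SENSITIVE_KEYWORDS = ("confidential", "earnings", "m&a", "salary", "ssn")

def find_unstructured_files(root: Path):
    """Step 1: find unstructured files spread across the share."""
    for path in root.rglob("*"):
        if path.is_file() and path.suffix.lower() in UNSTRUCTURED_EXTS:
            yield path

def classify(path: Path) -> str:
    """Step 2 (crude): flag a file as sensitive if its name hints at sensitive content."""
    name = path.name.lower()
    return "sensitive" if any(k in name for k in SENSITIVE_KEYWORDS) else "unclassified"

def access_summary(path: Path) -> str:
    """Step 3 (proxy): report the owning uid and whether the file is world-readable."""
    st = path.stat()
    world_readable = bool(st.st_mode & stat.S_IROTH)
    return f"owner uid={st.st_uid}, world-readable={world_readable}"

if __name__ == "__main__":
    for f in find_unstructured_files(SHARE_ROOT):
        print(f"{f} | {classify(f)} | {access_summary(f)}")
```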

With these steps in place, you can better prevent people within your company from having access to files they don’t need to do their jobs, and ultimately minimize the risk of a breach. And although there are data access governance solutions that help corporations protect unstructured data, very few enterprises today have such a program in place. Ultimately, these solutions will need to find their way into enterprises as hackers once again change their attack vector to easier prey.

Source: Forbes

Voiceitt lets people with speech impairments use voice-controlled technology


Voice-controlled technology like Amazon Echo, Siri or hands-free features in Google Maps are things we’re starting to take for granted. But as Mary Meeker’s 2017 Internet Trends Report noted, voice controls are changing computer-human interfaces, and industries, broadly. Speech recognition or voice controls are being added to medical devices and business applications, even vehicles and industrial robotics.

But there’s a problem: today’s voice systems have been built for standard speech. That leaves out millions of people who live with speech impairments, or who simply have a strong accent. Now, a Tel Aviv-based startup called Voiceitt has raised $2 million in seed funding to translate speech that’s not easily intelligible into clear words.

The startup, which was co-founded by CEO Danny Weissberg and CTO Stas Tiomkin, is a graduate of the DreamIt Health accelerator. Investors in Voiceitt’s seed round include Amit Technion, Dreamit Ventures, Quake Capital, Buffalo Angels, 1,000 Angels and other angels.

Here’s how Voiceitt works: Users fire up the company’s app, and it asks them to compose and then read short, useful sentences out loud, like “I’m thirsty” or “Turn off the lights.” The software records and begins to learn the speaker’s particular pronunciation. A caregiver can type phrases into the app if the user is not able to do so independently.

After a brief training period, the Voiceitt app can turn the user’s statements into normalized speech, which it outputs in the form of audio or text messages, instantly. Voice-controlled apps and devices can easily understand the newly generated audio or written messages. But Voiceitt also can be used to help people with speech impediments communicate face to face with other people.
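
The article does not describe Voiceitt’s internals, but the train-then-match flow outlined above can be illustrated with a toy personalized phrase recognizer: store a few labeled feature sequences per phrase during training, then match a new utterance to the closest stored template using dynamic time warping. The feature vectors below are random stand-ins for real acoustic features, and the whole approach is only a sketch of the general idea, not the company’s method.

```python
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Dynamic-time-warping distance between two feature sequences
    (rows are time steps, columns are feature dimensions)."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return float(cost[n, m])

class PhraseRecognizer:
    """Toy per-user recognizer: store labeled templates, match by nearest DTW distance."""
    def __init__(self):
        self.templates = []  # list of (label, feature_sequence)

    def train(self, label: str, features: np.ndarray):
        self.templates.append((label, features))

    def recognize(self, features: np.ndarray) -> str:
        label, _ = min(self.templates, key=lambda t: dtw_distance(t[1], features))
        return label

# Illustrative use with made-up feature sequences (stand-ins for audio features).
rec = PhraseRecognizer()
rec.train("I'm thirsty", np.random.rand(20, 13))
rec.train("Turn off the lights", np.random.rand(25, 13))
print(rec.recognize(np.random.rand(22, 13)))  # prints the closest stored phrase
```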

Dreamit’s Karen Griffith Gryga said investors view Voiceitt as a technology that’s starting with “the thin edge of the wedge,” in the market for assistive tech. But it could be expanded to help people with strong accents use whatever voice-enabled technology Seattle or Silicon Valley comes up with next.

Weissberg explained that he came up with the idea for Voiceitt after his grandmother suffered from speech impairments following a stroke. The CEO said, “I realized how we take for granted the way we communicate by speaking. Losing this is really terrible, one of the hardest aspects of stroke recovery. So I didn’t say, right away, let’s start a company. But I began to talk with speech therapists and occupational therapists, and to learn everything I can about the problem and whether there was a market in need, there.”

An early version of Voiceitt will be available next year, but the app is in beta tests now. The company’s pilot customers are hospitals and schools, and people there who have speech differences because of a health condition, like those with cerebral palsy, Down syndrome, Parkinson’s or who are recovering from a traumatic brain injury or stroke.

Long-term, Weissberg said, “This could really be an accessibility extension to speech recognition for anyone, Google, Amazon, Apple, IBM or Microsoft. We’d love to function like a major OEM and work with all the major platforms.”

Source: techcrunch.com

Using Cell Phone Data to Predict the Next Epidemic

Whom you call is linked to where you travel, which dictates how viruses spread.

Can big data about whom we call be used to predict how a viral epidemic will spread?

It seems unlikely. After all, viruses do not spread over a cell network; they need us to interact with people in person.

Yet, it turns out that the patterns in whom we call can be used to predict patterns in where we travel, according to new research from Kellogg’s Dashun Wang. This in turn can shed light on how an epidemic would spread.

Both phone calls and physical travel are highly influenced by geography. The further away a shopping mall or post office is from our home, after all, the less likely we are to visit it. Similarly, our friends who live in the neighborhood are a lot likelier to hear from us frequently than our extended family in Alberta.

But Wang and colleagues were able to take this a step further. By analyzing a huge amount of data on where people travel and whom they call, they were able to determine the mathematical relationship between how distance shapes these two very different activities. This understanding provides a framework for using data about long-distance interactions to predict physical ones, and vice versa.
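
The paper’s actual formula is not reproduced in this summary, so the sketch below only illustrates the general idea under an assumed power-law distance decay: estimate a decay exponent from (synthetic) call distances, then reuse that decay to rank likely travel destinations. The data and the functional form are assumptions for illustration, not the researchers’ model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic call distances (km), assumed to follow a heavy-tailed distance decay.
call_distances = rng.pareto(a=1.5, size=10_000) + 1.0

# Fit a power-law decay P(d) ~ d^(-alpha) to the call data via the maximum-likelihood
# (Hill) estimator for a Pareto tail with minimum distance d_min = 1 km.
d_min = 1.0
alpha_hat = 1.0 + len(call_distances) / np.sum(np.log(call_distances / d_min))
print(f"estimated decay exponent from calls: {alpha_hat:.2f}")

# Reuse the fitted decay to score how likely trips to candidate destinations are,
# purely as an illustration of transferring a call-based decay to travel.
destinations_km = np.array([2.0, 10.0, 50.0, 250.0])
scores = destinations_km ** (-alpha_hat)
probs = scores / scores.sum()
for d, p in zip(destinations_km, probs):
    print(f"destination at {d:>5.0f} km: relative travel probability {p:.3f}")
```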

As humans, we do not like to think that someone could anticipate our actions, says Wang, an associate professor of management and organizations. But his evidence says otherwise. “It’s just fascinating to see this kind of deep mathematical relationship in human behavior,” he says.

Wang’s conclusions were based on the analysis of three massive troves of cell phone data collected for billing purposes. The data, from three nations spanning two continents, included geographic information about where cell phone users traveled, as well as information about each phone call placed or received, and how far a user was from the person on the other end of the line.

The discovery of this underlying relationship between physical and nonphysical interactions has significant practical implications. For example, the researchers were able to model the spread of a hypothetical virus, which started in a few randomly selected people and then spread to others in the vicinity, using only the data about the flow of phone calls between various parties. Those predictions were remarkably similar to ones generated by actual information about where users traveled and thus where they would be likely to spread or contract a disease.
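
The study’s exact epidemic model is likewise not spelled out here; the toy simulation below just shows the shape of the idea, treating call volume between two people as a proxy for contact intensity and running a simple SIR-style process over that graph. The graph, transmission probability and recovery period are placeholder assumptions.

```python
import random
from collections import defaultdict

random.seed(42)

# Toy call graph: edge weight = number of calls, used as a proxy for contact intensity.
calls = {
    ("alice", "bob"): 30, ("bob", "carol"): 12, ("carol", "dave"): 3,
    ("alice", "erin"): 8,  ("erin", "dave"): 20,
}
neighbors = defaultdict(list)
for (u, v), w in calls.items():
    neighbors[u].append((v, w))
    neighbors[v].append((u, w))

BETA = 0.02           # assumed per-call transmission probability
RECOVERY_DAYS = 5     # assumed infectious period

state = {p: "S" for p in neighbors}   # S(usceptible) / I(nfected) / R(ecovered)
days_infected = defaultdict(int)
state["alice"] = "I"                  # randomly chosen seed case

for day in range(30):
    newly_infected = []
    for person, s in state.items():
        if s != "I":
            continue
        for other, weight in neighbors[person]:
            # More calls -> more contact -> higher chance of passing the infection on.
            if state[other] == "S" and random.random() < 1 - (1 - BETA) ** weight:
                newly_infected.append(other)
        days_infected[person] += 1
        if days_infected[person] >= RECOVERY_DAYS:
            state[person] = "R"
    for p in newly_infected:
        if state[p] == "S":
            state[p] = "I"
    print(day, {s: list(state.values()).count(s) for s in "SIR"})
```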

“I think that’s a great example to illustrate the opportunities brought about by big data,” Wang says. “The paper represents a major step in our quantitative understanding of how geography governs the way in which we are connected. These insights can be particularly relevant in a business world that is becoming increasingly interconnected.”

Source: Kellogg Insight

Past, Present and Future of AI / Machine Learning (Google I/O ’17)


We are in the middle of a major shift in computing that’s transitioning us from a mobile-first world into one that’s AI-first. AI will touch every industry and transform the products and services we use daily. Breakthroughs in machine learning have enabled dramatic improvements in the quality of Google Translate, made your photos easier to organize with Google Photos, and enabled improvements in Search, Maps, YouTube, and more.


Shiny vs Useful: Which trends in the analytics market are business ready?


Business analytics continues to be a hot segment in the enterprise software market and a core component of digital transformation for every organization. But there are many specific advances that are at differing points along the continuum of market readiness for actual use.

It is critical that technology leaders recognize the difference between mature trends that can be applied to real-world business scenarios today versus those that are still taking shape but make for awe-inspiring vendor demos. These trends fall into categories ranked from least to most mature in the market: artificial intelligence (AI), natural language processing (NLP), and embedded analytics.

Artificial augments actual human intelligence

The hype and excitement surrounding AI, which encompasses machine learning (ML) and deep learning, has surpassed that of big data in today’s market. The notion of AI completely replacing and automating the manual analytical tasks done by humans today is far from applicable to most real-world use cases. In fact, full automation of analytical workflows should not even be considered the final goal, now or in the future.

The term assistive intelligence is a more appropriate expansion of the AI acronym, and is far more palatable for analysts who view automation as a threat. This concept of assistive intelligence, where analyst or business user skills are augmented by embedded advanced analytic capabilities and machine learning algorithms, is being adopted by a growing number of organizations in the market today. These types of smart capabilities have proven useful in assisting with data preparation and integration, as well as with analytical processes such as the detection of patterns, correlations, outliers and anomalies in data.
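
As one concrete, hypothetical example of assistive intelligence, the snippet below uses scikit-learn’s IsolationForest to surface outlier transactions for an analyst to review, rather than to replace the analyst’s judgment. The columns, data and contamination rate are illustrative assumptions.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)

# Illustrative transaction data: mostly routine amounts, with a few unusual ones mixed in.
df = pd.DataFrame({
    "amount": np.concatenate([rng.normal(120, 25, 980), rng.normal(5_000, 800, 20)]),
    "items":  np.concatenate([rng.integers(1, 6, 980), rng.integers(40, 60, 20)]),
})

# The model flags candidate anomalies; an analyst still decides what they mean.
model = IsolationForest(n_estimators=200, contamination=0.02, random_state=0)
df["flagged"] = model.fit_predict(df[["amount", "items"]]) == -1

print(df[df["flagged"]].describe())  # summary of the transactions surfaced for review
```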

Natural interactions improve accessibility of analytics

Natural Language Processing (NLP) and Natural Language Generation (NLG) are often used interchangeably but serve completely different purposes. While both enable natural interactions with analytics platforms, NLP can be thought of as the question-asking part of the equation, whereas NLG is used to render findings and insights in natural language to the user.

Of the two, NLP is more recognizable in the mainstream market as natural language interfaces become increasingly commonplace in our personal lives through Siri, Cortana, Alexa, Google Home, etc. Analytics vendors are adding NLP functionality to their product offerings to capitalize on this consumer trend and reach a broader range of business users who may find a natural language interface less intimidating than traditional means of analysis. It is inevitable that NLP will become a widely used core component of analytics platforms, but it is not currently used across a broad enough range of users or use cases to be considered mainstream in today’s market.
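
Commercial natural language interfaces are far richer than this, but a minimal sketch of the question-asking side might map a typed question onto a filter and aggregation over a table, as below. The regular-expression “grammar” and the sample data are assumptions chosen purely for illustration.

```python
import re
import pandas as pd

sales = pd.DataFrame({
    "region":  ["East", "West", "East", "West"],
    "quarter": ["Q1", "Q1", "Q2", "Q2"],
    "revenue": [120, 95, 140, 110],
})

def answer(question: str) -> str:
    """Tiny 'NLP' layer: recognize questions like
    'What was revenue in the East in Q2?' with a regular expression."""
    m = re.search(r"revenue in the (\w+) in (Q\d)", question, re.IGNORECASE)
    if not m:
        return "Sorry, I can only answer questions about revenue by region and quarter."
    region, quarter = m.group(1).title(), m.group(2).upper()
    rows = sales[(sales.region == region) & (sales.quarter == quarter)]
    if rows.empty:
        return f"No data for {region} in {quarter}."
    return f"Revenue in the {region} in {quarter} was {rows.revenue.sum()}."

print(answer("What was revenue in the East in Q2?"))
```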

On the other hand, NLG has been in the market for several years but only recently has it been incorporated into mainstream analytics tools to augment the visual representation of data. Many text-based summaries of sporting events, player statistics, mutual fund performance, etc., are created automatically using NLG technology. Increasingly, NLG capabilities are also being used as the delivery mechanism to make AI-based output more consumable to mainstream users.

Recently, analytics vendors have been forging partnerships with NLG vendors to leverage their expertise in adding another dimension to data visualization, where key insights are automatically identified and expressed in a natural language narrative that accompanies the visualization. While the combination of business analytics and NLG is relatively new, it is gaining awareness and traction in the market and has opened the door to new use cases for organizations to explore.
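
The vendor integrations described above rely on commercial NLG engines; the sketch below only conveys the template-driven flavor of the idea, computing a few key facts about a series and rendering them as a sentence that could sit alongside a chart. The metric and numbers are made up.

```python
quarterly_revenue = {"Q1": 1.8, "Q2": 2.1, "Q3": 1.9, "Q4": 2.6}  # $M, illustrative

def narrate(series: dict) -> str:
    """Turn a small metric series into a one-sentence narrative."""
    quarters, values = list(series), list(series.values())
    best = max(series, key=series.get)
    change = (values[-1] - values[0]) / values[0] * 100
    direction = "grew" if change >= 0 else "declined"
    return (
        f"Revenue {direction} {abs(change):.0f}% from {quarters[0]} to {quarters[-1]}, "
        f"peaking at ${series[best]:.1f}M in {best}."
    )

print(narrate(quarterly_revenue))
# -> "Revenue grew 44% from Q1 to Q4, peaking at $2.6M in Q4."
```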

Embedded analytics brings insights closer to action

The true value of analytics is realized when insights can inform decision-making to improve business outcomes. By embedding analytics into applications and systems, where decision-makers conduct normal business, a barrier to adoption is removed and insights are delivered directly to the person who can take immediate action.
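
Embedding approaches vary widely (iframes, JavaScript SDKs, APIs); as one minimal, hypothetical pattern, the sketch below exposes a computed insight from a small Flask service so that a host application can render it next to the record a decision-maker is already looking at. The route, metric and data are assumptions for illustration.

```python
from flask import Flask, jsonify

app = Flask(__name__)

# Stand-in for a query against an analytics platform or data warehouse.
def churn_risk_summary():
    return {"at_risk_accounts": 37, "total_accounts": 4200, "trend": "rising"}

@app.route("/api/insights/churn")
def churn_insight():
    """A host application (CRM, support console, etc.) can call this endpoint and
    show the result alongside the account record the user is already viewing."""
    return jsonify(churn_risk_summary())

if __name__ == "__main__":
    app.run(port=5000)
```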

Modern analytics platform vendors have made it incredibly easy for organizations to adopt an embedded strategy and proliferate analytic content to line-of-business users previously unreachable by traditional means. And organizations are now extending similar capabilities to customers, partners, suppliers, etc., in an effort to increase competitive differentiation and, in some cases, create new revenue streams through the monetization of data assets and analytic applications.

These innovations present technology leaders with a unique opportunity to lead their organizations into an era where data analysis is the foundation for all business decisions. Every organization will embark on this journey at its own pace. Some will be early adopters of new innovations, and some will only adopt once the majority of the market has successfully implemented them.

Ultimately, organizational readiness to adopt any new technology will be determined by end users and their ability and willingness to adopt new innovations and embrace process change.

Source: Tableau