Can AI Address Health Care’s Red-Tape Problem?

analytics anywhere

Productivity in the United States’ health care industry is declining — and has been ever since World War II. As the cost of treating patients continues to rise, life expectancy in America is beginning to fall. But there is mounting evidence that artificial intelligence (AI) can reverse the downward spiral in productivity by automating the system’s labyrinth of labor-intensive, inefficient administrative tasks, many of which have little to do with treating patients.

Administrative and operational inefficiencies account for nearly one third of the U.S. health care system’s $3 trillion in annual costs. Labor is the industry’s single largest operating expense, with six out of every 10 people who work in health care never interacting with patients. Even those who do can spend as little as 27% of their time working directly with patients. The rest is spent in front of computers, performing administrative tasks.

Using AI-powered tools capable of processing vast amounts of data and making real-time recommendations, some hospitals and insurers are discovering that they can reduce administrative hours, especially in the areas of regulatory documentation and fraudulent claims. This allows health care employees to devote more of their time to patients and focus on meeting their needs more efficiently.

To be sure, as we’ve seen with the adoption of electronic health records (EHR), the health care industry has a track record of dragging its feet when it comes to adopting new technologies, and of failing to maximize the efficiency gains from them. It was among the last industries to accept the need to digitize, and by and large has designed digital systems that doctors and medical staff dislike, contributing to warnings about burnout in the industry.

Adopting AI, however, doesn’t require the Herculean effort EHRs did. Where EHRs demanded billions of dollars in investment and multi-year commitments from health systems, AI lends itself to targeted solutions: productivity improvements made incrementally by individual organizations, without the collaboration and standardization across health care players that EHR adoption required.

Indeed, AI solutions dealing with cost-cutting and reducing bureaucracy — where AI could have the biggest impact on productivity — are already producing the kind of internal gains that suggest much more is possible in health care players’ back offices. In most cases, these are experiments launched by individual hospitals or insurers.

Here, we analyze three ways AI is chipping away at mundane, administrative tasks at various health care providers and achieving new efficiencies.

Faster Hospital Bed Assignments

Quickly assigning patients to beds is critical both to their recovery and to the financial health of hospitals. Large hospitals typically employ teams of 50 or more bed managers who spend the bulk of their day making calls and sending faxes to various departments vying for their share of the available beds. The job is complicated by each patient’s unique requirements and the timing of incoming bed requests; often the problem is not too few beds, but too few of the right type at the right time.

Enter AI, which can help hospitals anticipate demand for beds more accurately and assign them more efficiently. For instance, by combining bed availability data and patient clinical data with projected future bed requests, an AI-powered control center at Johns Hopkins Hospital has been able to foresee bottlenecks and suggest corrective actions to avoid them, sometimes days in advance.

As a result, since the hospital introduced its new system two years ago, Johns Hopkins can assign beds 30% faster. This has reduced the need to keep surgery patients in recovery rooms longer than necessary by 80% and cut the wait time for beds for incoming emergency room patients by 20%. The new efficiencies also permitted Hopkins to accept 60% more transfer patients from other hospitals.

All of these improvements mean more hospital revenue. Hopkins’s success has prompted Humber River Hospital in Toronto and Tampa General Hospital in Florida to create their own AI-powered control centers as well.
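Control-center systems like these are proprietary, but the core idea of forecasting bed demand against projected availability can be sketched in a few lines. Everything below (the moving-average forecast, the unit names, the numbers) is an illustrative assumption, not how the Johns Hopkins system actually works:

```python
def forecast_requests(history, window=3):
    """Naive moving-average forecast of tomorrow's bed requests."""
    recent = history[-window:]
    return sum(recent) / len(recent)

def flag_bottlenecks(units):
    """Flag units where forecast demand exceeds projected free beds.

    `units` maps unit name -> (daily request history, beds projected free).
    Returns (unit, projected shortfall) pairs.
    """
    alerts = []
    for name, (history, free_beds) in units.items():
        expected = forecast_requests(history)
        if expected > free_beds:
            alerts.append((name, round(expected - free_beds, 1)))
    return alerts

units = {
    "ICU":     ([4, 6, 5, 7], 5),  # requests over the last four days, 5 beds free
    "Surgery": ([2, 3, 2, 2], 6),
}
print(flag_bottlenecks(units))  # the ICU is projected to run one bed short
```

A production system would use far richer inputs (clinical data, scheduled discharges, transfer requests), but the output is the same shape: a ranked list of looming shortfalls that staff can act on before they happen.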

Easier and Improved Documentation

Rapid collection, analysis and validation of health records is another place where AI has begun to make a difference. Health care providers typically spend nearly $39 billion every year to ensure that their electronic health records comply with about 600 federal guidelines. Hospitals assign about 60 people to this task on average, one quarter of whom are doctors and nurses.

This calculus changes when providers use an AI-powered tool created by Nuance Communications in cooperation with electronic health record vendor Cerner Corporation. Embedded in physicians’ workflow, the tool analyzes both patient clinical data and administrative data to offer doctors real-time suggestions on how to comply with federal guidelines.

By following the AI tool’s recommendations, some health care providers have cut the time spent on documentation by up to 45% while simultaneously making their records 36% more compliant.
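The internals of the Nuance tool aren’t public, but the basic pattern it illustrates, checking a record against guideline rules and surfacing what’s missing in real time, can be shown with a toy rule set. The note types and required fields below are invented for the example:

```python
# Invented guideline rules: required fields per note type.
RULES = {
    "discharge_note": {"diagnosis", "medications", "follow_up"},
    "progress_note":  {"diagnosis", "assessment", "plan"},
}

def compliance_suggestions(record):
    """Suggest the required fields that are still empty or absent."""
    required = RULES.get(record["type"], set())
    filled = {field for field, value in record.items() if value}
    return sorted(required - filled)

record = {"type": "discharge_note", "diagnosis": "pneumonia",
          "medications": "", "follow_up": "2 weeks"}
print(compliance_suggestions(record))  # the medications field still needs filling in
```

The real system presumably learns its suggestions from data rather than hard-coded rules, but the workflow is the same: flag gaps while the physician is still documenting, not weeks later in an audit.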

Automated Fraud Detection

Fraud, waste, and abuse also continue to be a consistent drain. Despite an army of claims investigators, they cost the industry as much as $200 billion annually.

While AI won’t eliminate those problems, it does help insurers better identify the claims that investigators should review — in many cases, even before they are paid — to more efficiently reduce the number of suspect claims making it through the system. For example, startup Fraudscope has already saved insurers more than $1 billion by using machine learning algorithms to identify potentially fraudulent claims and alert investigators prior to payment. Its AI system also prioritizes the claims that will yield the most savings, ensuring that time and resources are used where they will have the greatest impact.
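Fraudscope’s actual models aren’t public, but the two ideas described above, flagging unusual claims before payment and prioritizing by potential savings, can be sketched with a simple statistical screen. The procedure codes, amounts, and the two-standard-deviation threshold are all illustrative:

```python
from statistics import mean, stdev

def score_claims(claims, history):
    """Flag claims billed far above the historical norm for their
    procedure code, ranked so the largest potential savings come first.

    `claims`  : list of (claim_id, procedure_code, amount)
    `history` : procedure_code -> list of past billed amounts
    """
    flagged = []
    for claim_id, code, amount in claims:
        past = history[code]
        mu, sigma = mean(past), stdev(past)
        z = (amount - mu) / sigma if sigma else 0.0  # how unusual is this amount?
        if z > 2:                                    # more than 2 std devs above normal
            flagged.append((claim_id, round(amount * z, 2)))
    return sorted(flagged, key=lambda item: item[1], reverse=True)

history = {"99213": [75, 80, 82, 78, 76], "27130": [30000, 31000, 29500, 30500]}
claims = [("C1", "99213", 450), ("C2", "27130", 30200), ("C3", "99213", 79)]
print(score_claims(claims, history))  # only C1 is flagged for review
```

A genuine fraud model would learn from many more signals (provider networks, billing patterns over time, known fraud cases), but the payoff is the same: investigators spend their hours on the claims most likely to matter, before the money goes out the door.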

Getting Ready for AI

When it comes to cutting health care’s administrative burden through AI, we are only beginning to scratch the surface. But the industry’s ability to amplify that impact will be constrained unless it removes certain impediments.

First, health care organizations must simplify and standardize data and processes before AI algorithms can work with them. For example, efficiently finding available hospital beds can’t happen unless all departments define bed space in the same terms.

Second, health care providers will have to break down the barriers that usually exist between the customized and conflicting information technology systems of different departments. AI can only automate the transfer of patients from operating rooms to intensive care units (ICUs) if both departments’ IT systems can communicate with each other.

Finally, the industry’s productivity will not improve as long as too many health care personnel continue in jobs that don’t add value to the business by improving outcomes. Health care players need to begin reducing their workforces by taking advantage of the industry’s 20% attrition rate and automating tasks, rather than filling positions on autopilot.

The task of improving productivity in health care by automating administrative tasks with AI will not be completed quickly or easily. But the progress already achieved by AI solutions is encouraging enough for some to wonder whether re-investing savings from it might also ultimately cut the overall cost of health care as well as improve its quality. For an industry known for its glacial approach to change, AI offers more than a little light at the end of a long tunnel.

Source: Harvard Business Review


Come and say hello

Tech Summit

NYU is hosting the inaugural Technology Summit to celebrate and showcase innovative and emerging technologies used in teaching, learning, research, administration, and entrepreneurial efforts in tech, and beyond.

I’ll be giving a keynote on AI in Healthcare. Come and say hi on 11/14 at Kimmel Center for University Life, 60 Washington Square S., New York, NY 10012

Voting Machines Are Still Absurdly Vulnerable to Attacks


While Russian interference operations in the 2016 US presidential elections focused on misinformation and targeted hacking, officials have scrambled ever since to shore up the nation’s vulnerable election infrastructure. New research, though, shows they haven’t done nearly enough, particularly when it comes to voting machines.

The report details vulnerabilities in seven models of voting machines and vote counters, found during the DefCon security conference’s Voting Village event. All of the models are in active use around the US, and the vulnerabilities—from weak password protections to elaborate avenues for remote access—number in the dozens. The findings also connect to larger efforts to safeguard US elections, including initiatives to expand oversight of voting machine vendors and efforts to fund state and local election security upgrades.

“We didn’t discover a lot of new vulnerabilities,” says Matt Blaze, a computer science professor at the University of Pennsylvania and one of the organizers of the Voting Village, who has been analyzing voting machine security for more than 10 years. “What we discovered was vulnerabilities that we know about are easy to find, easy to reengineer, and have not been fixed over the course of more than a decade of knowing about them. And to me that is both the unsurprising and terribly disturbing lesson that came out of the Voting Village.”

Many of the weaknesses Voting Village participants found were frustratingly basic, underscoring the need for a reckoning with manufacturers. One device, the “ExpressPoll-5000,” has a root password of “password.” The administrator password is “pasta.”

Like many of the vulnerabilities detailed in the report, that knowledge could only be used in an attack if perpetrators had physical access to the machines. And even the remotely exploitable bugs would be difficult—though certainly not impossible—to leverage in practice. Additionally, election security researchers emphasize that the efforts of countries like Russia are more likely to focus on disinformation and weaponized leaks than on actively changing votes. Those turn out to be more efficient ways to rattle a democracy.

But nation-state actors aren’t the only people who might be tempted to hack the vote. And a detailed accounting of just how bad voting machine security is also underpins a number of broader election security discussions. Namely, state and local election officials need funding to replace outdated equipment and employ specialized IT staff who can update and maintain devices. Voting machines also need stronger security to protect against criminal activity. And election officials need failsafes for voting machines in general, so that a glitch or technical failure doesn’t derail an election in itself.

“For those of us who have followed the state of our nation’s election infrastructure, none of this is new information,” Representatives Robert Brady of Pennsylvania, and Bennie Thompson of Mississippi, co-chairs of the Congressional Task Force on Election Security, said in a statement on Thursday. “We have known for years that our nation’s voting systems are vulnerable.”

Analyzing voting machines for flaws raises another important controversy about the role of vendors in improving device security. Many of the machines participants analyzed during the Voting Village run software written in the early 2000s, or even the 1990s. Some vulnerabilities detailed in the report were disclosed years ago and still haven’t been resolved. In particular, one ballot counter made by Election Systems & Software, the Model 650, has a flaw in its update architecture first documented in 2007 that persists. Voting Village participants also found a network vulnerability in the same device—which 26 states and the District of Columbia all currently use. ES&S stopped manufacturing the Model 650 in 2008, and notes that “the base-level security protections on the M650 are not as advanced as the security protections that exist on the voting machines ES&S manufactures today.” The company still sells the decade-old device, though.

“At its core, a voting machine is a computer which can be compromised by skilled hackers who have full access and unlimited time,” the company said in a statement. “While there’s no evidence that any vote in a US election has ever been compromised by a cybersecurity breach, ES&S agrees the cybersecurity of the nation’s voting systems can and should be improved.”

Congress has worked recently to investigate voting machine vendor accountability, but progress has been slow. In July, for example, only one of the three top vendors sent a representative to a Senate Rules Committee election security hearing, prompting an outcry from lawmakers.

“This report underscores that when you’re using technology there can be a variety of problems, and with something as important as election results you want to get it right,” says David Becker, executive director of the Center for Election Innovation and Research. “The question I hear from the states and counties, though, is just ‘how are we going to pay for it?’ They would love to have skilled IT staff, they would love to hold trainings for their workers, they would love to replace their old equipment. But you can’t just wave a magic wand and do that, you need significant funding.”

Elections officials have made significant progress on improving election infrastructure defenses and establishing channels for information-sharing, but as the midterm elections loom, replacing vulnerable voting machines—and finding the funding to do it—remain troublingly unfinished business.

Source: Wired

Driverless car makers could face jail if AI causes harm

AI technologies which harm workers could lead to their creators being prosecuted, according to the British government.


Makers of driverless vehicles and other artificial intelligence systems could face jail and multi-million pound fines if their creations harm workers, according to the Department for Work and Pensions.

Responding to a written parliamentary question, government spokesperson Baroness Buscombe confirmed that existing health and safety law “applies to artificial intelligence and machine learning software”. This clarifies one aspect of the law around AI, a subject of considerable debate in academic, legal and governmental circles.

Under the Health and Safety at Work Act 1974, directors found guilty of “consent or connivance” or neglect can face up to two years in prison.

This provision of the Health and Safety Act is “hard to prosecute,” said Michael Appleby, a health and safety lawyer at Fisher Scoggins Waters, “because directors have to have their hands on the system.”

However, when AI systems are built by startups, it might be easier to establish a clear link between the director and the software product.

Companies can also be prosecuted under the Act, with fines relative to the firm’s turnover. If the company has a revenue greater than £50 million, the fines can be unlimited.

The Health and Safety Act has never been applied to a case of artificial intelligence and machine learning software, so these provisions will need to be tested in court.

Source: Sky.com

3 ways to make better decisions by thinking like a computer

If you ever struggle to make decisions, here’s a talk for you. Cognitive scientist Tom Griffiths shows how we can apply the logic of computers to untangle tricky human problems, sharing three practical strategies for making better decisions — on everything from finding a home to choosing which restaurant to go to tonight.

How Virtual Reality Will Drive The Future Of Business


In 1961, the first minicomputer, called the PDP-1, arrived at the MIT Electrical Engineering Department. It was a revolutionary machine but, as with all things that are truly new and different, no one really knew what to do with it. Lacking any better ideas, a few of the proto-hackers in residence decided to build a game. That’s how Spacewar! was born.

Today, the creation of Spacewar! is considered a seminal event in computer history. Because it was a game, it encouraged experimentation. Hackers tried to figure out how to, say, simulate gravity or add accurate constellations of stars, and by doing so pushed the capabilities of the machine and themselves.

Tech investor Chris Dixon has said that the next big thing always starts out being dismissed as a toy. Yet it’s because so many technologies start out as toys that we are able to experiment with and improve them. As virtual reality becomes increasingly viable, this human-machine co-evolution will only accelerate because, to create a new future, we first have to imagine it.

From Spacewar! To Real War

Growing up in Australia, Pete Morrison always thought he’d be a plumber like his father. His mother, however, had other plans. She noticed his interest in computers and how, from a young age, he spent hours tinkering on the family’s primitive Commodore 64. She pushed him to go to college. Lacking funds to do so, Pete entered the Army to finance his education.

As a Signal Corps Officer, he put his technical skills to good use, but much like the MIT geeks four decades earlier, he soon found himself preoccupied with video games. The military had commissioned a study of simulations at the Australian Defence Force Academy, where he was a student, and Pete got involved with testing games. One was Operation Flashpoint, developed by some young geeks at a Prague-based company called Bohemia Interactive.

“It quickly became clear that the game could be effective for training military personnel,” Morrison told me. “Before Operation Flashpoint, to train a soldier you had to go out into the field, which was expensive and time consuming. We realized that with this type of computer game, you could design training that would allow them to hone cognitive skills, which would make the in-the-field training that much more effective.”

“Also,” he continued, “because the game was so engaging we got a much deeper level of immersion, which made the training more effective and led the Australian Military to ramp up investments in video games as training tools.”

The Simulation Economy

In the industrial age, experimentation was expensive and unwieldy. Thomas Edison famously observed that if he tried 10,000 things that didn’t work, he didn’t see them as failures, but stepping stones to his next great invention. It was, of course, an ultimately effective process, but incredibly gruelling and time consuming.

Today, however, we increasingly live in a simulation economy where we can test things out in a virtual world of bits and avoid much of the mess of failing in the real world. Consider how today we battle-test different business models and scenarios in Excel. That was much more cumbersome and time consuming when spreadsheets were on paper, so we rarely did it. Now, it’s a routine activity that we do all the time.
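That kind of spreadsheet battle-testing is easy to push further with a Monte Carlo simulation, where uncertain inputs are sampled thousands of times instead of typed in once. The toy business model below, with its made-up price, costs, and demand distribution, is purely illustrative:

```python
import random

def simulate_profit(n_trials=10_000, seed=42):
    """Monte Carlo run of a toy business model with uncertain demand
    and unit cost. Returns (average profit, probability of a loss)."""
    random.seed(seed)
    price, fixed_costs = 25.0, 50_000.0
    profits = []
    for _ in range(n_trials):
        demand = random.gauss(5_000, 800)       # units sold, normally distributed
        unit_cost = random.uniform(12.0, 16.0)  # cost per unit, uncertain
        profits.append(demand * (price - unit_cost) - fixed_costs)
    loss_prob = sum(p < 0 for p in profits) / n_trials
    return sum(profits) / n_trials, loss_prob

avg, loss_prob = simulate_profit()
print(f"expected profit ~ {avg:,.0f}; chance of a loss ~ {loss_prob:.0%}")
```

A single-cell spreadsheet answer would report only the expected profit; the simulation also reveals how often the plan loses money, which is usually the more interesting number.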

As computers have become exponentially more powerful and software algorithms have become much more sophisticated, the use of simulations has expanded. We use CAD software to design products and structures, as well as high-performance supercomputers to model weather and even invent advanced materials. When you can try out thousands of possibilities easily and cheaply, you are more likely to identify an optimal solution.

The next era of simulation will be powered by virtual reality and it is almost upon us. Just as Pete Morrison found that ordinary video games could improve tactics in the real world, virtual reality offers the possibility to take training to an entirely new level.

Enter Virtual Reality

In 2005, Morrison left the military and started working directly with Bohemia Interactive. Together, they launched a new company in 2007, Bohemia Interactive Simulations, to focus on the military business. In recent years, the firm has been increasingly focused on applying its expertise to virtual reality platforms like Oculus Rift and Magic Leap.

“The advantage of virtual reality is that we can potentially replace dome projection systems, which cost hundreds of thousands of dollars, with a VR system that costs hundreds of dollars and achieve the same or greater level of immersion,” Morrison says. “That can be a huge cost saver for militaries worldwide and revolutionize how we train soldiers.”

Yet, like most technologies, virtual reality is quickly moving from high-end early adopters to more mainstream markets. Strivr, for example, got its start by designing virtual reality systems to train $20 million NFL quarterbacks. It now helps train employees at companies like Walmart, United Rentals and JetBlue by simulating real-life work environments.

Training your employees in a classroom can help teach them basic principles and, in some cases, help build important skills. With virtual reality, however, you can put them in a realistic environment of, say, a sales floor on Black Friday, a construction site or a $50 million airplane at a fraction of the cost. In some cases, training efficiency rates have increased by as much as 40%.

How Humans And Technology Co-Evolve

In recent years, we have come to think of technology in opposition to humanity. We hear that robots are going to take our jobs, that tablets and smartphones are eroding our children’s skills and so on. Yet we often fail to take note of the potential for machines to make us better, to enhance our skills and to make us smarter.

For example, as the digital age comes to an end, we need to invent new computing architectures, like quantum computing, to drive advancement forward. The problem is that, although the technology is progressing rapidly, very few people know how to program a quantum computer, which works fundamentally differently than classical machines.

It was with that in mind that IBM created Hello Quantum, a video game that helps teach the principles of quantum algorithms. “We thought, what better way for those unfamiliar with the principles of quantum mechanics to dip their toe into the topic than through a game? The puzzles are fun, so even those who don’t necessarily plan to study quantum physics will come away with a better understanding of it,” says Talia Gershon of IBM Research.

All too often, we see playing games as just “goofing off” in order to escape from the “real world.” The truth is that, by allowing us to go beyond our immediate context, games let us learn skills that would be difficult, and in some cases impossible, for us to acquire directly. That has the potential to enhance not only our skills, but our lives.

The truth is that humans don’t compete with machines, we co-evolve with them. Yes, they make some skills obsolete, but they open the door for us to learn new ones and that can enhance and enrich our lives. As the skills we need to learn increasingly exceed our everyday experience, we’ll find ourselves playing more games.

Source: Digital Tonto

How AI is Revolutionizing Marketing

With computing power now virtually limitless, the possible applications of artificial intelligence on all aspects of consumer culture likewise seem to be without end. This phenomenon has the potential to launch marketing into a new golden age, and companies that do not take advantage of the new technologies at their disposal are severely limiting their reach and potential. AI and machine learning can be a boon to marketers like no other, but not until these marketers understand how it can work for them and for customers.

AI’s burgeoning presence in marketing not only provides a more satisfying customer experience, but it also boosts marketing campaign effectiveness and opens opportunities for businesses of all sizes and in all industries. Here are four of the myriad ways AI is transforming the marketing landscape for the better.

Personalized Content

Perhaps AI’s most valuable impact in marketing is its ability to understand user preferences and interpret data from them to present consumers with the content most relevant to their interests. Just like Netflix observes what users watch and recommends similar titles accordingly, websites use AI to evaluate extensive amounts of data from users’ browsing habits and present them with information that suits their preferences exactly.

According to Infosys, 86% of consumers noted that personalization impacted their purchases. And 56% of consumers actually expect their interactions with brands to be personalized. Any company that misses out on this opportunity to connect with consumers is running the risk of losing business and customer loyalty.
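Under the hood, the simplest version of this kind of personalization is content-based filtering: represent both the user’s browsing history and each item as weighted tag vectors, then rank unseen items by cosine similarity. The tags, weights, and titles below are invented for illustration:

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two sparse tag-weight vectors (dicts)."""
    dot = sum(a[k] * b[k] for k in a if k in b)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def recommend(user_profile, catalog, top_n=2):
    """Rank catalog items by similarity to the user's tag profile."""
    scored = [(title, cosine(user_profile, tags)) for title, tags in catalog.items()]
    return [title for title, _ in sorted(scored, key=lambda x: x[1], reverse=True)[:top_n]]

# tag weights accumulated from what the user clicked on (illustrative)
user = {"thriller": 0.9, "crime": 0.7, "comedy": 0.1}
catalog = {
    "Heist Night": {"thriller": 1.0, "crime": 1.0},
    "Laugh Track": {"comedy": 1.0},
    "Cold Case":   {"crime": 1.0, "thriller": 0.5},
}
print(recommend(user, catalog))  # the two crime thrillers outrank the comedy
```

Production recommenders layer collaborative filtering and learned embeddings on top of this, but the principle is unchanged: the closer an item sits to what a user has already engaged with, the higher it ranks.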

Predictive Analytics

Predictive analytics can provide marketers with insights into consumer behaviors by creating detailed mathematical models based on data received from each individual customer. A report by Aberdeen Group summarized:

“Predictive technologies can enable more precise segmentation of potential buyers and facilitate a deeper understanding of those buyers, their needs, and their motivations. By optimizing the marketing offer and message directed at these buyers, predictive analytics provides an effective path to delivering better marketing ROI – as evidenced by the superior click-through rates and incremental sales lift.”

Their study revealed a 76% increase in click-through rates through the use of predictive analytics. By programming AI to efficiently carry out these processes, marketing teams can observe the patterns and develop effective strategies accordingly.

Customer Engagement

AI can also be employed to identify certain customer segments that are not as engaged as others, and curate content based on the information it gathers from them. Companies like Dynamic Yield have developed AI engines to acquire insights on each customer through millions of data points that, when analyzed, can determine which customers are loyal to your brand and which ones need more incentivizing. This allows for deeper consumer relationships, where each individual customer feels seen and has their preferences acknowledged. Targeted communications like these have the potential to increase revenue growth by 10-30%, according to McKinsey & Company.
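Dynamic Yield’s engine analyzes millions of data points, but the underlying segmentation idea can be shown with a classic recency/frequency (RFM-style) rule of thumb. The thresholds and customer records here are illustrative, not drawn from any vendor:

```python
from datetime import date

def rfm_segment(customers, today=date(2024, 1, 1)):
    """Label customers by recency and frequency of purchase.

    `customers` maps name -> (last purchase date, purchases per year).
    Thresholds below are illustrative rules of thumb.
    """
    segments = {}
    for name, (last_purchase, purchases_per_year) in customers.items():
        days_since = (today - last_purchase).days
        if days_since <= 30 and purchases_per_year >= 12:
            segments[name] = "loyal"       # engaged: reward, don't discount
        elif days_since > 180:
            segments[name] = "lapsing"     # needs a win-back incentive
        else:
            segments[name] = "occasional"  # nurture with relevant content
    return segments

customers = {
    "alice": (date(2023, 12, 20), 18),
    "bob":   (date(2023, 5, 2), 4),
    "carol": (date(2023, 11, 1), 6),
}
print(rfm_segment(customers))
```

An AI engine replaces the hand-written thresholds with ones learned from outcome data, but the output it feeds to marketers is the same: a label per customer that decides who gets which incentive.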

Efficiency and Revenue Growth

It is no secret that AI can outperform humans at many menial tasks. But developing AI systems to take over simple processes can free up the time and capabilities of a business’s human employees, so they can focus more on their specialized skills. Their time can then be redirected to focus on other more complex areas, like content creation, and leave the busy work to the machines. How much does it actually help, you may ask? According to Harvard Business Review and Infosys, the addition of AI has been shown to reduce customer acquisition costs by 50% and increase revenue by 43%.

The possibilities of this new frontier in marketing are vast, as are the rewards. From opening new job opportunities in AI programming, to helping to prevent fraud, the inclusion of computer learning in marketing efforts means benefits for consumers and creators alike.

The numbers don’t lie: There is profit and efficiency in computer learning. Companies that employ it are developing lasting customer relationships and rapidly outpacing their competitors. If you haven’t incorporated AI into your company’s marketing strategy, you’re falling behind. The landscape is being transformed before our eyes—it’s time to take advantage of the new opportunities while they last.

Source: LinkedIn