AI is all about instant customer satisfaction

analytics anywhere

Our brains are wired to love, and become addicted to, instant rewards. Any delay in gratification creates stress; just remember how you feel when a web page takes over 3 seconds to load. We crave technologies that go even faster than our brains. Companies like Google, Amazon, Booking.com, and Uber have been harnessing the benefits of instant reward to boost their sales for years. AI is just the next logical step, and there is no way back: the faster you go, the more consumers buy from you, and the faster you want to go.

The goal is not to replace human work but to expand your capacity to deliver the instant value and relevance that your customers crave and that you cannot currently provide. Hotels chronically complain about being understaffed and struggling to keep pace. That leaves two choices: embrace AI as an opportunity, or keep running a Formula 1 race on a bicycle.

Cloud AI — the opportunity for hotels

The market for AI is no longer the privilege of a few multi-billion-dollar companies. Cloud AI solutions have become widely available to hotels, which can capitalize on their power at virtually no cost.

Big Data: A new generation of booking engines led by companies such as Avvio are able to learn from customer demographics and adapt their display to better fit the preferences of each customer.

Chatbots: Technologies such as Quicktext and Zoe bot engage customers on your direct channels, helping your online visitors immediately access relevant information while capturing data that either the chatbot or you can act on to increase sales.

Grow out of your Terminator fantasy

Some people mix fiction and reality, either because they are afraid of AI’s potential or, on the contrary, because they expect to see a full human being. This confusion happens because we use terms such as intelligence, neural networks, and deep learning. It is true that AI is inspired by our brain, but we frequently draw inspiration from nature to solve challenges. Most of the time we can recognize where the inspiration comes from, but the final product is usually quite far from the original model.

With AI it is exactly the same. It can apply some basic logic, but it remains focused on specific use cases. So, if you want to profit from AI, you need realistic expectations. For instance, chatbots are currently able to manage frequently requested tasks such as providing specific information, booking a room, or finding relevant places around the hotel. They handle repetitive tasks that none of your employees want to do, and they have become very good at them, often better than humans. However, virtual assistants are not able to serve customers outside of their perimeter; that is when you move from autopilot to manual. Taking AI for what it is, rather than for your wildest dreams, will enable you to realize that it can benefit your business today.
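
The "perimeter" idea above can be sketched in a few lines: route messages whose intent the bot recognizes, and hand everything else off to a human. This is a hypothetical sketch with made-up intents, not any vendor's actual implementation:

```python
# Minimal sketch of a chatbot's perimeter: answer known intents,
# hand off to a human for anything else. Intents are hypothetical.

INTENTS = {
    "book": "Sure, let's book a room. What dates?",
    "parking": "Yes, we have free on-site parking.",
    "restaurant": "Here are three restaurants within a 5-minute walk.",
}

def reply(message):
    """Return a canned answer for a recognized intent, else escalate."""
    text = message.lower()
    for keyword, answer in INTENTS.items():
        if keyword in text:
            return answer
    return "HANDOFF_TO_HUMAN"  # outside the bot's perimeter: manual mode

print(reply("Can I book a room for Friday?"))
print(reply("My grandmother needs a gluten-free kosher menu"))
```

Real chatbots use trained intent classifiers rather than keyword matching, but the routing logic, including the human fallback, has the same shape.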

Source: Becoming Human

AI predictions for 2019 from Yann LeCun, Hilary Mason, Andrew Ng, and Rumman Chowdhury

Artificial intelligence is cast all at once as the technology that will save the world and end it.

To cut through the noise and hype, VentureBeat spoke with luminaries whose views on the right way to do AI have been informed by years of working with some of the biggest tech and industry companies on the planet.

Below find insights from Google Brain cofounder Andrew Ng, Cloudera general manager of ML and Fast Forward Labs founder Hilary Mason, Facebook AI Research founder Yann LeCun, and Accenture’s responsible AI global lead Dr. Rumman Chowdhury. We wanted to get a sense of what they saw as the key milestones of 2018 and hear what they think is in store for 2019.

Amid a recap of the year and predictions for the future, some said they were encouraged to be hearing fewer Terminator AI apocalypse scenarios, as more people understand what AI can and cannot do. But these experts also stressed a continued need for computer and data scientists in the field to adopt responsible ethics as they advance artificial intelligence.

Dr. Rumman Chowdhury

Dr. Rumman Chowdhury is managing director of the Applied Intelligence division at Accenture and global lead of its Responsible AI initiative, and was named to BBC’s 100 Women list in 2017. Last year, I had the honor of sharing the stage with her in Boston at Affectiva’s conference to discuss matters of trust surrounding artificial intelligence. She regularly speaks to audiences around the world on the topic.

For the sake of time, she responded to questions about AI predictions for 2019 via email. All responses from the other people in this article were shared in phone interviews.

Chowdhury said in 2018 she was happy to see growth in public understanding of the capabilities and limits of AI and to hear a more balanced discussion of the threats AI poses — beyond fears of a global takeover by intelligent machines as in The Terminator. “With that comes increasing awareness and questions about privacy and security, and the role AI may play in shaping us and future generations,” she said.

Public awareness of AI still isn’t where she thinks it needs to be, however, and in the year ahead Chowdhury hopes to see more people take advantage of educational resources to understand AI systems and be able to intelligently question AI decisions.

She has been pleasantly surprised by the speed with which tech companies and people in the AI ecosystem have begun to consider the ethical implications of their work. But she wants to see the AI community do more to “move beyond virtue signaling to real action.”

“As for the ethics and AI field — beyond the trolley problem — I’d like to see us digging into the difficult questions AI will raise, the ones that have no clear answer. What is the ‘right’ balance of AI- and IoT-enabled monitoring that allows for security but resists a punitive surveillance state that reinforces existing racial discrimination? How should we shape the redistribution of gains from advanced technology so we are not further increasing the divide between the haves and have-nots? What level of exposure to children allows them to be ‘AI natives’ but not manipulated or homogenized? How do we scale and automate education using AI but still enable creativity and independent thought to flourish?” she asked.

In the year ahead, Chowdhury expects to see more government scrutiny and regulation of tech around the world.

“AI and the power that is wielded by the global tech giants raises a lot of questions about how to regulate the industry and the technology,” she said. “In 2019, we will have to start coming up with the answers to these questions — how do you regulate a technology when it is a multipurpose tool with context-specific outcomes? How do you create regulation that doesn’t stifle innovation or favor large companies (who can absorb the cost of compliance) over small startups? At what level do we regulate? International? National? Local?”

She also expects to see the continued evolution of AI’s role in geopolitical matters.

“This is more than a technology, it is an economy- and society-shaper. We reflect, scale, and enforce our values in this technology, and our industry needs to be less naive about the implications of what we build and how we build it,” she said. For this to happen, she believes people need to move beyond the idea common in the AI industry that if we don’t build it, China will, as if creation alone is where power lies.

“I hope regulators, technologists, and researchers realize that our AI race is about more than just compute power and technical acumen, just like the Cold War was about more than nuclear capabilities,” she said. “We hold the responsibility of recreating the world in a way that is more just, more fair, and more equitable while we have the rare opportunity to do so. This moment in time is fleeting; let’s not squander it.”

On a consumer level, she believes 2019 will see more use of AI in the home. Many people have become much more accustomed to using smart speakers like Google Home and Amazon Echo, as well as a host of smart devices. On this front, she’s curious to see if anything especially interesting emerges from the Consumer Electronics Show — set to kick off in Las Vegas in the second week of January — that might further integrate artificial intelligence into people’s daily lives.

“I think we’re all waiting for a robot butler,” she said.

Andrew Ng

I always laugh more than I expect to when I hear Andrew Ng deliver a whiteboard session at a conference or in an online course. Perhaps because it’s easy to laugh with someone who is both passionate and having a good time.

Ng is an adjunct computer science professor at Stanford University whose name is well known in AI circles for a number of different reasons.

He’s the cofounder of Google Brain, an initiative to spread AI throughout Google’s many products, and the founder of Landing AI, a company that helps businesses integrate AI into their operations.

He’s also the instructor of some of the most popular machine learning courses on YouTube and Coursera, the online learning company he cofounded; he also founded deeplearning.ai and wrote the book Machine Learning Yearning.

In 2017, after more than three years there, he left his post as chief scientist at Baidu, another tech giant that he helped transform into an AI company.

Finally, he’s also part of the $175 million AI Fund and on the board of driverless car company Drive.ai.

Ng spoke with VentureBeat earlier this month when he released the AI Transformation Playbook, a short read about how companies can unlock the positive impacts of artificial intelligence for their own companies.

One major area of progress or change he expects to see in 2019 is AI being used in applications outside of tech or software companies. The biggest untapped opportunities in AI lie beyond the software industry, he said, citing use cases from a McKinsey report that found that AI will generate $13 trillion in GDP by 2030.

“I think a lot of the stories to be told next year [2019] will be in AI applications outside the software industry. As an industry, we’ve done a decent job helping companies like Google and Baidu but also Facebook and Microsoft — which I have nothing to do with — but even companies like Square and Airbnb, Pinterest, are starting to use some AI capabilities. I think the next massive wave of value creation will be when you can get a manufacturing company or agriculture devices company or a health care company to develop dozens of AI solutions to help their businesses.”

Like Chowdhury, Ng was surprised by the growth in understanding of what AI can and cannot do in 2018, and pleased that conversations can take place without focusing on killer-robot scenarios or fear of artificial general intelligence.

Ng said he intentionally responded to my questions with answers he didn’t expect many others to have.

“I’m trying to cite deliberately a couple of areas which I think are really important for practical applications. I think there are barriers to practical applications of AI, and I think there’s promising progress in some places on these problems,” he said.

In the year ahead, Ng is excited to see progress in two specific areas of AI/ML research that help advance the field as a whole. One is AI that can arrive at accurate conclusions with less data, something called “few-shot learning” by some in the field.

“I think the first wave of deep learning progress was mainly big companies with a ton of data training very large neural networks, right? So if you want to build a speech recognition system, train it on 100,000 hours of data. Want to train a machine translation system? Train it on a gazillion pairs of sentences of parallel corpora, and that creates a lot of breakthrough results,” Ng said. “Increasingly I’m seeing results on small data where you want to try to take in results even if you have 1,000 images.”

The other is advances in computer vision referred to as “generalizability.” A computer vision system might work great when trained with pristine images from a high-end X-ray machine at Stanford University. And many advanced companies and researchers in the field have created systems that outperform a human radiologist, but they aren’t very nimble.

“But if you take your trained model and you apply it to an X-ray taken from a lower-end X-ray machine or taken from a different hospital, where the images are a bit blurrier and maybe the X-ray technician has the patient slightly turned to their right so the angle’s a little bit off, it turns out that human radiologists are much better at generalizing to this new context than today’s learning algorithms. And so I think interesting research [is on] trying to improve the generalizability of learning algorithms in new domains,” he said.

Yann LeCun

Yann LeCun is a professor at New York University, Facebook chief AI scientist, and founding director of Facebook AI Research (FAIR), a division of the company that created PyTorch 1.0 and Caffe2, as well as a number of AI systems — like the text translation AI tools Facebook uses billions of times a day or advanced reinforcement learning systems that play Go.

LeCun believes the open source policy FAIR adopts for its research and tools has helped nudge other large tech companies to do the same, something he believes has moved the AI field forward as a whole. LeCun spoke with VentureBeat last month ahead of the NeurIPS conference and the fifth anniversary of FAIR, an organization he describes as interested in the “technical, mathematical underbelly of machine learning that makes it all work.”

“It gets the entire field moving forward faster when more people communicate about the research, and that’s actually a pretty big impact,” he said. “The speed of progress you’re seeing today in AI is largely because of the fact that more people are communicating faster and more efficiently and doing more open research than they were in the past.”

On the ethics front, LeCun is happy to see progress in simply considering the ethical implications of work and the dangers of biased decision-making.

“The fact that this is seen as a problem that people should pay attention to is now well established. This was not the case two or three years ago,” he said.

LeCun said he does not believe ethics and bias in AI have yet become a major problem requiring immediate action, but he believes people should be ready for that.

“I don’t think there are … huge life and death issues yet that need to be urgently solved, but they will come and we need to … understand those issues and prevent those issues before they occur,” he said.

Like Ng, LeCun wants to see more AI systems capable of the flexibility that leads to robust systems that do not require pristine input data or exact conditions for accurate output.

LeCun said researchers can already manage perception rather well with deep learning but that a missing piece is an understanding of the overall architecture of a complete AI system.

He said that teaching machines to learn through observation of the world will require self-supervised learning, or model-based reinforcement learning.

“Different people give it different names, but essentially human babies and animals learn how the world works by observing and figure out this huge amount of background information about it, and we don’t know how to do this with machines yet, but that’s one of the big challenges,” he said. “The prize for that is essentially making real progress in AI, as well as machines, to have a bit of common sense and virtual assistants that are not frustrating to talk to and have a wider range of topics and discussions.”

For applications that will help internally at Facebook, LeCun said significant progress toward self-supervised learning will be important, as well as AI that requires less data to return accurate results.

“On the way to solving that problem, we’re hoping to find ways to reduce the amount of data that’s necessary for any particular task like machine translation or image recognition or things like this, and we’re already making progress in that direction; we’re already making an impact on the services that are used by Facebook by using weakly supervised or self-supervised learning for translation and image recognition. So those are things that are actually not just long term, they also have very short term consequences,” he said.

In the future, LeCun wants to see progress made toward AI that can establish causal relationships between events. That’s the ability to not just learn by observation, but to have the practical understanding, for example, that if people are using umbrellas, it’s probably raining.

“That would be very important, because if you want a machine to learn models of the world by observation, it has to be able to know what it can influence to change the state of the world and that there are things you can’t do,” he said. “You know if you are in a room and a table is in front of you and there is an object on top of it like a water bottle, you know you can push the water bottle and it’s going to move, but you can’t move the table because it’s big and heavy — things like this related to causality.”

Hilary Mason

After Cloudera acquired Fast Forward Labs in 2017, Hilary Mason became Cloudera’s general manager of machine learning. Fast Forward Labs, while absorbed into Cloudera, is still in operation, producing applied machine learning reports and advising customers to help them see six months to two years into the future.

One advancement in AI that surprised Mason in 2018 was related to multitask learning, which can train a single neural network to apply multiple kinds of labels when inferring, for example, objects seen in an image.
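
The multitask setup described above can be sketched minimally: a shared "trunk" computes one representation, and a separate head per label type reads predictions off it. This is an illustrative sketch with random weights, not Cloudera's or anyone's actual model:

```python
# Toy multitask network: one shared representation, two task heads
# (e.g. object class and scene class for the same image). Weights are
# random; this only illustrates the architecture, not a trained model.
import numpy as np

rng = np.random.default_rng(0)

W_shared = rng.normal(size=(64, 32))  # shared trunk, reused by both tasks
W_object = rng.normal(size=(32, 5))   # head 1: 5 hypothetical object classes
W_scene = rng.normal(size=(32, 3))    # head 2: 3 hypothetical scene classes

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def forward(x):
    """One forward pass: shared features, then one prediction per task."""
    h = np.maximum(0, x @ W_shared)  # shared hidden layer (ReLU)
    return softmax(h @ W_object), softmax(h @ W_scene)

x = rng.normal(size=(1, 64))  # stand-in for extracted image features
object_probs, scene_probs = forward(x)
print(object_probs.shape, scene_probs.shape)  # (1, 5) (1, 3)
```

During training, the losses from all heads would be summed and backpropagated through the shared trunk, which is what lets the tasks benefit from each other.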

Fast Forward Labs has also been advising customers on the ethical implications of AI systems. Mason sees a wider awareness of the necessity of putting some kind of ethical framework in place.

“This is something that since we founded Fast Forward — so, five years ago — we’ve been writing about ethics in every report but this year [2018] people have really started to pick up and pay attention, and I think next year we’ll start to see the consequences or some accountability in the space for companies and for people who pay no attention to this,” Mason said. “What I’m not saying very clearly is that I hope that the practice of data science and AI evolve as such that it becomes the default expectation that both technical folks and business leaders creating products with AI will be accounting for ethics and issues of bias and the development of those products, whereas today it is not the default that anyone thinks about those things.”

As more AI systems become part of business operations in the year ahead, Mason expects that product managers and product leaders will begin to make more contributions on the AI front because they’re in the best position to do so.

“I think it’s clearly the people who have the idea of the whole product in mind and understand the business understand what would be valuable and not valuable, who are in the best position to make these decisions about where they should invest,” she said. “So if you want my prediction, I think in the same way we expect all of those people to be minimally competent using something like spreadsheets to do simple modeling, we will soon expect them to be minimally competent in recognizing where AI opportunities in their own products are.”

The democratization of AI, its expansion beyond data science teams to other corners of a company, is something several players have emphasized: Google, with Cloud AI products like Kubeflow Pipelines and AI Hub, and the consultancy CI&T, which advises clients on ensuring AI systems are actually utilized within a company.

Mason also thinks more and more businesses will need to form structures to manage multiple AI systems.

Like an analogy sometimes used to describe challenges faced by people working in DevOps, Mason said, managing a single system can be done with hand-deployed custom scripts, and cron jobs can manage a few dozen. But when you’re managing tens or hundreds of systems, in an enterprise that has security, governance, and risk requirements, you need professional, robust tooling.

Businesses are shifting from having pockets of competency or even brilliance to having a systematic way to pursue machine learning and AI opportunities, she said.

The emphasis on containers for deploying AI makes sense to Mason, since Cloudera recently launched its own container-based machine learning platform. She believes this trend will continue in years ahead so companies can choose between on-premise AI or AI deployed in the cloud.

Finally, Mason believes the business of AI will continue to evolve, with common practices across the industry, not just within individual companies.

“I think we will see a continuing evolution of the professional practice of AI,” she said. “Right now, if you’re a data scientist or an ML engineer at one company and you move to another company, your job will be completely different: different tooling, different expectations, different reporting structures. I think we’ll see consistency there.”

Source: venturebeat.com

When AI and Data Analytics Meet Healthcare

The Telegraph covered a robotic revolution in the healthcare sector and predicted an increase in robotic systems in hospitals in the coming decade. Insights from 2016 indicate that about 86% of healthcare provider organizations and technology vendors to healthcare are using artificial intelligence technology. Institutions across the globe, including doctors, hospitals, insurance companies, and industries with ties to healthcare, are adopting automation, machine learning, and artificial intelligence (AI).

Here are a few of the many ways AI and data analytics are paving the road to better healthcare.

1. Mining Medical Records and Devising Treatment Plans

In a day, a radiologist attends to almost 200 patients and 3,000 medical images. Today, every person who visits a medical practitioner has a medical record created, and the number of records will only grow in the coming years. Analyzing this data and determining a treatment plan consumes valuable time. AI can help reduce the workload and expedite the medical process with the help of patient data mining.

MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) created ‘ICU Intervene’, a machine-learning approach that collects a significant amount of ICU data ranging from medical to demographic details. Through this data, the AI can determine the types of treatment the patients need and quicken the diagnosis to save critical time.

“Data gathered and presented by AI algorithms will enable healthcare providers and doctors to see patients’ health risks and take more precise, early action to prevent, lessen the impact of or forestall disease progression. These interventions will curb healthcare costs and lead to improved patient health outcomes,” said Derek Gordon, COO of Lumiat, to Cygnismedia.

2. Assisting in Repetitive Jobs and Future Prediction

Routine jobs such as X-rays, CT scans, and data entry can be offloaded to an AI assistant.

In cardiology and radiology, the analysis and compilation of data not only consumes crucial time but is also prone to error. AI can prove more accurate and helpful in such scenarios: it can read CT scans and medical reports and provide a diagnosis based on similar images stored in the database.

In fact, the Chicago start-up CareSkore uses a cloud-based predictive analytics platform. Using its Zeus algorithm in real time, CareSkore predicts the likelihood of an individual’s hospitalization after studying a range of behavioral, demographic, and clinical data.

3. Blending Physical and Virtual Consultations

Chatbots used in the healthcare sector interact with patients through telephone, text, or website to schedule appointments and follow-ups, handle billing, process urgent customer-care requests 24×7, and so on. They help reduce the overall administrative cost of the hospital.

Medical Virtual Assistants (MVA) collect and compile a patient’s medical and demographic details. M-health apps help people track their health and notify patients about upcoming appointments. They are also programmed to answer the basic health-related or medical queries of a patient.

4. Medication Management

AI-enabled systems can track patients’ data and suggest treatments based on analysis. An Israeli start-up developed AI algorithms nearly as accurate as, or even more precise than, humans at the early detection of conditions such as coronary aneurysms, brain bleeds, malignant tissue in breast mammography, and osteoporosis. Such assistance can significantly augment medical procedures.

IBM Watson launched a unique program for oncologists in which the AI studies all the structured and unstructured data of a patient and suggests treatment pathways to the doctor.

5. Finding the Right Talent in Healthcare

As the healthcare industry grows, there is always a need for qualified healthcare professionals. Often, hiring managers receive hundreds of resumes per open role. When shortlisting candidates for interviews, they use various data points such as filtering out candidates with too many or too few years of experience. Beyond this level of filtering, many companies are using AI chatbot software for recruitment. For example, Accenture uses Min, an AI virtual recruiter to hire data scientists in Singapore. This helps recruiters save time, improve efficiency, and make fair hiring decisions. For candidates, the chatbot engages, interviews, and shortlists them 24/7.

6. Helping People Make Better Health Choices

Based on people’s demographic, behavioral, and medical data, AI-enabled systems can predict health risks in advance and warn people accordingly. Six months after El Camino Hospital in Silicon Valley applied artificial intelligence, the rate of patients with fatal diseases fell by 39 percent.

As per OECD estimates and figures from the United States Institute of Medicine, the top 15 countries by healthcare expenditure waste an average of between $1,100 and $1,700 per person annually. AI-powered health app solutions help healthcare systems avoid needless hospitalizations.

Not only does AI help doctors by advising on treatment solutions, it also enables people to lead better, healthier lifestyles.

Source: datasciencecentral.com

Can AI Address Health Care’s Red-Tape Problem?

Productivity in the United States’ health care industry is declining — and has been ever since World War II. As the cost of treating patients continues to rise, life expectancy in America is beginning to fall. But there is mounting evidence that artificial intelligence (AI) can reverse the downward spiral in productivity by automating the system’s labyrinth of labor-intensive, inefficient administrative tasks, many of which have little to do with treating patients.

Administrative and operational inefficiencies account for nearly one third of the U.S. health care system’s $3 trillion in annual costs. Labor is the industry’s single largest operating expense, with six out of every 10 people who work in health care never interacting with patients. Even those who do can spend as little as 27% of their time working directly with patients. The rest is spent in front of computers, performing administrative tasks.

Using AI-powered tools capable of processing vast amounts of data and making real-time recommendations, some hospitals and insurers are discovering that they can reduce administrative hours, especially in the areas of regulatory documentation and fraudulent claims. This allows health care employees to devote more of their time to patients and focus on meeting their needs more efficiently.

To be sure, as we’ve seen with the adoption of electronic health records (EHR), the health care industry has a track record of dragging its feet when it comes to adopting new technologies — and for failing to maximize efficiency gains from new technologies. It was among the last industries to accept the need to digitize, and by and large has designed digital systems that doctors and medical staff dislike, contributing to warnings about burnout in the industry.

Adopting AI, however, doesn’t require the Herculean effort electronic health records (EHRs) did. Where EHRs required billions of dollars in investment and multi-year commitments from health systems, AI is more about targeted solutions. It involves productivity improvements made in increments by individual organizations without the prerequisite collaboration and standardization across health care players required with EHR adoption.

Indeed, AI solutions dealing with cost-cutting and reducing bureaucracy — where AI could have the biggest impact on productivity — are already producing the kind of internal gains that suggest much more is possible in health care players’ back offices. In most cases, these are experiments launched by individual hospitals or insurers.

Here, we analyze three ways AI is chipping away at mundane, administrative tasks at various health care providers and achieving new efficiencies.

Faster Hospital Bed Assignments

Quickly assigning patients to beds is critical to both the patients’ recovery and the financial health of hospitals. Large hospitals typically employ teams of 50 or more bed managers who spend the bulk of their day making calls and sending faxes to various departments vying for their share of the beds available. This job is made more complex by the unique requirements of each patient and the timing of incoming bed requests, so it’s not always a case of not enough beds but rather not enough of the right type at the right time.

Enter AI with the capability to help hospitals more accurately anticipate demand for beds and assign them more efficiently. For instance, by combining bed availability data and patient clinical data with projected future bed requests, an AI-powered control center at Johns Hopkins Hospital has been able to foresee bottlenecks and suggest corrective actions to avoid them, sometimes days in advance.

As a result, since the hospital introduced its new system two years ago, Johns Hopkins can assign beds 30% faster. This has reduced the need to keep surgery patients in recovery rooms longer than necessary by 80% and cut the wait time for beds for incoming emergency room patients by 20%. The new efficiencies also permitted Hopkins to accept 60% more transfer patients from other hospitals.

All of these improvements mean more hospital revenue. Hopkins’s success has prompted Humber River Hospital in Toronto and Tampa General Hospital in Florida to create their own AI-powered control centers as well.
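
The forecasting behind such a control center can be illustrated with a toy sketch: project free beds forward from expected discharges and incoming requests, and flag the days where demand outruns supply. The numbers below are hypothetical, and this is not the actual Johns Hopkins system:

```python
# Toy bed-bottleneck forecast with hypothetical numbers. Each day,
# beds freed by discharges are netted against incoming requests; a
# negative projected balance is flagged as a bottleneck in advance.

def forecast_bottlenecks(free, discharges, requests):
    """Return (day, beds_short) pairs for projected shortfall days."""
    bottlenecks = []
    for day, (d, r) in enumerate(zip(discharges, requests), start=1):
        free = free + d - r  # projected free beds at end of this day
        if free < 0:
            bottlenecks.append((day, -free))  # day and beds short
            free = 0  # shortfall becomes a waiting queue, not negative stock
    return bottlenecks

free_beds_today = 12
expected_discharges = [5, 3, 6, 4]   # beds freed on days 1..4
expected_requests = [8, 9, 4, 10]    # beds requested on days 1..4

print(forecast_bottlenecks(free_beds_today, expected_discharges, expected_requests))
# → [(4, 1)]: one bed short on day 4, known three days in advance
```

A real system would layer patient-specific bed requirements and uncertainty onto this, but the core value is the same: seeing the shortfall days ahead of time, so corrective action can be taken before it occurs.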

Easier and Improved Documentation

Rapid collection, analysis and validation of health records is another place where AI has begun to make a difference. Health care providers typically spend nearly $39 billion every year to ensure that their electronic health records comply with about 600 federal guidelines. Hospitals assign about 60 people to this task on average, one quarter of whom are doctors and nurses.

This calculus changes when providers use an AI-powered tool developed in cooperation with electronic health record vendor Cerner Corporation. Embedded in physicians’ workflow, the AI tool created by Nuance Communications offers real-time suggestions to doctors on how to comply with federal guidelines by analyzing both patient clinical data and administrative data.

By following the AI tool’s recommendations, some health care providers have cut the time spent on documentation by up to 45% while simultaneously making their records 36% more compliant.

Automated Fraud Detection

Fraud, waste, and abuse also continue to be a consistent drain. Despite an army of claims investigators, they cost the industry as much as $200 billion annually.

While AI won’t eliminate those problems, it does help insurers better identify the claims that investigators should review — in many cases, even before they are paid — to more efficiently reduce the number of suspect claims making it through the system. For example, startup Fraudscope has already saved insurers more than $1 billion by using machine learning algorithms to identify potentially fraudulent claims and alert investigators prior to payment. Its AI system also prioritizes the claims that will yield the most savings, ensuring that time and resources are used where they will have the greatest impact.
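The triage logic described above can be sketched as follows. This is an illustrative toy, not Fraudscope's proprietary model: the risk signals, thresholds, and claim fields are all invented, and the hand-written scoring function stands in for a trained machine learning classifier. The key idea survives the simplification: score each claim for fraud risk before payment, then order the review queue by expected savings so investigators spend time where it matters most.

```python
# Illustrative pre-payment fraud triage. The scoring rules below are
# invented stand-ins for a trained ML model.

def fraud_risk(claim):
    """Return a stand-in risk score in [0, 1] from a few hand-picked
    signals; a real system would use a trained classifier."""
    score = 0.0
    if claim["procedures_per_visit"] > 10:      # unusually dense billing
        score += 0.4
    if claim["billed_amount"] > 5 * claim["regional_average"]:
        score += 0.4                            # far above regional norm
    if claim["provider_prior_flags"] > 0:       # provider history
        score += 0.2
    return min(score, 1.0)

def review_queue(claims, threshold=0.5):
    """Claims worth reviewing before payment, highest expected savings
    (risk score x billed amount) first."""
    flagged = [c for c in claims if fraud_risk(c) >= threshold]
    return sorted(flagged,
                  key=lambda c: fraud_risk(c) * c["billed_amount"],
                  reverse=True)

# Hypothetical claims.
claims = [
    {"id": "A", "procedures_per_visit": 12, "billed_amount": 9000,
     "regional_average": 1000, "provider_prior_flags": 1},
    {"id": "B", "procedures_per_visit": 3, "billed_amount": 800,
     "regional_average": 900, "provider_prior_flags": 0},
    {"id": "C", "procedures_per_visit": 11, "billed_amount": 2000,
     "regional_average": 1500, "provider_prior_flags": 1},
]
print([c["id"] for c in review_queue(claims)])
```

Ranking by expected savings rather than raw risk is the design choice that matters here: a near-certain $100 overbilling is a smaller prize than a probable $9,000 one.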

Getting Ready for AI

When it comes to cutting health care’s administrative burden through AI, we have only begun to scratch the surface. But the industry’s ability to amplify that impact will be constrained unless it removes certain impediments.

First, health care organizations must simplify and standardize data and processes before AI algorithms can work with them. For example, hospitals cannot efficiently find available beds unless all departments define bed space in the same terms.

Second, health care providers will have to break down the barriers that usually exist between the customized and conflicting information technology systems of different departments. AI can only automate the transfer of patients from operating rooms to intensive care units (ICUs) if both departments’ IT systems are able to communicate with each other.

Finally, the industry’s productivity will not improve as long as too many health care personnel continue in jobs that don’t add value to the business by improving outcomes. Health care players need to begin reducing their workforces by taking advantage of the industry’s 20% attrition rate and automating tasks, rather than filling positions on autopilot.

The task of improving productivity in health care by automating administrative tasks with AI will not be completed quickly or easily. But the progress already achieved by AI solutions is encouraging enough for some to wonder whether re-investing savings from it might also ultimately cut the overall cost of health care as well as improve its quality. For an industry known for its glacial approach to change, AI offers more than a little light at the end of a long tunnel.

Source: Harvard Business Review

Driverless car makers could face jail if AI causes harm

AI technologies which harm workers could lead to their creators being prosecuted, according to the British government.


Makers of driverless vehicles and other artificial intelligence systems could face jail and multi-million pound fines if their creations harm workers, according to the Department of Work and Pensions.

Responding to a written parliamentary question, government spokesperson Baroness Buscombe confirmed that existing health and safety law “applies to artificial intelligence and machine learning software”. This clarifies one aspect of the law around AI, a subject of considerable debate in academic, legal and governmental circles.

Under the Health and Safety at Work Act 1974, directors found guilty of “consent or connivance” or neglect can face up to two years in prison.

This provision of the Health and Safety Act is “hard to prosecute,” said Michael Appleby, a health and safety lawyer at Fisher Scoggins Waters, “because directors have to have their hands on the system.”

However, when AI systems are built by startups, it might be easier to establish a clear link between the director and the software product.

Companies can also be prosecuted under the Act, with fines relative to the firm’s turnover. If the company has a revenue greater than £50 million, the fines can be unlimited.

The Health and Safety Act has never been applied to a case of artificial intelligence and machine learning software, so these provisions will need to be tested in court.

Source: Sky.com

3 ways to make better decisions by thinking like a computer

If you ever struggle to make decisions, here’s a talk for you. Cognitive scientist Tom Griffiths shows how we can apply the logic of computers to untangle tricky human problems, sharing three practical strategies for making better decisions — on everything from finding a home to choosing which restaurant to go to tonight.