Using Artificial Intelligence to Reduce the Risk of Nonadherence in Patients on Anticoagulation Therapy

Past, Present and Future of AI / Machine Learning (Google I/O ’17)


We are in the middle of a major shift in computing that’s transitioning us from a mobile-first world into one that’s AI-first. AI will touch every industry and transform the products and services we use daily. Breakthroughs in machine learning have enabled dramatic improvements in the quality of Google Translate, made your photos easier to organize with Google Photos, and enabled improvements in Search, Maps, YouTube, and more.


Artificial intelligence prevails at predicting Supreme Court decisions


“See you in the Supreme Court!” President Donald Trump tweeted last week, responding to lower court holds on his national security policies. But is taking cases all the way to the highest court in the land a good idea? Artificial intelligence may soon have the answer. A new study shows that computers can do a better job than legal scholars at predicting Supreme Court decisions, even with less information.

Several other studies have guessed at justices’ behavior with algorithms. A 2011 project, for example, used the votes of any eight justices from 1953 to 2004 to predict the vote of the ninth in those same cases, with 83% accuracy. A 2004 paper tried seeing into the future, by using decisions from the nine justices who’d been on the court since 1994 to predict the outcomes of cases in the 2002 term. That method had an accuracy of 75%.

The new study draws on a much richer set of data to predict the behavior of any set of justices at any time. Researchers used the Supreme Court Database, which contains information on cases dating back to 1791, to build a general algorithm for predicting any justice’s vote at any time. They drew on 16 features of each vote, including the justice, the term, the issue, and the court of origin. Researchers also added other factors, such as whether oral arguments were heard.

For each year from 1816 to 2015, the team created a machine-learning statistical model called a random forest. It looked at all prior years and found associations between case features and decision outcomes. Decision outcomes included whether the court reversed a lower court’s decision and how each justice voted. The model then looked at the features of each case for that year and predicted decision outcomes. Finally, the algorithm was fed information about the outcomes, which allowed it to update its strategy and move on to the next year.
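The year-by-year scheme described above can be sketched in a few lines. This is a simplified illustration, not the paper's actual code: the real model is a random forest over 16-plus case features, while the stand-in "model" here just predicts the majority outcome seen in all prior years, then folds each year's true outcomes back into its history before moving on. All data below is invented.

```python
# Walk-forward evaluation: for each year, learn only from prior years,
# predict that year's outcomes, then update with the true outcomes.
from collections import Counter

def walk_forward_accuracy(cases_by_year):
    """cases_by_year: dict mapping year -> list of outcomes ('reverse'/'affirm')."""
    history = []          # outcomes from all prior years
    correct = total = 0
    for year in sorted(cases_by_year):
        outcomes = cases_by_year[year]
        if history:       # need at least one prior year before predicting
            majority = Counter(history).most_common(1)[0][0]
            correct += sum(1 for o in outcomes if o == majority)
            total += len(outcomes)
        history.extend(outcomes)  # the model "updates" and moves to the next year
    return correct / total if total else 0.0

toy = {
    1816: ["reverse", "affirm", "reverse"],
    1817: ["reverse", "reverse", "affirm"],
    1818: ["affirm", "reverse", "reverse"],
}
print(round(walk_forward_accuracy(toy), 3))  # 4 of 6 predictions correct: 0.667
```

The key property, which the real study shares, is that no information from a given year leaks into the model before that year's cases are predicted.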

From 1816 until 2015, the algorithm correctly predicted 70.2% of the court’s 28,000 decisions and 71.9% of the justices’ 240,000 votes, the authors report in PLOS ONE. That bests the popular betting strategy of “always guess reverse,” which would have been right in 63% of Supreme Court cases over the last 35 terms. It’s also better than another strategy that uses rulings from the previous 10 years to automatically issue a “reverse” or an “affirm” prediction. Even knowledgeable legal experts are only about 66% accurate at predicting cases, the 2004 study found. “Every time we’ve kept score, it hasn’t been a terribly pretty picture for humans,” says the study’s lead author, Daniel Katz, a law professor at Illinois Institute of Technology in Chicago.

Roger Guimerà, a physicist at Rovira i Virgili University in Tarragona, Spain, and lead author of the 2011 study, says the new algorithm “is rigorous and well done.” Andrew Martin, a political scientist at the University of Michigan in Ann Arbor and an author of the 2004 study, commends the new team for producing an algorithm that works well over 2 centuries. “They’re curating really large data sets and using state-of-the-art methods,” he says. “That’s scientifically really important.”

Outside the lab, bankers and lawyers might put the new algorithm to practical use. Investors could bet on companies that might benefit from a likely ruling. And appellants could decide whether to take a case to the Supreme Court based on their chances of winning. “The lawyers who typically argue these cases are not exactly bargain basement priced,” Katz says.

Attorneys might also plug different variables into the model to forge their best path to a Supreme Court victory, including which lower court circuits are likely to rule in their favor, or the best type of plaintiff for a case. Michael Bommarito, a researcher at Chicago-Kent College of Law and study co-author, offers a real example in National Federation of Independent Business v. Sebelius, in which the Affordable Care Act was on the line: “One of the things that made that really interesting was: Was it about free speech, was it about taxation, was it about some kind of health rights issues?” The algorithm might have helped the plaintiffs decide which issue to highlight.

Future extensions of the algorithm could include the full text of oral arguments or even expert predictions. According to Katz: “We believe the blend of experts, crowds, and algorithms is the secret sauce for the whole thing.”

Source: Science Magazine

The battle to build chips for the AI boom is about to get serious


This month, MIT Technology Review examines the AI boom. As machine learning has blossomed, the technique has become the hot ticket for businesses keen to innovate (or at least to sound like they plan to). That’s proven to be good news for anyone building hardware that runs AI software—and until now, that has really meant Nvidia, which happily found that the graphics processors it had been making for years were surprisingly well suited to crunching AI problems. But our own Tom Simonite explains that Nvidia’s dominance may be about to slide.

Car Makers Drive Hard Towards AI Advances


Today’s cars are all about mobility — not just the kind that transports people and things, but also data mobility. Cars are more connected than ever, and they generate far more data, which manufacturers are working to collect, process, and apply to AI development.

When the average person thinks about the connected car — whether it is fully automated or packed with sensors that alert the driver to possible dangers — what comes to mind is the experience for the person in that driver’s seat. In fact, the information the driver sees represents only a tiny fraction of all the data collected through the sensor system. The amount of data collected is indeed vast, and car makers are now working on ways to ingest and process it effectively.

Ford just reported that $200 million of its planned $1.2 billion investment in three Michigan manufacturing facilities is earmarked for “an advanced data center to support the company’s expansion to an auto and a mobility company.” The company already has one in operation. It is building the second in anticipation of “its data usage to increase 1,000 percent — driven by manufacturing and business needs and new mobility services, such as more connected, autonomous and electrified vehicles.”

Separately, Toyota has announced that it is partnering with the NTT Group to jointly tackle driving issues such as preventing accidents and managing traffic by leveraging big data. The goal is to achieve “a sustainable Smart Mobility Society in the future.”

The path toward that future may begin as early as 2018. That’s the year slated for Toyota’s field trial “to assess the feasibility and usability of representative services in the connected car field.”

To get to that point, it will be working with NTT on four areas of collaboration:
1. Platform for data collection, accumulation, and analysis
2. IoT networks and data centers
3. Next-generation communication technologies (5G, edge computing)
4. Agents

The last category is one that draws on AI through a system that makes sense of the data both within and outside the car. Its application is what enables motorists to experience “user-friendly services,” including hands-free activation technology.

Toyota is also applying AI to the development of next-generation, lower-emission energy. The Toyota Research Institute (TRI) plans to spend about $35 million over the next four years on research in collaboration with universities and other research organizations, including Stanford University, the Massachusetts Institute of Technology, the University of Michigan, the University at Buffalo, the University of Connecticut, and Ilika, a materials science company based in the UK.

TRI Chief Science Officer Eric Krotkov declared, “Toyota recognizes that artificial intelligence is a vital basic technology that can be leveraged across a range of industries, and we are proud to use it to expand the boundaries of materials science.”

In applying AI to material discovery and development, TRI and its partners plan not just to develop pioneering “models and materials for batteries and fuel cells” but to come up with whole new approaches to the application “of machine learning, artificial intelligence, and materials informatics” to accelerate progress. The plan is to innovate “automated materials discovery systems that integrate simulation, machine learning, artificial intelligence, and/or robotics.”

Though TRI is a distinct entity owned by Toyota, its research goals dovetail with those of the car company’s collaboration with NTT. TRI’s founding principles include a commitment to apply automated technologies to improving safety and enabling people who are incapable of driving under existing conditions to enjoy mobility. While mobility in that context means transportation of people, it is helped along by mobile data. It is on that basis that analytics and AI can be used to identify new possibilities.

Source: allanalytics.com

Innovation Through Crowdsourcing and AI


If artificial intelligence (AI) is the future, the future is now, and it’s all around us. Despite what science fiction and futuristic fantasy may have you believe, AI isn’t all about recreating human consciousness. Rather, it’s a practical, efficient way to help business technology get smarter as a product gains traction. AI allows companies to use insights from a large community of users to continually improve upon their products.

However, AI isn’t all games and robots. It takes a cross-sectional skill set to successfully implement good AI, and in order to do so, companies need to both understand their consumers’ motivations and capitalize on them using the right tools.

AI and Crowdsourcing: Better Together
Plenty of businesses rely on data from the usual suspects — business analytics, internal data, information gathered by employees — but few understand how to actively manage data contributed by users. Alexa and Siri are prime examples of how AI can leverage this crowdsourced information to improve the customer-company relationship.

Using crowdsourcing to gather human-contributed information and funneling that information through AI technology is the simplest path toward more meaningful insights. This method allows business owners to stop hunting down insights one at a time and to instead receive targeted data to inform smarter business decisions. This collaboration produces results that are greater than the sum of the parts.

The value lies in asking the right questions at the right time using AI and reporting the findings to the people who could benefit from the information. Collectively, crowdsourcing and AI produce truly intelligent market research.
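One concrete step in such a pipeline is reconciling conflicting answers from the crowd. The sketch below is a toy illustration, not any particular vendor's method: it weights each contributor's vote by a reputation score and returns the highest-weighted answer. All names, answers, and scores are hypothetical.

```python
# Toy aggregation step for a crowdsourcing + AI pipeline: weight each
# contributor's answer by a (hypothetical) reputation score and pick the
# answer with the highest total weight.
from collections import defaultdict

def aggregate(answers, reputation):
    """answers: list of (contributor, answer); reputation: contributor -> weight."""
    scores = defaultdict(float)
    for contributor, answer in answers:
        # unknown contributors get a neutral weight of 1.0
        scores[answer] += reputation.get(contributor, 1.0)
    return max(scores, key=scores.get)

answers = [("ana", "route A"), ("ben", "route B"), ("cho", "route A")]
reputation = {"ana": 0.9, "ben": 0.5, "cho": 0.7}
print(aggregate(answers, reputation))  # "route A" outweighs "route B", 1.6 to 0.5
```

In a production system the reputation scores would themselves be learned — for example, from how often a contributor's past answers were later confirmed.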

Teamwork Makes the Dream Work
Companies must integrate crowdsourcing and AI to produce a scalable, intelligent model capable of handling their needs indefinitely. Many implement AI without really identifying how it can help them. Other, more tech-focused companies often find themselves trying to lay crowdsourcing on top of existing technology without properly understanding how to motivate their crowds.

Crowdsourcing matches people to questions that the community needs answered. Those with information to share can provide feedback on their fields of expertise according to the needs of those searching for that information. Meanwhile, AI technology filters out the answers and extracts meaningful intelligence from them, creating a powerful advantage over companies that fail to combine these tools to their full potential. What advantages, you ask? Here are a few:

1. A streamlined end-user experience. Alexa, Siri, Waze, and Skype Translator are all embodiments of the improved end-user experience thanks to crowdsourced AI insights. In the early stages, using these tools can be frustrating as they continue to gather data.

Waze took traffic navigation — something very few people like — and improved it with real-time updates, personalized vocal guides, and other features.

A wealth of information is the foundation. AI and crowdsourcing can build on that base to create a valuable, magical experience.

2. It brings outsiders into the fold. Waze began by gathering information on the patterns of power commuters, eventually building up enough data via crowdsourcing and expert consultations to create optimal route maps for its users. Thanks to the app, people new to an area can have the same driving experience as someone who has lived there for 10 years.

The powerful crowdsourcing-AI combination can bring any outsider into any inner circle. The more comfortable users are with the information — especially if they contributed it — the more likely they are to become repeat visitors.

3. Lots of intelligence, all in one spot. Currently, business leaders must track down information in silos. For example, only a specific department can answer specific questions, and help is often stalled while a department waits for approval from another division.

When crowdsourcing and AI gather that scattered intelligence in one place, the resulting streamlining of processes even allows leaders to see connections they otherwise might have overlooked.

While some technologies introduce only complications to established processes, the power of combining crowdsourcing with AI is worth the disruption. If your company is looking for better insights and new advantages, consider the benefits of this powerful merger.

Source: innovationexcellence.com

AI-powered diagnostic device takes out Tricorder XPrize


Launched in 2012, the Qualcomm Tricorder XPrize tasked competing teams with developing a portable and versatile medical diagnostics machine that would give people “unprecedented access” to information about their health. The contest has now been run and won, with an AI-powered device awarded top honors and US$2.5 million for its trouble.

This particular XPrize – a series of competitions aimed at solving global issues – was created to encourage the development of a device that mimicked the iconic Tricorder from Star Trek. More specifically, this meant the ability to diagnose 13 conditions including anemia, diabetes, sleep apnea and urinary tract infections, along with the ability to detect three of five additional diseases: HIV, hypertension, melanoma, shingles and strep throat.

The competition was whittled down to ten finalists in 2014, and then again to two in December last year. The Taiwan-based Dynamical Biomarkers Group took second place with its prototype for a smartphone-based diagnostics device, but was beaten out by Final Frontier Medical Devices from Pennsylvania.

The winning machine is called DxtER, and it uses artificial intelligence to teach itself to diagnose medical conditions. It does this with a set of non-invasive sensors that check vital signs, body chemistry, and biological functions, and it draws on data from clinical emergency medicine and actual patients. All of this data is then synthesized by the AI engine, and the device spits out a “quick and accurate assessment.”
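DxtER's AI engine is not public, but the basic idea of mapping sensor readings to a diagnosis can be illustrated with a toy nearest-neighbor sketch. Every reading, reference case, and label below is invented for illustration; a real diagnostic engine would use far richer data and validated models.

```python
# Toy illustration: classify a condition by finding the closest labeled
# reference case to a new sensor reading (Euclidean distance). All numbers
# and labels are hypothetical, not DxtER's actual data or method.
import math

# (heart_rate_bpm, temp_C, spo2_percent) -> label
reference = [
    ((72, 36.8, 98), "normal"),
    ((110, 39.2, 96), "infection"),
    ((95, 36.9, 88), "sleep apnea pattern"),
]

def classify(reading):
    """Return the label of the reference case nearest to the reading."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(reference, key=lambda case: dist(case[0], reading))[1]

print(classify((108, 39.0, 97)))  # closest to the "infection" case
```

Note that the raw features here have very different scales, so a real system would normalize each sensor channel before computing distances.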

In addition to the $2.5 million, the Final Frontier and Dynamical Biomarkers Group teams (which received a not-too-shabby $1 million for second place) will benefit from ongoing support and funding from XPrize and its partners. This includes R&D partnerships with the US Food and Drug Administration and the University of California San Diego. Meanwhile, Lowe’s Home Improvements has committed to distributing a consumer-ready version of the device, while the General Hospital of Maputo in Mozambique will provide it to its doctors, nurses and patients.

“We could not be more pleased with the quality of innovation and performance of the teams who competed, particularly with teams Final Frontier and Dynamical Biomarkers Group,” said Marcus Shingles, CEO of the XPrize Foundation. “Although this XPrize competition phase has ended, XPrize, Qualcomm Foundation, and a network of strategic partners are committed and excited to now be entering a new phase which will support these teams in their attempt to scale impact and the continued evolution of the Tricorder device through a series of new post-competition initiatives.”

Source: Newatlas.com