As financial products become more sophisticated and consumer choices multiply, people at all economic levels face bewildering decisions about what options are best for them. In what kind of funds should I invest my retirement money? Should I choose a fixed or adjustable mortgage? Which credit card makes the most sense for me? Among the most complicated choices consumers must make is selecting a health insurance plan. The rules governing coverage are varied and complex, and costs and benefits are difficult to calculate. People often make poor decisions and wind up choosing insurance that costs more and doesn’t provide the best service.
Research has found that efforts to help consumers make financial decisions through education or provision of information have limited impact. But could algorithm-based online tools that assist or even replace human decision making do a better job? Algorithm-based advice is already in wide use in areas such as retirement planning and is increasingly available in the health insurance market. For example, the Centers for Medicare and Medicaid Services gives beneficiaries the option of using online software to guide them in plan choice.
But surprisingly little research has been done on how algorithm-based assistance affects consumer choices.
We found that a custom-designed decision support tool had several positive effects, including increased plan switching, more time spent choosing a plan, cost reduction, and greater satisfaction with the decision-making process.
This policy brief explores the benefits and some of the challenges that tools like these offer.
Enrolling in Medicare Part D requires beneficiaries to choose among private plans offered where they live. Part D is available either as a stand-alone prescription drug plan or as a feature bundled into a Medicare Advantage medical and drug plan, and recipients pay a premium for the benefit. Our study looked at choices of stand-alone plans, examining whether a carefully designed algorithmic online tool can improve choices, help beneficiaries save money, and boost consumer satisfaction. We also wanted to identify the characteristics of people most or least likely to take advantage of such a resource.
We carried out a field trial during the 2017 Medicare Part D open enrollment period in November and December 2016 in cooperation with the Palo Alto Medical Foundation (PAMF), a large Northern California physician group.
Our decision tool, called CHOICE, featured a user-friendly design, automatically imported beneficiary prescription drug information from the PAMF database, projected total cost, and assigned a quality rating to each plan based on consumer evaluations. We also contracted with a third party that constructed a personalized “expert score” for each insurance plan based on projected cost and quality ratings.
A group of 29,451 PAMF patients between the ages of 66 and 85 who were eligible for stand-alone Medicare drug coverage were invited to take part, but only 928 ultimately completed the study. Most of those participants lived in the heart of Silicon Valley, one of the most affluent, educated, and technologically proficient areas of the country. Racial and ethnic minorities, women, and people living in areas with lower incomes and education levels were less likely to respond to our invitation. In these respects, participants were not representative of the Medicare beneficiary population, but they likely resemble the enrollees most inclined to use algorithmic decision support.
Enrollees were randomly divided into three groups: a control group; an information-only group, which received individualized plan cost information through CHOICE; and an information plus expert recommendation group, which also received the personalized “expert score” for each plan.
We looked at four primary outcomes to assess the effects of using CHOICE: whether study participants switched Part D plans for 2017; changes in beneficiaries’ expected monthly costs; how satisfied people were with the selection process; and how much conflict they experienced in making a decision, which reflected factors such as their confidence that they made the right choice and their understanding of risks and benefits. We also considered the amount of time people reported spending on making a choice and whether they enrolled in one of the three expert recommended plans.
Our main finding was that use of our algorithmic decision support tool helped improve the plan choice process in several ways.
People who used the tool were more likely to actively shop and switch plans compared with members of the control group. They also took more time making a decision, saved more money, and reported less conflict and more satisfaction with the decision-making process.
These effects were significantly more pronounced among study participants in the information plus expert recommendation group. People in that group switched plans at a 38 percent rate compared with a 28 percent rate for members of the control group. By contrast, switching in the information-only group was not significantly greater than in the control group.
We estimate that people in the information plus expert recommendation group saved an average of about $71 per month in premiums and out-of-pocket costs, while those in the information-only group saved about $18.
If we applied those larger savings numbers to the nearly 25 million people enrolled in Medicare Part D, and assumed a participation rate equivalent to that in our experiment, total annual savings would be on the order of $680 million. This is particularly notable given that the tool itself cost less than $1.8 million to develop.
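The extrapolation above can be reproduced with simple arithmetic. The sketch below uses only the figures reported in this brief (29,451 invited, 928 completers, $71 average monthly savings, roughly 25 million Part D enrollees); the raw product lands near the $680 million figure, with small differences reflecting rounding in the reported inputs.

```python
# Back-of-envelope replication of the savings extrapolation in this brief.
# All inputs are figures stated in the text; the result is an order-of-magnitude estimate.

invited = 29_451           # PAMF patients invited to the trial
completed = 928            # participants who completed the study
participation_rate = completed / invited

part_d_enrollees = 25_000_000   # approximate national Part D enrollment
monthly_savings = 71            # avg. monthly savings, information + expert recommendation group

# Apply the trial's participation rate to national enrollment, then annualize.
annual_savings = part_d_enrollees * participation_rate * monthly_savings * 12

print(f"Participation rate: {participation_rate:.1%}")
print(f"Projected annual savings: ${annual_savings / 1e6:.0f} million")
```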
It’s important to stress that a relatively small subset of those we approached ultimately chose to take part in the trial, and they differed in important ways from those who did not participate. Participants, for example, had more experience with information technology, as measured by their use of their physician’s electronic medical record.
We also used machine learning methods to predict how nonparticipants would have responded; the results suggest that those who chose not to participate would, on average, have been even more responsive to the algorithmic recommendation.
Our study has three important implications for policymakers.
First, our research indicates that a well-designed algorithmic decision support tool can help people make better financial choices. While our tool incorporated many features intended to simplify the user experience, we found that individually customized information was most effective when accompanied by an expert recommendation.
Second, when algorithmic recommendations are bundled with a web-based tool, they are unlikely to reach the consumers who might benefit the most. Our trial disproportionately attracted consumers who were more affluent and better educated than average and who made heavier use of online resources.
Yet our analysis suggests that those who did not enroll in the trial were precisely the people most likely to respond to algorithmic advice.
We believe public policy initiatives involving more intensive intervention could potentially widen use of these resources. Many public benefit or insurance programs, including Social Security and Medicare, require beneficiaries to make complex choices. Medicare currently offers a decision support tool on its website, but the tool is voluntary, difficult for many older adults to use, and unlikely to reach many of the people who need it most.
A better approach may be to find ways of engaging people less inclined to use decision support software, perhaps by reaching out to them with personalized information and expert recommendations in ways that don’t require them to access that information online.
The results of our study also highlight areas for caution. We found that people not only learned more about product features but also changed how they valued those products in response to an “expert recommendation.” This underscores the importance of ensuring that consumers can evaluate the criteria underlying such recommendations, to protect themselves against fraud and manipulation.
Research reported in this presentation was funded through a Patient-Centered Outcomes Research Institute (PCORI) Award (CDR-1306-03598). The statements in this presentation are solely the responsibility of the authors and do not necessarily represent the views of the Patient-Centered Outcomes Research Institute (PCORI), its Board of Governors or Methodology Committee.
Ming Tai-Seale, Cheryl Stults, Amy Meehan, Roman Klimke, Ting Pun, Albert Solomon Chan, Alison Baskin, Sayeh Fattahi
i Stults, Cheryl D., Alison Baskin, Ming Tai-Seale, and M. Kate Bundorf, “Patient Experiences in Selecting a Medicare Part D Prescription Drug Plan,” Journal of Patient Experience, 2018, 5 (2), 147–152.
ii Stults, Cheryl D., Sayeh Fattahi, Amy Meehan, M. Kate Bundorf, Albert S. Chan, Ting Pun, and Ming Tai-Seale, “Comparative Usability Study of a Newly Created Patient-Centered Tool and Medicare.gov Plan Finder to Help Medicare Beneficiaries Choose Prescription Drug Plans,” Journal of Patient Experience, 2018, 6 (1), 81–86.
iii Bundorf, M. Kate, Maria Polyakova, Cheryl Stults, Amy Meehan, Roman Klimke, Ting Pun, Albert Solomon Chan, and Ming Tai-Seale, “Machine-Based Expert Recommendations and Insurance Choices Among Medicare Part D Enrollees,” Health Affairs, 2019, 38 (3), 482–490.
iv Bundorf, M. Kate, Maria Polyakova, and Ming Tai-Seale, “How Do Humans Interact with Algorithms? Experimental Evidence from Health Insurance,” NBER Working Paper 25976, 2019.