Beyond the health tech hype: generating evidence in digital health
From Evidation Health and Rock Health
New disruptive digital health solutions are entering the market in droves, with great potential to improve health outcomes and lower costs. Few, however, have the clinical and economic evidence to substantiate their claims of utility. In the absence of this evidence, payers, providers, and other healthcare purchasers may struggle to identify the most impactful digital health products and services. We hosted Lantern, Mango Health, and CrowdMed, with Evidation Health as moderator, to discuss opportunities and strategies for differentiating your company with evidence of impact. Here’s what we learned.
Evidation Health: How are you thinking about evidence generation strategy?
Megan Jones (Lantern): We’re a bit biased in how we think about generating evidence, because we emerged out of a lab at Stanford that developed and ran clinical trials on mobile-based cognitive behavioral therapy (CBT) programs. Their research showed mobile CBT works—but the challenge was commercialization and making the product accessible. Programs coming out of research need to function as well as they do in controlled studies. When we create a new program for Lantern, we go through the clinical review process: small randomized controlled trials on safety and accuracy, and then work with a third party for further evidence generation so the results and evidence are unbiased.
Jared Heyman (CrowdMed): We started as a B2C company, but then shifted to focus on enterprise customers (a B2B2C model). We discovered that if you are trying to partner with health plans, at-risk providers, and employers, you are dealing with very risk-averse, conservative, evidence-based organizations. We knew we had to have evidence to back our claims, especially around health outcomes, cost reduction, and utilization—and it was not good enough to have just testimonials; we needed actual clinical evidence.
Dr. Carolyn Jasik (Mango Health): We started as a B2C company as well and later pivoted to B2B2C. Most of the company came from a tech/gaming background. Now, every time we start thinking about a new product, we do a clinical supplement, including a literature search to see what has been tested and what has worked in the past—and then review the research with our product manager. Once we identify the gaps and how we can fill them with enterprise partnerships, we work with every partner on what they are looking for in terms of evidence—and every partnership has an evaluation plan in place.
“Programs coming out of research need to function as well as they do in controlled studies.”
Why generate evidence?
Jared (CrowdMed): When I met with Sean [Duffy] from Omada, the first thing he said was that you need a peer-reviewed, published academic paper that backs up your claims and shows your product works. You can’t just come in with marketing claims—academic literature is the only thing people will read.
Carolyn (Mango): We found our enterprise customers were asking for data, and we needed to share that data in a transparent way. We learned it isn’t always necessary to have an actual paper during these conversations, but you should at least have a plan to write one—and understanding the limitations of our plan and data helped show we knew what to do with clinical evidence.
Megan (Lantern): You need to show that your product works clinically—that’s the first step, but not the entire process. It is not enough to convince someone to pay for it—you need to answer other questions. Does it save money? Do people use the product? You need to assess other metrics—and different outcomes matter for different business partners.
What are the methods of generating evidence you have considered or used?
Megan (Lantern): The most important thing to us is objectivity in evidence generation—that is critical. We want to design all evidence generation to limit conflicts of interest. We started with a lot of academic collaborations, and it was important that recruitment, data management, analysis, and authorship be handled externally to keep peer-reviewed publications objective. We also needed to find common ground with academic partners (understanding their goals and how to align with them). We also look for opportunities to validate the metrics that matter to other customers or employers.
We have a lot of support from strategic partners, but it’s a much slower process, especially relative to the normal sales cycle. If a product release relies on evidence, you need that data quickly. Collaboration is critical to getting evidence quickly.
Jared (CrowdMed): We also needed third-party objectivity and speed, especially since startups move fast. We had neither the expertise nor the bandwidth to do studies ourselves, and we didn’t have enterprise partners yet—the whole point was to get enterprises on board with this evidence. In the future it would be great to do a similar study in partnership with a health plan, looking at that plan’s members. If we could prove that we get the same results in their population, that would be helpful, but we wanted to keep it simple at first to optimize for speed.
Carolyn (Mango): You need to figure out a way to generate evidence that makes sense for your team—put most of your resources into sales and secure enterprise partnerships so you can learn from those partners. We need to know what type of data they want to see to confirm the product is doing what they want it to do. Look at what stage your company is in, what you are trying to accomplish, and how evidence generation contributes to that. It is helpful to have people on your team who speak the same language as enterprise partners. We have been asked what our evaluation plan is—and being prepared for that was really helpful.
“The most important thing to us is objectivity in evidence generation”
What’s your current evidence strategy?
Jared (CrowdMed): We have one published paper in JMIR, and another under review. The first was written independently by academics and looked at patient-provided data that we had already collected. We collect tons of clinical and outcomes data showing how we impact work productivity compared to the traditional medical system. The data was patient-reported, which is imperfect—but still data. Our second paper was written by Evidation and is currently under peer review. It looks at health plan data before and after our product is implemented—specifically, provider visit frequency and patients’ medical costs before and after CrowdMed. That was not patient-reported; it showed the hard dollars that were charged [to the health plan]—demonstrating that we have a statistically significant and large impact on these metrics. Having self-reported data is good, but medical claims data is a lot more solid and will make more of a splash when published.
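To make the pre/post idea concrete, here is a minimal sketch—not CrowdMed’s or Evidation’s actual analysis—of a paired before/after comparison on per-member claims costs. The numbers, variable names, and choice of test are illustrative assumptions.

```python
# Hypothetical sketch of a pre/post claims comparison; data is illustrative.
from scipy import stats

costs_before = [1200.0, 860.0, 2400.0, 450.0, 1750.0, 980.0]  # per-member costs, pre-period
costs_after = [900.0, 640.0, 1900.0, 400.0, 1200.0, 700.0]    # same members, post-period

# Paired t-test: each member serves as their own control.
t_stat, p_value = stats.ttest_rel(costs_before, costs_after)

mean_savings = sum(b - a for b, a in zip(costs_before, costs_after)) / len(costs_before)
print(f"mean per-member savings: ${mean_savings:.2f}, p = {p_value:.3f}")
```

The paired design matters here: comparing each member to their own prior utilization controls for stable member-level differences that a simple group average would blur.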
Megan (Lantern): Our first step was replicating results from programs that prevented the onset of eating disorders. We needed to follow subjects for four years to show we could prevent the onset of eating disorders in high-risk groups, and that we could identify high-risk individuals. This has been published. We have a number of studies underway, looking at things like whether our stress program reduces burnout and how our mood program actually improves mood. We are now doing that with different populations—a lot of research has been done with college students, but now we need to look at provider systems. With psych products, it is not enough that the impact is statistically significant—you need to see a degree of change that is actually clinically meaningful.
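The distinction between statistical and clinical significance is easy to illustrate with an effect size. Below is an illustrative sketch—not Lantern’s actual analysis—contrasting a p-value with Cohen’s d on hypothetical mood-score data; the scores and the d ≥ 0.5 rule of thumb are assumptions.

```python
# Illustrative contrast of statistical significance vs. effect size.
import statistics
from scipy import stats

scores_control = [14, 15, 13, 16, 14, 15, 13, 14]  # hypothetical mood symptom scores
scores_program = [11, 12, 10, 13, 11, 12, 10, 11]  # lower = better

t_stat, p_value = stats.ttest_ind(scores_program, scores_control)

def cohens_d(a, b):
    # Difference in means scaled by the pooled standard deviation.
    pooled_var = (statistics.variance(a) + statistics.variance(b)) / 2
    return (statistics.mean(a) - statistics.mean(b)) / pooled_var ** 0.5

# A common rule of thumb treats d >= 0.5 as a moderate, clinically notable effect.
print(f"p = {p_value:.4f}, d = {cohens_d(scores_control, scores_program):.2f}")
```

A tiny improvement can reach p < 0.05 in a large enough sample; the effect size tells you whether the change is big enough to matter to a patient or a clinician.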
Carolyn (Mango): We first used in-app data to look at self-reported changes in different outcomes—and had third parties analyze it, which was a great way to establish clinical validation. It is a good idea not to do this in-house—the results are a lot more credible and easier for people to think about and understand. In our second project, we are writing a paper with enterprise partners, which is not a fast route to publication. Our third model is more of a prospective study design, where we design trials and implement them at a center.
Megan (Lantern): It’s important for us to understand what success looks like in our minds. We know clinically that the product works—so then we need to check ROI and cost-effectiveness, and what will contribute to the long-term success of the product. We need to know what is gating entry to that conversation—if we can’t show our product works, then we can’t have that conversation at all. We need early and strategic partnerships—studies should be designed to give partners the data they need without being burdensome for the company.
Carolyn (Mango Health): For us, doing partnerships early on allowed us to get pharmacy claims data. Pulling the trigger on a big study was a difficult decision, but it is important to know when the best time is to run different studies.
Jared (CrowdMed): It’s always good to do just enough to get to the next stage—you don’t want to take on so much that you are overwhelmed. As soon as we decided to make our business model pivot, we knew that having published evidence would be the cost of admission—what we’d need to be taken seriously at a large health plan. We didn’t want overkill in our evidence generation process, but we needed it to be credible to a CMO at a large plan that we could reduce costs and provider visit frequency. Once that step was done, we focused on what evidence would get us to the next stage.
“Collaboration is critical to getting evidence quickly.”
What challenges have you encountered in efforts to date?
Carolyn (Mango): We were not always set up for research. At first we were not designing studies properly from the start—they were observational, which made it difficult to draw conclusions. The second thing we learned was that if you think you may need IRB approval, you probably do. There is a fine line between a research study and a quality improvement study. To publish, you need IRB involvement and subject consent.
Megan (Lantern): We had two learnings. One was dealing with the blurry line between company program creation and research. We needed to involve advisors early so everyone knew what the product was from the beginning, and to set clear expectations about what can and cannot change once research starts—we don’t want to change the product mid-study. The second learning is to design studies that reflect how you plan to sell or implement in your target population. You need to validate what you are going to turn around and sell—so when you do sell, you can deliver results that meet the same expectations.
Jared (CrowdMed): Try to start with as large a sample size as you can, especially if you are looking at retrospective data.
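A quick power calculation can tell you up front whether your sample is big enough to detect the effect you expect. This is a minimal sketch of that check; the effect size, alpha, and power values are illustrative assumptions, not numbers from any of the panelists’ studies.

```python
# Minimal power-analysis sketch: solve for the sample size needed per group.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.3,  # assumed small-to-moderate expected effect (Cohen's d)
    alpha=0.05,       # significance level
    power=0.8,        # desired probability of detecting the effect
)
print(f"members needed per group: {n_per_group:.0f}")
```

Running the check before committing to a study is cheap; discovering an underpowered design after data collection is not.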
What’s your one sentence advice?
Megan (Lantern): Having somebody on your team who can assist with evidence generation and infuse it throughout the company is critical—this doesn’t prevent you from working with external partners; it probably helps.
Jared (CrowdMed): Don’t do it alone—leverage external expertise.
Carolyn (Mango): Put evidence generation on the MVP list—make it a priority from the beginning. Thinking about evidence generation too late is a problem if you are trying to secure enterprise partnerships.