This article features discussion highlights from the 1st Fellows.fund Round Table Discussion, which took place on July 17th, 2021. Speakers include: Vijay K Narayanan, Daniel Kokotov, Lei Yang, Haixun Wang, Xuedong Huang, Anshul Pande, Gang Hua, Alex Ren
Full video of Fellows Fund Summit 2021: Tech Entrepreneurship in the AI Era
Fireside chat between Eric Yuan and Xuedong Huang
Challenges for AI in Healthcare
Hello everyone, my name is Anshul Pande, and I'm the CTO of Stanford Children's Health. We're based in Palo Alto, deeply focused on pediatric and maternal health, and tied to the Stanford University School of Medicine.
I’d like to share some insights here. In the last three years, we've been on a high-speed journey of digital transformation, tremendously accelerated by COVID because there wasn't any other option available. As a technologist, it's great to be in the middle of this, because before, we would get pushback that either providers or patients wouldn't take to it. So it was a perfect window to push through a tremendous amount of change within the organization. Beyond the process changes, going to a 40x increase requires a whole new set of tools to support it.
The second piece is that it brought back the idea of “hospital at home,” or remote patient monitoring, which had been in slow-burn incubation for five to seven years. There was this bigger desire to avoid going to the hospital if you don't have to, or to keep hospital stays as short as possible: we'll take care of you at home as much as we can. As a result, the data streams coming back are much bigger, because the monitors we used to provide only in hospitals are now available at home.
However, physicians are unprepared to handle this much incoming data. Let’s take diabetes as an example. If you've been doing remote patient monitoring for diabetes for seven years, how do you bring all of that data back in?
Do you use Apple's, Microsoft's, or Google's core infrastructure for this piece? Once all of this data comes in, a physician has 15 minutes with the patient, so how do you make it valuable? How do you use this much data to provide meaningful guidance on how they should change their lifestyle or recover as quickly as possible?
Since last year, we've been working on a visualization layer that presents the data gracefully and ties it to inputs from the patient, so we can correlate how their body is handling sugar or how their medication is working for them. That lets us give them a better framework for taking care of themselves. This will grow for us as the data streams become 10x those of other disease cases. However, it requires a lot of machine learning, because we can't adjust models fast enough to apply to the data we're getting at the moment. Handling that discrepancy will be a big area.
We are in a country that has more radiologists per capita than any other country in the world. The biggest challenge, as these machines start getting deployed in other countries, is the ability to use real-time machine learning models so that someone who has just been trained to operate the machine can get a quick interpretation to help a local doctor, versus needing a specialist radiologist. In the last test we ran, the model actually performed as well as or better than about 20 radiologists with varying levels of experience, across 26 different disease states. So it's really interesting how quickly the models are getting better. The trickier part is whether you can apply that model automatically to a brand-new data set. The re-learning portion is still complicated.
Vijay K Narayanan
One thing I keep hearing is that it works great when the data comes from one or a few machines, but when it comes from a totally new machine, the quality is different, the imaging parameters are different, and models don't handle that very well. How much of that do you see as a problem? How much of it is being solved?
On the imaging side, data quality is increasing tremendously. In the last two years, for some of the imaging we do, the image sizes have grown 10x or 20x, and that's just one modality. Part of it is that the products, the actual imaging machines, are getting better; the other part is that the computing engines are getting better, so you can slice more finely. Think of slicing a solid model: the slices used to be a certain thickness, and now they're even thinner. On one side, yes, that adds to the complexity of the problem. On the other side, you have more fidelity, and at the end of the day, that's what we care about. I believe it will remain a problem, but I don't think it'll remain one for long. It will definitely remain a problem from a modeling perspective, because you need to tweak your model parameters based on the image quality, whether you are using indigenous models or not.
We have the generic solution working right now, although the “last mile solution” is missing. How are you going to marry that stack with the problem in a business sense? That's where there is money waiting for your solution.
I think it is about education. From a pure technology perspective, machine learning is just one hammer among many tools. Part of it is re-educating people: “here are the tools available,” and “here's an example of how to use this tool to solve that problem.” On the industrial side, when I meet my compatriots, a lot of them don't even know what the hammer could be. So another part of it is just opening people's eyes to what's available.
You need a hammer, but you also need ammunition: a way to get data. Once you have both, you've basically solved the problem.
I want to add something. Whatever data and tools we have, machine learning or not, if you want to solve a problem, you need to understand the domain first. By marrying domain knowledge with machine learning, you can potentially solve the problem.
The difference between variants of models is the data. If you have more data, you can even leverage last-generation solutions. How do you get data that matches the problems society needs solved? That is the key missing link. The tools will get better and more robust, and the amount of data people need for customization will shrink. This is amazing in terms of improvement, but the question remains: how do you get this hammer and marry it with the problem you have?
For example, in my speech recognition startup, when people got a word wrong and asked us to fix it, I couldn't fix that one wrong word. I can't just tune some parameters to make it correct.
That actually brings up a very important problem. Think about it: you have errors in the model. In a traditional software system, it's usually a bug, and you fix it in a certain way. But the problem with current statistics-based AI models is that you simply don't know how to fix them, and that makes them hard to maintain over the long term. You don't know whether the change you introduce now is going to cause you some other trouble later.
Vijay K Narayanan
If you look at these action-and-engagement systems: usually you take data from across multiple systems, pull it into models that come up with predictions, and then you have to transfer the output into different systems to take an action. If it's a CRM system, you might have a sales lead that turns the prediction into an actual action. However, every system works differently. It's not just an integration challenge; you're also changing the entire process.
To give an example in healthcare, take radiologists. Yes, you can come up with a great model, but you still have to figure out many variables, such as how the radiologists' day-to-day work changes as a result, and what the change management looks like. Some of these processes have been codified for decades, and radiologists are specifically trained in them.
Current systems work, so why change them? It takes a long time to change a process.
You may provide a 20% or 40% better return, but there are risks if you don't know whether the system is going to give you disastrous answers. In radiology, false negatives are far more costly than false positives. It's not going to be a cookbook. So solving the last-mile problem is very important, and companies that start out as ML companies tend to underestimate that challenge. Sometimes we've seen solutions at the technology level, but the next step, using these predictions to change the business process on the factory floor, needs more attention.
Join Fellows Fund club to receive more updates about AI Startup Investment Opportunities