My previous post discussed challenges to doing ML in healthcare. This one suggests ways to apply AI that overcome those challenges, to realize the potential of AI in the real world of healthcare.
Choose use cases well, and design for quick wins to prove ROI early. You will likely encounter a lot of resistance to change in healthcare. To overcome this, pick projects focused on specific applications where AI is relatively easy to apply and does not take too long to realize a return on investment (ROI). Most institutions have annual budgets, so try to conceive of AI applications where at least part of the ROI is realizable within one year. This is not always easy, as savings in healthcare typically accrue over time. But if you can design a project that shows some savings within one year, you are much more likely to get buy-in from senior management.
Augment rather than replace humans. Being human myself, I might be inherently biased in favor of keeping humans useful. But realistically, this is good for AI. Building AI that accounts for all the complex things humans do is hard. Following the Pareto principle, we do best to apply AI to repetitive, less cognitively demanding tasks where it is relatively easy to build, while leaving the highly complex, judgment-requiring tasks to humans. This also helps realize ROI faster. The algorithm you build will need to mature over time, and close human involvement enables timely identification of issues, so you can adjust the AI application quickly.
Communicate at the appropriate level so that relevant stakeholders appreciate what you’re doing. AI algorithms can get complex, fast. Choosing language that fits the audience’s level of expertise will help you keep people’s attention. They won’t view your work as a “black box” and will be more likely to see how the AI can help their work.
Build data reliability checks up front to gauge what is possible and set realistic expectations. Some aspects of data reliability checking will be generic and applicable to all medical conditions, while others will be highly specific to each condition. Developing this checking methodology will be iterative and data-source specific. But ultimately, if you do not have the critical data elements you need, your AI algorithm cannot perform well. It’s helpful to know this up front so everyone’s expectations are realistic. Click here to read my post on issues with health data.
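As a minimal sketch of what the generic part of such checks might look like: completeness, plausibility, and uniqueness checks on a patient extract. The column names (`patient_id`, `age`, `hba1c`) and the clinical ranges are illustrative assumptions, not a prescribed schema.

```python
import pandas as pd

# Columns the model would need; names are illustrative assumptions.
REQUIRED = ["patient_id", "age", "hba1c"]

def reliability_report(df: pd.DataFrame) -> dict:
    report = {}
    # Completeness: fraction missing per required column.
    for col in REQUIRED:
        report[f"{col}_missing"] = float(df[col].isna().mean()) if col in df else 1.0
    # Plausibility: values outside clinically sensible ranges.
    if "age" in df:
        report["age_out_of_range"] = int(((df["age"] < 0) | (df["age"] > 120)).sum())
    # Uniqueness: duplicate patient identifiers.
    if "patient_id" in df:
        report["duplicate_ids"] = int(df["patient_id"].duplicated().sum())
    return report

df = pd.DataFrame({
    "patient_id": [1, 2, 2],
    "age": [34, 150, None],
    "hba1c": [5.6, 7.2, 8.1],
})
print(reliability_report(df))
```

A report like this, run on each new data source before any modeling, tells you early whether the critical elements are actually there.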
Build a robust, scalable data ingest pipeline. Healthcare data usually exist in disparate, isolated, and differently structured silos. For AI to work in real time, the ingest process needs to take in data from these sources and do the necessary checking and scrubbing with minimal human intervention. This requires more upfront tech development, but without it, the workflow you build will be error-prone and expensive to maintain.
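One way to structure this is a per-source adapter pattern: each silo gets a small function that maps its layout onto a common record schema, and anything that fails validation is queued for human review rather than halting the pipeline. The sources, field names, and schema below are made up for illustration.

```python
from dataclasses import dataclass

# Hypothetical common schema; field names are illustrative assumptions.
@dataclass
class Record:
    patient_id: str
    event: str
    value: float

# Per-source adapters map each silo's layout onto the common schema.
def from_claims_row(row: dict) -> Record:
    return Record(str(row["member_id"]), row["proc_code"], float(row["amount"]))

def from_ehr_obj(obj: dict) -> Record:
    return Record(str(obj["pid"]), obj["observation"], float(obj["result"]))

def ingest(sources) -> tuple[list, list]:
    records, rejects = [], []
    for adapt, batch in sources:
        for raw in batch:
            try:
                rec = adapt(raw)
                if not rec.patient_id:           # basic scrubbing rule
                    raise ValueError("missing patient id")
                records.append(rec)
            except (KeyError, ValueError, TypeError) as err:
                rejects.append((raw, str(err)))  # queue for human review
    return records, rejects

claims = [{"member_id": "A1", "proc_code": "99213", "amount": "120.0"}]
ehr = [{"pid": "A1", "observation": "hba1c", "result": 7.2}, {"observation": "bp"}]
records, rejects = ingest([(from_claims_row, claims), (from_ehr_obj, ehr)])
print(len(records), len(rejects))
```

Adding a new data source then means writing one adapter, not reworking the pipeline.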
Use more human supervision in feature building and selection. Health data can be highly fragmented, lengthy, and complex. This creates more noise than signal, making automated feature creation and selection difficult. This is the area of machine learning that benefits most from human expertise and domain knowledge. It is where I spend at least 60% of my time in any predictive modeling project. Click here to read my post on feature engineering.
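As an illustration of the kind of domain-driven features I mean: a clinician knows that recent ER use matters more than total visit counts, so a human-designed feature encodes that directly instead of leaving it to automated feature search. The encounter fields and the 90-day window here are assumptions for the example, not a recommended specification.

```python
from datetime import date

# Hypothetical encounter history for one patient; fields are illustrative.
encounters = [
    {"date": date(2023, 1, 5), "type": "ER"},
    {"date": date(2023, 2, 20), "type": "outpatient"},
    {"date": date(2023, 3, 1), "type": "ER"},
]

def build_features(encounters, as_of: date) -> dict:
    recent = [e for e in encounters if (as_of - e["date"]).days <= 90]
    return {
        # Domain knowledge: recent ER use is a strong risk signal.
        "er_visits_90d": sum(e["type"] == "ER" for e in recent),
        "any_visit_90d": int(bool(recent)),
        "days_since_last_visit": min(
            ((as_of - e["date"]).days for e in encounters), default=None),
    }

print(build_features(encounters, as_of=date(2023, 3, 15)))
```

Each feature here is a judgment call a domain expert can defend, which also makes the model easier to explain later.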
Allay human fears by building checks into AI processes. One source of resistance is the fear that machine processes will run amok. Design checks that identify errors and alert human operators to intervene. Build reports that transparently demonstrate the process is working as intended. Where the AI makes specific recommendations, retain the rationale (the salient features the model used) for how it arrived at the finding, and make this available to the end user.
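One way such checks and rationale retention might look, sketched with a hypothetical linear risk model (the weights, feature names, and range check are made up for illustration):

```python
def top_contributions(weights: dict, features: dict, k: int = 2):
    # Linear-model rationale: contribution = weight * feature value.
    contrib = {name: weights.get(name, 0.0) * val for name, val in features.items()}
    return sorted(contrib.items(), key=lambda kv: abs(kv[1]), reverse=True)[:k]

def score_with_checks(weights, features, alert):
    score = sum(weights.get(n, 0.0) * v for n, v in features.items())
    if not (0.0 <= score <= 1.0):    # sanity check: route to a human instead
        alert(f"score {score:.2f} out of range; flagged for review")
        return None
    # Retain the rationale alongside the score for the end user.
    return {"score": score, "rationale": top_contributions(weights, features)}

alerts = []
weights = {"er_visits_90d": 0.3, "age_over_65": 0.2}
out = score_with_checks(weights, {"er_visits_90d": 2, "age_over_65": 1}, alerts.append)
print(out)
```

The point is that every output either carries its rationale or gets handed to a person, so the process never silently runs amok.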
Thanks for reading. Please click here to subscribe to my newsletter.