Earlier this year I was fortunate to once again participate in one of Arc Fusion’s Jeffersonian-style dinner talks, this time with a constellation of tech and design luminaries, such as Ray Kurzweil (Google), Lorie Fiber (IBM Watson), Paul Saffo (Stanford), and Bruce MacGregor (IDEO). Arc Fusion’s mission is to convene top scientists, entrepreneurs, investors, engineers, artists, and doers to explore the frontiers of health, IT, and biomedicine.
This “recipe for dialogue” worked incredibly well, and the overall tone of conversation was very positive, focusing on the potential power and impact of exponential information technologies. Led by insights from Ray Kurzweil, who has exhaustively studied the growth curves of various digital technologies, we discussed how the next two or three “doublings” of computing power are about to unleash completely astonishing breakthroughs—including the possibility of extending the human lifespan indefinitely.
Even before these coming leaps in computing power arrive, we’re already seeing some of the ways in which artificial intelligence (AI) has impacted healthcare, as cloud-based systems, such as IBM Watson, turn their vast computing power on fast-moving fields like oncology.
AI appears poised to increase patient safety, drive early prevention through predictive modeling, and perhaps eventually extend human lifespans to unimaginable lengths. During the concluding remarks of the dinner, though, I found it interesting that I was the only contrarian to the overall theme of techno-optimism. Maybe I’ve watched too many sci-fi movies, but as we race into this brave new world of artificial and augmented intelligence, I believe there are some very legitimate and potentially troubling issues that we will need to address, especially with regard to healthcare. Here are my three doses of healthy skepticism:
1) Accidental Errors and Outages
In our current healthcare system, a diagnostic error by a single physician or team has limited impact. As AI grows in reach, a single flawed algorithm has the potential to harm patients across the country. Also, as healthcare professionals (HCPs) and health systems start to rely on AI health algorithms, what happens if there is an outage and a sudden loss of access? Will HCPs retain enough of their “natural” problem-solving skills to make wise decisions, or will they have become so reliant on expert systems that they are essentially nothing more than order takers for the AI health algorithms?
2) Malicious Hacks and Computer Takeovers
It does not require a great deal of imagination to see how deeply integrated AI health algorithms could become a target in the coming years, whether for a lone wolf with access to the system or for organized terrorist groups. Further out on the timeline, what happens if some future descendant of IBM Watson decides that “population thinning” is in the best interest of mankind, and creates a super virus? Today that might seem far-fetched, but given the pace of technological change on the horizon, scenarios drawn straight from science fiction are becoming more plausible every day.
3) Lack of Local Decision Making and Ethics
It’s not clear whether AI systems can act as medical ethicists and develop insights that match those of a hospital’s own ethics committee. Either way, it’s not difficult to imagine a future in which the AI recommendation is the one that gets approved and paid for, regardless of whether families or the local care team agree with the decision. An algorithm governing when to withdraw care can probably be written, but do we really want a system in which such decisions are made in the cloud, rather than by the people who are intimately involved with the particular patient?
It’s obvious that in both the short term and long term, the intersection of AI and healthcare holds tremendous potential. However, it’s also hard to ignore the continued cautionary tales that drive so many Hollywood blockbusters, and are also found in the long tradition of dystopian science fiction novels. Moving forward, I believe that we need to maintain some healthy doses of skepticism as these advances continue in the healthcare space, being especially watchful for buggy code, malicious hacks, and a growing lack of local autonomy and responsibility within the decision-making process.