Dr. McKernan discusses recent research findings and how they relate to the ethical implications of using AI in practice.
A patient goes into the emergency room for a broken toe, is given a series of standardized tests, and has their data fed into an algorithm. Though they haven't mentioned feeling depressed or having any suicidal thoughts, the machine identifies them as being at high risk of suicide in the next week. Though the patient didn't ask for help, medical professionals must now broach the subject of suicide and find some way of intervening.
Though useful, the algorithm will likely raise ethical issues that cannot be foreseen or resolved until they play out in practice. “It’s not really outlined in our ethical or practice guidelines how to use AI yet,” McKernan says. For example, if, as Walsh hopes, all emergency room visitors are automatically run through the suicide-prediction algorithm, should they be told this is happening? McKernan says yes: all patients must be told that hospitals are monitoring their data and assessing suicide risk, and if the algorithm deems them to be at risk, they should know to expect a call.
Read more here.
See other related articles here.
Thank you to Quartz for the article.