“There will come a time when it isn’t ‘They’re spying on me through my phone’ anymore. Eventually, it will be ‘My phone is spying on me.’”
Philip K. Dick
Our smartphones spy on us day and night. They know where we go, who we know, what we buy, what we read, how much we exercise, our vital signs, the meds we take, even our patterns of sleep. So it's no great leap for savvy tech entrepreneurs to hype the idea that our smartphones can be the missing link to better mental health.
Numerous therapy apps are already available. Most were developed for profit, with greatly varying quality, little testing, and no regulation. Commercial apps often push outlandish claims: “Once you download our app, our technology starts to get an idea for how you tap, scroll and type on your smartphone—a new way to measure things like your stress, mental health symptoms, and well-being.” “You can track your measurements in the mobile app, and they’re shared with your clinical team, so they can provide you with more personalized care.” Therapy apps are pretty scary stuff, but it’s the mining of big data sets using machine learning that really terrifies me.
The idea seems so superficially appealing. Machine learning allows computers to analyze huge data sets, revealing patterns too subtle and obscure to be picked up by us mere humans. Promoters promise a brave new world of more rapid, rational, and personalized diagnosis and treatment for mental and substance use disorders. Why depend on error-prone humans when we can substitute the precision of high-tech data science?
The possible benefits are so obvious.
Tracking how people use the internet might identify who has psychiatric problems even before they become aware of them; might help prevent suicides or violent behavior; might determine risk factors for mental illness; might improve treatment selection; and might be used to evaluate progress and identify relapse.
The hype is so easy to spin. Data mining is an inexpensive way to improve the individual patient’s mental health and the overall mental health of our society. Machine learning can even predict the future—identifying people at risk for later mental disorders, allowing us to intervene to prevent them.
Well, folks, what looks too good to be true is almost never true. In my view, mining big data sets with machine learning to diagnose psychiatric disorders is a disaster waiting to happen.
Why is it so scary and potentially evil? First off, follow the money. Big private equity money is pouring into big data mining startups. This encourages the exuberant “fake it until you make it” hype that pumps up future technical potential while ignoring obvious risks. The main customers for the findings of big data analytics will be drug companies, insurance companies, and big healthcare systems: industries that share a terrible track record of choosing greedy profit over patient welfare.
Second, the hype is just that: hype. Screening for psychiatric disorders in the general population has a long and doleful record of inaccuracy, misuse, and misallocation of scarce resources. There is always a huge false positive rate: many people get flagged as mentally ill because they have some psychiatric or psychological symptoms, even though those symptoms are not severe or persistent enough to cause clinically significant impairment or to require professional attention.
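The arithmetic behind that false positive problem is worth spelling out. Here is a minimal sketch of the base-rate math, using purely illustrative numbers for prevalence, sensitivity, and specificity (no real screening tool is implied):

```python
# Base-rate arithmetic: even a seemingly accurate screen produces
# mostly false positives when the condition is uncommon.
# All figures below are hypothetical assumptions for illustration.

prevalence = 0.02    # assume 2% of the screened population truly has the disorder
sensitivity = 0.90   # assume the screen flags 90% of true cases
specificity = 0.90   # assume the screen correctly clears 90% of non-cases

population = 100_000
true_cases = population * prevalence             # 2,000 people
non_cases = population - true_cases              # 98,000 people

true_positives = true_cases * sensitivity        # 1,800 correctly flagged
false_positives = non_cases * (1 - specificity)  # 9,800 wrongly flagged

ppv = true_positives / (true_positives + false_positives)
print(f"Total flagged: {true_positives + false_positives:,.0f}")
print(f"Chance a flagged person is truly ill: {ppv:.0%}")  # roughly 16%
```

Under these assumed numbers, more than five out of every six people the screen flags are false alarms, and that is with a tool far more accurate than population screening has historically achieved.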
My nightmare scenario: the worried well will be misidentified as psychiatrically sick and start receiving repetitive pop-ups announcing that their pattern of smartphone use suggests they may need mental health help. Soon they are flooded with ads promoting therapy apps, treatment centers, and psych medications. An incredible 12% of adults already take psychotropic medication, many without clear indication, often causing more harm than good. Data mining will help dig out an ever-larger pool of people stigmatized by false diagnosis and mistreated by psychotropic over-medication. And meanwhile, services for people with severe mental illness (who desperately do need help) will continue to be shamefully underfunded (because there’s no profit to be gained in treating them).
And finally, data mining digging for psychiatric disorders is an incredible invasion of privacy and a very slippery slope toward a dangerous surveillance state. The idea of an ever-vigilant Big Brother monitoring your every click to determine your state of mind terrifies me and should terrify you.
It is very easy to make diagnostic mistakes and very hard to correct them; people are often haunted for life by the mislabels they carry. Rather than improving precision, I fear that machine learning will deliver a pseudo-precise profusion of mislabeling. Diagnoses should always be individual, cautious, carefully done, and written in pencil, not based on untried, unregulated, overinclusive, obscenely profitable computer algorithms.
File under: Musings and Reflections, Therapy & Technology