My girlfriend, “Emily,” who likes to hack her health, recently purchased a clever little gadget called the Ōura Ring. From the outside, it looks like an ordinary silver ring, but it’s lined with sensors that monitor heart rate, respiration, temperature, body motion and other variables. Algorithms analyze data and draw conclusions, displayed on her iPhone. The ring tells Emily how much exercise and sleep she’s getting, and it advises her, in a gently bossy way, on how she might change her routines to be healthier. Maybe go to sleep a little earlier tonight, exercise a little more tomorrow. The Ōura app even provides recordings of boring stories, read by someone with a wonderfully soporific voice, to help her fall asleep.
The ring is an almost magical piece of engineering. All that sensory and analytic power packed into that tiny, elegant package! And the logic behind the ring seems, at first glance, unassailable. The ring transmits more and more data from users to its maker, Ōura, which keeps refining its algorithms to make its “precise, personalized health insights” more accurate. Ideally, the ring will help you cultivate healthier habits and alert you to problems requiring medical intervention. That’s Emily’s hope. But when she urged me to get an Ōura Ring, I shook my head and said, No way. I worry that the ring is making her unhealthily anxious about her health. (Emily disagrees; see her response below.) It would surely have a similar effect on me.
A technology reviewer for the New York Times touches on my concerns. “I can’t say whether the Ōura Ring has made me healthier, but it has made me more health-conscious,” Justin Redman writes. “I looked to my Ōura Ring data each morning for an affirmation that I was okay (and didn’t have COVID-19). That eventually led to a somewhat unhealthy dependency on the ring and its data. Before you purchase a device like this, you need to ask yourself whether you’ll use the data to make better choices, or whether it will cause you unnecessary stress.”
We should ask ourselves this question about all our digital devices. On balance, are they good for us? We live in the age of “big data,” in which companies gather more and more information about us via the internet, smartphones and other technologies. Health and fitness devices such as the Ōura Ring and Fitbit are just one manifestation of this trend. The market for “wearable” health-tracking devices is growing fast, according to a recent report in a health care–business journal, with tech giants such as Google, Amazon and Apple as well as smaller companies such as Ōura, competing for customers. “Demand has skyrocketed during the COVID-19 pandemic and is only expected to accelerate in 2021,” the report states.
Devices such as the Ōura Ring are supposed to empower us by giving us more control over our health, and fitness trackers do apparently nudge some users into exercising more. A recent meta-analysis of 28 studies involving 7,454 healthy adults concludes that “interventions using smartphone apps or physical activity trackers have a significant small-to-moderate effect in increasing physical activity,” equivalent to about 1,850 steps per day. But a 2019 review in the American Journal of Medicine found “little benefit of the devices on chronic disease health outcomes. Wearable devices play a role as a facilitator in motivating and accelerating physical activity, but current data do not suggest other consistent health benefits.”
Researchers have raised concerns about devices that monitor diet and exercise. A 2016 study in the Journal of the American Medical Association found that subjects wearing devices that track calorie intake and exercise lost less weight than controls. In 2017 psychologists at Virginia Commonwealth University associated such devices with an increased risk of “eating disorder symptomology,” such as binging and purging. “Although preliminary, overall results suggest that for some individuals, these devices might do more harm than good,” the researchers state.
Another concern: Americans are already overtested for potential medical problems, notably cancer. Overtesting of asymptomatic men and women can lead to overdiagnosis (flagging of microtumors and other anomalies that never would have compromised health) and overtreatment. For every woman whose life is extended by a mammogram, as many as 10 women receive unnecessary treatment for breast cancer, including surgery, chemotherapy and radiation, according to a 2013 meta-analysis by the Cochrane network, which conducts impartial evaluations of medical interventions. And “more than 200 women will experience important psychological distress including anxiety and uncertainty for years because of false positive findings.”
The ratio is even worse for PSA (prostate-specific antigen) tests for prostate cancer in men. An analysis by the group NNT, which like Cochrane evaluates medical procedures, concludes that PSA tests do not reduce mortality, and they lead to unnecessary biopsies and treatments in one in five men. “The strategy of routinely screening all men with PSA tests leads to interventions that are not saving lives and may be causing harm,” the authors conclude. Consumer devices such as Ōura and Fitbit, I fear, will make us more anxious and hence more likely to get unnecessary tests, leading to still more overdiagnosis and overtreatment.
Artificial-intelligence enthusiasts claim that AI will improve the accuracy of tests. That was one alleged application of IBM’s much-touted AI program, Watson. After Watson won the television game show Jeopardy in 2011, IBM sought to exploit that public-relations victory by adapting Watson for medical applications. As Eliza Strickland of the technology journal IEEE Spectrum reported in 2019, IBM raised hopes that Watson “could reduce diagnosis errors, optimize treatments, and even alleviate doctor shortages—not by replacing doctors but by helping them do their jobs faster and better.” But neither Watson nor other AI products developed by IBM have fulfilled their promise. Many of IBM’s efforts “have fizzled,” Strickland stated, and a few “have failed spectacularly.”
AI pioneer Geoffrey Hinton nonetheless declared in 2016 that within five years deep learning would outperform radiologists and render them obsolete. Needless to say, that hasn’t happened. As economist Gary Smith and technology analyst Jeffrey Funk report in the business journal Quartz, only 11 percent of radiologists employ AI for interpreting images, and 72 percent of the rest have no plans to do so. The reason, Smith and Funk assert, “is poor performance.” Only 5.7 percent of the users “reported that AI always works,” while 94 percent “reported inconsistent performance.”
Claims that big data plus artificial intelligence will revolutionize health care “are mostly hype,” Funk told me via e-mail. “Too often AI and big data are trained on a limited data set and then are used in situations in which the data is not relevant. Solving this problem will require much more training, which dramatically raises the cost and often leads to lower explanatory power.” This problem helps explain why artificial intelligence keeps failing to live up to its hype.
Consider this: The U.S. is a leading inventor, marketer and adopter of medical technologies, including those involving big data and AI, and yet U.S. health care is abysmal. Although the U.S. spends much more on health care per capita than any other nation, the health of the U.S. population lags behind that of comparable industrialized countries and even Costa Rica, which spends less than one tenth as much per capita on health care.
My guess is that a growing number of people will become dependent on health-tracking devices, even if they doubt the benefits. Consider this recent review of the Ōura Ring. The reviewer, Chris, says he hoped the ring would help him “identify things I’ve been doing wrong and fix them so I could sleep like a baby and become superhuman.” That didn’t happen, he acknowledges; after 11 months, neither his sleep nor any other component of his health had improved.
Chris nonetheless compulsively checks his ring’s output every morning, in the same way that he checks Instagram and other social media sites. Just as likes and positive comments give him a “dopamine hit,” so do Ōura data indicating that he got a good night’s sleep. If he lost the ring, Chris says, he would buy a new one. Will the Ōura Ring and other devices empower consumers by helping them to take charge of their health? I doubt it. But the devices certainly empower and enrich the companies that make them. The data we generate with our digital devices help companies make them even more addictive.
“Emily” responds: “I don't think your observations of my use of the Ōura Ring are fair. The ring helps me track the details of my sleep and heart rates so I'm learning about my energy and stress patterns, and it’s definitely helped me. I'm not dependent on it—that’s your fear. And if I was, I'd rather be dependent on this than on a lot of other things. It’s another tool, like an exercise bike or anything else. I’m giving it the two months that Ōura suggests I give it. That will provide a baseline for my activity and sleep goals using my own metrics, rather than comparing me to everyone else the way most medicine does. That is the future of wearable tech.”
This is an opinion and analysis article; the views expressed by the author or authors are not necessarily those of Scientific American.