Facebook has sparked a new debate about privacy and I think it’s time to bring it to health care.
What does it mean when millions of people flock to share/overshare information, even as Facebook’s default privacy settings have slowly become openness settings (while the company maintains radio silence)? Pew Internet research shows that a sizeable portion of the population (young people, in particular) carefully manage their online reputations, but where does that leave the rest? Consumer groups in the U.S. have requested an investigation and the Canadian and German governments are sharpening their knives, too.
Now, to add the health angle, PatientsLikeMe’s Ben Heywood posted yesterday about transparency, openness, and privacy:
We do not want anyone to be surprised by the impact of sharing data on PatientsLikeMe. We believe in openness, but we also want people to knowingly make the choice to be open with their health information.
…Recently, we suspended a user who registered as a patient in the Mood community. This user was not a patient, but rather a computer program that scrapes (i.e. reads and stores) forum information. Our system, which alerts us when an account has looked at too many posts or too many patient profiles within a specified time interval, detected the user. We have verified the account was linked to a major media monitoring company, and we have since sent a cease and desist letter to its executives.
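An aside for the technically curious: Heywood doesn’t describe how the alert system works, but the behavior he reports – flagging an account that views too many posts or profiles within a specified interval – maps naturally onto a simple sliding-window rate check. Here is a minimal sketch of that idea; every name and threshold below is hypothetical, not PatientsLikeMe’s actual implementation.

```python
from collections import deque
import time

# Hypothetical thresholds: PatientsLikeMe has not published its actual rules.
MAX_PROFILE_VIEWS = 50   # views allowed per window
WINDOW_SECONDS = 300     # sliding-window length

class ScrapeDetector:
    """Flags accounts that view too many profiles within a time window."""

    def __init__(self):
        self.views = {}  # account_id -> deque of view timestamps

    def record_view(self, account_id, now=None):
        """Record one profile view; return True if the account looks like a scraper."""
        now = time.time() if now is None else now
        q = self.views.setdefault(account_id, deque())
        q.append(now)
        # Drop timestamps that have fallen out of the sliding window.
        while q and now - q[0] > WINDOW_SECONDS:
            q.popleft()
        return len(q) > MAX_PROFILE_VIEWS

# Usage: call record_view() on every profile hit; route flagged
# accounts to a human reviewer rather than auto-suspending them.
detector = ScrapeDetector()
if detector.record_view("account-123"):
    print("account flagged for review")
```

A real deployment would tune those thresholds against normal browsing patterns; the point is only that detection itself is cheap. The hard questions are about policy, not code.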
I love that word knowingly. Are consumers competent to make the decision to openly share their observations of daily living? Is it OK that a private company can leverage that data for profit (as long as they’re open about it)? What would be different about the Facebook controversy if Mark Zuckerberg had come out with a statement like that when the first stories broke?
I’m reminded of what Paul Ohm wrote about the broken promise of HIPAA:
[I]t is hard to imagine another privacy problem with such starkly presented benefits and costs. On the one hand, when medical researchers can freely trade information, they can develop treatments to ease human suffering and save lives. On the other hand, our medical secrets are among the most sensitive we hold.
I’m also reminded of my own research finding that people living with chronic disease have a secret weapon: Each other. Mobile, social technologies are tapping into a human need to connect with each other, to share, to lend a helping hand, and to laugh.
I’d like to start a conversation about health privacy that includes an open dialogue about the risks and benefits of sharing. Who’s in?
An open market in health information is the preferred outcome here – but only so long as we can guarantee that health information will not be misused. We are moving in the right direction (GINA and other anti-discrimination laws – and their observance/enforcement – are key to this endeavor), but we’re not there yet. So the question remains: what do we do today?

We go back to Ben Heywood’s comment: folks need to choose to be open “knowingly.” In other words, there needs to be informed consent to the sharing of health information. There needs to be a clear articulation of the risks and benefits, laid out in easy-to-understand language that people will read and understand before choosing to make certain information public or keep it private. Not a long piece of gobbledegook that nobody will read; a clear and concise statement of risks and benefits. That is the key to obtaining informed consent.

Now comes the hard part: what gets included in the risks? All risks, likely risks, unlikely risks? You may lose your job, your home, some aspect of your reputation? Think of this as informed consent for a surgical procedure – yes, there are risks with all anesthesia, but people go under every day because the benefits outweigh the risks.

If de-identified PHI is being shared off the back end of an online service, we know that it usually can be re-identified, so you need to assume your health information may well become publicly available if it is used in such a manner. The question then becomes one of personalized risk and personalized benefit. I figure that since Britney Spears’ medical records are apparently of greater general interest than mine, and since I have disclosed certain aspects of my own medical history, I don’t really care that much if my health information becomes public. Others will feel differently.

Bottom line, I agree with you, Susannah, but I hold out some measure of hope that our system may one day (not soon, but one day, if I’m not being too idealistic here) offer sufficient protections so that release of health information will not have deleterious effects on people’s lives. Meanwhile, the key is to consider not the generally applicable risks and benefits of health information sharing, but the personalized version of that calculus.
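To make the re-identification point concrete: the classic linkage attack requires nothing more than joining a “de-identified” dataset to a public record (a voter roll, say) on quasi-identifiers such as ZIP code, birth date, and sex – the combination Latanya Sweeney famously showed is unique for most Americans. A minimal sketch, with all data and column names invented for illustration:

```python
import pandas as pd

# Hypothetical "de-identified" health data: names stripped, quasi-identifiers kept.
health = pd.DataFrame({
    "zip":       ["02138", "02138", "60614"],
    "birthdate": ["1954-07-31", "1987-02-14", "1990-11-02"],
    "sex":       ["F", "F", "M"],
    "diagnosis": ["hypertension", "depression", "asthma"],
})

# Hypothetical public record (e.g., a voter roll) carrying the same quasi-identifiers.
voters = pd.DataFrame({
    "name":      ["Jane Doe", "Ann Smith", "Bob Jones"],
    "zip":       ["02138", "02138", "60614"],
    "birthdate": ["1954-07-31", "1987-02-14", "1990-11-02"],
    "sex":       ["F", "F", "M"],
})

# The "attack" is an ordinary join on the quasi-identifiers.
reidentified = health.merge(voters, on=["zip", "birthdate", "sex"])
print(reidentified[["name", "diagnosis"]])
```

No hacking involved – just a merge. That is why “de-identified” should never be read as “anonymous.”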
Thanks, David! Note that I don’t say I support one outcome over the other, so hopefully you’re just agreeing with me that we need to have this conversation.
The Pew Internet Project is coming out with a new report on Tuesday entitled “Reputation Management” which will go deeply into how people make choices about what/how/why to share and how they use the internet to keep track of other people. We are mapping this landscape, not shaping it.
Susannah, I agree that the conversation is important, but I also thought I detected an endorsement of the notion of making choices “knowingly” – on rereading your post, perhaps you were being more neutral.
Yep, neutral. Thanks for the opportunity to say it again.
I called out “knowingly” b/c it is such a key distinction, so value-laden, so historically rich in meaning. The internet is revolutionizing people’s relationships with information and the needle keeps moving on what people “know” (what policymakers think people know, what doctors think people know, etc.).
Correction: look for that new report on Wed. @12noon Eastern.
Great post! I’d just like to say that as a patient, I value access to my medical information over privacy and security.
-Alan
With you, Susannah, and David Harlow as contributing voices in the conversation, I’m in. David articulated my initial thoughts extremely well.
I see a big distinction between the data posted by patients on the Web on community sites and the ‘secondary data’ produced by EHRs that are being aggregated into patient/disease registries.
Interesting! I see 4 questions about agency stemming from your comment:
Who is collecting the data?
Who is holding the data?
What is the purpose of the data collection?
What is the scale of the data collection?
Is that right? Any others?
Subsidiary questions might include: Is the entity holding the data for a particular use a credible purveyor of that use?
Good point!
Bringing in John Mack’s post (http://bit.ly/de00Da) and the comments attached to it:
I think scale is part of what is tripping up Facebook, for example, and yet scale is what they need to be as useful as they are. Public health records need to be collected & maintained on a grand scale to be useful, but that’s also why the gov’t must protect them and have clear policies about who can access them and for what purpose. PatientsLikeMe is in the middle – a dot-com doing the work of a public-health entity.
In any data conversation I’d add two other questions:
1. For how long do they intend to hold the data?
2. How will they dispose of the data when no longer needed?
In the digital world today informed consent implies eternal consent. What I’m willing to share today may change tomorrow when someone finds a new use for it.
Great additions, Rob! Much appreciated.
John Mack has a good post on the pharma perspective on these issues: “Data Mining in the Deep, Dark Social Networks of Patients. Word to Pharma: Caveat Emptor.” http://bit.ly/de00Da
I’m already in the conversation, including the more subtle issues:
http://philbaumann.com/2010/05/13/a-question-concerning-the-ethics-of-social-media-presence/
BUT: Here’s the conversation we’re not having: Dignity.
The Privacy conversation is old hat – it’s just that more people are realizing that the so-called Web 2.0 Ideology wasn’t all that well thought out to begin with (and, from my perspective, I believe that the Healthcare Social Media conversation has been somewhat tainted by that ideology).
Yes, I’ll be happy to continue conversing about Privacy.
But I want us to bring forth Dignity more into focus. Why? Because Privacy and Dignity are inter-related.
Will you join that conversation, Susannah? Please??? With sugar on top? :)
@PhilBaumann
You don’t have to add sugar – it’s a great, but different, conversation. I will add my two cents soon. For now, I agree: dignity is a key element. Thanks!
Great, look forward to it.
But on one point I beg to differ with you: Dignity is not a different conversation (it’s different from Privacy, but it is critical to include it with Privacy). I know, this is an unusual perspective – but that’s why I’m sounding the alarm.
Context is critical – a conversation concerning Privacy without the context of Dignity will miss a larger perspective that’s needed in facing the Privacy problems we face now – and will continue to face.
See “In Praise of Oversharing” for an example of why we need to discuss non-Privacy matters when discussing Privacy.
http://www.time.com/time/printout/0,8816,1990586,00.html
(You’ll have to forgive me if I’m not making myself clear – I’m planting seeds for ideas which I’ll piece together – I just ask you not to dismiss the Dignity issue all too easily – it’s part of the conversation that needs to take place.)
Warning: pre-coffee thoughts, but I saw your comment and want to say immediately: Dignity is an *essential* part of the Privacy conversation (thanks for the capitalization idea – sets it off better than scare quotes).
When I wrote about the two conversations above I meant that your post about a business decision to have a FB page is different from a consumer’s decision to share details about their treatments and symptoms on a patient network. Maybe it is the same, maybe I need to think about it more – looking forward to digging in and learning!
I’m in … or have been in. I have reservations about the concept of the ‘personal health footprint’ especially as it relates to parents openly discussing the issues of their children. Lots to talk about.
http://www.33charts.com/2009/11/your-personal-health-footprint.html
Thanks, DrV! Your comment, along with others, reminds me that I am *joining* a new conversation about privacy, not starting it.
The Facebook conversation, which is global, combined with the PatientsLikeMe conversation, which is “local” (ie, pretty much confined to health geeks at this point) is just a new jumping-off point.
Roni Zeiger’s recent post about health data, anything that Gilles Frydman has written, e-Patient Dave’s famous treatises on open data… all of these are links in the “let’s talk about sharing & opportunity & power, not just privacy” chain. Maybe we need a review article to round it all up?
Here’s another contribution to the conversation:
Why we share: a sideways look at privacy
http://confusedofcalcutta.com/2010/05/23/why-we-share-a-sideways-look-at-privacy/
Speaking of sharing, I listened yesterday to Regina Holliday talk about how she used Facebook to record her husband’s treatments and keep friends informed about what was happening.
When she finally got a copy of his medical record, she noticed that the nurses’ notes didn’t match her own timestamped FB notes.
When I tweeted this, Brian McGowan tweeted back: “Low-cost PHR w/privacy issues?” Cue the shudders from many sides, but this is typical internet user behavior in my observation – people use whatever resources are available to get the job done, MacGyver-style.
Reggie went on to say that she believes people tell FB things they wouldn’t tell their doctors. Now *there’s* a discussion point for us.
Sorry to be late in this conversation.
First, I am not sure I understand what damage the robot user did to PatientsLikeMe’s users. I am, OTOH, clear about its real potential damage to the PLM business model. So did the robot get suspended to protect PLM or its users? This is a non-trivial question and I think the answer is as important to this conversation as any of the comments made so far. Don’t get me wrong. I love PLM. The Heywood brothers have implemented one aspect of health data aggregation and sharing that will generate many ideas. But they face the same problem that ACOR has faced for many years: it is easy to create robots that will scrape sites that are supposedly protected from this kind of activity. It creates real business issues and the solutions are very hard (if not impossible) to find.
Too many times (and maybe in the majority of cases) ownership protection masquerades as privacy protection. danah boyd says it best: “fundamentally, privacy is about having control over how information flows”. This becomes particularly true in this fast expanding networked world where the value is NOT in the individual data but in the metadata generated by sharing.
Last year I agreed to be one of the co-writers of the Declaration of Health Data Rights.
If we had to rewrite the declaration today I would certainly insist on adding the concept of control of the personal data flow. In a perfect world, the future will bring a high level of granularity for individual control of that personal data flow. Just as anything taken from my body belongs to me, I should have a personal stake in the metadata I help generate. The new economy will have to come up with solutions. Otherwise, I am afraid, social upheaval could happen online, just as it has happened offline many times before. Just as close to half a billion people have joined Facebook, they could decide to terminate their accounts.
Gilles,
I have been thinking about this comment for the last week, collecting my thoughts on it even as wave after wave of news washed in, adding to what I want to say.
First, my Pew Internet colleagues released a new report, Reputation Management, which finds that internet users age 18-29 (supposedly the carefree sharing generation) are the most likely group to pay attention to their digital footprints and the least likely to trust social network sites. Check out the trend lines from previous surveys — are Americans becoming canny curators of their online profiles? Or are they whistling past the graveyard? See: http://pewrsr.ch/repMGMT
Second, Facebook (the bete noire stalking this conversation) released its new privacy controls, which make some aspects of a profile easier to shield. Danny Sullivan wrote a comprehensive post which points out that controls for third-party applications are still hard to find. You may trust your friends, and your friends’ friends, and even Facebook itself, but do you trust FB to choose trustworthy partners who can gain access to your profile? See: http://selnd.com/dwOJj0
Third, Natasha Singer’s NYT story, “When Patients Meet Online, Are There Side Effects?” highlighted the potential benefits of health data collection (featuring, yes, PLM and CureTogether) along with the potential pitfalls (“Do we need to protect people who have illnesses from being exploited?” <– I've heard this said so many times in private, it's refreshing to get it out in the open. LOTS to discuss, suffice to say.)
See: http://nyti.ms/9NvwUo
Fourth, Tim O'Reilly weighed in with a post that captures what I've heard for years from privacy & security experts: Naivete is regrettable among consumers, dangerous among reporters. Be sure to read the comments, including this rejoinder from Tim: "there is so much opportunity to create value *for the user* (not just for the vendor) in collecting this data that we need to figure out how to handle it going forward, not scare ourselves out of doing the hard work of making it safe." See: http://oreil.ly/93vVKw
So now to your point: Was the robot threatening PLM's users' privacy or was it threatening PLM's business model? Maybe both.
The past week's headlines have crystallized, for me, a focus on third-party applications and access to data (which is related to your point about individual control).
Do PLM's members join the community knowing their data will be aggregated, monitored, and yes, sold? Do they do so because they trust PLM to do the right thing, such as limiting access to trusted (and yes, paying) partners? Or are users naive and in need of protection? Who gets to decide that?
Finally, I have to re-use a line from one of my previous posts: What's the point? What's the point of aggregating data if it's not going to be put to use?
It's the whole point, of course. It's why Facebook AND patient communities become more valuable to users the more people join & share. It also happens to be why Facebook and patient communities become more valuable to businesses/researchers the more people join & share. And it's why this conversation matters.
Holy CRAP are you a good thinker/reader/curator, Susannah. (Re your comment today.)
I’ve been gone a week on speeches & vacations, so I’ve heard only whispers of some of those items. Your post-length comment saved me a *week* of catching up. Thanks.
I think there is much to your comment “I’m also reminded of my own research finding that people living with chronic disease have a secret weapon: Each other.” The issues at FB underscore the ambiguity of the kind of environment it is – is it a place to share personal health situations? Maybe, maybe not. The many different communities that have formed within disease areas have not offered this kind of ambiguity – not that they are perfect – but most of them would require real malicious work to exploit. We hear so much about FB and Twitter because their scale and shifting API parameters make them attractive places to exploit.
Today’s youth are learning in middle school about how to manage (or not) their footprint. It is a reality that we all must take an active role in knowing who can do what with the information we share. And for those who don’t – check out the growth that services like http://www.reputationdefender.com have seen in the last two years…The “Cleaners” are already doing well.
Ted, thanks so much for this insight.
The Pew Internet Project explored some of these issues in our 2007 report, Digital Footprints:
http://www.pewinternet.org/Reports/2007/Digital-Footprints.aspx
Here’s the summary:
47% of internet users have searched for information about themselves online, up from just 22% five years ago. However, few monitor their online presence with great regularity. Just 3% of self-searchers report that they make a regular habit of it and 74% have checked up on their digital footprints only once or twice.
Indeed, most internet users are not concerned about the amount of information available about them online, and most do not take steps to limit that information. Fully 60% of internet users say they are not worried about how much information is available about them online. Similarly, the majority of online adults (61%) do not feel compelled to limit the amount of information that can be found about them online.
We will release new data on reputation management on pewinternet.org on Wed. May 26 (at 12noon Eastern if you are really psyched to read it).
I can’t wait to contribute this data to the public conversation and bring it home, as always, to health & health care.
I’m a health and communication consultant for companies investing in workplace wellness and the co-founder of a social networking group, cohealth (@co_health). During this month’s twitter chat we dove into privacy issues as they concern employees’ health data and the workplace – and the tradeoffs and benefits that could come from employees sharing their data more openly with their employers or third-party administrators. It’s not a simple concept, but it seems that as we get more comfortable sharing information overall, as we get more protections and assurances of continued health coverage no matter our condition, and as employers get savvier about creating customized and comprehensive workplace wellness solutions and working across industries, the benefits far outweigh the negatives. Particularly when employees do it *knowingly*.
Ok, so I am WAY late to this great convo, but I am 100% IN! Privacy is a crucial issue to discuss:
1. Privacy needs to be an individual & informed choice: I agree, you should KNOW and CHOOSE when and which info gets out there. I have witnessed patient convos online protesting against pharma reading their stuff (which was not the case, btw)…but what were these patients expecting? If you put comments with your name on a blog, ANYONE who knows how to google will find them!
Patients need to be educated AND enabled to make the right choice. What is too much exposure for one might be of no concern to another patient…yet who is taking the lead at the moment in educating patients about this choice?
2. Self vs. over-regulation
You mention EU regulators taking a critical look at Internet privacy. I’m sorry, this just sends cold shivers down my spine. As a German, I have issues with anything that resembles *big brother is watching you*. EU regulators know NOTHING about social media – how will they be able to guide me through the social media jungle? They believe they should prevent me from reading US health info…to *protect* me from knowledge?
Sorry, not a concept I can endorse. I want to be empowered to make my own health choices, not be censored out of info.
Let me finish with this quote from L. Crisler (a lady who lived with wolves ;-)):
Ignorance is knowing nothing and believing in the good.
Innocence is knowing everything and still believing in the good.
I love the innocence of social media, let’s make sure we enable it responsibly.
PS: Let me make a sly side comment about PLM: by *protecting* the community from mean social media monitoring scraping robots, they are also safeguarding their main source of revenue: data reports sold to pharmaceutical companies…
What my #hcsmeu gestalt entity said ;) ^^^
@andrewspong
I agree with Jamie Heywood’s view that privacy is ultimately a selfish act. Privacy also impacts the ability of government agencies to serve customers, even large partner orgs. Yet I also agree that we should have a choice about what we share, and technology is not allowing a black-and-white distinction between these polar opposites. It seems like PatientsLikeMe took quick action and was open about what happened. In a grey area like privacy it is hard to have blanket prescriptions. @stevemuse
Health information shared on Facebook can serve a good purpose. I like that people share about their health status.