How do regular people find evidence-informed guidance to help make decisions about safe living in a pandemic – their questions answered when needed, in a useful manner? Part 2 of our Person-First approach. Join our journey.
Proem
A few weeks ago, on this podcast, I introduced you to a project I've been working on about Person-First Safe Living in a Pandemic. I share it with you for several reasons. First, I focus my days on learning how people make health choices and decisions in real time. To further that, I am committed to improving the alignment between the questions people ask and the available research. What better context to explore than this pandemic? Just as the issues and tensions of health equity and systemic racism exist all the time, the problem of finding trusted evidence-informed guidance is heightened during this pandemic. Second, I'm fascinated by the challenge of communicating what works for people and what doesn't – specifically, communication leading to action. Action means changes in behavior and practice. I endlessly perseverate about end-users, audience, medium, message, and methods. Third, preparing the written material and the podcast episode helps us, this small and mighty band of volunteers, reflect on what we're doing and why we're doing it. This exercise of writing and recording gives us material to share in various venues and media. On the one hand, the details of how the sausage is made can be unappealing and dry; on the other hand, I'm so excited about it that I must share it with you. OK, here goes, Part 2.
Finding Information about Safe Living in a Pandemic
In Part 1: What Could Go Wrong? of our four-part series, we introduced Carlos, an ICU nurse treating COVID-19 patients. He struggles to manage COVID-19 and life outside the hospital with his sister and mother. We began to examine the question: how can regular people, like Carlos, find up-to-date, trustworthy answers to their questions about living safely in a pandemic – finding answers when they have questions, in a manner useful to them? We shifted from patient-centered thinking to a person-first point of view. Person-first means we start by understanding people and hearing their questions and concerns, and then look for the answers. We recognized that our work's end-users are community resources, whom we define as relative experts – with or without credentials – who hold at least a 15-minute advantage in knowledge and expertise. Our audience is experts in the knowledge management fields (computerized decision support and library science).
We asked: how can the research and knowledge management industry help regular people and communities find evidence-informed guidance to live safely? In Part 1, we introduced ourselves as a mighty band of volunteers and described the early steps of our journey. We said we would know we had hit a home run when interested – no, excited – people came on board, worked with us, and carried the project to a sustainable conclusion. New people and organizations have joined us in the few months we've been working.
In this Part 2, we ask: How can Carlos and his community resources find evidence-informed guidance to help answer the questions he and his family ask – again, in a manner useful to them, at the time they need it? So, what questions might they have? Who might they ask? How can they find what they need?
Questions people ask
As we became acquainted with people like Carlos, we listened to the questions they asked about COVID-19. They asked about treatment, testing, work and school, transportation, money – safe living – in a pandemic. Everyone sought answers in the context of their conditions, environment, and circumstances. They sought options to manage their lives and health, not just diagnoses and treatment. Overloaded would understate how we felt after listening to the massive scope of questions people asked about COVID-19 – paralyzed may be more accurate. So, we elected to focus on testing, for no reason beyond a possible common thread and the participation in our group of Michael Waters, a testing expert with the FDA. We took advantage of his expertise. For a week, we listened informally – to family, friends, colleagues, social media, popular media, wherever we went – for questions people asked about COVID-19 testing. We seldom needed to bring up the subject. It was a routine topic everywhere, validating our unscientific choice of focus. In that week, we cataloged 75 different questions about testing, which we distilled down to eleven.
- Who needs a COVID-19 test?
- How long after I test positive do I have to be quarantined?
- How much will a COVID-19 test cost me?
- If a test shows that I have antibodies to COVID-19, am I safe?
- When will I be able to get a test that I can do at home?
- How often should I get a test?
- How good are tests?
- Who, besides me, will get my test results? What will they do with it?
- What is my employer doing about testing? What if they don’t have a plan?
- Am I being enrolled in an experiment?
- If I get an antibody test and have antibodies, do I still need to wear a mask?
Finding answers
Next, each person on our team chose one question from the list and spent a week looking for answers in the academic literature, in popular and social media, and from lay experts and community resources. We felt sobered and disappointed at the gap between people's questions and the available, reliable information to answer them. Useful evidence-informed guidance was exceedingly difficult to find. Internet search results ranged from a firehose of information to incomprehensible resources. We heard overwhelming distrust of information, in every flavor imaginable. Some people trusted Dr. Fauci, some President Trump; some trusted the CDC, others didn't – all over the map.
We would all benefit from a means to focus quickly, laser-like, on the information we need, when we need it, in a manner we can use. Perhaps experts in computable decision science and library science could help with findability. What followed was an exercise in classification, metadata, and tagging.
Classification systems and search engines: PubMed, Medline Plus, and Google
Traditionally, academics and scientists use NLM (National Library of Medicine) resources, including PubMed, and other classification systems to organize and search for academic literature. Some regular people – not clinicians, academics, librarians, or CDS (clinical decision support) professionals – are comfortable searching with these more traditional, often less user-friendly means. Others type a question or a few words into a search engine, or ask someone (a crony, neighbor, respected person, community resource, etc.). Either way, the range of responses starts with nothing (in rare cases) and ends with far too much, almost all of it non-specific and poorly aligned with the original question.
Carlos might search for COVID-19 Testing for ICU Nurses in PubMed, Medline Plus, or Google. Each search would return quite different results, often changing daily or more frequently.
PubMed
- COVID-19: A perspective on Africa’s capacity and response
- Use of personal protective equipment against coronavirus disease 2019 by healthcare professionals in Wuhan, China: a cross-sectional study
- Effect of Hydrocortisone on Mortality and Organ Support in Patients with Severe COVID-19: The REMAP-CAP COVID-19 Corticosteroid Domain Randomized Clinical Trial
Medline Plus
- A Guide to Surgical Specialists
- For Parents: Multisystem Inflammatory Syndrome in Children (MIS-C) associated with COVID-19
- What Is a Ventilator?
- Guidance for Healthcare Workers about COVID-19 (SARS-CoV-2) Testing
- Clinical Care Guidance for Healthcare Professionals about Coronavirus (COVID-19)
- A Texas ICU nurse is hospitalized with COVID-19 after testing negative
We also found marked variation in results across browsers and search engines – Firefox, Chrome, Bing, DuckDuckGo. If we changed a word in the search, the results differed, some useful, most not. The reality, of course, is that usefulness is critical but often hard to achieve.
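To make that variability concrete, the PubMed portion of such a search can be reproduced programmatically. Here is a minimal sketch, assuming Python 3 with the third-party requests package, that queries NCBI's public E-utilities API; the query string and result count are illustrative, not part of our actual workflow.

```python
# Minimal sketch: reproduce a PubMed search via NCBI's E-utilities API.
# Assumes Python 3 and the third-party 'requests' package.
import requests

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
ESUMMARY = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esummary.fcgi"

def pubmed_titles(query, max_results=5):
    """Return titles of the top PubMed hits for a free-text query."""
    # Step 1: esearch maps the query to a list of PubMed IDs (PMIDs).
    ids = requests.get(ESEARCH, params={
        "db": "pubmed", "term": query,
        "retmax": max_results, "retmode": "json",
    }).json()["esearchresult"]["idlist"]
    if not ids:
        return []
    # Step 2: esummary fetches record metadata, including article titles.
    result = requests.get(ESUMMARY, params={
        "db": "pubmed", "id": ",".join(ids), "retmode": "json",
    }).json()["result"]
    return [result[pmid]["title"] for pmid in ids]

if __name__ == "__main__":
    # The same query Carlos might type; results change as PubMed updates.
    for title in pubmed_titles("COVID-19 testing ICU nurses"):
        print(title)
```

Running the same sketch on different days returns different titles, which is exactly the instability we observed by hand.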
Custom searching for usefulness – For me, about me, by me
Anyone searching for something in a library, bookstore, website, bureau, or closet hopes to find order rather than chaos – the right stuff, at the right time, in a manner that makes sense: socks in the sock drawer, fiction with fiction. When we think about this as accessibility, we mean a language I understand, a complexity that matches my experience, media I am comfortable with, the time it takes to consume, and the intended audience. We also need a summary to help us make a quick decision or judgment, so we do not waste our precious time. We might want to search or filter for specific subtopics like K-12, college, or travel. These are general categories; tagging is further search refinement. In the end, we want to search for information about a question that relates to us and our situation and find meaningful results. This kind of categorization and filtering makes successful finding possible; a toy sketch below shows the idea. Then, let's talk about metadata (data about data) and tagging.
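A toy sketch of that kind of faceted filtering, in Python; the resource records and field names are hypothetical, invented purely for illustration:

```python
# Toy sketch of faceted filtering. The resource records and field names
# below are hypothetical, invented for illustration only.
resources = [
    {"title": "Back-to-school testing FAQ", "category": "K-12", "minutes": 3},
    {"title": "Campus quarantine rules", "category": "college", "minutes": 10},
    {"title": "Flying during the pandemic", "category": "travel", "minutes": 5},
]

def find(items, category=None, max_minutes=None):
    """Keep only the resources that match the requested facets."""
    hits = items
    if category is not None:
        hits = [r for r in hits if r["category"] == category]
    if max_minutes is not None:
        hits = [r for r in hits if r["minutes"] <= max_minutes]
    return hits

# "Show me K-12 resources I can consume in 5 minutes or less."
print(find(resources, category="K-12", max_minutes=5))
```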
Metadata, data about data, can help us organize
How can we use decision support tools and library science to help organize and help us find the right stuff at the right time in the right manner?
As no-budget volunteers, we could not afford to reinvent the wheel, so we considered both existing metadata and new metadata that might be easily automated – existing metadata because decision scientists may have already set standards. We considered crowdsourced approaches (think Wikipedia) to generating relevant metadata: finding a cadre of people who recommend evidence-informed sources of information and assign metadata to their recommended resources. We looked to common clinical decision support (CDS) standards, since this is where our work started, and then worked on adding what is missing. Here are some metadata elements we're testing now (a rough sketch of one possible encoding follows the table):
| Data element | Data type | Response examples |
| --- | --- | --- |
| Accessibility: minutes to consume | number | 3 minutes to watch, 10 minutes to read |
| Accessibility: languages | check box, short answer | English, Spanish, other (fill in) |
| Accessibility: readability | text | Grade 6, Grade 12 |
| Accessibility: media type | check box | Text only, multimedia (audio, video, graphic) |
| Location (country, state, ZIP) | check box, short answer | Not specified, US, state |
| Category | check box, short answer | health, children, older adults, employment, restrictions, testing, vaccines, school |
| Short summary | short answer (240 characters) | |
| Tags | fill in, frequently used | Infants, preschool, >75, essential workers, masks, quarantine, home testing, one-dose tests, college |
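As a sketch of how one such record might be represented in software – a hypothetical Python encoding of the table above, not a settled standard:

```python
# Hypothetical encoding of the metadata elements above as a Python dataclass.
# Field names and the example record are illustrative, not a settled standard.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ResourceMetadata:
    minutes_to_consume: int   # accessibility: e.g., 3 to watch, 10 to read
    languages: List[str]      # e.g., ["English", "Spanish"]
    readability: str          # e.g., "Grade 6"
    media_type: str           # "text only" or "multimedia"
    location: str             # "Not specified", a country, or a state
    categories: List[str]     # e.g., ["testing", "school"]
    short_summary: str        # capped at 240 characters
    tags: List[str] = field(default_factory=list)

    def __post_init__(self):
        # Enforce the 240-character cap on the short summary.
        self.short_summary = self.short_summary[:240]

example = ResourceMetadata(
    minutes_to_consume=3,
    languages=["English"],
    readability="Grade 6",
    media_type="text only",
    location="US",
    categories=["testing"],
    short_summary="Who needs a COVID-19 test, and when? A plain-language overview.",
    tags=["home testing", "essential workers"],
)
```

Each field maps one-to-one to a row in the table, which is what makes a scheme like this easy to automate.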
So, there is good news and bad news in what we’ve done here. We’ve found intersections with other disciplines; we’ve seen organizations and individuals attempting to solve similar problems. We’ve identified some good examples of what we need to do next. However, the work isn’t easy or straightforward.
Tagging
Tags – custom sub-categories – can help people find what they are looking for when browsing or searching; think of them as a navigation tool, a GPS. Some platforms use hashtags, some use free-form tags, and some have internal tags. Prescription to Learn offers exciting examples of such navigation tools; check it out. Some tags may be more useful than others. Successful tagging depends on how people think and search; therefore, it includes much redundancy. It's hard to imagine fully automating tagging; after all, we are our own best curators of the information we need. So who is responsible for tagging, or has the time and resources to curate all the information out there? We think useful tagging could – and should – be crowdsourced. All good thoughts for further exploration; a small sketch of the idea follows.
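One way to picture crowdsourced tagging that embraces redundancy – a sketch in Python; the tag submissions and synonym map are hypothetical, for illustration only:

```python
# Sketch of aggregating crowdsourced tags: normalize free-form variants,
# then surface the most frequently used tags. Submissions and the synonym
# map are hypothetical, for illustration only.
from collections import Counter

SYNONYMS = {
    "masking": "masks",
    "face coverings": "masks",
    "at-home testing": "home testing",
}

def normalize(tag):
    """Fold hashtags, case, and known synonyms into one canonical tag."""
    tag = tag.strip().lstrip("#").lower()   # "#Masks " -> "masks"
    return SYNONYMS.get(tag, tag)

submissions = ["#Masks", "masking", "home testing", "at-home testing",
               "quarantine", "masks", "college"]

counts = Counter(normalize(t) for t in submissions)
print(counts.most_common(3))
# [('masks', 3), ('home testing', 2), ('quarantine', 1)]
```

The redundancy is the point: different people reach the same resource through different words, and the aggregation surfaces the tags people actually use.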
Taking action – partnerships
Recently we agreed to partner with EBSCO to use their Stacks Content Management System as a library of person-first COVID-19 resources. This content management system provides us with a sandbox, a site, to test our findability methods and to grow our partnerships. It also allows us to test out our ideas about crowdsourcing the tagging process. We accepted this generous offer because a member of our team, Kayla Nelson, stepped up to learn the platform and begin entering resources.
The art and science of tag creation feels daunting – a tension between standardization (a set pool of tags) and person-responsiveness (people think differently). We found a class of students in healthcare communication who will help us begin to crowdsource tagging.
What’s next? (More questions, some answers.)
Our mighty band of volunteers continues to find people and organizations eager to grow this discovery process, find funding, build or join coalitions, and move it along. In the third post, we will continue to share our unfunded discovery journey, moving on to Trust and Recommendations. We seek to promote a dialog within the research community and between researchers and laypeople and their communities. Here we are planting a seed.
Please contact us at info@safeliving.tech, use #safelivingpandemic on Twitter, or check out our website, still in development, at https://www.safeliving.tech/
Reflection
You are part of our experiment as a reader and listener. What do you think? What audience are you? Does this episode resonate? Does it motivate you to do anything? With which audiences should we share this? Too technical, not technical enough? Interesting? Boring? Share your thoughts and advice. We need it. Thanks. Onward.