If you are like any other public health researcher during the last year, especially if you are an epidemiologist like me, you have spent each and every day evaluating how public health works in practice. You have dedicated your time, your expertise, and your energy to facing one of the largest public health challenges we have seen in a century. You have become the decision maker of your inner circle, the guiding light of reason at your family dinner table, and the person everyone on social media turns to for advice. In other words, you are in a constant state of critically evaluating yourself and your work.
For me, this became all the more personal when I tested positive for COVID-19, becoming part of a dataset like the ones I use daily in my research studying exposure to metals.
As a PhD student in epidemiology, my dissertation work relies on data. I study how exposure to environmental metals impacts children’s neurodevelopment. Numbers and patterns drive my questions, results, and conclusions. The datasets I use to answer my research questions contain a wealth of information on demographics, children’s health, and environmental exposures. In other words, they contain everything you need to conduct an analysis and publish a scientific paper.
While I work, I often reflect on my research aims and goals to make sure that the work I am doing meets the wants and needs of the communities I’m studying. But now, since testing positive for COVID-19, I’ve thought of all the research questions I could ask of the dataset I have joined. Are there pregnancy risks for women who were diagnosed with COVID-19? Does COVID-19 alter brain function? Will COVID-19 affect how we age? Does a COVID-19 diagnosis change the mental health status of individuals? Do age, sex, race, and income mediate these relationships?
A basic principle of public health is to ensure that research aims meet community concerns. For me, this is easier to conceptualize when people from the community we are studying are actively engaged in building the dataset, or helped to initiate the research and its aims in the first place.
But that’s harder to do when the dataset I’m studying is preexisting. Becoming a data point myself, joining a dataset that will likely be used for years to come as researchers continue to ask new questions about the COVID-19 pandemic, has pushed me to ask new questions about how I practice and research public health.
For one, I’m committed to being a better communicator. After testing positive for COVID, I was given an overwhelming amount of advice on how to safely isolate. It was not unlike the public health advice that I helped craft as an intern at a public health department, explaining the ins and outs of preventing lead poisoning in children. My experience having COVID has helped me recognize that when explaining health protocols, public health officials shouldn’t just provide blanket instructions, but should ideally also ask each person about their individual situation. Those questions might look like: What options do you have available to carry out these practices? Will feeling responsible for these measures add additional stress to your life? Do you have access to mental health services, a support system, and/or other services that you may need to help meet these guidelines?
I also now know what it feels like to be part of a dataset without being able to know your own personal data or the results from research being done on your data. In the week that I tested positive through BU, the University’s public-facing dashboard reported 50 positive samples, with 13 of those samples identified as containing a COVID-19 variant of concern. According to BU’s protocol, those samples were de-identified, and so individuals who tested positive for COVID-19 don’t know whether they were infected with a variant. The reasoning behind this makes sense: clinical recommendations do not currently differ between SARS-CoV-2 variants. Still, I felt strongly that I wanted to know which strain I had.
Why was I so bothered that I didn’t get the chance to know which strain I was infected with? Like many who receive the news that they have tested positive, I felt guilt and confusion. The question that has haunted me the most is why I, and not others with the same exposure, turned positive. This is a question we face regularly in epidemiology: How can we make a claim about cause and effect when there are so many different pathways through which an exposure can lead to disease?
This question of how to provide data without apparent clinical relevance is a challenging one that I have faced in my own work; many of the metals I study are unregulated and have both beneficial and detrimental effects. How do we tell a person their level of exposure when there is no clinical recommendation for a safe level and no guideline to limit or control the contaminant? While sharing data with participants is an increasingly common practice, researchers still face ethical concerns about presenting data that have little bearing on current clinical practice.
Going forward, I will encourage my fellow researchers to remember that each row in a dataset means far more than its ability to answer research questions. I will be diligent in aligning my research aims and work with the people whose data I’m studying, even when it is not immediately apparent where these data come from. My COVID experience reminds me that I have so much more to learn on the path to being a good public health professional.