By Shiv Patel

This summer has challenged a lot of my assumptions about what it truly looks like to “do good” in a community. I used to think that doing good meant making a tangible impact, helping underserved populations gain access to care, or improving health outcomes through innovation. But now, I believe that doing good is less about the urge to help and more about the integrity with which that help is delivered.
In a community context, to “do good” means being accountable, humble, and responsive. It’s not enough to offer solutions. It’s about ensuring those solutions are co-designed with the people who need them, informed by lived experience, and attentive to historical context and present inequities. That’s the biggest lesson I’ve taken from working on my capstone this summer.
At Mayo, I’ve seen firsthand how research can be both a force for progress and a source of exclusion. On the one hand, radiation oncology is advancing rapidly – AI tools, genomic classifiers, adaptive planning – all with the potential to make treatment more effective and less harmful. But I’ve also come to see how research, when disconnected from the realities of underserved communities, can perpetuate harm. Algorithms trained on unrepresentative datasets can amplify disparities. Genomic classifiers become tools of exclusion if testing is not available to patients. An inaccessible technology is not equitable, no matter how innovative.
Talking with Dr. Borrás-Osorio was a wake-up call. She reminded me that in most of the world “doing good” starts with just getting the basics right. For her, the question isn’t whether a patient gets personalized radiation, it’s whether they get any radiation at all. That re-centered my focus. Research should not just seek to perfect what works in elite settings; it should figure out how to translate, adapt, and scale those tools for those who need them most.
I’ve also seen that research can foster or erode trust. At one tumor board meeting, there was a brief consideration of the ethical challenges of integrating predictive models into practice. One clinician brought up concern about patient consent and understanding: “Are we just going to assume that because we can predict toxicity, patients are going to trust what the algorithm is telling us to do?”
It was a reminder that even well-intentioned tools can fail if we don’t design them with, not for, the people they’re trying to help.
As I think about the work still to be done in my CBI, I see a few key areas:
- Implementation science: How do we bring new technologies to underserved clinics and community hospitals?
- Data equity: How do we diversify the datasets that inform clinical decision-making?
- Patient trust and communication: How do we make high-tech, high-stakes decision-making transparent and participatory?
As for me, I am curious about how we can introduce community-based participatory research (CBPR) models into precision oncology. How do we give individuals from low-income, rural, or minority communities a seat at the research table? “Doing good” in research is easy to say. Doing it justly demands more. It demands listening, constant rethinking, and a willingness to slow down innovation until it works for all.