What Using AI for My Mom’s Cancer Taught Me

By John Bailey

January 20, 2026

The most profound way I used AI in 2025 came during one of the harder stretches my family has faced: My mother’s cancer came back.

In the past, navigating this diagnosis meant late‑night Googling, scattered notes, and a nagging sense that I did not fully understand the medical language in front of us. Second opinions introduced different treatment options, sequences, and philosophies of care—and a new kind of uncertainty. Every decision carried life-or-death stakes, and most came with trade-offs that had no clear answer. You’re not just picking a treatment. You’re betting. Betting that the side effects will be worth it. That one option is better than another. That you’re not trading her good days for a few more bad ones.

This time was different. As the diagnosis unfolded, I built a custom AI assistant that could summarize test results and pathology reports in plain language. I compiled my mother’s prior medical records and treatment history and gave the assistant that context, allowing it to reason with a more complete picture of her case. What changed wasn’t just access to information; it was my ability to make sense of it without being overwhelmed. I could surface potential treatment options, identify strategies for managing side effects, and develop better questions to bring into conversations with her doctors.

One of the most valuable moments came when I paired deep research capabilities with a carefully structured prompt to examine my mother’s case through multiple expert lenses, much like a real tumor board. The prompt tasked the system with reasoning as an oncologist, a radiation specialist, a molecular pathologist, a clinical-trials expert, and a patient advocate, each analyzing the same facts from a different vantage point. By forcing these perspectives to respond to one another, the process surfaced trade-offs, risks, and sequencing decisions that no single viewpoint would have captured on its own. The result was a detailed, source‑cited report that helped us think more clearly about the path ahead.

The tools I used varied depending on the task. For organizing records and maintaining context across conversations, I relied on ChatGPT’s Projects and custom GPTs. For deep research (pulling together clinical trial data, treatment protocols, and emerging evidence), I used the deep research modes in ChatGPT, Claude, and Gemini, along with Manus. For medicine-specific reasoning, I turned to specialized platforms: Glass Health, OpenEvidence, and Google’s MedGemma. No single tool did everything well. But together, they formed a kind of distributed intelligence I could draw on as questions arose.

None of this replaced her doctors. But it helped my mother become a more informed patient and helped me be a better advocate. We arrived at appointments prepared and able to engage meaningfully in decisions about her care.

It also solved problems I hadn’t anticipated. Her records lived in different EHR systems that at times couldn’t talk to each other. But the AI assistant had everything: medical records, labs, genetic tests, and imaging reports. I could bring her complete history into the room, filling gaps her doctors couldn’t see from their screens.

On at least two occasions, AI surfaced details that had initially been overlooked. Once, it flagged a finding in a radiology report that the physician assistant had missed. Another time, it surfaced a potential treatment for managing chemo side effects that, once we brought it to her doctors, became part of her care plan.

I share this not to criticize her doctors. They are doing their best within a strained and broken healthcare system against an unforgiving disease. But having AI as a form of backup analysis and second opinion materially improved my mother’s care.

I also built something different: a chatbot for her.

My mother needed something she could ask the same question of at three in the morning, without feeling like a burden, and get a calm, patient explanation. The chatbot translated medical jargon into plain language. It answered honestly without minimizing the seriousness of what she was facing. It could gently walk her through what was coming—the treatment, the side effects, how they’d be managed—with patience and warmth that, ironically, studies have shown AI often delivers more consistently than overwhelmed healthcare professionals. I also added the option for a short prayer or devotional when she wanted one.

Clarity, empathy, availability, and a little bit of hope. It became a quiet source of comfort during an incredibly frightening journey.

She is still undergoing treatment, and while we’re hopeful, we don’t yet know how this story ends. This is not a story about AI curing cancer.

Cancer isn’t beaten in a single moment. It’s navigated, decision by decision. AI didn’t tell us what to choose. But it helped us understand what we were choosing between. In a fight measured in small gains, that clarity was its own kind of gift.