Pratik Desai, a 34-year-old with a background in systems integration and AI, built a free, AI-assisted workflow to help his family through his mother's Stage 4 cancer journey. He turned scattered medical data into structured insights, hoping to give caregivers more leverage and help them communicate better with clinicians, especially when guidance felt thin.
He wanted to see if a caregiver-led AI approach could catch errors, flag risks, and support smarter treatment decisions in the messy world of oncology.
A personal breakthrough: building an AI-powered care workflow
Desai started with daily exports of his mother's Epic medical records, then layered in AI tools to pull the data together and generate questions for her doctors.
He used NotebookLM and Claude to translate raw medical notes, test results, and imaging reports into a story that nonmedical caregivers and medical teams could both follow. His goal wasn’t to replace clinicians, but to act as a structured coach—helping families advocate with more precision and confidence.
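The article doesn't include Desai's actual code, but that translation step is easy to sketch with Anthropic's Python SDK, since the article names Claude. The file path, prompt wording, and model alias below are illustrative assumptions, not his implementation.

```python
import anthropic

# Assumes ANTHROPIC_API_KEY is set in the environment and that an exported
# Epic/MyChart note has already been saved as plain text.
client = anthropic.Anthropic()

def translate_note(note_text: str) -> str:
    """Restate a clinical note in plain language for a nonmedical caregiver."""
    message = client.messages.create(
        model="claude-3-5-sonnet-latest",  # model alias chosen for illustration
        max_tokens=1024,
        system=(
            "You are helping a nonmedical family caregiver. Restate the "
            "clinical note below in plain language. Keep every value, "
            "medication, and date exact, and list any findings worth "
            "raising with the care team. Do not give medical advice."
        ),
        messages=[{"role": "user", "content": note_text}],
    )
    return message.content[0].text

# Hypothetical export path; real MyChart exports are often PDF or C-CDA XML
# and would need a text-extraction step first.
with open("exports/2024-05-01_progress_note.txt") as f:
    print(translate_note(f.read()))
```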
The workflow shifted from a pilot to something scalable as his mother’s records grew. With thousands of pages piling up, Desai kept refining the toolkit, choosing the most capable models he could find while dealing with their flaws.
He made sure the approach stayed free and usable for other families, even those without any clinical background.
Key components of the workflow
- Data fusion: Daily Epic exports feed a living case model, so information always stays up to date (a rough sketch of this pipeline follows the list).
- AI-assisted synthesis: Large language models condense and organize findings into actionable insights.
- Anomaly detection and pattern recognition: The system points out inconsistencies in reports and flags clinical patterns that might need a closer look—like signs of pulmonary embolism or bleeding risks after transfusions.
- Ready-to-use clinician prompts: It generates targeted questions, nudging care teams toward clearer explanations and faster action.
- Documentation and traceability: The workflow logs changes and keeps a running record (eventually more than 1,600 pages) to back up ongoing discussions.
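As a rough illustration of how these pieces could fit together (none of this is Desai's published code), here's a minimal Python sketch: each day's exports are folded into a JSON case file, and a single Claude call reviews the most recent documents for inconsistencies and drafts clinician questions. The file layout, prompts, and five-document review window are all assumptions.

```python
import json
from pathlib import Path

import anthropic

client = anthropic.Anthropic()  # assumes ANTHROPIC_API_KEY is set

CASE_FILE = Path("case_model.json")  # hypothetical "living case model"

def ingest_daily_export(export_dir: str) -> dict:
    """Fold the day's exported documents into the running case model.

    Appending rather than overwriting keeps a traceable record of what
    the AI saw and when, which supports the documentation goal above.
    """
    case = json.loads(CASE_FILE.read_text()) if CASE_FILE.exists() else {"documents": []}
    for doc in sorted(Path(export_dir).glob("*.txt")):
        case["documents"].append({"name": doc.name, "text": doc.read_text()})
    CASE_FILE.write_text(json.dumps(case, indent=2))
    return case

def review_recent(case: dict) -> str:
    """One LLM pass: flag inconsistencies across the latest reports and
    draft targeted questions for the next clinician conversation."""
    recent = "\n\n---\n\n".join(d["text"] for d in case["documents"][-5:])
    message = client.messages.create(
        model="claude-3-5-sonnet-latest",  # alias chosen for illustration
        max_tokens=1500,
        system=(
            "These are recent reports for one patient, oldest first. "
            "Flag contradictions within or between reports, note patterns "
            "a clinician should re-check, and draft specific questions "
            "for the care team. Cite the report each point came from."
        ),
        messages=[{"role": "user", "content": recent}],
    )
    return message.content[0].text

if __name__ == "__main__":
    print(review_recent(ingest_daily_export("exports/today")))
```

A real version would also need text extraction from PDF or C-CDA exports, and clinician verification of every flag: the model's output is a starting point for human review, never a diagnosis.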
Clinical impact and lessons learned
Desai's approach helped spot issues that clinicians might have missed or noticed too late. By catching errors in CT scan reports and picking up on evolving clinical signals, like embolic risk and post-transfusion bleeding, the tool aimed to prevent emergencies and give families more time together.
In this case, the patient lived 76 days after diagnosis and spent 67 of them as an inpatient, which underscores how much timely, informed interventions can matter.
He kept tweaking the system as his mother's medical history grew, swapping in better-performing models when possible, even when they sometimes made mistakes or "hallucinated." He believes the medical system should be held to the same standards researchers apply to AI: transparency, accountability, and a focus on real outcomes rather than mere error avoidance.
Illustrative outcomes
- Early identification of report anomalies helped catch potential mistakes before they affected care.
- Detection of clinically meaningful patterns—like signs of pulmonary embolism and bleeding risks—prompted timely clinician review and adjustments.
- Practical patient-advocacy support gave families clearer, more focused questions to ask care teams.
- Freed-up bandwidth for discussion let caregivers participate more deeply in decision-making and make sense of complex treatment trade-offs.
- Accessibility and replicability showed that a simple, free toolkit can help other families advocate effectively too.
Implications for patients, caregivers, and healthcare systems
This case suggests that AI in healthcare can serve as a coach and second opinion: not a replacement for clinicians, but a trusted companion that helps close gaps in care and keep information organized.
AI tools like this give caregivers and families more confidence to engage with treatment plans. They can also improve the transparency and accountability that patients deserve.
For healthcare teams, the case points to the need for transparent AI governance: robust data provenance and clinician oversight are essential when bringing patient-facing AI aids into the mix.
Making caregiver tools accessible and free matters a lot. It broadens participation in care decisions and pushes for more patient-centered outcomes.
Here is the source article for this story: I built an AI tool to manage mom’s stage 4 cancer treatment