The problem isn’t reminders. It’s placement.
Most medication apps focus on reminders — alerting you when a dose is due. That’s useful, but it assumes someone already figured out when each dose should be. For people on multiple medications, that “when” is the hard part: levothyroxine wants an empty stomach, calcium wants a meal, the two of them want to be four hours apart, and metformin wants dinner. Working out a valid daily schedule from a list of pills is a constraint satisfaction problem.
Pharmacists do this calculation all the time. They do it well, often in a single conversation at pickup. But that conversation happens once, on paper, and gets forgotten. Sewa’s job is to make the math behind that conversation legible, citable, and rerunnable.
We’re not trying to replace the pharmacist. We’re trying to give you a working draft to take to them, so the conversation can start from “is this right?” instead of “what should I do?”
Step one: turning your text into structured data.
You can paste anything — bottle labels, a doctor’s note, a list from your phone. The first step is parsing. A Claude API call (Haiku for speed) extracts structured data with strict JSON schema validation: name, dose, dose unit, frequency, prescriber instructions.
The output looks like this:
```typescript
const parsed = [
  {
    name: "Levothyroxine",
    dose: 75,
    unit: "mcg",
    frequency: "once_daily",
    timing_hint: "morning",
    food: "empty_stomach"
  },
  // ...four more medications
];
```

Each parsed entry is then sent to RxNorm, the U.S. National Library of Medicine’s drug name registry, to resolve a unique identifier (an RxCUI). This is the only place where the raw drug name leaves your browser — and the response is just an ID number. From here on, the rest of the pipeline operates on the ID, not the name.
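A sketch of that lookup against RxNorm’s public REST endpoint (the URL format comes from the RxNav API; the helper names here are illustrative, not the actual Sewa code):

```typescript
const RXNORM_BASE = "https://rxnav.nlm.nih.gov/REST";

// Build the lookup URL. Only the drug name is sent — nothing else.
function rxcuiLookupUrl(drugName: string): string {
  return `${RXNORM_BASE}/rxcui.json?name=${encodeURIComponent(drugName)}`;
}

// The (simplified) shape of the RxNorm response we care about.
interface RxNormResponse {
  idGroup: { rxnormId?: string[] };
}

// Extract the first RxCUI from a response, or null if the name didn't resolve.
function extractRxcui(resp: RxNormResponse): string | null {
  return resp.idGroup.rxnormId?.[0] ?? null;
}

// The full round trip (requires Node 18+ for global fetch).
async function resolveRxcui(drugName: string): Promise<string | null> {
  const res = await fetch(rxcuiLookupUrl(drugName));
  const body = (await res.json()) as RxNormResponse;
  return extractRxcui(body);
}
```

Because the response is just an ID, a failed resolution degrades gracefully: the entry simply carries no RxCUI and skips the rule lookups that depend on one.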
The constraint solver, in ~270 lines of TypeScript.
The scheduling step is a classic constraint satisfaction problem. Each medication generates one or more dose “slots” that need to be placed on a 24-hour clock. Each slot has hard constraints (must be at bedtime, must be 4 hours from calcium) and soft preferences (cluster with other morning doses if possible).
We use a hand-written backtracking solver — no Z3, no MiniZinc, no fancy SMT. The reason is practicality: the problem is small (typically 1–15 slots), the constraints are simple, and a custom solver lets us emit reasoning traces alongside the solution. Off-the-shelf solvers give you the answer; we need the explanation too.
- 06:30: Levothyroxine
- 11:00: Calcium
- ≥4h gap required between these two
For each medication, the solver explores valid placements in order of constraint tightness: medications with the most specific timing rules go first, less-constrained ones fill in around them. When all medications are placed, the result includes not just the chosen time but the chain of reasoning that led there — which rules were considered, which were satisfied, and which alternative placements were rejected.
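A stripped-down version of that loop, with illustrative types — the real solver handles timing windows, soft preferences, and richer trace entries, but the backtracking skeleton looks roughly like this:

```typescript
type Minutes = number; // minutes since midnight

interface Slot {
  id: string;
  candidates: Minutes[];                      // allowed clock times; tighter slots have fewer
  minGaps: { other: string; gap: Minutes }[]; // pairwise spacing rules
}

interface TraceEntry { slot: string; time: Minutes; ok: boolean; reason: string }

function solve(slots: Slot[]): { placement: Map<string, Minutes>; trace: TraceEntry[] } | null {
  // Most-constrained first: fewest candidate times go to the front.
  const ordered = [...slots].sort((a, b) => a.candidates.length - b.candidates.length);
  const placement = new Map<string, Minutes>();
  const trace: TraceEntry[] = [];

  // Return a violation message if placing `slot` at `t` breaks a gap rule, else null.
  function violation(slot: Slot, t: Minutes): string | null {
    for (const rule of slot.minGaps) {
      const otherT = placement.get(rule.other);
      if (otherT !== undefined && Math.abs(otherT - t) < rule.gap) {
        return `${slot.id} at ${t} is <${rule.gap}min from ${rule.other} at ${otherT}`;
      }
    }
    return null;
  }

  function place(i: number): boolean {
    if (i === ordered.length) return true; // all slots placed
    const slot = ordered[i];
    for (const t of slot.candidates) {
      const why = violation(slot, t);
      if (why) {
        trace.push({ slot: slot.id, time: t, ok: false, reason: why });
        continue;
      }
      trace.push({ slot: slot.id, time: t, ok: true, reason: "all gap rules satisfied" });
      placement.set(slot.id, t);
      if (place(i + 1)) return true;
      placement.delete(slot.id); // backtrack and try the next candidate
    }
    return false;
  }

  return place(0) ? { placement, trace } : null;
}

// Example: levothyroxine fixed at 06:30 (390), calcium allowed at 08:00 (480) or 11:00 (660).
const result = solve([
  { id: "levothyroxine", candidates: [390], minGaps: [] },
  { id: "calcium", candidates: [480, 660], minGaps: [{ other: "levothyroxine", gap: 240 }] },
]);
// Calcium lands at 660 (11:00); the rejected 08:00 attempt stays in the trace.
```

The trace is the point: every rejected candidate survives as a trace entry, which is what makes “which alternative placements were rejected” answerable after the fact.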
Where the interaction rules come from.
The solver is only as good as its dataset. Sewa uses three sources, in order of authority:
ONCHigh — peer-reviewed, government-published
The “high-priority drug-drug interaction” list developed by the Office of the National Coordinator and published in JAMIA in 2013 (Phansalkar et al.). It’s intentionally narrow — these are the interactions where clinical evidence is strongest and consensus is highest. The original ONC API hosting this dataset was discontinued in January 2024; the published paper remains the authoritative reference, and we re-host the data structurally.
CredibleMeds — QT-prolongation registry
For one specific class of risk — combinations of drugs that prolong the QT interval and can cause fatal arrhythmias — we cross-reference the CredibleMeds registry, an academic project that grades drugs by tier: Known, Possible, Conditional. This is one of the few clinical scenarios where the harm is severe enough that even a low-probability combination warrants a flag.
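A sketch of that cross-reference. The tier names come from CredibleMeds; the flagging policy shown (flag any pair where both drugs appear in the registry, at any tier) is our reading of the text above, and the RxCUI keys are placeholders:

```typescript
type QtTier = "Known" | "Possible" | "Conditional";

// Registry keyed by RxCUI. These IDs are placeholders, not real identifiers.
const qtRegistry = new Map<string, QtTier>([
  ["00001", "Known"],
  ["00002", "Conditional"],
]);

// Return a warning string when both drugs are QT-listed, otherwise null.
function qtPairWarning(rxcuiA: string, rxcuiB: string): string | null {
  const a = qtRegistry.get(rxcuiA);
  const b = qtRegistry.get(rxcuiB);
  if (!a || !b) return null;
  return `QT-prolongation risk: ${rxcuiA} (${a}) + ${rxcuiB} (${b})`;
}
```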
Curated timing rules
FDA labels often specify timing rules (“take 4 hours apart from antacids”) that aren’t captured in interaction databases because they’re not strictly interactions — they’re absorption rules. We extract these from the labels themselves and store them as {rxcuiA, rxcuiB, minGapMinutes, citation} tuples. The set is reviewed quarterly and version-controlled in the open repo.
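In code, those tuples and their lookup can be as plain as this (the RxCUI values below are placeholders, and the helper name is ours):

```typescript
interface TimingRule {
  rxcuiA: string;
  rxcuiB: string;
  minGapMinutes: number;
  citation: string; // e.g. the relevant FDA label section
}

const timingRules: TimingRule[] = [
  {
    rxcuiA: "00000", // placeholder RxCUI for the thyroid drug
    rxcuiB: "11111", // placeholder RxCUI for the calcium supplement
    minGapMinutes: 240,
    citation: "FDA label, dosage and administration section",
  },
];

// Required gap between two resolved drugs, order-insensitive; 0 if no rule applies.
function requiredGapMinutes(rules: TimingRule[], a: string, b: string): number {
  const hit = rules.find(
    r => (r.rxcuiA === a && r.rxcuiB === b) || (r.rxcuiA === b && r.rxcuiB === a)
  );
  return hit?.minGapMinutes ?? 0;
}
```

Keeping the citation inside the tuple is what lets the UI attach sources programmatically later: the rule and its provenance travel together.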
Why every placement gets an explanation.
An explanation isn’t decoration. It’s the actual product. If Sewa said “take calcium at 11:00” with no reasoning, the user (and the pharmacist) would have to take that on faith. With reasoning, they can verify it in seconds: “ah, four hours after the thyroid pill — yes, that’s right.”
Explanations are derived from the solver’s structured trace — the chosen time, the rules that applied, the citations — and rendered as plain-English prose. The language model, when used, never adds clinical content the solver didn’t justify. If you removed the LLM entirely, you’d still get a valid schedule with rule names attached — the LLM just makes those rule names readable.
Citations are attached programmatically, not by the model. There is no path by which a fabricated source can appear in the UI.
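Under those constraints, the deterministic fallback can be as simple as string templating over the solver’s trace — a sketch with illustrative field names:

```typescript
interface PlacementTrace {
  drug: string;
  time: string;        // e.g. "11:00"
  ruleApplied: string; // e.g. "min_gap_240"
  relativeTo?: string; // the other drug the rule was measured against, if any
  citation: string;    // attached in code, never by the model
}

// Render one placement as plain English with its citation appended.
function renderExplanation(t: PlacementTrace): string {
  const why = t.relativeTo
    ? `rule ${t.ruleApplied} relative to ${t.relativeTo}`
    : `rule ${t.ruleApplied}`;
  return `${t.drug} at ${t.time} (${why}) [source: ${t.citation}]`;
}
```

An LLM pass can smooth this into friendlier prose, but every clinical claim and every citation is already present before the model sees anything.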
What this approach does not catch.
The honest list. These are the limits of the current dataset and the architecture, and they should inform how you use the output.
The reason we are this explicit about limits is that opacity is the most common failure mode in this space. Most apps that schedule medications give you a result without telling you what they checked. Pharmacist verification cannot happen against a black box.
Why this is free.
The 2024 sunset of the NLM Drug-Drug Interaction API was a quiet event with significant consequences. For a decade, free public infrastructure made it possible for academic projects, EHRs, pharmacy schools, and developer tools to build clinical decision support without paying a vendor. When it went away, the free versions stopped getting maintained — only the commercial ones kept up.
Sewa is an attempt to put back a small, defensible piece of that public infrastructure: not a full interaction database, not a clinical decision support system, but a transparent scheduler that shows its work. Every rule has a citation. Every placement has a reasoning trace. Nothing is hidden behind a paywall.
This is also why the architecture is what it is. Privacy-preserving by default. No accounts, no database, no business model that requires capturing patient data. The goal is for someone five years from now — a pharmacy student building a project, a clinic in a low-resource setting, a researcher who needs to verify what we did — to be able to use this and trust what it’s doing.