2026
Transfer validation automation
A tool that automatically checks, through the HBYS system, whether patients transferred from the emergency department to the tertiary center were actually admitted. It turned a monthly manual chore into an automated, record-by-record process.
The problem
When we transfer a patient from the emergency department to the tertiary center, the story ends on our side; but whether the patient was actually admitted there is an important quality indicator for us. How many of the patients we transferred actually needed admission? Are we over-transferring, or transferring appropriately? Answering that requires checking, one by one, what became of every patient we sent.
This had been a monthly manual grind: for each patient on the transfer list, log into HBYS, search by national ID, open the patient history, look for an admission record. Hundreds of rows, each a few clicks. Work no one wanted to do, and work whose reliability was doubtful even when it was done.
What I built
HBYS has no API — no externally accessible data connection. The only way to answer “was this patient admitted” is the very same web interface a human would click through. So I automated that interface: a Selenium-based script logs into HBYS, searches by national ID for each patient on the transfer list, opens the patient history window, and looks for an admission record. The output is an Excel report — for each patient, “admitted / not admitted,” which unit if admitted, and a per-unit summary.
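The shape of that loop can be sketched in a few lines. This is a simplified illustration, not the actual script: `lookup_admission` stands in for the whole Selenium sequence (log in, search by national ID, open the history window), and all field and unit names are made up for the example.

```python
from collections import Counter

def build_report(transfer_list, lookup_admission):
    """Sketch of the per-patient reporting loop.

    `lookup_admission` is a placeholder for the Selenium steps; it takes a
    national ID and returns the admitting unit name, or None if no admission
    record was found. Field names here are illustrative.
    """
    rows, per_unit = [], Counter()
    for patient in transfer_list:
        unit = lookup_admission(patient["national_id"])
        rows.append({
            "national_id": patient["national_id"],
            "status": "admitted" if unit else "not admitted",
            "unit": unit or "",
        })
        if unit:
            per_unit[unit] += 1
    # The real tool writes `rows` plus the per-unit summary to an Excel
    # workbook; that output step is omitted from this sketch.
    return rows, per_unit
```

The separation matters in practice: keeping the browser-driving part behind a single function makes the reporting logic testable without a browser, which is the only part of a scraper you can test reliably.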
It’s worth saying up front that the tool is fragile — but that fragility isn’t an engineering failure, it’s the nature of the problem. HBYS’s interface is built with ExtJS and uses dynamic IDs, so you can’t locate elements by stable IDs; you have to write selectors that rely on stable class and name attributes. It needs the hospital network, it needs a ChromeDriver matching the installed Chrome, and if the HBYS interface changes, the script breaks. All of this follows from the fact that no official integration exists. Scraping is the honest acknowledgment that there’s no proper door into this system.
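The selector strategy can be shown concretely. ExtJS auto-generates element IDs that change between sessions, so the script has to refuse to anchor on them and target stable attributes instead. A minimal sketch, with illustrative names (the actual HBYS attribute names differ):

```python
import re

# ExtJS auto-generates ids like "ext-gen1234" or "ext-comp-1021"; they change
# from session to session, so any selector built on them breaks on the next run.
EXTJS_DYNAMIC_ID = re.compile(r"^ext-(gen|comp-?)\d+$")

def is_stable_id(element_id: str) -> bool:
    """True if the id looks hand-assigned rather than ExtJS-generated."""
    return not EXTJS_DYNAMIC_ID.match(element_id)

def by_name(field_name: str) -> str:
    """CSS selector on the name attribute, which ExtJS forms keep stable
    across sessions. The field name here is a made-up example."""
    return f'input[name="{field_name}"]'

# In the Selenium script this would be used roughly as:
#   driver.find_element(By.CSS_SELECTOR, by_name("nationalId"))
```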
What was technically interesting
Not the automation itself — automation is something anyone can do with enough patience. What’s interesting is the rule that turns the automation into a measurement instrument.
Finding an “admission” record in a patient’s history isn’t enough. You have to be sure that admission was the result of this transfer. A patient might have been admitted three years ago for an unrelated reason; reporting that old record as the success of the current transfer invalidates the measurement from the start. So for an admission record to count as “belonging to this transfer,” three conditions have to hold together: the record type must be exactly “admission,” the admitting institution must be the tertiary center, and the admission date must fall within a defined window relative to the transfer time (from two hours before the transfer to twenty-four hours after). If more than one record satisfies all three, the chronologically earliest is chosen.
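The three-condition rule is compact enough to write down directly. A sketch, assuming a simple record shape (the field names and the institution string are illustrative, not the HBYS schema):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class HistoryRecord:
    record_type: str     # e.g. "admission", "outpatient visit"
    institution: str     # admitting institution as shown in the history
    timestamp: datetime  # admission date/time

TERTIARY_CENTER = "Tertiary Center"  # placeholder name

def match_admission(records, transfer_time):
    """Return the history record that counts as *this* transfer's admission,
    or None. All three conditions must hold together."""
    window_start = transfer_time - timedelta(hours=2)
    window_end = transfer_time + timedelta(hours=24)
    candidates = [
        r for r in records
        if r.record_type == "admission"                 # 1: exactly "admission"
        and r.institution == TERTIARY_CENTER            # 2: the tertiary center
        and window_start <= r.timestamp <= window_end   # 3: inside the window
    ]
    # If several records qualify, take the chronologically earliest.
    return min(candidates, key=lambda r: r.timestamp, default=None)
```

An admission from three years ago fails condition 3 and is discarded, which is exactly the failure mode the rule exists to prevent.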
That rule is what makes the script a measurement instrument rather than a scraper. The automation itself is easy; the question of whether this data point actually measures the thing I want to measure is the real work.
Outcome
The tool brought what had been a monthly grind down to a script that runs in a few minutes. But the interesting part came from somewhere I didn’t expect.
At a provincial health-coordination meeting, the transfer-admission counts the tool produced were set beside the provincial health directorate's own figures for the same period, and the two numbers didn't agree. As we discussed the difference, the discrepancy seemed to lie mostly on the official side: my script, which checked every patient record by record, gave a more consistent picture than the process producing the aggregate statistics. This wasn't a controlled, methodical validation; it was two numbers placed side by side in a meeting. But it made me think: a check done record by record can be more reliable than a statistic produced in bulk, and not because the tool is clever, only because it looks at each row separately.
The repo is public, but the tool runs on the hospital network, on real patient data — it isn’t something you download and run anywhere; it’s a piece that lives inside the hospital’s data environment.