Two Patents and Zero Patients Lost: Building Emergency Healthcare Systems
The two years I spent building radiology software changed how I think about what software is actually for.
When a trauma patient arrives in an emergency department at two in the morning, a clock starts. The emergency physician needs imaging — a CT scan of the abdomen, an MRI of the spine, an X-ray of a suspected fracture. The radiology technologist performs the scan. The images go to the radiologist for interpretation. The radiologist writes a report. The report goes back to the emergency physician, who acts on it. Every minute in that chain has clinical consequences. A delayed report on a brain bleed is not a missed SLA. It is a patient who did not receive treatment in time.
I spent 2013 to 2015 as Director of Software Development at Lakeland Healthcare Group, building the software that managed that chain. I want to tell this story carefully, because it is the one chapter of my career where the technical work felt most directly connected to something larger than itself.
What a Radiology Information System Does
A Radiology Information System — a RIS — is the operational backbone of a radiology department. It manages the order workflow: when a physician orders an imaging study, the RIS receives that order, routes it to the appropriate technologist, tracks the scan's progress, routes the images to the radiologist, tracks the report, and routes the completed report back to the ordering physician. It also manages scheduling, billing, and the tracking of patient-level data.
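The order lifecycle described above can be sketched as a simple state machine. This is an illustrative model, not the actual Lakeland schema; the state names are my assumptions about how such a chain might be encoded.

```java
import java.util.EnumSet;
import java.util.Set;

// Illustrative lifecycle for an imaging order as it moves through the chain
// described above: order -> scan -> assignment -> acknowledgment -> report -> delivery.
// State names are assumptions for the sketch, not the real system's schema.
public enum StudyState {
    ORDERED, SCHEDULED, SCANNED, ASSIGNED, ACKNOWLEDGED, REPORTED, DELIVERED;

    // Each state may advance only to the next step in the chain; the final
    // state has no successors.
    public Set<StudyState> nextStates() {
        StudyState[] all = values();
        return ordinal() + 1 < all.length
                ? EnumSet.of(all[ordinal() + 1])
                : EnumSet.noneOf(StudyState.class);
    }
}
```

Modeling the chain as explicit states is what makes "tracks the scan's progress" enforceable: a study is always in exactly one state, and skipping a step is a detectable error rather than a silent gap.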
When I arrived at Lakeland, the existing workflow had gaps. The most critical was in the emergency and after-hours process. Hospital radiology departments do not operate like a 9-to-5 office. Emergency scans come in at all hours. Radiologists may be reading remotely — from home, from an offsite facility — and the communication chain between the scan and the reading radiologist was not reliable. Studies were getting assigned, but there was no mechanism to confirm they had been received and acknowledged by the responsible radiologist in time to matter.
The proprietary messaging system I built for Lakeland solved this specific problem. When an urgent study was ordered, the system sent a structured message to the on-call radiologist's device with a required acknowledgment. If the acknowledgment did not arrive within a configurable time window, the system escalated — to the supervisor, to the backup radiologist, to the department head. Nobody could miss a critical study and have that miss go undetected. The workflow was closed-loop, with every step logged and timestamped.
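The escalation behavior above can be captured in a few lines. This is a minimal sketch under my own assumptions: a fixed chain of recipients and one configurable window per level, where each unacknowledged window advances responsibility one level up the chain.

```java
import java.util.List;

// Minimal sketch of the escalation logic described above. The chain order and
// the single-window-per-level policy are assumptions for illustration, not the
// actual Lakeland implementation.
public class EscalationChain {
    private final List<String> chain;   // e.g. on-call, supervisor, backup, department head
    private final long windowMillis;    // configurable acknowledgment window per level

    public EscalationChain(List<String> chain, long windowMillis) {
        this.chain = chain;
        this.windowMillis = windowMillis;
    }

    // Given how long a study has gone unacknowledged, returns who is currently
    // responsible for it. Escalation stops at the last person in the chain.
    public String currentRecipient(long elapsedMillis) {
        int level = (int) Math.min(elapsedMillis / windowMillis, chain.size() - 1);
        return chain.get(level);
    }
}
```

The important property is that the function is total: for any elapsed time, some named human is responsible. There is no input for which the study belongs to nobody.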
Sentinel Events
A sentinel event in healthcare is a serious, unexpected adverse event that indicates a systemic failure — a patient harmed or killed in a way that signals something in the process is broken. The Joint Commission, which accredits US hospitals, has a formal sentinel event policy. When a sentinel event occurs, it triggers a root cause analysis: what went wrong, where in the process did it fail, what change would prevent recurrence.
One of the features I built into the Lakeland RIS was sentinel event tracking for radiology workflows. If a study was delayed beyond acceptable thresholds — if a critical finding was reported late, if an escalation chain was not followed, if a radiologist's acknowledgment came outside the acceptable window — the system flagged that event, captured the full workflow history, and made it available for review. The intent was not punitive. It was to identify systemic failures before they caused harm, and to give department leadership the data to make process improvements.
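The threshold check at the heart of that flagging can be sketched simply. The field names and thresholds here are illustrative assumptions, not the real system's schema; the point is that "delayed beyond acceptable thresholds" reduces to comparing timestamps already captured in the workflow history.

```java
import java.time.Duration;
import java.time.Instant;

// Illustrative sketch of the sentinel-event threshold check. Field names and
// the two-threshold policy are assumptions for the example.
public class SentinelEventCheck {
    public record StudyTimeline(Instant ordered, Instant acknowledged, Instant reported) {}

    // Flags a study for review if either the acknowledgment or the final
    // report arrived later than its configured threshold after the order.
    public static boolean isSentinelCandidate(StudyTimeline t,
                                              Duration ackThreshold,
                                              Duration reportThreshold) {
        boolean ackLate = Duration.between(t.ordered(), t.acknowledged())
                .compareTo(ackThreshold) > 0;
        boolean reportLate = Duration.between(t.ordered(), t.reported())
                .compareTo(reportThreshold) > 0;
        return ackLate || reportLate;
    }
}
```

Because every step in the closed loop was timestamped, a check like this needs no new instrumentation; it is a query over data the workflow already produces.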
The Clinical Consequence of a Missed Acknowledgment
In radiology, an unacknowledged critical study is not an administrative inconvenience. It means a radiologist has not confirmed they have received and accepted responsibility for reading a scan that may contain a time-sensitive finding. A missed intracranial hemorrhage, an unread pneumothorax, an unacknowledged aortic dissection — these are the failure modes that kill people. The closed-loop system existed for one reason: to ensure that no study could be in a state of ambiguous ownership. Either a radiologist had confirmed receipt, or the system had escalated until one had.
Building this required working closely with the radiologists and department administrators at Lakeland. That collaboration was different from anything I had done in financial software. It is the kind of cross-industry learning I reflect on in my career retrospective. Radiologists think about their work in clinical terms — diagnostic accuracy, turnaround time, critical value notification. Translating those clinical concepts into data models and business logic required me to understand the domain at a level that went beyond technical requirements. I sat in workflow reviews. I talked to technologists about what slowed them down. I observed the actual reading workflow to understand where the handoffs were and where they broke.
What It Means to Get a Patent
The system produced two provisional US patent applications. I want to be honest about what that means and what it does not mean. A provisional application is not a granted patent — it is a placeholder that establishes a priority date and gives you twelve months to file a full, non-provisional application, and it is never itself examined by the patent office. Filing one means a patent attorney looked at what you built and concluded it was novel and non-obvious enough to warrant the investment of pursuing protection. That is meaningful as a professional signal.
But what the patents actually represent to me is simpler: the acknowledgment that the problem we solved had not been solved in that particular way before. The closed-loop emergency messaging workflow, integrated with sentinel event tracking and staff performance monitoring, was a genuinely new approach to a real clinical problem. Two people sitting in a room thought carefully about what we had built and decided it was worth protecting. That is its own form of validation.
The technical stack was Java and Spring Boot for the application layer, MongoDB for the document store, Docker for deployment, LDAP for directory services and authentication, and AWS for the hosting infrastructure. The choice of MongoDB was deliberate: radiology workflow data is document-shaped. An order has a complex, nested structure — the patient, the study, the modality, the reading radiologist, the report, the acknowledgments, the escalations — and forcing that into a relational schema would have created joins that made the audit trail harder to query. A document store let us store the entire workflow history for a study as a single retrievable unit.
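To make the "document-shaped" claim concrete, here is the kind of single nested document such a system might store per study. Every field name and value below is illustrative — an assumption for the sketch, not the actual Lakeland schema.

```json
{
  "studyId": "RAD-2014-031882",
  "priority": "STAT",
  "patient": { "mrn": "…", "name": "…" },
  "order": {
    "modality": "CT",
    "bodyPart": "head",
    "orderedAt": "2014-06-01T03:02:11Z"
  },
  "assignment": { "radiologist": "…", "assignedAt": "2014-06-01T03:04:40Z" },
  "acknowledgments": [
    { "recipient": "on-call", "sentAt": "2014-06-01T03:04:41Z", "ackedAt": null }
  ],
  "escalations": [
    { "level": "supervisor", "sentAt": "2014-06-01T03:09:41Z" }
  ],
  "report": { "status": "PENDING", "finalizedAt": null }
}
```

Retrieving the full audit trail for a study is a single-document read rather than a multi-table join, which is exactly the property the paragraph above describes.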
The Weight of Building Systems Where Failure Harms People
In trading, a system failure costs money. In healthcare, a system failure can cost a life. I think about that difference a lot when I look back at the Lakeland years.
The design principles are similar at a surface level: high availability, fault tolerance, comprehensive logging, clear escalation paths. But the emotional weight is different. When I was testing the escalation logic — simulating a delayed acknowledgment, watching the system escalate to the backup radiologist, verifying the timestamped audit trail — I was not thinking about uptime percentages. I was thinking about the scenario where this actually fires at 3 a.m., in a real emergency department, with a real patient waiting.
That weight makes you careful in ways that are hard to manufacture artificially. It makes you actually read through the edge cases rather than filing them as future work. It makes you test the failure modes with the same rigor you test the happy path, because in this domain, the failure modes are the ones that matter most. What happens when the network drops? What happens when the radiologist's device is offline? What happens when an escalation message is sent and the supervisor is also unreachable?
Every one of those questions had to have a designed, tested, documented answer. Not because a QA checklist required it, but because the alternative was a gap in the safety net — a scenario in which a critical study could fall through without any human in the chain knowing it had happened.
I left Lakeland in 2015 having built something I am genuinely proud of. The system reduced overhead costs and improved department efficiency — the metrics that hospital administrators track. But the thing I carry from those two years is different from a metric. It is the experience of building software that exists inside a human safety system, and understanding what that responsibility actually demands.
Not every engineer will work in healthcare. But every engineer is responsible for the downstream consequences of the systems they build. Lakeland made that responsibility concrete for me in a way that nothing before or since has matched.
Arindam Paul — Director of Software Development, Lakeland Healthcare 2013–2015