The Auditor Who Bet on AI: Why I Left California, Moved to Illinois, and Built Tools to Make AI Auditable

Dek: A supplier‑quality engineer left the California medical‑device track, moved to Illinois for family, and rebuilt his career around a single idea: if AI is going to run inside regulated products and processes, someone has to make it pass an audit.

I built my early career in California inside the most unforgiving rooms in healthcare—supplier escalations, CAPA boards, and audit backrooms. The job was simple to define and hard to do: when a regulator asks “show me,” you show them. My craft became turning chaos—nonconformances, supplier changes, manufacturing variation—into evidence chains.

In those years I helped run large remediation efforts, closed more than 120 legacy supplier and quality notifications, and contributed to audit‑readiness programs that materially reduced findings for a major division. I learned how to speak two languages: clinical risk and manufacturing reality. It was the best training I could have asked for.

Then I left California.

The move to Illinois was personal. My family needed me. I chose proximity to them over proximity to the next promotion. What I didn’t expect was that the distance from the coast would make the next career decision obvious.

When generative AI arrived, most conversations in regulated industries fell into two camps: enthusiasm without a plan, or fear without a strategy. I had seen this movie before with software lifecycle controls. The problem wasn’t AI; the problem was proof. How do you demonstrate that AI‑assisted work is safe, effective, and controlled—today, next month, and after the next model patch?

So I bet on myself and on AI.

I launched Audit Coach and KoalaT.ai, bringing fractional leadership and technical depth to startups that need inspection‑ready systems. I built VirtualBackroom.ai, an audit simulation and evidence‑mapping environment that lets teams rehearse FDA and Notified Body questions with real citations, not slideware. I prototyped an Audit Risk Predictor Agent to surface patterns that precede findings. And I kept a personal project—ChatALZ—to remind me what the technology is ultimately for.

Here’s what I’ve learned in the field:

  • Intended use is the boundary, not the checklist. It frames risk class and operating conditions. User requirements come after, and they must be measurable.
  • We didn’t lose reliability; we changed its definition. In deterministic systems, reliability meant repeatability. In AI systems, it means stable performance envelopes with explicit human‑in‑the‑loop controls, rollback paths, and continuous monitoring.
  • Validation is a lifecycle, not a gate. I keep Model Qualification for statistical integrity, Performance Qualification for real‑world truthfulness and safety (factuality checks, adversarial probes, privacy‑leak tests), and continuous governance for drift, edits, and incident response tied to CAPA.

My credentials helped—lead auditor training, supplier‑quality certifications, and coursework in AI for healthcare—but the real shift was mindset. I stopped treating AI as a curiosity and started treating it like a supplier: it needs agreements, change notices, verification plans, and performance monitoring. When you do that, AI stops being a risk you fear and becomes a capability you can defend.

I didn’t move to Illinois to slow down. I moved to keep the right things close: my family, and the parts of work that matter. The rest—tools, models, methods—should evolve. The patients don’t care if your evidence lives in a binder or a dashboard. They care that it holds up.

Sidebar (for editors):

  • Proof points: Led/participated in programs credited with a ~40% reduction in audit findings at a corporate division; closed 120+ legacy supplier/quality notifications through a structured remediation; managed global supplier‑quality and audit‑readiness initiatives for Class III devices.
  • Projects: Audit Coach; KoalaT.ai; VirtualBackroom.ai (audit simulation + citation tracing); Audit Risk Predictor Agent; ChatALZ (supervised support for Alzheimer’s families).
  • Expertise: FDA 21 CFR 820/QMSR trajectory, ISO 13485, EU MDR, supplier quality (SCAR/PPAP), CAPA, risk management, CSA/CSV for AI‑assisted processes, PMS/real‑world performance monitoring.
  • Why now: AI is entering regulated workflows faster than validation practices are adapting. There is a gap between innovation and auditability. This story is about closing that gap with practical systems.

If you’re commissioning features on AI in regulated industries, the future of audit and compliance, or mid‑career pivots driven by family decisions and technology shifts, this is the intersection where I work every day.