AI Transmission Path Audit
On 2026-04-23 we audited the full request path of the Artificial Intelligence (AI) narrative-generation feature, from the mobile client through the Vercel proxy to OpenAI. This page summarizes what was checked, what was found, and the known tolerances we disclose.
This is the public summary. A more detailed copy of the audit record (including the field-level code map and platform configuration screenshots) is available on request to facility security teams under a confidentiality agreement — please use the contact form.
Scope & Methodology
Audit date: 2026-04-23
Scope: The /api/generate Vercel Edge Function and the full request path from the mobile client through the proxy to OpenAI. The audit covered the request body, the request headers, the proxy's logging, the analytics and telemetry that fire on the narrative-generation path, and the Vercel platform configuration.
Methodology: Static review of every file involved in constructing the prompt and every logging call on the request path, followed by dashboard verification of the Vercel project configuration and an empirical sample of production runtime logs.
Request Body — What Transits to OpenAI
The body of the request sent from the mobile app to /api/generate contains four kinds of content:
- Structured enums (request type, model name) — pre-enumerated, no user-typed content. Safe.
- Structured chip values (interventions, hallucination types, safety status, Activities of Daily Living (ADL), ambulation, eating, medication compliance, commitment status, and all clinical assessment enums) — passed through whitelist label maps before being embedded in the prompt. They cannot carry user-typed patient data. Safe.
- Numeric and date-typed fields (vitals, Morse / Braden / Glasgow Coma Scale (GCS) scores, wound measurements, pain intensities, infection screening counts, note date, note time) — structured types with no free-text. The note date and note time are the wall-clock time of note generation, not patient Date of Birth (DOB) or admission date. Safe.
- Free-text fields (per-section observation, notes, details, direct quotes, education barriers, goal progress notes, wound notes, signed nursing notes) — transmit verbatim. There is no automated client-side de-identification step before transmission. See Known Tolerances below.
The structured data model is the primary protection at the structured-field layer: there are no fields for patient name, Medical Record Number (MRN), or date of birth in the chart data model. Free-text fields transmit as the nurse enters them.
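The whitelist label-map pattern described above can be sketched as follows. The field name and labels are hypothetical, not the app's actual schema; the point is that only pre-enumerated keys ever produce prompt text.

```typescript
// Illustrative sketch of a whitelist label map for one chip field.
// The enum keys and labels are invented for this example.
const AMBULATION_LABELS: Record<string, string> = {
  independent: "Ambulates independently",
  assist_x1: "Ambulates with assist of one",
  bedbound: "Bedbound",
};

// Only enum keys present in the map produce prompt text; any other value
// (including a user-typed string) is dropped rather than embedded.
function labelFor(map: Record<string, string>, key: string): string | null {
  return Object.prototype.hasOwnProperty.call(map, key) ? map[key] : null;
}

console.log(labelFor(AMBULATION_LABELS, "bedbound"));   // "Bedbound"
console.log(labelFor(AMBULATION_LABELS, "John Smith")); // null: not whitelisted
```

Because the map lookup is the only path into the prompt, a value that is not a known enum key simply produces nothing.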
Request Headers
Three request headers leave the device:
- Content-Type — the literal application/json.
- An anonymous RevenueCat user identifier ($RCAnonymousID:…) used for subscription validation. Pseudonymous; not patient PII.
- A free-narrative entitlement boolean (sent only when the free-pool path is used).
No patient data, free-text content, or device fingerprint travels in the headers.
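A minimal sketch of that header set, assuming hypothetical names for the two custom headers (only Content-Type is the documented literal key):

```typescript
// Sketch only: "X-Customer-Id" and "X-Free-Narrative" are assumed header
// names; the audited facts are the three kinds of values carried.
function buildHeaders(anonymousId: string, usesFreePool: boolean): Record<string, string> {
  const headers: Record<string, string> = {
    "Content-Type": "application/json", // literal, always present
    "X-Customer-Id": anonymousId,       // pseudonymous $RCAnonymousID:… value
  };
  if (usesFreePool) {
    headers["X-Free-Narrative"] = "true"; // entitlement boolean, free-pool path only
  }
  return headers; // no patient data, free text, or device fingerprint
}
```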
Proxy Logging
Every console.* call site on the request path was reviewed. Only request metadata is logged:
- Request type (narrative / freeform / clinical) and the model name being used.
- An 8-character prefix of the anonymous customer identifier — sufficient to correlate a small number of recent requests during incident triage, not enough to reverse to an individual.
- Network-level error messages from the upstream OpenAI iterator and from the client retry path (e.g. “network request failed”).
Request and response bodies are not written to logs. The Vercel Edge Runtime does not capture request or response bodies in runtime logs by default, and our application code does not introduce any additional body-logging.
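The metadata-only logging pattern can be sketched like this. The log-line format is illustrative; the audited facts are what gets logged (request type, model name, an 8-character identifier prefix) and what never does (bodies).

```typescript
// Sketch of metadata-only request logging with a truncated customer ID.
function logRequestMetadata(requestType: string, model: string, customerId: string): string {
  const idPrefix = customerId.slice(0, 8); // enough to correlate recent requests, not to identify
  const line = `[generate] type=${requestType} model=${model} cust=${idPrefix}`;
  console.log(line); // request and response bodies never reach a console.* call
  return line;
}
```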
Analytics & Telemetry
Two analytics events fire on the narrative-generation path:
- Narrative generated: carries a non-reversible 32-bit hash of the chart UUID. The chart UUID itself is client-generated and contains no patient content.
- Free-narrative used: carries an integer narrative count, the free-narrative limit, and a feature name literal.
Mixpanel and AppsFlyer providers are typed pass-throughs: they forward only the explicitly filtered properties from the call site, do not spread arbitrary context, and validate against a hardcoded event allowlist. AppsFlyer runs in anonymized mode.
No analytics event on the narrative-generation path carries the prompt content, the request body, or the generated narrative text.
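A non-reversible 32-bit hash of the kind described above can be sketched with an FNV-1a variant. This particular algorithm is an assumption for illustration; the app's actual hash function is not published in this summary.

```typescript
// Sketch: 32-bit FNV-1a hash over the chart UUID string (assumed algorithm).
function hash32(input: string): number {
  let h = 0x811c9dc5; // FNV-1a 32-bit offset basis
  for (let i = 0; i < input.length; i++) {
    h ^= input.charCodeAt(i);
    h = Math.imul(h, 0x01000193); // FNV-1a 32-bit prime, overflow-safe multiply
  }
  return h >>> 0; // force unsigned 32-bit; the UUID cannot be recovered from it
}
```

A 32-bit digest of a 128-bit UUID necessarily collides, which is exactly why it cannot be reversed to the original identifier.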
Vercel Platform Configuration
In addition to the application-code review, the Vercel project configuration was checked on 2026-04-25:
- Log drains: none configured. No external service captures runtime logs from the project.
- Third-party log integrations (Datadog, Axiom, etc.): none installed.
- Runtime log retention: 1 day (Vercel Pro default), corroborated by querying the runtime-log API with a 24-hour window (succeeds) versus a 7-day window (fails).
- Project access: scoped to the development team with two-factor authentication enforced on owner accounts.
The audit included an empirical sample of production runtime logs over a 24-hour window. Across roughly 200 production log lines, full-text searches for prompt header strings and free-text section markers (the literal tokens our formatters emit when free-text is present in the body) returned no matches. The only /api/generate request line in the sample was the audited-safe metadata form (request type and a truncated model name only), confirming the static-code finding against live traffic.
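The search over the sampled lines amounts to a token scan of this shape. The marker tokens and sample lines below are illustrative stand-ins; the real formatter literals are part of the full audit record.

```typescript
// Illustrative scan: the real 24-hour sample held ~200 lines; the marker
// tokens below stand in for the literals the formatters emit.
const markers = ["NURSE NAME:", "FREE-TEXT SECTION"]; // hypothetical tokens
const sampledLines = [
  "[generate] type=narrative model=model-a",
  "[generate] type=clinical model=model-a",
];
const hits = sampledLines.filter((line) => markers.some((m) => line.includes(m)));
console.log(hits.length); // 0: no prompt or free-text content in the sample
```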
Known Tolerances
We disclose three deliberate tolerances in the transmission posture. None of them is a surprise; all three have been considered and accepted as the cost of the current architecture.
1. Nurse name in the prompt header
The nurse's display name from app settings is embedded as NURSE NAME: <name> at the top of every prompt so the AI knows the documenting clinician's identity. Under the Health Insurance Portability and Accountability Act (HIPAA) Safe Harbor list of 18 identifiers (45 CFR §164.514(b)(2)), the nurse's name is not patient Protected Health Information (PHI); it is user-identifying data that someone with sufficient context could correlate to a care site. We do not use this fact to claim that the full OpenAI payload is HIPAA-de-identified.
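The prompt-header placement can be sketched as follows. Only the NURSE NAME: line comes from the audit; the rest of the prompt layout is an assumption for illustration.

```typescript
// Sketch of prompt construction: the NURSE NAME: header line is documented;
// everything after it is a placeholder for the structured prompt body.
function buildPrompt(nurseDisplayName: string, promptBody: string): string {
  return `NURSE NAME: ${nurseDisplayName}\n\n${promptBody}`;
}

console.log(buildPrompt("J. Rivera", "...structured chart content..."));
```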
2. Patient-reference label transit
The optional patient-reference label (configured in the app's settings; examples include “Patient”, “Veteran”, or “Resident”) is intended as a category label, not a patient identifier. The raw value transits in the request body before the proxy applies its server-side sanitization (alphanumeric-only, capped at 50 characters); the mobile TextInput is separately capped at 20 characters client-side. This is documented as a tolerance because the sanitization happens server-side rather than before transmission.
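The server-side sanitization can be sketched as below: alphanumeric-only, capped at 50 characters. Whether spaces survive is not specified in this summary; this strict variant strips them.

```typescript
// Sketch of the proxy-side label sanitizer: keep [a-zA-Z0-9] only, cap at 50.
function sanitizePatientLabel(raw: string): string {
  return raw.replace(/[^a-zA-Z0-9]/g, "").slice(0, 50);
}

console.log(sanitizePatientLabel("Veteran #2, Rm 14")); // "Veteran2Rm14"
```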
3. Free-text content transmits as entered
This is the principal tolerance and the single biggest exposure in the current architecture. A nurse who pastes an Electronic Medical Record (EMR) excerpt or types a direct identifier into any free-text field will send that content verbatim to OpenAI. Our published guidance on the privacy and security pages tells users to keep free-text entries to the minimum necessary clinical content and not to enter direct identifiers (names, dates of birth, medical record numbers, room or bed numbers, addresses, phone numbers, exact dates, or similar identifiers) into AI-bound free-text fields unless an approved facility deployment permits that workflow.
We considered, and decided against, adding an automated client-side de-identification step before transmission. The analysis showed that for nursing free-text such a pipeline cannot reliably reach Safe Harbor de-identification under 45 CFR §164.514(b) without removing clinically important content, and a partial scrubber would create a false sense of safety. The compliance posture for facility deployments instead rests on the upstream Business Associate Agreement (BAA) + Zero Data Retention (ZDR) chain (OpenAI BAA + ZDR + Vercel HIPAA add-on) treating the entire payload as PHI.
Findings
- Every field in the request body is accounted for and categorized.
- Request headers carry no patient data.
- Proxy logging carries no request body content under current code — verified empirically against a 24-hour sample of production runtime logs.
- Analytics events on the narrative-generation path carry no prompt or narrative content.
- No Vercel log drains and no third-party log integrations are configured for the project.
- Free-text fields transit to OpenAI verbatim; Nurse Charting Pro does not represent the OpenAI payload as HIPAA-de-identified under 45 CFR §164.514. Facility deployments rely on the upstream Business Associate Agreement (BAA) + Zero Data Retention (ZDR) chain.
As a result of this audit we corrected several pieces of internal documentation and sales copy that described an automated client-side de-identification step that does not exist in the code. We also updated the public web copy on the privacy, security, HIPAA, and FAQ pages to disclose the request-scoped AI transmission path rather than relying on a blanket “PHI never leaves the device” framing.
Need the full audit record?
For facility security review, we share the full audit record — including the field-level code map, dashboard configuration screenshots, and runtime-log samples — under a confidentiality agreement.
Contact us →
Related Documentation
Security Architecture →
How Nurse Charting Pro stores chart data, manages encryption keys, and crypto-shreds at end of shift.
Privacy Policy →
What data the app collects, where it goes, and how the FHIR Send-to-EHR flow works.
HIPAA →
Our HIPAA posture, BAA stance, and what facility deployments need to have in place before PHI flows.
FAQ →
Common questions on pricing, supported devices, and how end-of-shift wipe behaves.