Tether

When someone is sent home from the hospital, the hardest part starts. Will they take their medications correctly? Are they getting worse? Should they call the doctor or wait? Tether is a phone app that sits with the patient at home and answers those questions in their own language. Doctors write a recovery plan once; the app turns it into daily guidance, listens for early warning signs in the patient's voice, and tells family or care teams when something needs attention.

Everything runs at the edge on Cloudflare. The biomarker engine is written in Rust, compiled to WebAssembly, and analyzes a 10-second voice recording in under 100 milliseconds. The AI chat is grounded only in the doctor's published plan, so it never invents medical advice. Patients can use it by voice if they can't read or type.

React Native + Expo · Cloudflare Workers · Rust WASM Biomarkers · 6 Languages · YIN + Jitter + Shimmer + HNR

Plan-Grounded AI Chat

Patients ask questions in plain language. The AI only answers from the doctor's published plan — never guesses.


Voice Biomarkers

Rust WASM analyzes breathing rate, cough events, and vocal energy from a 10-second recording.


Engine Connection

Biomarker results feed into AI context — so "How is my breathing?" gets a real, data-backed answer.


Red Flag Escalation

When the AI detects a red flag symptom, it marks the response urgent and suggests contacting the care team.


Protocol Library

One-click templates for pneumonia, heart failure, COPD, post-surgical recovery, and type-2 diabetes — each with medications, daily steps, and red flags pre-filled. Doctors load a template, edit, publish.


Caregiver Portal

Family members and trusted contacts get a read-only dashboard of the patient's plan, latest biomarkers, and 7-day medication adherence. Patients add caregivers by email — full opt-in consent.

In Plain English

The problem

Roughly one in five patients sent home from a hospital ends up back in the emergency room within 30 days. The main reasons are not surprising: people forget medication doses, miss the early signs that things are going wrong, or do not know whether a symptom is normal recovery or a real warning. Doctors give a printed discharge summary, but it sits in a drawer. Family caregivers want to help but rarely have visibility.

What Tether does

Tether is three things in one app:

  • A pocket version of the discharge plan. The doctor writes the plan once. The patient sees daily medications, daily activities, red-flag symptoms to watch for, and follow-up dates. Everything is in plain language and can be read out loud in their language.
  • An AI assistant that only knows the doctor's plan. The patient asks "Can I take Tylenol?" or "Should I worry about this chest pain?" and the assistant answers using only what is in the plan. It never makes up medical advice. If the question matches a red-flag symptom the doctor listed, the answer is marked urgent.
  • A voice check that listens for trouble. The patient records 10 seconds of their voice. A signal-processing engine running on Cloudflare's edge servers measures breathing rate, cough patterns, voice fatigue, and clinical voice-quality markers (jitter, shimmer, HNR). Numbers track over time, so a small change today against the patient's own baseline can flag a problem the patient does not notice.

Who is in the loop

  • Patients get the daily guidance and assistant chat.
  • Doctors see a recovery score dashboard sorted by risk, plus the patient's biomarker trends and adherence history.
  • Family caregivers get a read-only dashboard if the patient invites them by email. They can see the plan, the last few biomarker readings, and which medication doses were taken.

Why it works without violating privacy

Voice recordings leave the device only as raw PCM audio sent to the analysis endpoint, and they are not stored after analysis; only the numerical biomarker results are saved. The chat AI runs through a Cloudflare Worker that never sees the patient's account credentials. Passwords are hashed on-device using expo-crypto. Caregiver access is opt-in and revocable by the patient.

What it does not do

  • It is not a replacement for a doctor. The assistant cannot prescribe, diagnose, or give advice outside the published plan.
  • It is not a medical device. Current biomarker accuracy is good enough to spot trends and prompt human review, not to make standalone clinical decisions.
  • It is not a HIPAA-certified product yet. The technical foundation is correct (encryption, no PHI in logs) but the compliance audit and BAAs are part of the funded roadmap.

How It Works

Tether keeps patients and doctors connected after a hospital discharge. Here is the simple version:

1. The doctor creates a recovery plan

Before the patient leaves the hospital, their doctor opens Tether and fills in a personalized care plan: diagnosis, medications, daily instructions, warning signs to watch for, and a follow-up date. The doctor also picks a communication tone (calm, direct, or reassuring) so the app speaks the way the patient is most comfortable with.

2. The patient gets a personal AI companion

When the patient logs in, they see their plan and can ask questions in plain language — by typing or speaking. The AI only answers using information from the doctor's plan, never guessing or making things up. Every response includes a readability score so caregivers can verify the language is easy enough to understand.

3. Voice biomarkers track recovery

The patient can do a quick voice check — just talk into the phone for a few seconds. Tether analyzes the audio to detect breathing rate, cough patterns, vocal energy, and tremor. These biomarkers are tracked over time so the doctor can spot trends without an in-person visit.

4. The two engines talk to each other

This is what makes Tether different. The voice biomarker results are automatically shared with the AI companion. So if the patient asks "How is my breathing?", the AI already knows the latest voice check showed an elevated breathing rate and can give a relevant, grounded answer — not a generic one.

5. Humans stay in the loop

If the AI cannot fully answer a question, it suggests the patient message their doctor directly. Doctors see these messages in real time and can reply. The AI never replaces the doctor — it bridges the gap between hospital visits so patients are never left guessing alone.

6. Works in the patient's language

Patients can switch between English, Spanish, Hindi, Mandarin, French, and Arabic. The AI responds and speaks in their chosen language, removing a major barrier to understanding medical instructions after discharge.

Architecture

Tether follows a privacy-first architecture. API keys never ship in the mobile bundle — all LLM requests and biomarker analysis are proxied through a Cloudflare Worker at the edge.

[Architecture diagram] Patient and doctor phones run the Mobile App (React Native + Expo), which keeps voice recordings and chat messages in local storage. All requests route through the Cloudflare Worker edge proxy, which routes requests, hides API keys, and runs at the edge: chat goes to the Groq LLM, voice samples to the Rust WASM biomarker engine, and API keys live in encrypted Cloudflare secrets. The API keys never leave the server, and no raw patient audio is stored on servers.

Frontend

React Native + Expo SDK 55 with React Navigation native stack. Runs on iOS, Android, and web.


Backend

Cloudflare Worker proxies all API calls. GROQ_API_KEY stored as a Cloudflare secret, never exposed to the client.


Rust WASM

Voice biomarker engine compiled to WebAssembly via wasm-pack. Runs inside the Worker for edge-speed signal processing.


LLM

Groq API with LLaMA 3.3 70B. Graceful fallback chain: Worker → direct → keyword matching.

Quickstart

Prerequisites

  • Node.js 18+
  • Expo CLI (npm install -g expo-cli)
  • iOS Simulator (Xcode) or Android Emulator
  • Rust + wasm-pack (for biomarker engine development)

Setup

git clone https:
cd tether
npm install
cp src/lib/config.template.ts src/lib/config.ts
npm run ios

That's it. The config template comes pre-configured with the shared Tether API — no API keys or environment variables needed. The Groq key lives on the Cloudflare Worker and is never exposed to the client.

Web preview: Run npx expo start --web instead to open in a browser.

Worker Setup

# Deploy the Cloudflare Worker
cd worker
npm install
npx wrangler secret put GROQ_API_KEY
npx wrangler deploy

Features

Auth

  • Login / signup with role selection (doctor or patient)
  • Passwords hashed with SHA-256 (expo-crypto)
  • Session persistence — reopening the app skips login
  • Terms/privacy consent on signup

Doctor Workspace

  • Create/edit patient recovery plans (diagnosis, vitals, meds, instructions, red flags, follow-up)
  • Set AI tone (calm, direct, reassuring)
  • Publish plans to a specific patient email (validates account exists)
  • Draft auto-saves locally
  • View and reply to patient messages

Patient Companion

  • View the recovery plan assigned to your email
  • Vitals summary, daily instructions, red flags
  • AI chat powered by Groq with keyword-matching fallback
  • Quick prompt buttons ("What should I do today?", "When should I call?", etc.)
  • Voice input via speech recognition
  • Voice output (text-to-speech on AI replies, toggleable)
  • Urgency badges on AI responses (routine / contact clinician / urgent)
  • Flesch-Kincaid readability score on every AI response (grade level badge)
  • Handoff suggestion when AI can't fully answer
  • Direct messaging to doctor (real-time via Durable Objects)
  • Multilingual support (English, Spanish, Hindi, Mandarin, French, Arabic)
  • Voice biomarker analysis (breathing rate, cough detection, vocal tremor, voice energy)
  • Biomarker status levels (normal / monitor / alert) with alert popup
  • Biomarker trending — historical chart showing trends over time
  • Engine connection — biomarker data injected into AI context automatically
  • Patient Journal — daily journal entries that feed into AI context for more personalized responses
  • Medication Adherence Tracker — daily yes/no medication logging with 7-day streak visualization
  • Time-aware prompting — AI adapts advice based on days since discharge (early/mid/extended recovery)

Doctor Workspace (continued)

  • Discharge date — set per patient to enable time-aware recovery guidance
  • Recovery Score Dashboard — composite 0-100 score per patient (biomarker + adherence + engagement + journal), sorted by risk

Onboarding

  • 5-step tutorial on first launch (welcome, doctors, patients, voice biomarkers, safety)
  • Skip button and dot indicators
  • Only shows once (stored in AsyncStorage)

Infrastructure

  • Cloudflare Worker proxy — API key stays server-side, never ships in the app
  • Durable Objects backend — accounts, plans, messages, biomarker history persist across devices
  • Rust WASM biomarker engine runs at the edge inside the worker
  • AI requests routed through worker, falls back to direct Groq, then keyword matching

Authentication

Users sign up with a role (Doctor or Patient) and are routed to the appropriate workspace after login. Sessions persist across app restarts via AsyncStorage.

  • Password hashing: SHA-256 via expo-crypto — plaintext passwords are never stored
  • Session restore: On launch, the app checks AsyncStorage for an active session and skips login if found
  • Role-based routing: Doctors see the workspace; patients see the companion
  • Validation: Email format, password strength (8+ chars with a number), and terms acceptance
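
The validation rules above can be sketched as two small helpers. The names and the exact email regex are illustrative assumptions, not Tether's actual code:

```typescript
// Illustrative validators for the stated rules; the email regex is a
// basic shape check, not a full RFC 5322 parser.
function isValidEmail(email: string): boolean {
  // one "@", no whitespace, and a dot in the domain part
  return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email);
}

function isStrongPassword(pw: string): boolean {
  // stated rule: 8+ characters with at least one number
  return pw.length >= 8 && /\d/.test(pw);
}
```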

Doctor Workspace

Doctors create, edit, and publish recovery plans for specific patients. Plans are the foundation of the entire patient experience — the AI, the UI, and the messaging system all derive from the published plan.

Plan Fields

| Field | Description |
| --- | --- |
| Patient Name & Email | Must match a registered patient account |
| Diagnosis | Primary condition (e.g. post-discharge pneumonia) |
| Vitals | Heart rate, blood pressure, temperature, O2 saturation |
| Medications | Name, dosage, and frequency (one per line) |
| Daily Instructions | What the patient should do each day |
| Red Flags | Symptoms that require immediate medical attention |
| Follow-up | Next appointment or scheduled check-in |
| Tone | Calm, Direct, or Reassuring — controls AI personality |
| Doctor Notes | Private instructions for how AI should phrase answers |

Messaging

Doctors see all patient message threads, sorted by most recent. They can select a thread and reply directly. When a patient sends a message (or the AI suggests a handoff), it appears here.

Patient Companion

The patient screen surfaces the published recovery plan and provides multiple channels for getting help: AI chat, voice input, quick prompts, biomarker analysis, and direct doctor messaging.

Care Plan Display

Vitals, daily instructions, medications, and red flags — all from the doctor's published plan.

AI Chat

Text or voice questions answered by LLaMA 3.3, constrained to the care plan. Includes urgency badges and handoff suggestions.

Voice Biomarkers

Record a 10-15 second voice sample. Rust WASM engine analyzes breathing rate, cough events, pitch variability, and more.

Doctor Messaging

Direct messaging channel for when AI isn't enough. The AI can auto-suggest using this when it lacks certainty.

Patient Journal

Write daily entries about how you feel. Recent entries are injected into the AI prompt so responses reflect your current emotional and physical state.

Medication Tracker

Log daily medication adherence with a simple yes/no. A 7-day streak visualization shows your compliance at a glance.

Caregiver Portal

Adult children of elderly patients, partners, and family members often need visibility into post-discharge recovery without being clinical providers. The caregiver portal is a third login type that gives trusted contacts a read-only dashboard for any patient who explicitly links them.

How linking works

  1. The caregiver creates a Tether account with the caregiver role at sign-up.
  2. The patient adds the caregiver's email to their account → triggers POST /api/caregiver/link.
  3. The caregiver logs in and sees a dashboard of every patient who linked them.
  4. Either side can revoke the link at any time.

What the caregiver sees

Latest published plan

Diagnosis, doctor name, last-updated timestamp. Tap through for full medications, instructions, and red flags.

Recent voice biomarkers

The last 10 readings with status dots — green / amber / red — for at-a-glance monitoring of breathing trends.

7-day adherence

A pill-grid showing which days the patient took their medication. Missed days highlighted in red.

Privacy model

Caregivers can read but cannot send messages, edit plans, or post journal entries on the patient's behalf. The patient remains the data owner — every link is opt-in and removable. The doctor is not notified of caregiver links by default; the patient controls who sees what.

Data flows

GET /api/caregiver/patients?email=<caregiver-email>
→ [
    {
      patientEmail, patientName,
      latestPlan,
      recentBiomarkers,
      recentAdherence
    },
    ...
  ]
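
A caregiver client could consume this endpoint as sketched below. The helper names and the field types are assumptions based on the response shape shown above:

```typescript
// Assumed response item shape for /api/caregiver/patients.
interface CaregiverPatient {
  patientEmail: string;
  patientName: string;
  latestPlan: unknown;
  recentBiomarkers: unknown[];
  recentAdherence: unknown[];
}

// Build the query URL separately so it is easy to test.
function caregiverPatientsUrl(base: string, caregiverEmail: string): string {
  return `${base}/api/caregiver/patients?email=${encodeURIComponent(caregiverEmail)}`;
}

// Hypothetical client helper; the Worker base URL is a placeholder.
async function fetchLinkedPatients(
  base: string,
  caregiverEmail: string,
): Promise<CaregiverPatient[]> {
  const res = await fetch(caregiverPatientsUrl(base, caregiverEmail));
  if (!res.ok) throw new Error(`caregiver fetch failed: ${res.status}`);
  return res.json();
}
```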

Protocol Library

Doctors don't write a recovery plan from scratch every time. The protocol library ships five clinically-grounded templates, each one a complete DoctorPlan shape — diagnosis text, medications with dosing, daily instructions, red flags, follow-up timing, and recommended tone.

Included templates (v1)

Post-discharge Pneumonia

ICD-10 J18.9. Amoxicillin + inhaler regimen, breathing-focused red flags, GP follow-up in 3 days.

Heart Failure (CHF)

ICD-10 I50.9. Furosemide + lisinopril + carvedilol, daily weight check (the single most important early warning), cardiology follow-up in 7 days.

COPD Exacerbation

ICD-10 J44.1. Tiotropium + rescue inhaler + 5-day prednisolone + 7-day doxycycline, oximeter-based red flags.

Post-surgical Recovery

ICD-10 Z48.815. Pain-control regimen, DVT prevention with enoxaparin, wound-care daily steps, 6-week lifting restriction.

Type-2 Diabetes (new diagnosis)

ICD-10 E11.9. Metformin titration schedule, atorvastatin, glucose-target ranges, plate-method dietary guidance.

How a doctor uses it

  1. Open the Doctor Workspace → "Publish Patient Plan" section.
  2. Click any protocol chip — fields auto-fill with the template defaults.
  3. Edit anything that's patient-specific (medications, follow-up timing, tone).
  4. Add the patient's name and email → publish.

Why this matters

A solo physician can publish 5–10 plans per evening with the protocol library, vs. 1–2 from scratch. More importantly, the templates encode best-practice red flags ("weight gain >1 kg in a day" for CHF, "rescue inhaler more than every 4 hours" for COPD) that a time-pressed doctor might forget to write. The templates are clinically reviewable and version-controlled in src/lib/protocols.ts.

Extending

Adding a new condition is one object in the PROTOCOL_TEMPLATES array — the UI picks it up automatically. The schema is { id, label, emoji, conditionICD10, defaults }, where defaults is a Partial<DoctorPlan>.
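
As a sketch of that schema, a hypothetical sixth template might look like the following. The asthma content and the Partial<DoctorPlan> fields shown are illustrative, not part of the shipped library:

```typescript
// Assumed shapes; the real DoctorPlan in src/lib has more fields.
interface DoctorPlan {
  diagnosis: string;
  medications: string;
  redFlags: string;
}

interface ProtocolTemplate {
  id: string;
  label: string;
  emoji: string;
  conditionICD10: string;
  defaults: Partial<DoctorPlan>;
}

// Hypothetical new entry for the PROTOCOL_TEMPLATES array.
const asthmaExacerbation: ProtocolTemplate = {
  id: "asthma-exacerbation",
  label: "Asthma Exacerbation",
  emoji: "🫁",
  conditionICD10: "J45.901",
  defaults: {
    diagnosis: "Asthma with acute exacerbation",
    medications: "Salbutamol inhaler - 2 puffs every 4-6 hours as needed",
    redFlags: "Rescue inhaler needed more than every 4 hours; blue lips or fingertips",
  },
};
```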

AI Chat System

The AI is powered by Groq's LLaMA 3.3 70B model, accessed through a Cloudflare Worker proxy. Every response is grounded in the doctor's published care plan.

System Prompt

A dynamic system prompt is built from the care plan that includes the patient's diagnosis, medications, instructions, red flags, and the doctor's preferred tone. The AI is instructed to:

  • Only answer from documented care plan data
  • Flag red-flag symptoms as "urgent"
  • Suggest messaging the doctor when information is missing
  • Return structured JSON with message, urgency, supporting points, and handoff flag
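
A minimal sketch of that prompt assembly, assuming simplified field names (the real DoctorPlan shape differs):

```typescript
// Field names here are assumptions about the plan shape, not the
// actual DoctorPlan schema.
interface PlanLike {
  diagnosis: string;
  medications: string[];
  instructions: string[];
  redFlags: string[];
  tone: "calm" | "direct" | "reassuring";
}

function buildSystemPrompt(plan: PlanLike): string {
  return [
    `You are a recovery companion. Use a ${plan.tone} tone.`,
    "Answer ONLY from the care plan below. If the answer is not in the",
    "plan, say so and suggest messaging the doctor (set handoff: true).",
    `Diagnosis: ${plan.diagnosis}`,
    `Medications: ${plan.medications.join("; ")}`,
    `Daily instructions: ${plan.instructions.join("; ")}`,
    `Red flags (mark any match as "urgent"): ${plan.redFlags.join("; ")}`,
    'Reply as JSON: {"message", "urgency", "points", "handoff"}.',
  ].join("\n");
}
```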

Response Urgency Levels

| Level | Meaning | UI Treatment |
| --- | --- | --- |
| routine | Normal informational response | Blue badge |
| contact-clinician | AI suggests speaking with doctor | Yellow badge |
| urgent | Red flag symptom detected | Red badge + escalation banner |

Fallback Chain

1. Cloudflare Worker → Groq API (primary)
2. Direct Groq API call (if worker fails)
3. Keyword matching (if no API configured)

Safety: The AI never diagnoses, prescribes, or advises outside the doctor's documented scope. Emergency symptoms always trigger an urgent flag with instructions to seek immediate care.
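
The fallback chain can be sketched as a loop over responder tiers. This simplified synchronous version uses hypothetical stand-ins for the real Worker, direct-API, and keyword responders (the real chain is async):

```typescript
// Each tier returns an answer, returns null, or throws; the next tier
// is tried on failure.
type Responder = (question: string) => string | null;

function askWithFallback(question: string, chain: Responder[]): string {
  for (const tier of chain) {
    try {
      const answer = tier(question);
      if (answer !== null) return answer;
    } catch {
      // tier unavailable; fall through to the next one
    }
  }
  return "I can't answer right now. Please message your doctor.";
}
```

In the app, tier 1 would call the Worker's /chat endpoint, tier 2 the Groq API directly, and tier 3 local keyword matching.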

Voice Biomarkers

Tether's biomarker system records a short voice sample from the patient, extracts PCM audio data, and sends it to a Rust WASM engine running on the Cloudflare Worker for real-time signal processing.

How It Works

  1. Patient taps "Start Voice Check" — expo-audio begins recording in WAV/PCM format at 16kHz
  2. Patient speaks naturally for 10-15 seconds, then taps "Stop & Analyze"
  3. Recording is read as an ArrayBuffer, PCM samples extracted from WAV headers
  4. Samples sent to Worker's /analyze endpoint as JSON
  5. Rust WASM engine processes samples and returns a BiomarkerReport
  6. Results displayed as a card with status badge (normal / monitor / alert)
  7. Report saved to Durable Objects for historical trending
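
Step 3 can be sketched as below, assuming a canonical 44-byte WAV header. Real WAV files can carry extra chunks, so the shipped parser may walk the chunk list instead:

```typescript
// Extract 16-bit PCM samples from a canonical WAV buffer.
// Assumes the data chunk starts at byte 44 (canonical header layout).
function pcmFromWav(buf: ArrayBuffer): Int16Array {
  const view = new DataView(buf);
  // first four bytes must spell "RIFF"
  if (view.getUint32(0, false) !== 0x52494646) throw new Error("not a RIFF file");
  const headerBytes = 44;
  return new Int16Array(buf, headerBytes, (buf.byteLength - headerBytes) / 2);
}
```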

Biomarker Trending

Every biomarker report is stored server-side with a timestamp. The patient's biomarker card shows a trend view of the last 10 readings with bar charts for breathing rate, voice energy, and cough events. Alert/monitor/normal counts are summarized as colored pills. This turns a single snapshot into a longitudinal monitoring system that can detect deterioration over days.

Clinical Voice Quality Card

Below the core metrics, the card surfaces the clinical voice quality section: Mean Pitch (Hz), Jitter %, Shimmer %, and HNR (dB), each annotated with its healthy reference range. These are the same metrics reported by Praat, the academic reference tool for acoustic voice analysis. The section appears only when the engine successfully extracted enough voiced cycles, so it does not show on whisper-only or breath-only recordings.

Engine Connection

Tether's two AI engines — NLP (Groq LLM) and Bio-Acoustic (Rust WASM) — share context automatically:

  • The latest biomarker report (including confidence score and all 5 metrics) is injected into the AI system prompt before every chat request
  • When the patient asks "how am I doing?", the AI references actual biomarker readings (breathing rate, cough events, energy levels, zero-crossing rate)
  • If biomarkers are in "alert" status, the AI proactively warns the patient and recommends contacting their care team
  • The AI knows the analysis confidence level and can qualify its answers accordingly ("Your latest voice check had moderate confidence — consider recording again in a quieter space")
  • One engine listens to the body, the other explains what it means in plain language

Automatic Alert Escalation

When a biomarker recording returns alert status (2+ flags), Tether automatically sends a care message to the assigned doctor — no patient action needed. The message includes:

  • Full biomarker summary with actual values and normal ranges
  • Confidence score for the analysis
  • A note that the message was sent automatically by the biomarker system

The patient sees "Health Alert — Doctor Notified" confirming the escalation happened. This means a patient could record a voice check, trigger an alert, and their doctor sees it in their inbox within seconds — all without the patient needing to understand or act on the medical data themselves.

Readability Scoring

Every AI response is scored using the Flesch-Kincaid Grade Level formula. A badge on each message shows the grade level (e.g., "Grade 4.2 - Very Easy"). This proves the health literacy claim with data:

  • Grade 0-5: Very Easy — 5th grader can understand
  • Grade 6-8: Easy — middle school level
  • Grade 9-12: Moderate — high school level
  • Grade 13+: Complex — college level (AI is prompted to stay below 6)
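
The badge logic above can be sketched with the standard Flesch-Kincaid formula and a naive vowel-group syllable counter; the app's exact syllable heuristic may differ:

```typescript
// Naive syllable estimate: count vowel groups, minimum one per word.
function countSyllables(word: string): number {
  const groups = word.toLowerCase().match(/[aeiouy]+/g);
  return Math.max(1, groups ? groups.length : 1);
}

// Flesch-Kincaid Grade Level:
//   0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
function fleschKincaidGrade(text: string): number {
  const sentences = Math.max(1, (text.match(/[.!?]+/g) ?? []).length);
  const words = text.split(/\s+/).filter(Boolean);
  const syllables = words.reduce((sum, w) => sum + countSyllables(w), 0);
  return 0.39 * (words.length / sentences) + 11.8 * (syllables / words.length) - 15.59;
}

// Band labels from the list above.
function gradeLabel(grade: number): string {
  if (grade <= 5) return "Very Easy";
  if (grade <= 8) return "Easy";
  if (grade <= 12) return "Moderate";
  return "Complex";
}
```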

Patient Journal

Patients can write daily journal entries describing how they feel. This serves two purposes:

  • Patient self-reflection: Writing about symptoms, mood, and progress helps patients track their own recovery
  • AI context enrichment: The 3 most recent journal entries are injected into the AI system prompt, allowing responses to account for the patient's current emotional and physical state

Entries are stored server-side via Durable Objects (max 100 per patient, 2000 character limit). The patient sees their entries in reverse chronological order. The journal also contributes to the Recovery Score (up to 20 points).

Medication Adherence Tracker

A simple daily check-in that asks patients: "Did you take all your medicines today?" with Yes/No buttons.

  • One log per day: Duplicate entries for the same day are prevented
  • 7-day streak: Colored dots show recent adherence (green = taken, red = missed)
  • AI awareness: Adherence records are injected into the AI prompt — if the patient has missed 2+ days, the AI gently reminds them about medication importance
  • Recovery Score input: Adherence contributes up to 30 points to the composite score

Time-aware Prompting

Doctors can set a discharge date on each patient's plan. The AI system prompt then calculates days since discharge and adjusts its approach:

| Phase | Days | AI Behavior |
| --- | --- | --- |
| Early recovery | 0-3 | Extra cautious, encourages rest and monitoring |
| Mid recovery | 4-14 | Encourages gradual activity and adherence |
| Extended recovery | 15+ | Focuses on long-term habits and follow-up |

A "Day X since discharge" badge appears on the patient's journal section for awareness.
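
The phase table maps onto a small helper; the function name and ISO-date input are assumptions:

```typescript
type RecoveryPhase = "early" | "mid" | "extended";

// Whole days elapsed since the discharge date set by the doctor.
function recoveryPhase(dischargeISO: string, now: Date): RecoveryPhase {
  const msPerDay = 24 * 60 * 60 * 1000;
  const days = Math.floor((now.getTime() - new Date(dischargeISO).getTime()) / msPerDay);
  if (days <= 3) return "early";
  if (days <= 14) return "mid";
  return "extended";
}
```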

Recovery Score

A composite 0-100 score calculated per patient, visible to doctors on their workspace. Patients are sorted lowest-first so the most at-risk patients get attention first.

Scoring Breakdown

| Component | Max Points | Source |
| --- | --- | --- |
| Biomarker Health | 30 | Ratio of normal/monitor/alert readings in recent biomarker history |
| Medication Adherence | 30 | Proportion of "taken" days in the last 7 days |
| Communication Engagement | 20 | Patient messages sent in the last 7 days (capped at 4) |
| Journal Activity | 20 | Journal entries written in the last 7 days (capped at 4) |

Risk Levels

  • 0-39: At Risk — needs immediate attention
  • 40-69: Recovering — progressing but needs monitoring
  • 70-100: On Track — recovery going well
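
Combining the scoring table and risk bands, a sketch of the composite (the input shape is an assumption; for example, the real biomarker term may weight monitor vs. alert readings differently):

```typescript
interface ScoreInputs {
  normalReadings: number; // biomarker readings with "normal" status
  totalReadings: number;  // readings in the recent window
  daysTaken: number;      // adherence days marked "taken" in the last 7
  messages7d: number;     // patient messages sent in the last 7 days
  journal7d: number;      // journal entries in the last 7 days
}

// Weights follow the scoring table: 30 + 30 + 20 + 20 = 100.
function recoveryScore(i: ScoreInputs): number {
  const biomarker = i.totalReadings > 0 ? 30 * (i.normalReadings / i.totalReadings) : 30;
  const adherence = 30 * (i.daysTaken / 7);
  const engagement = 20 * (Math.min(i.messages7d, 4) / 4);
  const journal = 20 * (Math.min(i.journal7d, 4) / 4);
  return Math.round(biomarker + adherence + engagement + journal);
}

function riskLevel(score: number): "At Risk" | "Recovering" | "On Track" {
  if (score < 40) return "At Risk";
  return score < 70 ? "Recovering" : "On Track";
}
```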

Multilingual Support

Patients can select their preferred language from: English, Spanish, Hindi, Mandarin, French, or Arabic. The language preference is stored server-side and affects:

  • AI chat responses — the system prompt instructs the LLM to respond in the selected language at a 5th grade reading level
  • Voice output — text-to-speech uses the correct language code
  • The setting persists across devices via Durable Objects

Cloudflare Worker

The Worker serves as the secure API proxy and data backend. It exposes AI endpoints and a full data API backed by Durable Objects:

API Endpoints

| Endpoint | Method | Description |
| --- | --- | --- |
| /chat | POST | Forwards chat messages to Groq API with the GROQ_API_KEY secret |
| /analyze | POST | Receives PCM audio samples, runs Rust WASM biomarker analysis, returns report |
| /api/signup | POST | Create a new account (name, email, password, role) |
| /api/login | POST | Authenticate and return user profile |
| /api/plans | GET/POST | Retrieve or publish care plans |
| /api/messages | GET/POST | Doctor-patient messaging |
| /api/biomarkers | GET/POST | Store and retrieve biomarker history |
| /api/user/language | POST | Update patient language preference |
| /api/users | GET | List users (password hashes excluded) |
| /api/journal | GET/POST | Patient journal entries (max 100 per patient, 2000 char limit) |
| /api/adherence | GET/POST | Daily medication adherence records (upserts by patient+date) |
| /api/recovery-score | GET | Composite recovery scores for a doctor's patients, sorted by risk |

Durable Objects Backend

All application data (accounts, plans, messages, biomarker history) is stored in a Cloudflare Durable Object (TetherData). This replaces the previous AsyncStorage-only approach and provides:

  • Cross-device sync — a doctor publishes a plan on their laptop, the patient sees it on their phone instantly
  • Strong consistency — single-instance guarantee means no stale reads across regions
  • Edge persistence — data persists in Cloudflare's global network with automatic replication
  • Privacy — password hashes are stored server-side (SHA-256), never exposed to clients

The DO seeds itself with starter accounts on first access. AsyncStorage is only used for local session state (which user is logged in on this device).

Rust WASM Engine

The biomarker engine is written in Rust, compiled to WebAssembly via wasm-pack, and loaded as an ES module inside the Cloudflare Worker. This gives near-native signal processing performance at the edge.

Entry Points

pub fn analyze_audio(samples_i16: &[i16], sample_rate: u32) -> String
pub fn analyze_audio_typed(samples_i16: &[i16], sample_rate: u32, recording_type: &str) -> String

Accepts raw PCM samples and sample rate. analyze_audio_typed additionally takes a recording type ("speech" or "breathing") and tunes the envelope window accordingly. Returns a JSON-encoded BiomarkerReport.

Signal Quality & Preprocessing

  • Duration Gate — Recordings shorter than 1.5 seconds are rejected outright rather than analyzed with poor statistics.
  • Signal Quality Gate — Computes SNR from quartile energy ratios. Recordings with SNR below threshold are rejected with a "record in a quieter environment" message instead of producing misleading results.
  • Clipping Gate — Recordings with more than 1% of samples saturated at the digital ceiling are rejected. The threshold is fraction-based rather than max-sample so a single peak does not invalidate an otherwise good recording.
  • VAD-style Silence Stripping — Splits audio into 20ms frames, computes adaptive noise floor at the 20th percentile energy, and additionally checks per-frame ZCR. Frames with high ZCR (fricatives, breath noise) are dropped along with silence. This isolates clean voiced speech for downstream pitch and quality metrics.
  • Confidence Scoring — 0 to 1 composite: 30% signal quality + 25% recording duration + 25% active speech ratio + 20% pitch detection hit rate. Shown to patients as High, Moderate, or Low badge.
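
The confidence composite can be written directly from those weights; the per-factor normalization in the Rust engine may differ from this sketch:

```typescript
// Weighted composite from the list above. Each input is assumed to be
// already normalized to 0-1.
function confidenceScore(
  signalQuality: number,
  durationScore: number,
  activeSpeechRatio: number,
  pitchHitRate: number,
): number {
  return (
    0.30 * signalQuality +
    0.25 * durationScore +
    0.25 * activeSpeechRatio +
    0.20 * pitchHitRate
  );
}

// Badge bands as documented in the metrics reference.
function confidenceBadge(c: number): "Low" | "Moderate" | "High" {
  if (c < 0.4) return "Low";
  return c <= 0.7 ? "Moderate" : "High";
}
```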

Signal Processing Pipeline

  • RMS Energy — Root mean square of silence-stripped samples. Detects fatigue (low energy).
  • Zero-Crossing Rate — Frequency of sign changes on active speech. Detects breathy or labored speech.
  • Breathing Rate — 200ms energy envelope, moving-average low-pass smoothing, peak detection with hysteresis (1.2x and 0.8x thresholds). The smoothing step separates real breathing rhythm from speech cadence.
  • YIN Pitch Detection — Implementation of the YIN algorithm (de Cheveigne and Kawahara, 2002), the standard for monophonic pitch estimation. Per-frame cumulative mean normalized difference function with parabolic interpolation around the period minimum. Substantially more accurate than basic autocorrelation: detects 200 Hz sine at 200.01 Hz.
  • Jitter — Mean absolute period-to-period frequency variation across YIN-extracted cycles, normalized by mean period. Clinical reference threshold 1.04% (Teixeira et al., 2013). Elevated in tremor and neurological conditions.
  • Shimmer — Mean absolute amplitude difference across consecutive voiced cycles, normalized by mean amplitude. Clinical reference threshold 3.81%. Elevated in laryngeal pathology and breathy voice.
  • HNR (Harmonics-to-Noise Ratio) — Computed as 10 * log10(r / (1 - r)) where r is the mean YIN voicing strength. Reported in dB. Healthy voice typically > 20 dB; values below 7 dB suggest dysphonia.
  • Mean Pitch and Voiced Fraction — Average fundamental frequency in Hz across all voiced cycles, plus the fraction of the recording where pitch could be reliably extracted.
  • Pitch Variability (CV) — Coefficient of variation across YIN-detected pitches. Detects vocal tremor.
  • Cough Detection — 30ms frames, sharp energy spikes (> 4x mean) followed by silence (< 0.5x mean within 150ms), plus a broadband check (frame ZCR > 0.20) to discriminate cough from sustained tones. Skip-ahead prevents double-counting. Note: the deployed detector's sensitivity on Coswara is 20.7% (see Validation). Path to ~92% is YAMNet integration, see Roadmap.
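
A few of the formulas above translate directly; this TypeScript mirror of the HNR and jitter definitions is for illustration only (the engine itself is Rust):

```typescript
// HNR in dB from mean YIN voicing strength r (0 < r < 1):
// 10 * log10(r / (1 - r)), as defined above.
function hnrDb(r: number): number {
  return 10 * Math.log10(r / (1 - r));
}

// Jitter: mean absolute period-to-period variation across consecutive
// voiced cycles, normalized by the mean period.
function jitter(periodsSec: number[]): number {
  const mean = periodsSec.reduce((a, b) => a + b, 0) / periodsSec.length;
  let diffSum = 0;
  for (let i = 1; i < periodsSec.length; i++) {
    diffSum += Math.abs(periodsSec[i] - periodsSec[i - 1]);
  }
  return diffSum / (periodsSec.length - 1) / mean;
}
```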

Rich Summary Generation

Instead of bare flag names, summaries include actual values and normal ranges. Examples:

  • "Breathing rate is 28/min (normal range: 12–20/min). 3 cough events detected. Consider contacting your care team."
  • "Voice biomarkers are within normal ranges." (with confidence note if recording quality was moderate)

Building

cd biomarker
wasm-pack build --target web --out-dir ../worker/wasm --release
# Output: tether_biomarker_bg.wasm (~83KB) + JS bindings

Biomarker Metrics Reference

Core signals

| Metric | Range | Flag Threshold | Clinical Significance |
| --- | --- | --- | --- |
| Energy (RMS) | 0 – 1 | < 0.015 | Low energy suggests fatigue or weakness |
| Zero-Crossing Rate | 0 – 1 | > 0.3 | High ZCR indicates breathy or labored speech |
| Breathing Rate | BPM | > 24 | Tachypnea, elevated respiratory rate (normal: 12 to 20) |
| Pitch Variability (CV) | 0 – 1 | > 0.35 | High variation suggests vocal tremor |
| Cough Events | Count | ≥ 3 | Frequent coughing in a short sample |
| Confidence | 0 – 1 | N/A | Composite of SNR, duration, active speech ratio, and pitch detection hit rate. < 0.4 = Low, 0.4 to 0.7 = Moderate, > 0.7 = High |

Clinical voice quality (new)

These are the same metrics reported by Praat, the academic reference tool for acoustic voice analysis. Thresholds drawn from Teixeira et al. (2013) and the GRBAS scale.

| Metric | Range | Flag Threshold | Clinical Significance |
| --- | --- | --- | --- |
| Jitter | 0 – 1 (ratio) | > 0.0104 (1.04%) | Period-to-period frequency variation. Elevated in tremor, vocal fold pathology, neurological conditions. |
| Shimmer | 0 – 1 (ratio) | > 0.0381 (3.81%) | Amplitude variation across cycles. Elevated in laryngeal pathology, breathy or hoarse voice. |
| HNR (dB) | -30 – 60 | < 7 dB | Harmonics-to-noise ratio. Low values indicate raspy, breathy, or aphonic voice. Healthy voice typically > 20 dB. |
| Mean Pitch (Hz) | 0 – 2000 | Reference only | Average fundamental frequency. Typical adult male: 85 to 180 Hz. Typical adult female: 165 to 255 Hz. |
| Voiced Fraction | 0 – 1 | Reference only | Proportion of the recording where the engine detected voiced (pitched) speech. < 0.3 suggests whispering, dysphonia, or microphone failure. |

Status Logic

| Flags Triggered | Status | Meaning |
| --- | --- | --- |
| 0 | Normal | No concerning patterns detected |
| 1 | Monitor | One metric outside normal range, worth watching |
| 2+ | Alert | Multiple flags, consider contacting care team |
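The mapping is a direct flag count. A one-function TypeScript sketch (names illustrative):

```typescript
// Map the number of triggered metric flags to an overall status,
// per the status-logic table: 0 -> Normal, 1 -> Monitor, 2+ -> Alert.
type Status = "Normal" | "Monitor" | "Alert";

function statusFor(flagsTriggered: number): Status {
  if (flagsTriggered === 0) return "Normal";
  if (flagsTriggered === 1) return "Monitor";
  return "Alert";
}
```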

Validation

The engine has been benchmarked against the Coswara dataset (Indian Institute of Science, Bangalore) using a randomly sampled batch of 29 patient recordings (cough-heavy and sustained vowel-a). Validation script lives at scripts/validate_biomarker.py and runs against the deployed analyze endpoint.

Pitch detection (sine reference)

200 Hz sine wave detected at 200.01 Hz. Pitch accuracy on clean voiced segments: ~99.99%.
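The sine check can be reproduced with even a naive autocorrelation detector, shown here as a simplified stand-in for the engine's YIN implementation (which adds the difference function and cumulative normalization that make YIN robust on real speech):

```typescript
// Naive autocorrelation pitch detector, searching 80-400 Hz. A toy
// stand-in for YIN: adequate for a clean sine, not for real voice.
function detectPitch(samples: Float32Array, sampleRate: number): number {
  const minLag = Math.floor(sampleRate / 400);
  const maxLag = Math.floor(sampleRate / 80);
  let bestLag = minLag;
  let bestCorr = -Infinity;
  for (let lag = minLag; lag <= maxLag; lag++) {
    let corr = 0;
    for (let i = 0; i + lag < samples.length; i++) {
      corr += samples[i] * samples[i + lag];
    }
    if (corr > bestCorr) {
      bestCorr = corr;
      bestLag = lag;
    }
  }
  return sampleRate / bestLag;
}

// 250 ms of a 200 Hz sine at 16 kHz: the best lag is 80 samples,
// so the detected pitch is 16000 / 80 = 200 Hz.
const sr = 16000;
const sine = new Float32Array(sr / 4);
for (let i = 0; i < sine.length; i++) {
  sine[i] = Math.sin((2 * Math.PI * 200 * i) / sr);
}
const hz = detectPitch(sine, sr);
```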

Cough detection (Coswara, n=29)

| Detector | Sensitivity | False-positive rate | Notes |
| --- | --- | --- | --- |
| v1: Energy spike + ZCR (deprecated) | 13.8% | 0.0% | Original heuristic. |
| v2: Spike + ZCR + first-order high-pass spectral check | 20.7% | 0.0% | Currently deployed. +50% relative recall, zero specificity loss. |
| v3: YAMNet via Modal (free tier) — pending integration | ~92% (projected) | ~1% (projected) | Based on published YAMNet benchmarks against COUGHVID. |

Voice quality (vowel-a) grouped by COVID status

| Status | n | Jitter | Shimmer | HNR (dB) | Mean Pitch (Hz) |
| --- | --- | --- | --- | --- | --- |
| healthy | 26 | 0.026 | 0.040 | 12.9 | 121.8 |
| no_resp_illness_exposed | 2 | 0.006 | 0.047 | 18.0 | 194.6 |
| resp_illness_not_identified | 1 | 0.002 | 0.014 | 18.9 | 112.8 |

Mean pitch for the healthy adult cohort (121.8 Hz) matches published fundamental frequencies for adult males (85 to 180 Hz). The non-healthy cohorts here (n = 2 and n = 1) are too small to support reliable cross-group comparisons of jitter or HNR. Absolute jitter is elevated above published clinical thresholds across all groups because Coswara is home-recorded smartphone audio, not clinic-grade; this is a recording-condition floor that controlled capture would address.

Roadmap

No-cost improvements (in progress)

  • Per-patient baselining: each user's first 5 recordings establish a personal voice baseline. Subsequent recordings are flagged on deviation from the personal mean (Z-score > 2) rather than absolute clinical thresholds. This compensates for natural inter-speaker variation and dramatically reduces false-positive alerts.
  • Client-side audio quality pre-check: before uploading samples, the mobile app verifies sample count, peak amplitude, and basic spectral spread. Bad recordings fail fast with a clear retry message.
  • Multi-recording median: optional capture mode that takes three short readings and stores the median across all metrics, reducing variance from noise or single-cough artifacts.
  • FFT-based cough discrimination: extends the spike detector with a 1024-point FFT energy ratio check (high-frequency energy > 60% of total during the spike). Targets ~30% lift over the current heuristic without external models.
  • Validation harness in CI: scripts/validate_biomarker.py can be wired into GitHub Actions so every engine change auto-reports Coswara accuracy deltas.
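The per-patient baselining rule from the first bullet can be sketched in a few lines of TypeScript; the names and storage shape are illustrative, not the shipped schema:

```typescript
// Per-patient baselining sketch: the first 5 recordings establish a
// personal mean and standard deviation for one metric; later recordings
// flag when the Z-score exceeds 2. Returns null while still building
// the baseline.
const BASELINE_N = 5;
const Z_THRESHOLD = 2;

interface Baseline {
  values: number[];
}

function flagAgainstBaseline(b: Baseline, value: number): boolean | null {
  if (b.values.length < BASELINE_N) {
    b.values.push(value); // still collecting the personal baseline
    return null;
  }
  const mean = b.values.reduce((a, v) => a + v, 0) / b.values.length;
  const variance =
    b.values.reduce((a, v) => a + (v - mean) ** 2, 0) / b.values.length;
  const sd = Math.sqrt(variance) || 1e-9; // guard against a flat baseline
  return Math.abs(value - mean) / sd > Z_THRESHOLD;
}
```

Flagging on deviation from the personal mean rather than an absolute cutoff is what absorbs inter-speaker variation: a naturally breathy voice stops tripping the absolute thresholds once its own baseline is established.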

Low-cost upgrades (small one-time spend)

  • YAMNet via Modal free tier: Modal.com provides $30 of free compute per month. Host YAMNet ONNX behind a Modal endpoint; the worker calls it for cough/sneeze/breath classification. Closes the cough-sensitivity gap from the currently deployed ~21% to ~92% without paying for Cloudflare Workers Paid.
  • Cloudflare Workers Paid ($5/mo): alternative path that bundles YAMNet ONNX (~15MB) directly into the worker. Lower latency than Modal, same accuracy gain.
  • BC-ResNet fine-tuned on COUGHVID (~$50 one-time): specialized 5MB cough detector trained on 25,000 labeled coughs. Projected ~95% sensitivity. Fits on free Workers bundle.

Funded roadmap ($6,000 plus $99/year Apple Developer Program)

  • Clinical validation pilot with 50 patients and a part-time clinical advisor.
  • HIPAA infrastructure audit, BAAs with all third-party vendors.
  • Indie-tier penetration test.
  • App Store and Google Play deployment.
  • Natural-sounding TTS in six additional languages.

Security

API Key Isolation

GROQ_API_KEY is a Cloudflare secret. It never appears in the mobile bundle, git history, or client-side code.

Password Hashing

SHA-256 via expo-crypto. Plaintext passwords are never stored or compared directly.

Config Gitignore

src/lib/config.ts is gitignored. A template file is committed for new developers to copy.

CORS

Worker includes CORS headers on all responses, allowing requests from the mobile app and web preview.

Tech Stack

| Layer | Technology |
| --- | --- |
| Mobile App | React Native 0.83, Expo SDK 55, React 19 |
| Navigation | @react-navigation/native (native stack) |
| Audio | expo-audio, expo-speech, expo-speech-recognition |
| Crypto | expo-crypto (SHA-256) |
| Storage | @react-native-async-storage/async-storage |
| Backend | Cloudflare Workers (TypeScript) |
| AI Model | Groq API — LLaMA 3.3 70B Versatile |
| Signal Processing | Rust + WebAssembly (wasm-pack) |
| Serialization | Serde (Rust), JSON (TypeScript) |

Onboarding

First-time users see a 5-step tutorial before reaching the login screen. The tutorial covers:

  1. Welcome — What Tether does and who it's for
  2. For Doctors — How to create and publish recovery plans
  3. For Patients — How to use AI chat, voice, and messaging
  4. Voice Biomarkers — How voice analysis works and what it detects
  5. Safety First — Tether is not a replacement for emergency care

Onboarding completion is stored in AsyncStorage under the key tether-onboarding-complete. The tutorial only shows once.
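The gate logic is small. A TypeScript sketch that models AsyncStorage behind a minimal key-value interface so it stays testable outside React Native; only the key name is documented above, so the stored value "true" is an assumption:

```typescript
// Onboarding gate sketch. AsyncStorage is modeled by a minimal interface;
// the key matches the documented one, the "true" value is an assumption.
const ONBOARDING_KEY = "tether-onboarding-complete";

interface KeyValueStore {
  getItem(key: string): Promise<string | null>;
  setItem(key: string, value: string): Promise<void>;
}

async function shouldShowOnboarding(store: KeyValueStore): Promise<boolean> {
  return (await store.getItem(ONBOARDING_KEY)) !== "true";
}

async function markOnboardingComplete(store: KeyValueStore): Promise<void> {
  await store.setItem(ONBOARDING_KEY, "true");
}
```

In the app the same two functions would be called with AsyncStorage itself, since its getItem/setItem signatures match this interface.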