Tether

A clinical companion app that connects post-discharge patients with their healthcare providers through AI-powered guidance, voice biomarker analysis, and secure messaging.

How It Works

Tether keeps patients and doctors connected after a hospital discharge. Here is the simple version:

1. The doctor creates a recovery plan

Before the patient leaves the hospital, their doctor opens Tether and fills in a personalized care plan: diagnosis, medications, daily instructions, warning signs to watch for, and a follow-up date. The doctor also picks a communication tone (calm, direct, or reassuring) so the app speaks the way the patient is most comfortable with.

2. The patient gets a personal AI companion

When the patient logs in, they see their plan and can ask questions in plain language — by typing or speaking. The AI only answers using information from the doctor's plan, never guessing or making things up. Every response includes a readability score so caregivers can verify the language is easy enough to understand.

3. Voice biomarkers track recovery

The patient can do a quick voice check — just talk into the phone for a few seconds. Tether analyzes the audio to detect breathing rate, cough patterns, vocal energy, and tremor. These biomarkers are tracked over time so the doctor can spot trends without an in-person visit.

4. The two engines talk to each other

This is what makes Tether different. The voice biomarker results are automatically shared with the AI companion. So if the patient asks "How is my breathing?", the AI already knows the latest voice check showed an elevated breathing rate and can give a relevant, grounded answer — not a generic one.

5. Humans stay in the loop

If the AI cannot fully answer a question, it suggests the patient message their doctor directly. Doctors see these messages in real time and can reply. The AI never replaces the doctor — it bridges the gap between hospital visits so patients are never left guessing alone.

6. Works in the patient's language

Patients can switch between English, Spanish, Hindi, Mandarin, French, and Arabic. The AI responds and speaks in their chosen language, removing a major barrier to understanding medical instructions after discharge.

Architecture

Tether follows a privacy-first architecture. API keys never ship in the mobile bundle — all LLM requests and biomarker analysis are proxied through a Cloudflare Worker at the edge.

[Architecture diagram] Patients and doctors open the mobile app (React Native + Expo), which handles voice recording, AI chat, and local storage for voice data and chat messages. All requests flow through a Cloudflare Worker edge proxy that routes traffic and hides API keys: chat goes to the Groq LLM, voice samples go to the Rust WASM biomarker engine. API keys are stored as encrypted Cloudflare secrets and never leave the server.

Frontend

React Native + Expo SDK 55 with React Navigation native stack. Runs on iOS, Android, and web.


Backend

Cloudflare Worker proxies all API calls. GROQ_API_KEY stored as a Cloudflare secret, never exposed to the client.


Rust WASM

Voice biomarker engine compiled to WebAssembly via wasm-pack. Runs inside the Worker for edge-speed signal processing.


LLM

Groq API with LLaMA 3.3 70B. Graceful fallback chain: Worker → direct → keyword matching.

Quickstart

Prerequisites

  • Node.js 18+
  • Expo CLI (npm install -g expo-cli)
  • iOS Simulator (Xcode) or Android Emulator
  • Rust + wasm-pack (for biomarker engine development)

Setup

git clone https://github.com/ArhanCodes/tether.git
cd tether
npm install
cp src/lib/config.template.ts src/lib/config.ts
npm run ios

That's it. The config template comes pre-configured with the shared Tether API — no API keys or environment variables needed. The Groq key lives on the Cloudflare Worker and is never exposed to the client.

Web preview: Run npx expo start --web instead to open in a browser.

Worker Setup

# Deploy the Cloudflare Worker
cd worker
npm install
npx wrangler secret put GROQ_API_KEY
npx wrangler deploy

Features

Auth

  • Login / signup with role selection (doctor or patient)
  • Passwords hashed with SHA-256 (expo-crypto)
  • Session persistence — reopening the app skips login
  • Terms/privacy consent on signup

Doctor Workspace

  • Create/edit patient recovery plans (diagnosis, vitals, meds, instructions, red flags, follow-up)
  • Set AI tone (calm, direct, reassuring)
  • Publish plans to a specific patient email (validates account exists)
  • Draft auto-saves locally
  • View and reply to patient messages

Patient Companion

  • View the recovery plan assigned to your email
  • Vitals summary, daily instructions, red flags
  • AI chat powered by Groq with keyword-matching fallback
  • Quick prompt buttons ("What should I do today?", "When should I call?", etc.)
  • Voice input via speech recognition
  • Voice output (text-to-speech on AI replies, toggleable)
  • Urgency badges on AI responses (routine / contact clinician / urgent)
  • Flesch-Kincaid readability score on every AI response (grade level badge)
  • Handoff suggestion when AI can't fully answer
  • Direct messaging to doctor (real-time via Durable Objects)
  • Multilingual support (English, Spanish, Hindi, Mandarin, French, Arabic)
  • Voice biomarker analysis (breathing rate, cough detection, vocal tremor, voice energy)
  • Biomarker status levels (normal / monitor / alert) with alert popup
  • Biomarker trending — historical chart showing trends over time
  • Engine connection — biomarker data injected into AI context automatically

Onboarding

  • 5-step tutorial on first launch (welcome, doctors, patients, voice biomarkers, safety)
  • Skip button and dot indicators
  • Only shows once (stored in AsyncStorage)

Infrastructure

  • Cloudflare Worker proxy — API key stays server-side, never ships in the app
  • Durable Objects backend — accounts, plans, messages, biomarker history persist across devices
  • Rust WASM biomarker engine runs at the edge inside the worker
  • AI requests are routed through the Worker, falling back to direct Groq calls, then keyword matching

Authentication

Users sign up with a role (Doctor or Patient) and are routed to the appropriate workspace after login. Sessions persist across app restarts via AsyncStorage.

  • Password hashing: SHA-256 via expo-crypto — plaintext passwords are never stored
  • Session restore: On launch, the app checks AsyncStorage for an active session and skips login if found
  • Role-based routing: Doctors see the workspace; patients see the companion
  • Validation: Email format, password strength (8+ chars with a number), and terms acceptance
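
The session-restore step can be sketched with an injectable storage interface mirroring AsyncStorage's getItem/setItem shape. The key name and session fields below are assumptions for illustration, not the app's actual implementation:

```typescript
// Minimal async key-value interface matching AsyncStorage's getItem/setItem shape.
interface KeyValueStore {
  getItem(key: string): Promise<string | null>;
  setItem(key: string, value: string): Promise<void>;
}

// Hypothetical key name; the real app's key may differ.
const SESSION_KEY = "tether-session";

interface Session {
  email: string;
  role: "doctor" | "patient";
}

// On launch: return the saved session if one exists, otherwise null (show login).
async function restoreSession(store: KeyValueStore): Promise<Session | null> {
  const raw = await store.getItem(SESSION_KEY);
  if (!raw) return null;
  try {
    return JSON.parse(raw) as Session;
  } catch {
    return null; // Corrupt session data: fall back to the login screen.
  }
}

async function saveSession(store: KeyValueStore, session: Session): Promise<void> {
  await store.setItem(SESSION_KEY, JSON.stringify(session));
}
```

Keeping the storage behind an interface also makes the flow easy to unit-test with an in-memory map.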

Doctor Workspace

Doctors create, edit, and publish recovery plans for specific patients. Plans are the foundation of the entire patient experience — the AI, the UI, and the messaging system all derive from the published plan.

Plan Fields

Field                 Description
Patient Name & Email  Must match a registered patient account
Diagnosis             Primary condition (e.g. post-discharge pneumonia)
Vitals                Heart rate, blood pressure, temperature, O2 saturation
Medications           Name, dosage, and frequency (one per line)
Daily Instructions    What the patient should do each day
Red Flags             Symptoms that require immediate medical attention
Follow-up             Next appointment or scheduled check-in
Tone                  Calm, Direct, or Reassuring; controls AI personality
Doctor Notes          Private instructions for how the AI should phrase answers
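
For illustration, the plan fields could be captured in a TypeScript shape like the following; the field names and optionality are assumptions, not the app's actual type definitions:

```typescript
// Illustrative shape for a published recovery plan (field names are assumptions).
interface RecoveryPlan {
  patientName: string;
  patientEmail: string;          // must match a registered patient account
  diagnosis: string;
  vitals: {
    heartRate?: string;
    bloodPressure?: string;
    temperature?: string;
    o2Saturation?: string;
  };
  medications: string[];         // one "name, dosage, frequency" entry per line
  dailyInstructions: string[];
  redFlags: string[];            // symptoms requiring immediate attention
  followUp: string;
  tone: "calm" | "direct" | "reassuring";
  doctorNotes?: string;          // private guidance for AI phrasing
}

const examplePlan: RecoveryPlan = {
  patientName: "Jane Doe",
  patientEmail: "jane@example.com",
  diagnosis: "post-discharge pneumonia",
  vitals: { heartRate: "72 bpm", o2Saturation: "96%" },
  medications: ["Amoxicillin, 500mg, 3x daily"],
  dailyInstructions: ["Rest and hydrate", "Short walks twice a day"],
  redFlags: ["Shortness of breath at rest", "Fever above 102F"],
  followUp: "Clinic visit in 2 weeks",
  tone: "calm",
};
```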

Messaging

Doctors see all patient message threads, sorted by most recent. They can select a thread and reply directly. When a patient sends a message (or the AI suggests a handoff), it appears here.

Patient Companion

The patient screen surfaces the published recovery plan and provides multiple channels for getting help: AI chat, voice input, quick prompts, biomarker analysis, and direct doctor messaging.

Care Plan Display

Vitals, daily instructions, medications, and red flags — all from the doctor's published plan.

AI Chat

Text or voice questions answered by LLaMA 3.3, constrained to the care plan. Includes urgency badges and handoff suggestions.

Voice Biomarkers

Record a 10-15 second voice sample. Rust WASM engine analyzes breathing rate, cough events, pitch variability, and more.

Doctor Messaging

Direct messaging channel for when AI isn't enough. The AI can auto-suggest using this when it lacks certainty.

AI Chat System

The AI is powered by Groq's LLaMA 3.3 70B model, accessed through a Cloudflare Worker proxy. Every response is grounded in the doctor's published care plan.

System Prompt

A dynamic system prompt is built from the care plan, including the patient's diagnosis, medications, instructions, red flags, and the doctor's preferred tone. The AI is instructed to:

  • Only answer from documented care plan data
  • Flag red-flag symptoms as "urgent"
  • Suggest messaging the doctor when information is missing
  • Return structured JSON with message, urgency, supporting points, and handoff flag
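
The structured reply might be modeled and parsed defensively like this sketch; the field names are assumptions based on the list above:

```typescript
// Assumed shape of the AI's structured JSON reply (field names are illustrative).
interface AIResponse {
  message: string;
  urgency: "routine" | "contact-clinician" | "urgent";
  supportingPoints: string[];
  handoff: boolean; // true when the AI suggests messaging the doctor
}

// Parse the model output defensively: fall back to a safe handoff reply
// if the LLM returns malformed JSON.
function parseAIResponse(raw: string): AIResponse {
  try {
    const parsed = JSON.parse(raw);
    if (typeof parsed.message === "string" && typeof parsed.urgency === "string") {
      return parsed as AIResponse;
    }
  } catch {
    // fall through to the safe default below
  }
  return {
    message: "I couldn't process that. Please message your doctor directly.",
    urgency: "contact-clinician",
    supportingPoints: [],
    handoff: true,
  };
}
```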

Response Urgency Levels

Level              Meaning                           UI Treatment
routine            Normal informational response     Blue badge
contact-clinician  AI suggests speaking with doctor  Yellow badge
urgent             Red flag symptom detected         Red badge + escalation banner

Fallback Chain

1. Cloudflare Worker → Groq API (primary)
2. Direct Groq API call (if worker fails)
3. Keyword matching (if no API configured)

Safety: The AI never diagnoses, prescribes, or advises outside the doctor's documented scope. Emergency symptoms always trigger an urgent flag with instructions to seek immediate care.
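
The fallback chain can be sketched with injected request functions so each tier is swappable; the function names here are illustrative, not the app's actual API:

```typescript
// A chat step either returns a reply string or throws (network/API failure).
type ChatFn = (question: string) => Promise<string>;

// Last-resort keyword matching: no API needed, answers from canned patterns.
function keywordFallback(question: string): string {
  const q = question.toLowerCase();
  if (q.includes("medication") || q.includes("medicine")) {
    return "Please follow the medication list in your care plan.";
  }
  if (q.includes("pain") || q.includes("breath")) {
    return "If symptoms are severe, contact your care team right away.";
  }
  return "I'm not sure. Please message your doctor directly.";
}

// Try the Worker proxy first, then a direct Groq call, then keywords.
async function askWithFallback(
  question: string,
  viaWorker: ChatFn,
  viaDirect: ChatFn
): Promise<string> {
  try {
    return await viaWorker(question);
  } catch {
    try {
      return await viaDirect(question);
    } catch {
      return keywordFallback(question);
    }
  }
}
```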

Voice Biomarkers

Tether's biomarker system records a short voice sample from the patient, extracts PCM audio data, and sends it to a Rust WASM engine running on the Cloudflare Worker for real-time signal processing.

How It Works

  1. Patient taps "Start Voice Check" — expo-audio begins recording in WAV/PCM format at 16kHz
  2. Patient speaks naturally for 10-15 seconds, then taps "Stop & Analyze"
  3. Recording is read as an ArrayBuffer, PCM samples extracted from WAV headers
  4. Samples sent to Worker's /analyze endpoint as JSON
  5. Rust WASM engine processes samples and returns a BiomarkerReport
  6. Results displayed as a card with status badge (normal / monitor / alert)
  7. Report saved to Durable Objects for historical trending
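
Step 3's PCM extraction can be sketched as below, assuming the canonical 44-byte WAV header with 16-bit mono PCM. Real recordings can have variable-length headers, so production code should locate the data chunk properly:

```typescript
// Extract Int16 PCM samples from a WAV ArrayBuffer.
// Assumes the canonical 44-byte header layout (PCM, mono, 16-bit).
function extractPcmSamples(wav: ArrayBuffer): Int16Array {
  const HEADER_BYTES = 44;
  if (wav.byteLength <= HEADER_BYTES) return new Int16Array(0);
  // Each sample is 2 bytes; ignore a trailing odd byte if present.
  const count = Math.floor((wav.byteLength - HEADER_BYTES) / 2);
  return new Int16Array(wav, HEADER_BYTES, count);
}
```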

Biomarker Trending

Every biomarker report is stored server-side with a timestamp. The patient's biomarker card shows a trend view of the last 10 readings with bar charts for breathing rate, voice energy, and cough events. Alert/monitor/normal counts are summarized as colored pills. This turns a single snapshot into a longitudinal monitoring system that can detect deterioration over days.
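
Summarizing history into the colored-pill counts could look like this sketch; the reading shape is an assumption:

```typescript
type BiomarkerStatus = "normal" | "monitor" | "alert";

// Illustrative reading shape (the stored report has more fields).
interface BiomarkerReading {
  timestamp: number;     // epoch millis
  breathingRate: number; // breaths per minute
  status: BiomarkerStatus;
}

// Count statuses across the most recent readings (default: last 10).
function summarizeTrend(
  history: BiomarkerReading[],
  window = 10
): Record<BiomarkerStatus, number> {
  const recent = [...history]
    .sort((a, b) => b.timestamp - a.timestamp)
    .slice(0, window);
  const counts: Record<BiomarkerStatus, number> = { normal: 0, monitor: 0, alert: 0 };
  for (const r of recent) counts[r.status] += 1;
  return counts;
}
```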

Engine Connection

Tether's two AI engines — NLP (Groq LLM) and Bio-Acoustic (Rust WASM) — share context automatically:

  • The latest biomarker report is injected into the AI system prompt before every chat request
  • When the patient asks "how am I doing?", the AI references actual biomarker readings (breathing rate, cough events, energy levels)
  • If biomarkers are in "alert" status, the AI proactively warns the patient and recommends contacting their care team
  • One engine listens to the body, the other explains what it means in plain language
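
Injecting the latest report into the system prompt might look like this sketch; the prompt wording and field names are illustrative:

```typescript
interface LatestBiomarkers {
  breathingRate: number; // breaths per minute
  coughEvents: number;
  status: "normal" | "monitor" | "alert";
}

// Append a biomarker summary to the care-plan system prompt so the LLM can
// ground answers like "how is my breathing?" in real measurements.
function withBiomarkerContext(basePrompt: string, report: LatestBiomarkers | null): string {
  if (!report) return basePrompt;
  let context =
    `\n\nLatest voice check: breathing rate ${report.breathingRate} BPM, ` +
    `${report.coughEvents} cough events, status "${report.status}".`;
  if (report.status === "alert") {
    context += " Proactively advise the patient to contact their care team.";
  }
  return basePrompt + context;
}
```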

Readability Scoring

Every AI response is scored using the Flesch-Kincaid Grade Level formula. A badge on each message shows the grade level (e.g., "Grade 4.2 - Very Easy"). This makes the health-literacy claim measurable:

  • Grade 0-5: Very Easy — 5th grader can understand
  • Grade 6-8: Easy — middle school level
  • Grade 9-12: Moderate — high school level
  • Grade 13+: Complex — college level (AI is prompted to stay below 6)
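
A minimal version of the scoring, using a rough vowel-group syllable heuristic; a production scorer would count syllables more carefully:

```typescript
// Rough syllable estimate: count groups of consecutive vowels.
function countSyllables(word: string): number {
  const groups = word.toLowerCase().match(/[aeiouy]+/g);
  return Math.max(1, groups ? groups.length : 0);
}

// Flesch-Kincaid Grade Level:
//   0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
function fleschKincaidGrade(text: string): number {
  const sentences = Math.max(1, (text.match(/[.!?]+/g) || []).length);
  const words = text.split(/\s+/).filter((w) => /[a-z]/i.test(w));
  if (words.length === 0) return 0;
  const syllables = words.reduce((sum, w) => sum + countSyllables(w), 0);
  const grade =
    0.39 * (words.length / sentences) + 11.8 * (syllables / words.length) - 15.59;
  return Math.max(0, Math.round(grade * 10) / 10); // clamp and round to one decimal
}
```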

Multilingual Support

Patients can select their preferred language from: English, Spanish, Hindi, Mandarin, French, or Arabic. The language preference is stored server-side and affects:

  • AI chat responses — the system prompt instructs the LLM to respond in the selected language at a 5th grade reading level
  • Voice output — text-to-speech uses the correct language code
  • The setting persists across devices via Durable Objects
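
Mapping the six supported languages to speech-synthesis locale codes might look like this; the exact BCP-47 codes the app uses are assumptions:

```typescript
type SupportedLanguage = "english" | "spanish" | "hindi" | "mandarin" | "french" | "arabic";

// BCP-47 locale codes for text-to-speech (illustrative choices).
const TTS_LOCALES: Record<SupportedLanguage, string> = {
  english: "en-US",
  spanish: "es-ES",
  hindi: "hi-IN",
  mandarin: "zh-CN",
  french: "fr-FR",
  arabic: "ar-SA",
};

function ttsLocale(lang: SupportedLanguage): string {
  return TTS_LOCALES[lang];
}
```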

Cloudflare Worker

The Worker serves as the secure API proxy and data backend. It exposes AI endpoints and a full data API backed by Durable Objects:

API Endpoints

Endpoint            Method    Description
/chat               POST      Forwards chat messages to the Groq API with the GROQ_API_KEY secret
/analyze            POST      Receives PCM audio samples, runs Rust WASM biomarker analysis, returns a report
/api/signup         POST      Create a new account (name, email, password, role)
/api/login          POST      Authenticate and return the user profile
/api/plans          GET/POST  Retrieve or publish care plans
/api/messages       GET/POST  Doctor-patient messaging
/api/biomarkers     GET/POST  Store and retrieve biomarker history
/api/user/language  POST      Update the patient's language preference
/api/users          GET       List users (password hashes excluded)
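
A stripped-down sketch of the Worker's routing and CORS handling, with handlers stubbed out; the real Worker proxies Groq, runs the WASM engine, and talks to the Durable Object:

```typescript
// CORS headers attached to every response so the mobile app and web preview
// can call the Worker from any origin.
const CORS_HEADERS = {
  "Access-Control-Allow-Origin": "*",
  "Access-Control-Allow-Methods": "GET, POST, OPTIONS",
  "Access-Control-Allow-Headers": "Content-Type",
};

function json(body: unknown, status = 200): Response {
  return new Response(JSON.stringify(body), {
    status,
    headers: { "Content-Type": "application/json", ...CORS_HEADERS },
  });
}

// Simplified dispatcher; real handlers would call Groq, run WASM analysis,
// and read/write the Durable Object.
async function route(path: string, method: string): Promise<Response> {
  if (method === "OPTIONS") {
    return new Response(null, { status: 204, headers: CORS_HEADERS }); // preflight
  }
  if (path === "/chat" && method === "POST") return json({ message: "stubbed reply" });
  if (path === "/analyze" && method === "POST") return json({ status: "normal" });
  return json({ error: "not found" }, 404);
}
```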

Durable Objects Backend

All application data (accounts, plans, messages, biomarker history) is stored in a Cloudflare Durable Object (TetherData). This replaces the previous AsyncStorage-only approach and provides:

  • Cross-device sync — a doctor publishes a plan on their laptop, the patient sees it on their phone instantly
  • Strong consistency — single-instance guarantee means no stale reads across regions
  • Edge persistence — data persists in Cloudflare's global network with automatic replication
  • Privacy — password hashes are stored server-side (SHA-256), never exposed to clients

The DO seeds itself with starter accounts on first access. AsyncStorage is only used for local session state (which user is logged in on this device).

Rust WASM Engine

The biomarker engine is written in Rust, compiled to WebAssembly via wasm-pack, and loaded as an ES module inside the Cloudflare Worker. This gives near-native signal processing performance at the edge.

Entry Point

pub fn analyze_audio(samples_i16: &[i16], sample_rate: u32) -> String

Accepts raw PCM samples and sample rate. Returns a JSON-encoded BiomarkerReport.

Signal Processing Pipeline

  • RMS Energy — Root mean square of normalized samples. Detects fatigue (low energy)
  • Zero-Crossing Rate — Frequency of sign changes. Detects breathy/labored speech
  • Breathing Rate — Low-pass filtered energy envelope, peak counting. Estimates breaths per minute
  • Pitch Variability — Autocorrelation-based pitch detection, coefficient of variation. Detects vocal tremor
  • Cough Detection — Sharp energy spikes (>3x mean) followed by silence. Counts distinct cough events
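
For illustration, the first and last pipeline stages might look like this in TypeScript; the production engine is Rust, and the frame size here is an arbitrary choice:

```typescript
// RMS energy of normalized samples: low values suggest fatigue or weakness.
function rmsEnergy(samples: Int16Array): number {
  if (samples.length === 0) return 0;
  let sumSquares = 0;
  for (const s of samples) {
    const x = s / 32768; // normalize 16-bit PCM to [-1, 1)
    sumSquares += x * x;
  }
  return Math.sqrt(sumSquares / samples.length);
}

// Cough detection: count frames whose energy spikes above 3x the mean frame
// energy. A real detector would also require trailing silence, as the Rust
// engine does.
function countEnergySpikes(samples: Int16Array, frameSize = 400): number {
  const energies: number[] = [];
  for (let i = 0; i + frameSize <= samples.length; i += frameSize) {
    energies.push(rmsEnergy(samples.subarray(i, i + frameSize)));
  }
  if (energies.length === 0) return 0;
  const mean = energies.reduce((a, b) => a + b, 0) / energies.length;
  return energies.filter((e) => e > 3 * mean).length;
}
```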

Building

cd biomarker
wasm-pack build --target web --out-dir ../worker/wasm
# Output: tether_biomarker_bg.wasm (~49KB) + JS bindings

Biomarker Metrics Reference

Metric              Range  Flag Threshold  Clinical Significance
Energy (RMS)        0 – 1  < 0.02          Low energy suggests fatigue or weakness
Zero-Crossing Rate  0 – 1  > 0.3           High ZCR indicates breathy or labored speech
Breathing Rate      BPM    > 24            Tachypnea (elevated respiratory rate)
Pitch Variability   CV     > 0.35          High variation suggests vocal tremor
Cough Events        Count  ≥ 3             Frequent coughing in a short sample

Status Logic

Flags Triggered  Status   Meaning
0                Normal   No concerning patterns detected
1                Monitor  One metric outside normal range; worth watching
2+               Alert    Multiple flags; consider contacting care team
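
The flag counting and status mapping can be sketched directly from the thresholds above:

```typescript
interface Metrics {
  rmsEnergy: number;        // 0 - 1
  zeroCrossingRate: number; // 0 - 1
  breathingRate: number;    // breaths per minute
  pitchVariability: number; // coefficient of variation
  coughEvents: number;
}

// Count how many metrics are outside their normal range.
function countFlags(m: Metrics): number {
  let flags = 0;
  if (m.rmsEnergy < 0.02) flags++;        // low energy: fatigue/weakness
  if (m.zeroCrossingRate > 0.3) flags++;  // breathy or labored speech
  if (m.breathingRate > 24) flags++;      // tachypnea
  if (m.pitchVariability > 0.35) flags++; // vocal tremor
  if (m.coughEvents >= 3) flags++;        // frequent coughing
  return flags;
}

// 0 flags -> normal, 1 -> monitor, 2+ -> alert.
function statusFor(flags: number): "normal" | "monitor" | "alert" {
  if (flags === 0) return "normal";
  if (flags === 1) return "monitor";
  return "alert";
}
```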

Security

API Key Isolation

GROQ_API_KEY is a Cloudflare secret. It never appears in the mobile bundle, git history, or client-side code.

Password Hashing

SHA-256 via expo-crypto. Plaintext passwords are never stored or compared directly.
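
The hashing step, sketched with Node's built-in crypto for illustration (the app itself uses expo-crypto's digestStringAsync):

```typescript
import { createHash } from "node:crypto";

// Hash a password with SHA-256; only the hex digest is ever stored.
function hashPassword(password: string): string {
  return createHash("sha256").update(password, "utf8").digest("hex");
}

// Login check: hash the attempt and compare digests, never plaintext.
function verifyPassword(attempt: string, storedHash: string): boolean {
  return hashPassword(attempt) === storedHash;
}
```

Note that unsalted SHA-256 is fast to brute-force; for server-side credential storage, a slow KDF such as bcrypt or scrypt is the usual recommendation.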

Config Gitignore

src/lib/config.ts is gitignored. A template file is committed for new developers to copy.

CORS

Worker includes CORS headers on all responses, allowing requests from the mobile app and web preview.

Tech Stack

Layer              Technology
Mobile App         React Native 0.83, Expo SDK 55, React 19
Navigation         @react-navigation/native (native stack)
Audio              expo-audio, expo-speech, expo-speech-recognition
Crypto             expo-crypto (SHA-256)
Storage            @react-native-async-storage/async-storage
Backend            Cloudflare Workers (TypeScript)
AI Model           Groq API (LLaMA 3.3 70B Versatile)
Signal Processing  Rust + WebAssembly (wasm-pack)
Serialization      Serde (Rust), JSON (TypeScript)

Onboarding

First-time users see a 5-step tutorial before reaching the login screen. The tutorial covers:

  1. Welcome — What Tether does and who it's for
  2. For Doctors — How to create and publish recovery plans
  3. For Patients — How to use AI chat, voice, and messaging
  4. Voice Biomarkers — How voice analysis works and what it detects
  5. Safety First — Tether is not a replacement for emergency care

Onboarding completion is stored in AsyncStorage under the key tether-onboarding-complete. The tutorial only shows once.