Tether
A clinical companion app that connects post-discharge patients with their healthcare providers through AI-powered guidance, voice biomarker analysis, and secure messaging.
How It Works
Tether keeps patients and doctors connected after a hospital discharge. Here is the simple version:
1. The doctor creates a recovery plan
Before the patient leaves the hospital, their doctor opens Tether and fills in a personalized care plan: diagnosis, medications, daily instructions, warning signs to watch for, and a follow-up date. The doctor also picks a communication tone (calm, direct, or reassuring) so the app speaks the way the patient is most comfortable with.
2. The patient gets a personal AI companion
When the patient logs in, they see their plan and can ask questions in plain language — by typing or speaking. The AI only answers using information from the doctor's plan, never guessing or making things up. Every response includes a readability score so caregivers can verify the language is easy enough to understand.
3. Voice biomarkers track recovery
The patient can do a quick voice check — just talk into the phone for a few seconds. Tether analyzes the audio to detect breathing rate, cough patterns, vocal energy, and tremor. These biomarkers are tracked over time so the doctor can spot trends without an in-person visit.
4. The two engines talk to each other
This is what makes Tether different. The voice biomarker results are automatically shared with the AI companion. So if the patient asks "How is my breathing?", the AI already knows the latest voice check showed an elevated breathing rate and can give a relevant, grounded answer — not a generic one.
5. Humans stay in the loop
If the AI cannot fully answer a question, it suggests the patient message their doctor directly. Doctors see these messages in real time and can reply. The AI never replaces the doctor — it bridges the gap between hospital visits so patients are never left guessing alone.
6. Works in the patient's language
Patients can switch between English, Spanish, Hindi, Mandarin, French, and Arabic. The AI responds and speaks in their chosen language, removing a major barrier to understanding medical instructions after discharge.
Architecture
Tether follows a privacy-first architecture. API keys never ship in the mobile bundle — all LLM requests and biomarker analysis are proxied through a Cloudflare Worker at the edge.
Frontend
React Native + Expo SDK 55 with React Navigation native stack. Runs on iOS, Android, and web.
Backend
Cloudflare Worker proxies all API calls. GROQ_API_KEY stored as a Cloudflare secret, never exposed to the client.
Rust WASM
Voice biomarker engine compiled to WebAssembly via wasm-pack. Runs inside the Worker for edge-speed signal processing.
LLM
Groq API with LLaMA 3.3 70B. Graceful fallback chain: Worker → direct → keyword matching.
Quickstart
Prerequisites
- Node.js 18+
- Expo CLI (`npm install -g expo-cli`)
- iOS Simulator (Xcode) or Android Emulator
- Rust + wasm-pack (for biomarker engine development)
Setup
git clone https://github.com/ArhanCodes/tether.git
cd tether
npm install
cp src/lib/config.template.ts src/lib/config.ts
npm run ios
That's it. The config template comes pre-configured with the shared Tether API — no API keys or environment variables needed. The Groq key lives on the Cloudflare Worker and is never exposed to the client.
Run `npx expo start --web` instead to open in a browser.
Worker Setup
# Deploy the Cloudflare Worker
cd worker
npm install
npx wrangler secret put GROQ_API_KEY
npx wrangler deploy
Features
Auth
- Login / signup with role selection (doctor or patient)
- Passwords hashed with SHA-256 (expo-crypto)
- Session persistence — reopening the app skips login
- Terms/privacy consent on signup
Doctor Workspace
- Create/edit patient recovery plans (diagnosis, vitals, meds, instructions, red flags, follow-up)
- Set AI tone (calm, direct, reassuring)
- Publish plans to a specific patient email (validates account exists)
- Draft auto-saves locally
- View and reply to patient messages
Patient Companion
- View the recovery plan assigned to your email
- Vitals summary, daily instructions, red flags
- AI chat powered by Groq with keyword-matching fallback
- Quick prompt buttons ("What should I do today?", "When should I call?", etc.)
- Voice input via speech recognition
- Voice output (text-to-speech on AI replies, toggleable)
- Urgency badges on AI responses (routine / contact-clinician / urgent)
- Flesch-Kincaid readability score on every AI response (grade level badge)
- Handoff suggestion when AI can't fully answer
- Direct messaging to doctor (real-time via Durable Objects)
- Multilingual support (English, Spanish, Hindi, Mandarin, French, Arabic)
- Voice biomarker analysis (breathing rate, cough detection, vocal tremor, voice energy)
- Biomarker status levels (normal / monitor / alert) with alert popup
- Biomarker trending — historical chart showing trends over time
- Engine connection — biomarker data injected into AI context automatically
Onboarding
- 5-step tutorial on first launch (welcome, doctors, patients, voice biomarkers, safety)
- Skip button and dot indicators
- Only shows once (stored in AsyncStorage)
Infrastructure
- Cloudflare Worker proxy — API key stays server-side, never ships in the app
- Durable Objects backend — accounts, plans, messages, biomarker history persist across devices
- Rust WASM biomarker engine runs at the edge inside the worker
- AI requests are routed through the Worker, falling back to direct Groq, then keyword matching
Authentication
Users sign up with a role (Doctor or Patient) and are routed to the appropriate workspace after login. Sessions persist across app restarts via AsyncStorage.
- Password hashing: SHA-256 via `expo-crypto` — plaintext passwords are never stored
- Session restore: On launch, the app checks AsyncStorage for an active session and skips login if found
- Role-based routing: Doctors see the workspace; patients see the companion
- Validation: Email format, password strength (8+ chars with a number), and terms acceptance
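The validation rules above can be sketched as a small helper. This is an illustrative sketch, not the app's actual validator — the function name, error strings, and the exact email regex are assumptions:

```typescript
// Hypothetical sketch of the signup checks described above.
const EMAIL_RE = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;

function validateSignup(email: string, password: string, acceptedTerms: boolean): string[] {
  const errors: string[] = [];
  if (!EMAIL_RE.test(email)) errors.push("Invalid email format");
  // Password strength rule from the README: 8+ characters with at least one number
  if (password.length < 8 || !/\d/.test(password)) {
    errors.push("Password must be 8+ characters and include a number");
  }
  if (!acceptedTerms) errors.push("Terms must be accepted");
  return errors; // empty array means the signup form is valid
}
```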
Doctor Workspace
Doctors create, edit, and publish recovery plans for specific patients. Plans are the foundation of the entire patient experience — the AI, the UI, and the messaging system all derive from the published plan.
Plan Fields
| Field | Description |
|---|---|
| Patient Name & Email | Must match a registered patient account |
| Diagnosis | Primary condition (e.g. post-discharge pneumonia) |
| Vitals | Heart rate, blood pressure, temperature, O2 saturation |
| Medications | Name, dosage, and frequency (one per line) |
| Daily Instructions | What the patient should do each day |
| Red Flags | Symptoms that require immediate medical attention |
| Follow-up | Next appointment or scheduled check-in |
| Tone | Calm, Direct, or Reassuring — controls AI personality |
| Doctor Notes | Private instructions for how AI should phrase answers |
Messaging
Doctors see all patient message threads, sorted by most recent. They can select a thread and reply directly. When a patient sends a message (or the AI suggests a handoff), it appears here.
Patient Companion
The patient screen surfaces the published recovery plan and provides multiple channels for getting help: AI chat, voice input, quick prompts, biomarker analysis, and direct doctor messaging.
Care Plan Display
Vitals, daily instructions, medications, and red flags — all from the doctor's published plan.
AI Chat
Text or voice questions answered by LLaMA 3.3, constrained to the care plan. Includes urgency badges and handoff suggestions.
Voice Biomarkers
Record a 10-15 second voice sample. Rust WASM engine analyzes breathing rate, cough events, pitch variability, and more.
Doctor Messaging
Direct messaging channel for when AI isn't enough. The AI can auto-suggest using this when it lacks certainty.
AI Chat System
The AI is powered by Groq's LLaMA 3.3 70B model, accessed through a Cloudflare Worker proxy. Every response is grounded in the doctor's published care plan.
System Prompt
A dynamic system prompt is built from the care plan that includes the patient's diagnosis, medications, instructions, red flags, and the doctor's preferred tone. The AI is instructed to:
- Only answer from documented care plan data
- Flag red-flag symptoms as "urgent"
- Suggest messaging the doctor when information is missing
- Return structured JSON with message, urgency, supporting points, and handoff flag
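A dynamic prompt of this shape could be assembled as below. This is a minimal sketch: the `CarePlan` field names and the exact JSON schema wording are assumptions, not Tether's actual implementation:

```typescript
// Illustrative care-plan-grounded system prompt builder; field names are assumed.
interface CarePlan {
  diagnosis: string;
  medications: string[];
  instructions: string;
  redFlags: string[];
  tone: "calm" | "direct" | "reassuring";
}

function buildSystemPrompt(plan: CarePlan): string {
  return [
    `You are a recovery companion. Speak in a ${plan.tone} tone.`,
    `Diagnosis: ${plan.diagnosis}`,
    `Medications: ${plan.medications.join("; ")}`,
    `Daily instructions: ${plan.instructions}`,
    `Red flags: ${plan.redFlags.join("; ")}`,
    `Answer ONLY from the care plan above.`,
    `If the patient mentions a red-flag symptom, set urgency to "urgent".`,
    `If information is missing, suggest messaging the doctor.`,
    `Respond as JSON: {"message": string, "urgency": "routine" | "contact-clinician" | "urgent", "points": string[], "handoff": boolean}`,
  ].join("\n");
}
```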
Response Urgency Levels
| Level | Meaning | UI Treatment |
|---|---|---|
| `routine` | Normal informational response | Blue badge |
| `contact-clinician` | AI suggests speaking with doctor | Yellow badge |
| `urgent` | Red flag symptom detected | Red badge + escalation banner |
Fallback Chain
1. Cloudflare Worker → Groq API (primary)
2. Direct Groq API call (if worker fails)
3. Keyword matching (if no API configured)
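The chain above amounts to nested try/catch over three responders. In this sketch, `askWorker`, `askGroqDirect`, and `keywordFallback` are injected stand-ins for the real network calls and offline matcher, so the structure is the point, not the names:

```typescript
// Sketch of the three-step fallback chain with injected responders.
type Responder = (question: string) => Promise<string>;

async function askWithFallback(
  question: string,
  askWorker: Responder,
  askGroqDirect: Responder,
  keywordFallback: (q: string) => string,
): Promise<string> {
  try {
    return await askWorker(question); // 1. Cloudflare Worker -> Groq (primary)
  } catch {
    try {
      return await askGroqDirect(question); // 2. Direct Groq API call
    } catch {
      return keywordFallback(question); // 3. Offline keyword matching
    }
  }
}
```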
Voice Biomarkers
Tether's biomarker system records a short voice sample from the patient, extracts PCM audio data, and sends it to a Rust WASM engine running on the Cloudflare Worker for real-time signal processing.
How It Works
- Patient taps "Start Voice Check" — `expo-audio` begins recording in WAV/PCM format at 16kHz
- Patient speaks naturally for 10-15 seconds, then taps "Stop & Analyze"
- Recording is read as an ArrayBuffer, PCM samples extracted from WAV headers
- Samples sent to Worker's `/analyze` endpoint as JSON
- Rust WASM engine processes samples and returns a `BiomarkerReport`
- Results displayed as a card with status badge (normal / monitor / alert)
- Report saved to Durable Objects for historical trending
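The PCM-extraction step can be sketched as below. This minimal version assumes a canonical 44-byte RIFF/WAVE header; a production parser should walk the RIFF chunks rather than hardcode the offset:

```typescript
// Pull 16-bit PCM samples out of a WAV ArrayBuffer (44-byte header assumed).
function extractPcmSamples(wav: ArrayBuffer): Int16Array {
  const HEADER_BYTES = 44; // standard header size for simple PCM WAV files
  const view = new DataView(wav, HEADER_BYTES);
  const samples = new Int16Array(view.byteLength / 2);
  for (let i = 0; i < samples.length; i++) {
    samples[i] = view.getInt16(i * 2, true); // WAV data is little-endian
  }
  return samples;
}
```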
Biomarker Trending
Every biomarker report is stored server-side with a timestamp. The patient's biomarker card shows a trend view of the last 10 readings with bar charts for breathing rate, voice energy, and cough events. Alert/monitor/normal counts are summarized as colored pills. This turns a single snapshot into a longitudinal monitoring system that can detect deterioration over days.
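The status-count pills described above reduce to a windowed tally over the stored history. A sketch, assuming each stored report carries a `status` field (the actual report schema may differ):

```typescript
// Summarize the last N biomarker readings into status counts for the pills.
type Status = "normal" | "monitor" | "alert";

function summarizeTrend(history: { status: Status }[], window = 10): Record<Status, number> {
  const recent = history.slice(-window); // only the most recent readings
  const counts: Record<Status, number> = { normal: 0, monitor: 0, alert: 0 };
  for (const report of recent) counts[report.status]++;
  return counts;
}
```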
Engine Connection
Tether's two AI engines — NLP (Groq LLM) and Bio-Acoustic (Rust WASM) — share context automatically:
- The latest biomarker report is injected into the AI system prompt before every chat request
- When the patient asks "how am I doing?", the AI references actual biomarker readings (breathing rate, cough events, energy levels)
- If biomarkers are in "alert" status, the AI proactively warns the patient and recommends contacting their care team
- One engine listens to the body, the other explains what it means in plain language
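Mechanically, the connection is prompt injection: the latest report is appended to the system prompt before each chat request. In this sketch the `BiomarkerSummary` field names are illustrative, not the actual `BiomarkerReport` schema:

```typescript
// Append the latest voice-check results to the AI system prompt.
interface BiomarkerSummary {
  breathingRateBpm: number;
  coughEvents: number;
  energy: number;
  status: "normal" | "monitor" | "alert";
}

function withBiomarkerContext(systemPrompt: string, report: BiomarkerSummary | null): string {
  if (!report) return systemPrompt; // no voice check yet: prompt unchanged
  let context =
    `\nLatest voice check: breathing ${report.breathingRateBpm} BPM, ` +
    `${report.coughEvents} cough events, energy ${report.energy.toFixed(2)} (status: ${report.status}).`;
  if (report.status === "alert") {
    context += `\nProactively warn the patient and recommend contacting their care team.`;
  }
  return systemPrompt + context;
}
```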
Readability Scoring
Every AI response is scored using the Flesch-Kincaid Grade Level formula. A badge on each message shows the grade level (e.g., "Grade 4.2 - Very Easy"), making the health-literacy claim measurable:
- Grade 0-5: Very Easy — 5th grader can understand
- Grade 6-8: Easy — middle school level
- Grade 9-12: Moderate — high school level
- Grade 13+: Complex — college level (AI is prompted to stay below 6)
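The formula itself is `0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59`. A sketch of the scorer follows; the syllable counter is a common vowel-group heuristic, not a dictionary-accurate one, so it may differ slightly from Tether's implementation:

```typescript
// Approximate syllable count: runs of vowels, minimum one per word.
function countSyllables(word: string): number {
  const groups = word.toLowerCase().match(/[aeiouy]+/g);
  return Math.max(1, groups ? groups.length : 1);
}

function fleschKincaidGrade(text: string): number {
  const sentences = Math.max(1, (text.match(/[.!?]+/g) || []).length);
  const words = text.split(/\s+/).filter(Boolean);
  const syllables = words.reduce((sum, w) => sum + countSyllables(w.replace(/[^a-z]/gi, "")), 0);
  const n = Math.max(1, words.length);
  // FK Grade = 0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
  return 0.39 * (n / sentences) + 11.8 * (syllables / n) - 15.59;
}
```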
Multilingual Support
Patients can select their preferred language from: English, Spanish, Hindi, Mandarin, French, or Arabic. The language preference is stored server-side and affects:
- AI chat responses — the system prompt instructs the LLM to respond in the selected language at a 5th grade reading level
- Voice output — text-to-speech uses the correct language code
- Persistence — the setting syncs across devices via Durable Objects
Cloudflare Worker
The Worker serves as the secure API proxy and data backend. It exposes AI endpoints and a full data API backed by Durable Objects:
API Endpoints
| Endpoint | Method | Description |
|---|---|---|
| `/chat` | POST | Forwards chat messages to Groq API with the GROQ_API_KEY secret |
| `/analyze` | POST | Receives PCM audio samples, runs Rust WASM biomarker analysis, returns report |
| `/api/signup` | POST | Create a new account (name, email, password, role) |
| `/api/login` | POST | Authenticate and return user profile |
| `/api/plans` | GET/POST | Retrieve or publish care plans |
| `/api/messages` | GET/POST | Doctor-patient messaging |
| `/api/biomarkers` | GET/POST | Store and retrieve biomarker history |
| `/api/user/language` | POST | Update patient language preference |
| `/api/users` | GET | List users (password hashes excluded) |
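A hypothetical client call to the `/chat` endpoint might look like the following. The base URL and the request-body shape are assumptions for illustration — consult the Worker source for the real contract:

```typescript
// Build a POST request for the Worker's /chat proxy endpoint.
const WORKER_BASE = "https://tether-worker.example.workers.dev"; // placeholder URL

function buildChatRequest(
  question: string,
  systemPrompt: string,
): { url: string; init: { method: string; headers: Record<string, string>; body: string } } {
  return {
    url: `${WORKER_BASE}/chat`,
    init: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        messages: [
          { role: "system", content: systemPrompt },
          { role: "user", content: question },
        ],
      }),
    },
  };
}

// Usage: const { url, init } = buildChatRequest("How is my breathing?", prompt);
//        const reply = await fetch(url, init).then((r) => r.json());
```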
Durable Objects Backend
All application data (accounts, plans, messages, biomarker history) is stored in a Cloudflare Durable Object (TetherData). This replaces the previous AsyncStorage-only approach and provides:
- Cross-device sync — a doctor publishes a plan on their laptop, the patient sees it on their phone instantly
- Strong consistency — single-instance guarantee means no stale reads across regions
- Edge persistence — data persists in Cloudflare's global network with automatic replication
- Privacy — password hashes are stored server-side (SHA-256), never exposed to clients
The DO seeds itself with starter accounts on first access. AsyncStorage is only used for local session state (which user is logged in on this device).
Rust WASM Engine
The biomarker engine is written in Rust, compiled to WebAssembly via wasm-pack, and loaded as an ES module inside the Cloudflare Worker. This gives near-native signal processing performance at the edge.
Entry Point
pub fn analyze_audio(samples_i16: &[i16], sample_rate: u32) -> String
Accepts raw PCM samples and sample rate. Returns a JSON-encoded BiomarkerReport.
Signal Processing Pipeline
- RMS Energy — Root mean square of normalized samples. Detects fatigue (low energy)
- Zero-Crossing Rate — Frequency of sign changes. Detects breathy/labored speech
- Breathing Rate — Low-pass filtered energy envelope, peak counting. Estimates breaths per minute
- Pitch Variability — Autocorrelation-based pitch detection, coefficient of variation. Detects vocal tremor
- Cough Detection — Sharp energy spikes (>3x mean) followed by silence. Counts distinct cough events
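The first two stages are standard DSP. A TypeScript sketch (the production code is Rust compiled to WASM) over normalized PCM samples in [-1, 1]:

```typescript
// RMS energy: overall loudness of the sample; low values suggest fatigue.
function rmsEnergy(samples: Float32Array): number {
  let sum = 0;
  for (let i = 0; i < samples.length; i++) sum += samples[i] * samples[i];
  return Math.sqrt(sum / samples.length);
}

// Zero-crossing rate: fraction of adjacent sample pairs that change sign;
// high values indicate breathy or noisy (labored) speech.
function zeroCrossingRate(samples: Float32Array): number {
  let crossings = 0;
  for (let i = 1; i < samples.length; i++) {
    if ((samples[i - 1] >= 0) !== (samples[i] >= 0)) crossings++;
  }
  return crossings / (samples.length - 1);
}
```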
Building
cd biomarker
wasm-pack build --target web --out-dir ../worker/wasm
# Output: tether_biomarker_bg.wasm (~49KB) + JS bindings
Biomarker Metrics Reference
| Metric | Range | Flag Threshold | Clinical Significance |
|---|---|---|---|
| Energy (RMS) | 0 – 1 | < 0.02 | Low energy suggests fatigue or weakness |
| Zero-Crossing Rate | 0 – 1 | > 0.3 | High ZCR indicates breathy or labored speech |
| Breathing Rate | BPM | > 24 | Tachypnea — elevated respiratory rate |
| Pitch Variability | CV | > 0.35 | High variation suggests vocal tremor |
| Cough Events | Count | ≥ 3 | Frequent coughing in a short sample |
Status Logic
| Flags Triggered | Status | Meaning |
|---|---|---|
| 0 | Normal | No concerning patterns detected |
| 1 | Monitor | One metric outside normal range — worth watching |
| 2+ | Alert | Multiple flags — consider contacting care team |
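The two tables above compose into a simple flag counter plus a status map. A sketch in TypeScript (the real logic lives in the Rust engine, and the `Metrics` shape here is an assumption):

```typescript
interface Metrics {
  energy: number;           // RMS, 0-1
  zcr: number;              // zero-crossing rate, 0-1
  breathingRateBpm: number; // breaths per minute
  pitchCv: number;          // pitch coefficient of variation
  coughEvents: number;      // distinct cough count
}

// Apply the flag thresholds from the metrics reference table.
function countFlags(m: Metrics): number {
  let flags = 0;
  if (m.energy < 0.02) flags++;         // fatigue or weakness
  if (m.zcr > 0.3) flags++;             // breathy or labored speech
  if (m.breathingRateBpm > 24) flags++; // tachypnea
  if (m.pitchCv > 0.35) flags++;        // vocal tremor
  if (m.coughEvents >= 3) flags++;      // frequent coughing
  return flags;
}

// Map flag count to the status levels from the status table.
function statusFor(flags: number): "normal" | "monitor" | "alert" {
  return flags === 0 ? "normal" : flags === 1 ? "monitor" : "alert";
}
```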
Security
API Key Isolation
GROQ_API_KEY is a Cloudflare secret. It never appears in the mobile bundle, git history, or client-side code.
Password Hashing
SHA-256 via expo-crypto. Plaintext passwords are never stored or compared directly.
Config Gitignore
src/lib/config.ts is gitignored. A template file is committed for new developers to copy.
CORS
Worker includes CORS headers on all responses, allowing requests from the mobile app and web preview.
Tech Stack
| Layer | Technology |
|---|---|
| Mobile App | React Native 0.83, Expo SDK 55, React 19 |
| Navigation | @react-navigation/native (native stack) |
| Audio | expo-audio, expo-speech, expo-speech-recognition |
| Crypto | expo-crypto (SHA-256) |
| Storage | @react-native-async-storage/async-storage |
| Backend | Cloudflare Workers (TypeScript) |
| AI Model | Groq API — LLaMA 3.3 70B Versatile |
| Signal Processing | Rust + WebAssembly (wasm-pack) |
| Serialization | Serde (Rust), JSON (TypeScript) |
Onboarding
First-time users see a 5-step tutorial before reaching the login screen. The tutorial covers:
- Welcome — What Tether does and who it's for
- For Doctors — How to create and publish recovery plans
- For Patients — How to use AI chat, voice, and messaging
- Voice Biomarkers — How voice analysis works and what it detects
- Safety First — Tether is not a replacement for emergency care
Onboarding completion is stored in AsyncStorage under the key tether-onboarding-complete. The tutorial only shows once.