Tether
A clinical companion app that connects post-discharge patients with their healthcare providers through AI-powered guidance, voice biomarker analysis, and secure messaging.
Architecture
Tether follows a privacy-first architecture. API keys never ship in the mobile bundle — all LLM requests and biomarker analysis are proxied through a Cloudflare Worker at the edge.
Frontend
React Native + Expo SDK 55 with React Navigation native stack. Runs on iOS, Android, and web.
Backend
Cloudflare Worker proxies all API calls. GROQ_API_KEY stored as a Cloudflare secret, never exposed to the client.
Rust WASM
Voice biomarker engine compiled to WebAssembly via wasm-pack. Runs inside the Worker for edge-speed signal processing.
LLM
Groq API with LLaMA 3.3 70B. Graceful fallback chain: Worker → direct → keyword matching.
Quickstart
Prerequisites
- Node.js 18+
- Expo CLI (`npm install -g expo-cli`)
- iOS Simulator (Xcode) or Android Emulator
- Rust + wasm-pack (for biomarker engine development)
Setup
```bash
git clone https://github.com/ArhanCodes/tether.git
cd tether
npm install
cp src/lib/config.template.ts src/lib/config.ts
npm run ios
```
That's it. The config template comes pre-configured with the shared Tether API — no API keys or environment variables needed. The Groq key lives on the Cloudflare Worker and is never exposed to the client.
To open in a browser instead, run `npx expo start --web`.
Worker Setup
```bash
# Deploy the Cloudflare Worker
cd worker
npm install
npx wrangler secret put GROQ_API_KEY
npx wrangler deploy
```
Authentication
Users sign up with a role (Doctor or Patient) and are routed to the appropriate workspace after login. Sessions persist across app restarts via AsyncStorage.
- Password hashing: SHA-256 via `expo-crypto` — plaintext passwords are never stored
- Session restore: On launch, the app checks AsyncStorage for an active session and skips login if found
- Role-based routing: Doctors see the workspace; patients see the companion
- Validation: Email format, password strength (8+ chars with a number), and terms acceptance
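The validation rules above can be sketched as two small helpers. The names `isValidEmail` and `isStrongPassword` are illustrative, not the app's actual API:

```typescript
// Sketch of the signup validation rules: basic email shape and the
// password policy (8+ characters with at least one number).

function isValidEmail(email: string): boolean {
  // Basic shape check: one "@", a dot in the domain, no whitespace.
  return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email);
}

function isStrongPassword(password: string): boolean {
  // 8+ characters and at least one digit, per the rules above.
  return password.length >= 8 && /\d/.test(password);
}
```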
Doctor Workspace
Doctors create, edit, and publish recovery plans for specific patients. Plans are the foundation of the entire patient experience — the AI, the UI, and the messaging system all derive from the published plan.
Plan Fields
| Field | Description |
|---|---|
| Patient Name & Email | Must match a registered patient account |
| Diagnosis | Primary condition (e.g. post-discharge pneumonia) |
| Vitals | Heart rate, blood pressure, temperature, O2 saturation |
| Medications | Name, dosage, and frequency (one per line) |
| Daily Instructions | What the patient should do each day |
| Red Flags | Symptoms that require immediate medical attention |
| Follow-up | Next appointment or scheduled check-in |
| Tone | Calm, Direct, or Reassuring — controls AI personality |
| Doctor Notes | Private instructions for how AI should phrase answers |
Messaging
Doctors see all patient message threads, sorted by most recent. They can select a thread and reply directly. When a patient sends a message (or the AI suggests a handoff), it appears here.
Patient Companion
The patient screen surfaces the published recovery plan and provides multiple channels for getting help: AI chat, voice input, quick prompts, biomarker analysis, and direct doctor messaging.
Care Plan Display
Vitals, daily instructions, medications, and red flags — all from the doctor's published plan.
AI Chat
Text or voice questions answered by LLaMA 3.3, constrained to the care plan. Includes urgency badges and handoff suggestions.
Voice Biomarkers
Record a 10-15 second voice sample. Rust WASM engine analyzes breathing rate, cough events, pitch variability, and more.
Doctor Messaging
Direct messaging channel for when AI isn't enough. The AI can auto-suggest using this when it lacks certainty.
AI Chat System
The AI is powered by Groq's LLaMA 3.3 70B model, accessed through a Cloudflare Worker proxy. Every response is grounded in the doctor's published care plan.
System Prompt
A dynamic system prompt is built from the care plan, including the patient's diagnosis, medications, instructions, red flags, and the doctor's preferred tone. The AI is instructed to:
- Only answer from documented care plan data
- Flag red-flag symptoms as "urgent"
- Suggest messaging the doctor when information is missing
- Return structured JSON with message, urgency, supporting points, and handoff flag
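A minimal sketch of how such a prompt might be assembled. The field names mirror the Plan Fields table above, but the exact wording and structure in the app may differ:

```typescript
// Illustrative system-prompt builder. Field names follow the plan fields
// table; the real app's prompt text is not shown here.

interface CarePlan {
  diagnosis: string;
  medications: string[];
  instructions: string;
  redFlags: string[];
  tone: "Calm" | "Direct" | "Reassuring";
  doctorNotes?: string;
}

function buildSystemPrompt(plan: CarePlan): string {
  return [
    `You are a recovery companion. Tone: ${plan.tone}.`,
    `Diagnosis: ${plan.diagnosis}`,
    `Medications: ${plan.medications.join("; ")}`,
    `Daily instructions: ${plan.instructions}`,
    `Red flags (mark these "urgent"): ${plan.redFlags.join("; ")}`,
    plan.doctorNotes ? `Doctor notes: ${plan.doctorNotes}` : "",
    `Only answer from the plan above. If information is missing, suggest messaging the doctor.`,
    `Respond as JSON with message, urgency, supportingPoints, and handoff fields.`,
  ].filter(Boolean).join("\n");
}
```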
Response Urgency Levels
| Level | Meaning | UI Treatment |
|---|---|---|
| routine | Normal informational response | Blue badge |
| contact-clinician | AI suggests speaking with doctor | Yellow badge |
| urgent | Red flag symptom detected | Red badge + escalation banner |
Fallback Chain
1. Cloudflare Worker → Groq API (primary)
2. Direct Groq API call (if worker fails)
3. Keyword matching (if no API configured)
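The chain above can be sketched with each step injected as a function, so the fallback logic itself is testable. Function names here are illustrative, not the app's actual API:

```typescript
// Three-step fallback: Worker → direct Groq → offline keyword matching.
// Each step is passed in, so the chain is independent of any network code.

type ChatFn = (question: string) => Promise<string>;

async function askWithFallback(
  question: string,
  viaWorker: ChatFn,                   // 1. Cloudflare Worker → Groq (primary)
  viaDirect: ChatFn,                   // 2. Direct Groq API call
  viaKeywords: (q: string) => string,  // 3. Local keyword matching
): Promise<string> {
  try {
    return await viaWorker(question);
  } catch {
    try {
      return await viaDirect(question);
    } catch {
      return viaKeywords(question);
    }
  }
}
```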
Voice Biomarkers
Tether's biomarker system records a short voice sample from the patient, extracts PCM audio data, and sends it to a Rust WASM engine running on the Cloudflare Worker for real-time signal processing.
How It Works
1. Patient taps "Start Voice Check" — `expo-audio` begins recording in WAV/PCM format at 16kHz
2. Patient speaks naturally for 10-15 seconds, then taps "Stop & Analyze"
3. The recording is read as an ArrayBuffer, and the PCM samples after the WAV headers are extracted
4. Samples are sent to the Worker's `/analyze` endpoint as JSON
5. The Rust WASM engine processes the samples and returns a `BiomarkerReport`
6. Results are displayed as a card with a status badge (normal / monitor / alert)
Cloudflare Worker
The Worker serves as the secure API proxy between the mobile app and external services. It exposes two endpoints:
API Endpoints
| Endpoint | Method | Description |
|---|---|---|
| /chat | POST | Forwards chat messages to Groq API with the GROQ_API_KEY secret |
| /analyze | POST | Receives PCM audio samples, runs Rust WASM biomarker analysis, returns report |
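A hypothetical sketch of the Worker's routing and CORS handling, using the two endpoint paths from the table above. Handler bodies are stubbed; the real Worker forwards `/chat` to Groq with the secret key and runs the WASM engine for `/analyze`:

```typescript
// Minimal router sketch: CORS preflight plus the two POST endpoints.
// Stubbed responses stand in for the Groq call and the WASM engine.

const CORS = {
  "Access-Control-Allow-Origin": "*",
  "Access-Control-Allow-Methods": "POST, OPTIONS",
  "Access-Control-Allow-Headers": "Content-Type",
};

async function route(request: Request): Promise<Response> {
  const { pathname } = new URL(request.url);
  if (request.method === "OPTIONS") {
    return new Response(null, { headers: CORS }); // CORS preflight
  }
  if (request.method === "POST" && pathname === "/chat") {
    // Real Worker: forward the message body to Groq using GROQ_API_KEY.
    return new Response(JSON.stringify({ endpoint: "chat" }), { headers: CORS });
  }
  if (request.method === "POST" && pathname === "/analyze") {
    // Real Worker: run the Rust WASM biomarker engine on the PCM samples.
    return new Response(JSON.stringify({ endpoint: "analyze" }), { headers: CORS });
  }
  return new Response("Not found", { status: 404, headers: CORS });
}
```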
Chat Request
```http
POST /chat
Content-Type: application/json

{
  "messages": [
    { "role": "system", "content": "..." },
    { "role": "user", "content": "What should I do today?" }
  ]
}
```
Analyze Request
```http
POST /analyze
Content-Type: application/json

{
  "samples": [0, 120, -340, ...],  // Int16 PCM samples
  "sampleRate": 16000
}
```
Rust WASM Engine
The biomarker engine is written in Rust, compiled to WebAssembly via wasm-pack, and loaded as an ES module inside the Cloudflare Worker. This gives near-native signal processing performance at the edge.
Entry Point
```rust
pub fn analyze_audio(samples_i16: &[i16], sample_rate: u32) -> String
```
Accepts raw PCM samples and sample rate. Returns a JSON-encoded BiomarkerReport.
Signal Processing Pipeline
- RMS Energy — Root mean square of normalized samples. Detects fatigue (low energy)
- Zero-Crossing Rate — Frequency of sign changes. Detects breathy/labored speech
- Breathing Rate — Low-pass filtered energy envelope, peak counting. Estimates breaths per minute
- Pitch Variability — Autocorrelation-based pitch detection, coefficient of variation. Detects vocal tremor
- Cough Detection — Sharp energy spikes (>3x mean) followed by silence. Counts distinct cough events
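For illustration, the first two metrics above can be sketched in TypeScript as follows. The production engine computes these in Rust inside the Worker's WASM module; the formulas here are standard definitions, not the engine's exact code:

```typescript
// RMS energy of samples normalized to [-1, 1]. Low values suggest
// low vocal energy (fatigue).
function rmsEnergy(samples: Int16Array): number {
  let sum = 0;
  for (let i = 0; i < samples.length; i++) {
    const x = samples[i] / 32768; // normalize Int16 to [-1, 1]
    sum += x * x;
  }
  return Math.sqrt(sum / samples.length);
}

// Fraction of adjacent sample pairs whose signs differ. High values
// indicate breathy or labored speech.
function zeroCrossingRate(samples: Int16Array): number {
  let crossings = 0;
  for (let i = 1; i < samples.length; i++) {
    if ((samples[i] >= 0) !== (samples[i - 1] >= 0)) crossings++;
  }
  return crossings / (samples.length - 1);
}
```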
Building
```bash
cd biomarker
wasm-pack build --target web --out-dir ../worker/wasm
# Output: tether_biomarker_bg.wasm (~49KB) + JS bindings
```
Biomarker Metrics Reference
| Metric | Range | Flag Threshold | Clinical Significance |
|---|---|---|---|
| Energy (RMS) | 0 – 1 | < 0.02 | Low energy suggests fatigue or weakness |
| Zero-Crossing Rate | 0 – 1 | > 0.3 | High ZCR indicates breathy or labored speech |
| Breathing Rate | BPM | > 24 | Tachypnea — elevated respiratory rate |
| Pitch Variability | CV | > 0.35 | High variation suggests vocal tremor |
| Cough Events | Count | ≥ 3 | Frequent coughing in a short sample |
Status Logic
| Flags Triggered | Status | Meaning |
|---|---|---|
| 0 | Normal | No concerning patterns detected |
| 1 | Monitor | One metric outside normal range — worth watching |
| 2+ | Alert | Multiple flags — consider contacting care team |
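The status table above reduces to a small mapping from flag count to status. The function and type names are illustrative:

```typescript
// Map the number of triggered metric flags to an overall status,
// following the status table: 0 → normal, 1 → monitor, 2+ → alert.

type BiomarkerStatus = "normal" | "monitor" | "alert";

function statusFromFlags(flagsTriggered: number): BiomarkerStatus {
  if (flagsTriggered >= 2) return "alert";    // multiple flags: contact care team
  if (flagsTriggered === 1) return "monitor"; // one metric outside normal range
  return "normal";                            // no concerning patterns
}
```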
Security
API Key Isolation
GROQ_API_KEY is a Cloudflare secret. It never appears in the mobile bundle, git history, or client-side code.
Password Hashing
SHA-256 via expo-crypto. Plaintext passwords are never stored or compared directly.
Config Gitignore
src/lib/config.ts is gitignored. A template file is committed for new developers to copy.
CORS
Worker includes CORS headers on all responses, allowing requests from the mobile app and web preview.
Tech Stack
| Layer | Technology |
|---|---|
| Mobile App | React Native 0.83, Expo SDK 55, React 19 |
| Navigation | @react-navigation/native (native stack) |
| Audio | expo-audio, expo-speech, expo-speech-recognition |
| Crypto | expo-crypto (SHA-256) |
| Storage | @react-native-async-storage/async-storage |
| Backend | Cloudflare Workers (TypeScript) |
| AI Model | Groq API — LLaMA 3.3 70B Versatile |
| Signal Processing | Rust + WebAssembly (wasm-pack) |
| Serialization | Serde (Rust), JSON (TypeScript) |
Onboarding
First-time users see a 5-step tutorial before reaching the login screen. The tutorial covers:
- Welcome — What Tether does and who it's for
- For Doctors — How to create and publish recovery plans
- For Patients — How to use AI chat, voice, and messaging
- Voice Biomarkers — How voice analysis works and what it detects
- Safety First — Tether is not a replacement for emergency care
Onboarding completion is stored in AsyncStorage under the key `tether-onboarding-complete`. The tutorial only shows once.
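A sketch of the one-time gate, with the storage interface injected so the logic is testable outside React Native (the app itself uses AsyncStorage). The function names are illustrative:

```typescript
// One-time onboarding gate: show the tutorial only while the completion
// key is absent, then persist it so the tutorial never shows again.

interface KeyValueStore {
  getItem(key: string): Promise<string | null>;
  setItem(key: string, value: string): Promise<void>;
}

const ONBOARDING_KEY = "tether-onboarding-complete";

async function shouldShowOnboarding(store: KeyValueStore): Promise<boolean> {
  return (await store.getItem(ONBOARDING_KEY)) === null;
}

async function completeOnboarding(store: KeyValueStore): Promise<void> {
  await store.setItem(ONBOARDING_KEY, "true");
}
```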