Tether

A clinical companion app that connects post-discharge patients with their healthcare providers through AI-powered guidance, voice biomarker analysis, and secure messaging.

React Native · Expo SDK 55 · Cloudflare Workers · Rust WASM · Groq LLM

Architecture

Tether follows a privacy-first architecture. API keys never ship in the mobile bundle — all LLM requests and biomarker analysis are proxied through a Cloudflare Worker at the edge.

┌─────────────────────────────────────┐
│          Mobile App (Expo)          │
│                                     │
│  Auth ─→ Doctor / Patient screens   │
│  Voice ─→ expo-speech-recognition   │
│  Audio ─→ expo-audio recording      │
│  Storage ─→ AsyncStorage (local)    │
└──────────┬──────────┬───────────────┘
           │          │
        AI chat  Voice samples
           │          │
           ▼          ▼
┌─────────────────────────────────────┐
│      Cloudflare Worker (Edge)       │
│                                     │
│  /chat ──→ Groq API (LLaMA 3.3)     │
│  /analyze → Rust WASM biomarker     │
│                                     │
│  API keys stored as CF secrets      │
└─────────────────────────────────────┘

Frontend

React Native + Expo SDK 55 with React Navigation native stack. Runs on iOS, Android, and web.


Backend

Cloudflare Worker proxies all API calls. GROQ_API_KEY stored as a Cloudflare secret, never exposed to the client.


Rust WASM

Voice biomarker engine compiled to WebAssembly via wasm-pack. Runs inside the Worker for edge-speed signal processing.


LLM

Groq API with LLaMA 3.3 70B. Graceful fallback chain: Worker → direct → keyword matching.

Quickstart

Prerequisites

  • Node.js 18+
  • Expo CLI (bundled with the project's expo dependency; invoked via npx expo)
  • iOS Simulator (Xcode) or Android Emulator
  • Rust + wasm-pack (for biomarker engine development)

Setup

git clone https://github.com/ArhanCodes/tether.git
cd tether
npm install
cp src/lib/config.template.ts src/lib/config.ts
npm run ios

That's it. The config template comes pre-configured with the shared Tether API — no API keys or environment variables needed. The Groq key lives on the Cloudflare Worker and is never exposed to the client.

Web preview: Run npx expo start --web instead to open in a browser.

Worker Setup

# Deploy the Cloudflare Worker
cd worker
npm install
npx wrangler secret put GROQ_API_KEY
npx wrangler deploy

Authentication

Users sign up with a role (Doctor or Patient) and are routed to the appropriate workspace after login. Sessions persist across app restarts via AsyncStorage.

  • Password hashing: SHA-256 via expo-crypto — plaintext passwords are never stored
  • Session restore: On launch, the app checks AsyncStorage for an active session and skips login if found
  • Role-based routing: Doctors see the workspace; patients see the companion
  • Validation: Email format, password strength (8+ chars with a number), and terms acceptance
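The validation rules above can be sketched as small pure functions. This is an illustrative sketch, not Tether's actual helpers; the function names are hypothetical:

```typescript
// Illustrative validation helpers mirroring the signup rules above.
// Names (isValidEmail, isStrongPassword, canSignUp) are hypothetical.

// Email format: a pragmatic "something@something.tld" check.
function isValidEmail(email: string): boolean {
  return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email);
}

// Password strength: at least 8 characters and at least one digit.
function isStrongPassword(password: string): boolean {
  return password.length >= 8 && /\d/.test(password);
}

// Signup requires valid email, strong password, and terms acceptance.
function canSignUp(email: string, password: string, acceptedTerms: boolean): boolean {
  return isValidEmail(email) && isStrongPassword(password) && acceptedTerms;
}
```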

Doctor Workspace

Doctors create, edit, and publish recovery plans for specific patients. Plans are the foundation of the entire patient experience — the AI, the UI, and the messaging system all derive from the published plan.

Plan Fields

| Field | Description |
| --- | --- |
| Patient Name & Email | Must match a registered patient account |
| Diagnosis | Primary condition (e.g. post-discharge pneumonia) |
| Vitals | Heart rate, blood pressure, temperature, O2 saturation |
| Medications | Name, dosage, and frequency (one per line) |
| Daily Instructions | What the patient should do each day |
| Red Flags | Symptoms that require immediate medical attention |
| Follow-up | Next appointment or scheduled check-in |
| Tone | Calm, Direct, or Reassuring — controls AI personality |
| Doctor Notes | Private instructions for how AI should phrase answers |
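The plan fields map naturally onto a record type. The interface below is a sketch for orientation, not the app's actual type definitions; field names and vital units are assumptions:

```typescript
// Illustrative shape of a recovery plan record; names are hypothetical.
type PlanTone = "calm" | "direct" | "reassuring";

interface RecoveryPlan {
  patientName: string;
  patientEmail: string;      // must match a registered patient account
  diagnosis: string;         // e.g. "post-discharge pneumonia"
  vitals: {
    heartRate: number;       // bpm
    bloodPressure: string;   // e.g. "120/80"
    temperature: number;     // unit per clinic convention
    o2Saturation: number;    // percent
  };
  medications: string[];     // "name, dosage, frequency" (one per line)
  dailyInstructions: string;
  redFlags: string[];        // symptoms requiring immediate attention
  followUp: string;          // next appointment or scheduled check-in
  tone: PlanTone;            // controls AI personality
  doctorNotes: string;       // private phrasing guidance for the AI
}

const examplePlan: RecoveryPlan = {
  patientName: "Jane Doe",
  patientEmail: "jane@example.com",
  diagnosis: "post-discharge pneumonia",
  vitals: { heartRate: 72, bloodPressure: "120/80", temperature: 36.8, o2Saturation: 97 },
  medications: ["Amoxicillin, 500mg, 3x daily"],
  dailyInstructions: "Short walks twice a day; rest between.",
  redFlags: ["shortness of breath at rest", "fever above 39C"],
  followUp: "Clinic visit in 7 days",
  tone: "calm",
  doctorNotes: "Keep answers brief and encouraging.",
};
```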

Messaging

Doctors see all patient message threads, sorted by most recent. They can select a thread and reply directly. When a patient sends a message (or the AI suggests a handoff), it appears here.

Patient Companion

The patient screen surfaces the published recovery plan and provides multiple channels for getting help: AI chat, voice input, quick prompts, biomarker analysis, and direct doctor messaging.

Care Plan Display

Vitals, daily instructions, medications, and red flags — all from the doctor's published plan.

AI Chat

Text or voice questions answered by LLaMA 3.3, constrained to the care plan. Includes urgency badges and handoff suggestions.

Voice Biomarkers

Record a 10-15 second voice sample. Rust WASM engine analyzes breathing rate, cough events, pitch variability, and more.

Doctor Messaging

Direct messaging channel for when AI isn't enough. The AI can auto-suggest using this when it lacks certainty.

AI Chat System

The AI is powered by Groq's LLaMA 3.3 70B model, accessed through a Cloudflare Worker proxy. Every response is grounded in the doctor's published care plan.

System Prompt

A dynamic system prompt is built from the care plan, carrying the patient's diagnosis, medications, instructions, red flags, and the doctor's preferred tone. The AI is instructed to:

  • Only answer from documented care plan data
  • Flag red-flag symptoms as "urgent"
  • Suggest messaging the doctor when information is missing
  • Return structured JSON with message, urgency, supporting points, and handoff flag
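The instructions above can be sketched as a prompt builder plus a defensive parser for the structured JSON reply. The prompt wording and field names here are illustrative, not the app's actual strings:

```typescript
// Illustrative: build a system prompt from plan data, then parse the
// structured JSON the model is instructed to return.
interface AiReply {
  message: string;
  urgency: "routine" | "contact-clinician" | "urgent";
  supportingPoints: string[];
  suggestHandoff: boolean;
}

function buildSystemPrompt(plan: {
  diagnosis: string;
  medications: string[];
  instructions: string;
  redFlags: string[];
  tone: string;
}): string {
  return [
    `You are a recovery companion. Tone: ${plan.tone}.`,
    `Diagnosis: ${plan.diagnosis}.`,
    `Medications: ${plan.medications.join("; ")}.`,
    `Daily instructions: ${plan.instructions}`,
    `Red flags (mark these "urgent"): ${plan.redFlags.join("; ")}.`,
    `Only answer from the data above. If information is missing,`,
    `suggest messaging the doctor. Reply as JSON with keys:`,
    `message, urgency, supportingPoints, suggestHandoff.`,
  ].join("\n");
}

// Parse defensively: malformed output degrades to a handoff suggestion.
function parseAiReply(raw: string): AiReply {
  try {
    const parsed = JSON.parse(raw);
    return {
      message: String(parsed.message ?? ""),
      urgency: ["routine", "contact-clinician", "urgent"].includes(parsed.urgency)
        ? parsed.urgency
        : "contact-clinician",
      supportingPoints: Array.isArray(parsed.supportingPoints) ? parsed.supportingPoints : [],
      suggestHandoff: Boolean(parsed.suggestHandoff),
    };
  } catch {
    return {
      message: "I couldn't process that. Please message your doctor.",
      urgency: "contact-clinician",
      supportingPoints: [],
      suggestHandoff: true,
    };
  }
}
```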

Response Urgency Levels

| Level | Meaning | UI Treatment |
| --- | --- | --- |
| routine | Normal informational response | Blue badge |
| contact-clinician | AI suggests speaking with doctor | Yellow badge |
| urgent | Red flag symptom detected | Red badge + escalation banner |

Fallback Chain

1. Cloudflare Worker → Groq API (primary)
2. Direct Groq API call (if the Worker fails)
3. Keyword matching (if no API is configured)

Safety: The AI never diagnoses, prescribes, or advises outside the doctor's documented scope. Emergency symptoms always trigger an urgent flag with instructions to seek immediate care.
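A minimal sketch of that fallback chain, with the two network transports stubbed as async functions (the real request shapes and keyword rules live in the app):

```typescript
// Illustrative fallback chain: try each transport in order, then fall
// back to keyword matching if every transport fails.
type AskFn = (question: string) => Promise<string>;

async function askWithFallback(
  question: string,
  viaWorker: AskFn, // 1. Cloudflare Worker → Groq (primary)
  viaDirect: AskFn, // 2. direct Groq API call
): Promise<string> {
  for (const transport of [viaWorker, viaDirect]) {
    try {
      return await transport(question);
    } catch {
      // Transport failed; try the next one.
    }
  }
  // 3. Keyword matching as a last resort (rules here are hypothetical).
  const q = question.toLowerCase();
  if (q.includes("medication")) return "Check the medications list in your care plan.";
  if (q.includes("pain")) return "If pain is severe or new, message your doctor.";
  return "I'm not sure. Please message your doctor directly.";
}
```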

Voice Biomarkers

Tether's biomarker system records a short voice sample from the patient, extracts PCM audio data, and sends it to a Rust WASM engine running on the Cloudflare Worker for real-time signal processing.

How It Works

  1. Patient taps "Start Voice Check" — expo-audio begins recording in WAV/PCM format at 16kHz
  2. Patient speaks naturally for 10-15 seconds, then taps "Stop & Analyze"
  3. Recording is read as an ArrayBuffer, PCM samples extracted from WAV headers
  4. Samples sent to Worker's /analyze endpoint as JSON
  5. Rust WASM engine processes samples and returns a BiomarkerReport
  6. Results displayed as a card with status badge (normal / monitor / alert)
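Step 3, pulling Int16 PCM samples out of the WAV container, can be sketched as below. This assumes the canonical 44-byte WAV header; a robust version would walk the RIFF chunks to locate the "data" chunk:

```typescript
// Illustrative: extract Int16 PCM samples from a recorded WAV buffer.
// Assumes the canonical 44-byte header (an assumption, not the app's
// actual parser) and little-endian 16-bit mono PCM.
function extractPcmSamples(wav: ArrayBuffer): Int16Array {
  const HEADER_BYTES = 44;
  return new Int16Array(wav, HEADER_BYTES, (wav.byteLength - HEADER_BYTES) / 2);
}

// JSON payload for the Worker's /analyze endpoint (step 4).
function toAnalyzePayload(samples: Int16Array, sampleRate = 16000): string {
  return JSON.stringify({ samples: Array.from(samples), sampleRate });
}
```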

Cloudflare Worker

The Worker serves as the secure API proxy between the mobile app and external services. It exposes two endpoints:

API Endpoints

| Endpoint | Method | Description |
| --- | --- | --- |
| /chat | POST | Forwards chat messages to Groq API with the GROQ_API_KEY secret |
| /analyze | POST | Receives PCM audio samples, runs Rust WASM biomarker analysis, returns report |

Chat Request

POST /chat
Content-Type: application/json

{
  "messages": [
    { "role": "system", "content": "..." },
    { "role": "user", "content": "What should I do today?" }
  ]
}

Analyze Request

POST /analyze
Content-Type: application/json

{
  "samples": [0, 120, -340, ...],  // Int16 PCM samples
  "sampleRate": 16000
}

Rust WASM Engine

The biomarker engine is written in Rust, compiled to WebAssembly via wasm-pack, and loaded as an ES module inside the Cloudflare Worker. This gives near-native signal processing performance at the edge.

Entry Point

pub fn analyze_audio(samples_i16: &[i16], sample_rate: u32) -> String

Accepts raw PCM samples and sample rate. Returns a JSON-encoded BiomarkerReport.

Signal Processing Pipeline

  • RMS Energy — Root mean square of normalized samples. Detects fatigue (low energy)
  • Zero-Crossing Rate — Frequency of sign changes. Detects breathy/labored speech
  • Breathing Rate — Low-pass filtered energy envelope, peak counting. Estimates breaths per minute
  • Pitch Variability — Autocorrelation-based pitch detection, coefficient of variation. Detects vocal tremor
  • Cough Detection — Sharp energy spikes (>3x mean) followed by silence. Counts distinct cough events
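The first two stages, RMS energy and zero-crossing rate, are simple enough to sketch directly. The engine itself is Rust; this TypeScript rendering is for illustration only:

```typescript
// Illustrative: RMS energy and zero-crossing rate over Int16 PCM,
// mirroring the first two pipeline stages (the real engine is Rust).
function rmsEnergy(samples: Int16Array): number {
  let sumSquares = 0;
  for (const s of samples) {
    const x = s / 32768; // normalize Int16 to roughly [-1, 1)
    sumSquares += x * x;
  }
  return Math.sqrt(sumSquares / samples.length);
}

function zeroCrossingRate(samples: Int16Array): number {
  let crossings = 0;
  for (let i = 1; i < samples.length; i++) {
    if ((samples[i - 1] >= 0) !== (samples[i] >= 0)) crossings++;
  }
  return crossings / (samples.length - 1);
}
```

Silence yields near-zero RMS (the fatigue signal), while a waveform that flips sign every sample yields a zero-crossing rate of 1 (the breathy-speech signal).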

Building

cd biomarker
wasm-pack build --target web --out-dir ../worker/wasm
# Output: tether_biomarker_bg.wasm (~49KB) + JS bindings

Biomarker Metrics Reference

| Metric | Range | Flag Threshold | Clinical Significance |
| --- | --- | --- | --- |
| Energy (RMS) | 0 – 1 | < 0.02 | Low energy suggests fatigue or weakness |
| Zero-Crossing Rate | 0 – 1 | > 0.3 | High ZCR indicates breathy or labored speech |
| Breathing Rate | BPM | > 24 | Tachypnea — elevated respiratory rate |
| Pitch Variability | CV | > 0.35 | High variation suggests vocal tremor |
| Cough Events | Count | ≥ 3 | Frequent coughing in a short sample |

Status Logic

| Flags Triggered | Status | Meaning |
| --- | --- | --- |
| 0 | Normal | No concerning patterns detected |
| 1 | Monitor | One metric outside normal range — worth watching |
| 2+ | Alert | Multiple flags — consider contacting care team |
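The mapping above reduces to one small pure function; a sketch (the function name is illustrative):

```typescript
// Illustrative: map the number of flagged metrics to a report status.
type BiomarkerStatus = "normal" | "monitor" | "alert";

function statusFromFlags(flagCount: number): BiomarkerStatus {
  if (flagCount === 0) return "normal";
  if (flagCount === 1) return "monitor";
  return "alert"; // 2+ flags: consider contacting the care team
}
```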

Security

API Key Isolation

GROQ_API_KEY is a Cloudflare secret. It never appears in the mobile bundle, git history, or client-side code.

Password Hashing

SHA-256 via expo-crypto. Plaintext passwords are never stored or compared directly.

Config Gitignore

src/lib/config.ts is gitignored. A template file is committed for new developers to copy.

CORS

Worker includes CORS headers on all responses, allowing requests from the mobile app and web preview.
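A minimal sketch of those CORS headers. Whether the Worker allows the wildcard origin or a specific origin list is an assumption here:

```typescript
// Illustrative CORS headers attached to every Worker response.
// The wildcard origin is an assumption, not confirmed by the source.
function corsHeaders(): Record<string, string> {
  return {
    "Access-Control-Allow-Origin": "*",
    "Access-Control-Allow-Methods": "GET, POST, OPTIONS",
    "Access-Control-Allow-Headers": "Content-Type",
  };
}
```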

Tech Stack

| Layer | Technology |
| --- | --- |
| Mobile App | React Native 0.83, Expo SDK 55, React 19 |
| Navigation | @react-navigation/native (native stack) |
| Audio | expo-audio, expo-speech, expo-speech-recognition |
| Crypto | expo-crypto (SHA-256) |
| Storage | @react-native-async-storage/async-storage |
| Backend | Cloudflare Workers (TypeScript) |
| AI Model | Groq API — LLaMA 3.3 70B Versatile |
| Signal Processing | Rust + WebAssembly (wasm-pack) |
| Serialization | Serde (Rust), JSON (TypeScript) |

Onboarding

First-time users see a 5-step tutorial before reaching the login screen. The tutorial covers:

  1. Welcome — What Tether does and who it's for
  2. For Doctors — How to create and publish recovery plans
  3. For Patients — How to use AI chat, voice, and messaging
  4. Voice Biomarkers — How voice analysis works and what it detects
  5. Safety First — Tether is not a replacement for emergency care

Onboarding completion is stored in AsyncStorage under the key tether-onboarding-complete. The tutorial only shows once.
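The show-once gate can be sketched against any AsyncStorage-like store; the in-memory store in the test below stands in for @react-native-async-storage, which exposes the same promise-based getItem/setItem API:

```typescript
// Illustrative onboarding gate keyed on the AsyncStorage entry above.
interface KeyValueStore {
  getItem(key: string): Promise<string | null>;
  setItem(key: string, value: string): Promise<void>;
}

const ONBOARDING_KEY = "tether-onboarding-complete";

// Show the tutorial only when no completion flag has been written.
async function shouldShowOnboarding(store: KeyValueStore): Promise<boolean> {
  return (await store.getItem(ONBOARDING_KEY)) === null;
}

async function completeOnboarding(store: KeyValueStore): Promise<void> {
  await store.setItem(ONBOARDING_KEY, "true");
}
```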