A narrow lineup,
deliberately.
One foundation model, one memory layer, a small wearable line, and four applied programmes. We'd rather ship what we understand than what we don't.
Consumer pre-orders and the Sonal OS app live at sonal-ai.com. This page is the research register — what we build and why.
Presence AI
The foundation model.
A native overlap LLM built around one objective: attribution — knowing who said what, frame by frame, through overlap, repair and interruption. Built on EEND, TS-VAD, and open ASR foundations. The engine that makes every downstream memory trustworthy.
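Frame-by-frame attribution through overlap can be pictured as a multi-label activity grid, as in EEND-style diarisation: each frame may have more than one active speaker, so attribution is a per-speaker yes/no at every frame rather than a single winner. A toy Python illustration (speaker names and frame values are invented, not from the source):

```python
# Frame-level speaker activity: one row per audio frame, one column per speaker.
# Unlike single-label diarisation, a frame can mark several speakers active at
# once, which is how overlapped speech is represented.
SPEAKERS = ["A", "B"]

activity = [
    [1, 0],  # frame 0: only speaker A is talking
    [1, 1],  # frame 1: A and B overlap
    [0, 1],  # frame 2: only speaker B is talking
]

def active_speakers(frame):
    """Return the labels of all speakers active in a single frame."""
    return [name for name, on in zip(SPEAKERS, frame) if on]

print(active_speakers(activity[1]))  # overlapped frame: both speakers
```

The point of the multi-label view is that "who said what" survives interruptions: overlap is recorded as simultaneous activity instead of being forced onto one speaker.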
Sonal OS
The memory layer.
Accurate capture, legible memory, and the downstream actions it enables. Sonal OS turns attributed audio and video into searchable memory and structured action — device, model and app as one system.
What comes out.
From whitepaper § 7. Sonal transforms speaker-attributed transcripts into discrete, actionable memory objects — each one grounded in source evidence (timestamp, speaker, excerpt, confidence score) so a human can always verify.
Action items
Who committed to what, with timestamp and transcript excerpt.
Reminders
Personal commitments and temporal references extracted and scheduled.
Follow-up drafts
Email and message drafts grounded in conversation content and ownership.
Shopping + purchase intents
Items mentioned for acquisition, captured with surrounding context.
Searchable memory
Queryable index of past conversations — ‘when did we discuss X?’
Audit trail
Evidence grounding on every object: timestamp, speaker, excerpt, confidence. Low-confidence objects are flagged for review and never surfaced as fact.
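The evidence fields listed above (timestamp, speaker, excerpt, confidence) imply a simple object shape. A minimal Python sketch of one possible schema; the class names, field names, and the review threshold are illustrative assumptions, not the published format:

```python
from dataclasses import dataclass

# Assumed review cutoff; the actual threshold is not specified in the source.
CONFIDENCE_THRESHOLD = 0.8

@dataclass
class Evidence:
    timestamp: float   # seconds into the recording
    speaker: str       # attributed speaker label
    excerpt: str       # verbatim transcript excerpt
    confidence: float  # attribution confidence, 0.0 to 1.0

@dataclass
class MemoryObject:
    kind: str          # e.g. "action_item", "reminder", "purchase_intent"
    summary: str
    evidence: Evidence

    @property
    def needs_review(self) -> bool:
        # Low-confidence objects are flagged for review, never surfaced as fact.
        return self.evidence.confidence < CONFIDENCE_THRESHOLD

item = MemoryObject(
    kind="action_item",
    summary="Alex to send the revised draft by Friday",
    evidence=Evidence(
        timestamp=412.6,
        speaker="Alex",
        excerpt="I'll send the revised draft by Friday",
        confidence=0.62,
    ),
)
print(item.needs_review)  # True: 0.62 is below the review threshold
```

Because every object carries its own evidence, verification is local: a reviewer jumps to the timestamp and checks the excerpt against the recording without trusting the summary.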
Four bodies now. More ahead.
The flagship line ships in 2026 with four form factors — the same instrument, worn the way you already dress. The innovation pipeline extends to smart glasses (the first multimodal form factor), ring, watch, and bone conduction.
Core Pendant
The default shape of Sonal.
Wristband Mode
Built for motion, studio, field.
Lapel / Pin Mode
Boardroom-ready, visible indicator.
Pin
Minimal, ultralight, one-button.
Smart Glasses
The first multimodal form factor — audio + vision.
Smart Ring
Always-present capture without a visible device.
Smart Watch
On-wrist display for daily summaries.
Bone Conduction
Private output channel for memory assistants.
Four rooms where memory has to be right.
Each programme is a deep partnership with a small number of organisations. We tune Sonal in the room, publish what we learn, and only expand once the work is right.
Healthcare & Clinical
Doctor discusses X-ray results (audio) + image of the X-ray (video).
Generates a comprehensive patient visit summary — attributed, signed, auditable.
Legal & Professional
Multiple attorneys arguing — an overlap event.
Verbatim diarisation of who said what, with timestamps suitable for record.
Security & Home Automation
Camera sees FedEx truck + person dropping a box.
Logs a ‘Parcel delivered’ event and alerts the homeowner via the app.
Personal Life & Family
Friend mentions an anniversary date in passing.
Auto-creates a calendar event — ‘Buy a gift for Sarah's anniversary’ — with the clip as evidence.
Sonal Labs is not a marketplace.
We're a research company that ships.
If you want to try something before it's public — as a design partner, researcher, or operator in healthcare, law, security or family-care — write to us. We'll reply.
Contact the team