FlowSync - Automating music experiences - An Apple Music feature
Duration
1.5 months
Service
iOS feature
Client
Concept work

Project Overview
An adaptive music system for Apple Watch that automatically syncs your music to your body's rhythm. By using heart rate and movement data, it helps neurodivergent users stay focused and calm without them ever having to touch a screen.
Problem
Standard music interfaces are "context-blind." For neurodivergent adults (ADHD/ASD), manual playlist management during task transitions creates severe cognitive friction. Current systems fail to recognize the body's physical state as a regulatory input, forcing manual interventions that break flow states and tax executive function.
This lack of automated environmental adaptation leads to "sensory mismatch," where the audio environment contradicts the user's physiological arousal—precisely when they need support the most.
The core issue: Music apps don't know if you're sprinting to catch a bus or pacing in your living room. This forces users to manually adjust their music, breaking the very flow state they're trying to achieve.
Solution
FlowSync moves music from explicit to implicit interaction. The system uses sensor fusion, cross-referencing heart rate (physiological arousal) with gait rhythm (accelerometer data via CoreMotion), to detect transition states in real time and automatically adapt your music.
How it works:
FlowSync operates entirely in the background. When you transition from sitting to moving, it detects the shift in your pace and heart rate, then seamlessly switches to music that matches your rhythm — no tapping, no interruption.
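The sitting-to-moving detection described above can be sketched as a simple fusion rule over the two signals. This is an illustrative sketch only: the names, thresholds, and the requirement that both signals agree are assumptions; a real implementation would read these values from HealthKit and CoreMotion and calibrate per user.

```swift
import Foundation

// Activity states FlowSync would switch music between.
enum ActivityState {
    case resting, moving
}

// One fused reading; in the real feature these would come from
// HealthKit (heart rate) and CoreMotion (step cadence).
struct SensorSample {
    let heartRateBPM: Double   // beats per minute
    let cadenceHz: Double      // steps per second, from the accelerometer
}

// Hypothetical thresholds for illustration. Requiring BOTH signals to
// agree keeps a brief heart-rate spike while seated from triggering a
// music change.
func classify(_ sample: SensorSample,
              restingHR: Double = 70,
              cadenceThreshold: Double = 0.5) -> ActivityState {
    let elevatedHR = sample.heartRateBPM > restingHR * 1.15
    let walking = sample.cadenceHz > cadenceThreshold
    return (elevatedHR && walking) ? .moving : .resting
}
```

Because the classifier is pure logic over plain numbers, it can be unit-tested without a device or any sensor entitlements.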
The core innovation is the "Soft Landing" algorithm — a 4-5 second gradual crossfade that aligns incoming track BPM with your physical tempo (BPM ≈ Gait Frequency × 60), providing automated emotional regulation without user intervention.
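A minimal sketch of the two pieces named above: the tempo target (BPM ≈ gait frequency × 60) and a crossfade ramp over roughly 4-5 seconds. The function names and the linear ramp shape are assumptions; the actual "Soft Landing" curve is not described in detail here.

```swift
import Foundation

// Target track tempo from gait cadence: steps per second × 60
// gives steps per minute, which the incoming track's BPM should match.
func targetBPM(gaitFrequencyHz: Double) -> Double {
    gaitFrequencyHz * 60
}

// Crossfade gains at time t within a fade of the given duration:
// the outgoing track ramps 1 → 0 while the incoming ramps 0 → 1.
// A linear ramp is assumed purely for illustration.
func crossfadeGains(at t: Double,
                    duration: Double = 4.5) -> (outgoing: Double, incoming: Double) {
    let progress = min(max(t / duration, 0), 1)
    return (outgoing: 1 - progress, incoming: progress)
}
```

For example, a walking cadence of 2 steps per second yields a 120 BPM target, and halfway through the fade both tracks sit at 50% gain.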
Impact & Metrics
| 100% | 92% | 4.7/5 |
|---|---|---|
| Automation | Accuracy | Satisfaction |
| Zero manual interventions during movement (complete elimination of playlist management) | Tempo-matching correlation between audio BPM and physical gait rhythm | User rating across all participants (neurodivergent users: 5/5) |
Additional outcomes:
Reduced cognitive load by removing manual music control during movement
Privacy-first design with all processing on-device (no cloud uploads of sensitive biometric data)


Key Highlights
Technical Architecture: Sensor Fusion Model
The "Soft Landing" Algorithm
Design Evolution: Workflow to Visual Design



