AI Product Design · 2025

Feelink

An emotion journal that meets you where you are


AI Integration

Lovable

UX / UI Design

Psychology

iOS

React

Capacitor

Product Design · Psychology · AI-assisted build

2025

A multi-modal emotion check-in built on Lovable, currently live at feelink-app.lovable.app. Three parallel check-in modes — Daily Self Check-In, Simple Mood Check, and Process a Trigger — let users engage at the level their capacity allows on any given day. A research-backed feelings taxonomy, a guided 4-step tour, and a longitudinal statistics view sit underneath.

See the website

Stack

Tools used

The kit that built this project — from research to deploy. AI tools called out separately because they shape how the work gets made, not just what it's made of.

AI

  • Lovable — Primary build environment; every screen on the live site was made via Lovable's natural-language editor
  • Claude (Sonnet) — Body-sensation language suggestions, weekly synthesis writing
  • Gemini — Alternative model testing for emotion classification

Design

  • Figma — Color system, mood-board exploration, body-mapping figure studies
  • Adobe Illustrator — Body-mapping figure illustration

Code

  • React + Vite — App runtime (Lovable's vite_react_shadcn_ts stack)
  • TypeScript — Type safety
  • Tailwind CSS — Styling
  • Radix UI / shadcn — Accessible primitives for dialogs, sliders, tooltips

Infrastructure

  • Capacitor — iOS / Android wrapper for the native shell (App Store submission)
  • Supabase — User data and session storage
  • Lovable hosting — Live at feelink-app.lovable.app
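As a sketch of how the native shell ties together: Capacitor reads a small config file pointing at the Vite build output. A minimal `capacitor.config.ts` for this kind of stack looks like the following (the `appId` is a hypothetical placeholder, not Feelink's actual bundle identifier):

```typescript
import type { CapacitorConfig } from '@capacitor/cli';

// Minimal Capacitor config wrapping the Vite build for iOS / Android.
// appId below is a placeholder, not the app's real bundle identifier.
const config: CapacitorConfig = {
  appId: 'app.example.feelink',
  appName: 'Feelink',
  webDir: 'dist', // Vite's default build output directory
};

export default config;
```

With this in place, `npx cap sync ios` copies the web build into the native Xcode project for App Store submission.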

The insight

Affective neuroscience is clear about something most consumer apps ignore: emotional granularity — the ability to identify, label, and distinguish between emotions — is a stronger predictor of long-term wellbeing than emotional positivity itself. People who can tell the difference between 'frustrated' and 'overwhelmed' recover faster from stress than people who feel both as 'bad'. Yet most tracking apps reduce all of this to a single mood slider. Feelink was built to capture emotion the way people actually feel it: in the body, in context, with reasons.

Built on Lovable

The entire product was built in Lovable — Lovable's web editor handled the React + Vite + Tailwind + shadcn scaffold and every iteration after it. The git history (374 commits across 7 working sessions over two months) tells the iteration story: a body figure that took ten visual rounds to land, a click-to-place dot interaction that needed nine follow-up bug-fix prompts, an icon library swap from lucide to Phosphor that broke imports twice, and a five-revert-in-four-minutes slider styling sequence that became the test case for the design system. Every screen below is captured directly from the live Lovable app at feelink-app.lovable.app.

Three modes, calibrated to capacity

The hardest design problem in mental-health apps is that the user's capacity to engage varies wildly day to day. Sometimes you have ten minutes and want to actually examine what's happening; sometimes you have ten seconds and just want to mark it. Feelink offers three check-in modes calibrated to where the user actually is — captured below from the live Lovable build.

01 — Three modes

Daily Self Check-In is the rich 5-minute version. Simple Mood Check is the 30-second version for low-capacity moments. Process a Trigger walks through a difficult moment in real time. Each card is a different mode for a different day.
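The three modes map naturally onto a discriminated union, so each flow only asks for what its capacity level warrants. A sketch in the app's TypeScript (field names are my illustration, not the live schema):

```typescript
// Three check-in modes as a discriminated union.
// Field names are illustrative, not the live app's schema.
type CheckIn =
  | { mode: 'daily'; situation: string; emotions: string[]; bodySpots: string[] }
  | { mode: 'simple'; emotions: string[] }
  | { mode: 'trigger'; whatHappened: string; step: 1 | 2 | 3 };

// A renderer narrows on `mode` and surfaces only that mode's fields.
function summarize(c: CheckIn): string {
  switch (c.mode) {
    case 'daily':
      return `Daily: ${c.emotions.length} emotions around "${c.situation}"`;
    case 'simple':
      return `Quick mark: ${c.emotions.join(', ')}`;
    case 'trigger':
      return `Trigger step ${c.step} of 3`;
  }
}
```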

02 — Daily flow: Select a Situation

Step 1 of the 4-step daily tour. The user has typed 'tense before tomorrow morning meeting' as their what's-on-mind item; the tooltip walks them through what the next steps will do. The tour is always skippable.

03 — Choose your emotions

Step 2 of the daily tour. The full feelings taxonomy is exposed at once — Joy, Sadness, Anger — with sub-emotions (Happy / Excited / Content / Grateful / Peaceful, Sad / Disappointed / Hopeless / Lonely / Grief, Angry / Frustrated / Irritated / Resentful / Furious) clustered under each. Grounded in Plutchik's wheel.
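The cluster / sub-emotion structure shown on this screen can be sketched as a plain lookup table. This is my reconstruction from the screens above, abbreviated to the three clusters named here, not the app's full taxonomy:

```typescript
// Feelings taxonomy as a cluster → sub-emotion map, following the
// clusters visible in the check-in screens (abbreviated, not exhaustive).
const TAXONOMY: Record<string, string[]> = {
  Joy: ['Happy', 'Excited', 'Content', 'Grateful', 'Peaceful'],
  Sadness: ['Sad', 'Disappointed', 'Hopeless', 'Lonely', 'Grief'],
  Anger: ['Angry', 'Frustrated', 'Irritated', 'Resentful', 'Furious'],
};

// Resolve a sub-emotion back to its cluster, e.g. 'Frustrated' → 'Anger'.
// This is what lets the stats view aggregate granular labels upward.
function clusterOf(emotion: string): string | undefined {
  return Object.keys(TAXONOMY).find((c) => TAXONOMY[c].includes(emotion));
}
```

Keeping granular labels in storage and rolling them up at display time preserves the emotional-granularity signal the research cares about.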

04 — Simple Mood Check

The 30-second mode. Same taxonomy, no situation framing — for the moments when you just want to mark that you felt something, not work through it. Anxious, Worried, Scared all live in their proper Fear cluster.

05 — Process a Trigger

The CBT-style flow for difficult moments. Step 1 of 3: Describe What Happened — write down the facts of what occurred without adding interpretation. Focus on observable events.

06 — Statistics

The longitudinal view. Five tabs (Stats / Calendar / Patterns / Body / Rings) with a 1W / 1M / 3M time period selector. Average wellness score, Most Frequent Emotions, and the 'Log Emotion' CTA in the bottom bar so a check-in is always one tap away.
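The average wellness score over a trailing window is a simple reduction over stored check-ins. A minimal sketch, assuming a 0–10 wellness value per entry (the type and scale are my assumptions, not the live schema); the `days` argument mirrors the 1W / 1M / 3M selector:

```typescript
type Entry = { at: Date; wellness: number }; // 0–10 scale (assumed)

// Average wellness over a trailing window of `days`, mirroring the
// 1W / 1M / 3M period selector. Returns null when the window is empty.
function averageWellness(entries: Entry[], days: number, now = new Date()): number | null {
  const cutoff = now.getTime() - days * 24 * 60 * 60 * 1000;
  const windowed = entries.filter((e) => e.at.getTime() >= cutoff);
  if (windowed.length === 0) return null;
  const sum = windowed.reduce((acc, e) => acc + e.wellness, 0);
  return sum / windowed.length;
}
```

Returning `null` rather than 0 for an empty window matters here: a week with no check-ins is missing data, not a bad week.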

Design approach

The flows are grounded in Plutchik's wheel of emotions and somatic experiencing theory. Each check-in captures emotional state, physical sensations, cognitive patterns, and behavioral responses — building a rich dataset for pattern recognition without ever feeling like a form. The visual language is intentionally soft — pastels, rounded type, generous whitespace — because the alternative (hard data viz) reads as clinical and people stop using it.

The body mapping feature lets users mark where an emotion lives physically, drawing from research on interoception. This was the feature I was most uncertain about — would people actually do it? Testing showed yes, but only if the tap target was a stylized human figure rather than a body-part list. 'Where in your body?' beats 'select from: chest, stomach, throat, head' every time. Live at feelink-app.lovable.app.
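The click-to-place dot interaction above hinges on one detail: storing the dot's position relative to the figure rather than in screen pixels, so placed dots stay anchored when the figure resizes. A sketch of that conversion (my illustration of the interaction, not the live implementation):

```typescript
// Convert a click on the body figure to figure-relative coordinates in [0, 1],
// so placed dots survive resizing and different screen sizes.
// The bounding box would come from getBoundingClientRect() in the real UI.
function toRelativeSpot(
  clickX: number,
  clickY: number,
  figure: { left: number; top: number; width: number; height: number },
): { x: number; y: number } | null {
  const x = (clickX - figure.left) / figure.width;
  const y = (clickY - figure.top) / figure.height;
  // Ignore clicks that land outside the figure's bounding box.
  if (x < 0 || x > 1 || y < 0 || y > 1) return null;
  return { x, y };
}
```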

Where AI fits — both in the build and in the product

There are two AI layers here. First, in the build: Lovable was the entire dev environment. I spoke to it in natural language for every component, every refactor, every styling decision — the 374 commits in the git history are the receipt. Second, in the product: Claude Sonnet runs at two specific moments — suggesting body-sensation language when users get stuck describing where an emotion lives physically, and writing the weekly synthesis that connects emotional patterns to triggers and behaviors. Both of these earn their place because the alternative is staring at a blank field, and blank fields are where check-ins die.
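The case study does not show the live prompts or API wiring; as a purely hypothetical sketch of the first call site, the body-sensation suggestion might assemble its prompt from the check-in state like this (function name and wording are my invention):

```typescript
// Hypothetical prompt builder for the body-sensation suggestion moment.
// The live app's actual prompts and model wiring are not shown in this case study.
function buildSensationPrompt(emotions: string[], bodySpot: string): string {
  return [
    `The user is feeling: ${emotions.join(', ')}.`,
    `They marked the sensation in their ${bodySpot}.`,
    'Suggest three short, concrete phrases describing how that might feel physically.',
    'Plain language, no diagnoses.',
  ].join('\n');
}
```

The point of the sketch is the shape of the call: the model only ever sees the current check-in's state, and only at the two moments where a blank field would otherwise stall the user.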

All projects

© 2025 Hanna de Vries