
Agent Tracey.

This is not your usual task management tool.

A task manager that tracks quality work, not just checkboxes. Clearer briefs, faster feedback, and credit for what actually matters.

Status: Built · Internal Use
Stack: React · TS · Supabase
Scope: Product · Systems · UI · Brand
Platform: Web App / SaaS

[Image: Hero dashboard or team overview]
01

How it started

I initially built this to solve my own problem.

Every project tracker in the market celebrates the same thing: tasks completed. Boxes checked. Numbers that look great in a report and mean very little when it’s actually time to evaluate someone’s work.

Nobody was tracking quality. Nobody was tracking whether feedback came fast enough to matter. Whether the brief was clear enough to start with. Whether someone was quietly overloaded or quietly underutilized.

So I built something that did.

02

The problem

Most task tools stop at status.

Done. Not done. Overdue. Assigned. Completed.

For teams doing design, creative, or operations work, that’s about half the story. A task can be finished and still need three rounds of fixes. A team member can have the lowest task count and the highest quality output. Another can look fully booked on paper and be completely underwater in reality.

The tools didn’t show any of that. So nobody could act on it.

[Image: Task list or team overview screen]
03

The bet

Quality tracking shouldn’t be optional. Especially not for teams where the work actually has to be good.

A better system should make quality visible, feedback faster, and workload honest. Not to put anyone under a microscope. Just so everyone has enough information to actually do their best work.

04

What I built

Built around five layers. Each one connected to the next.

[Image: Product architecture or task flow]
FEATURE 01

Quality-first task lifecycle

Creative work rarely moves in a straight line. Tasks go through review, rework, refinement, and sometimes a quiet death by stakeholder feedback. The lifecycle in Tracey reflects how work actually moves, not the clean version where everything flows neatly from to-do to done.
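A lifecycle like this can be sketched as a small state machine. The status names and transitions below are illustrative assumptions, not Tracey's actual schema — the point is that review can loop back into rework any number of times, and a task can die before it ever reaches done:

```typescript
// Illustrative task statuses — the real names in Tracey may differ.
type TaskStatus =
  | "todo"
  | "in_progress"
  | "in_review"
  | "rework"
  | "done"
  | "killed";

// Allowed transitions. Review → rework can repeat indefinitely,
// and "killed" (death by stakeholder feedback) is reachable from any active state.
const transitions: Record<TaskStatus, TaskStatus[]> = {
  todo: ["in_progress", "killed"],
  in_progress: ["in_review", "killed"],
  in_review: ["rework", "done", "killed"],
  rework: ["in_review", "killed"],
  done: [],
  killed: [],
};

function canMove(from: TaskStatus, to: TaskStatus): boolean {
  return transitions[from].includes(to);
}
```

Modeling the loop explicitly is what lets later features (like round counts) fall out of the data instead of being bolted on.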

[Image: Task status flow]
FEATURE 02

Multi-round review scoring

Each task can go through multiple review rounds with scores and feedback attached. A first-round approval means something different from a task that needed four rounds of correction. Tracey makes that visible and makes sure credit goes where it actually belongs.
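One way to model this — the field names and the scoring rule here are assumptions for illustration, not Tracey's internals — is to keep every round on record and let the number of rounds dampen the final score:

```typescript
// Hypothetical shape for a single review round.
interface ReviewRound {
  round: number;    // 1-based round index
  score: number;    // e.g. a 1–5 quality score from the reviewer
  feedback: string;
}

// A first-round approval should count for more than a pass that took
// four rounds of correction, so divide the final score by round count.
function qualitySignal(rounds: ReviewRound[]): number {
  if (rounds.length === 0) return 0;
  const finalScore = rounds[rounds.length - 1].score;
  return finalScore / rounds.length;
}
```

Under this sketch, a clean first-round 5 yields a signal of 5, while the same 5 after three rounds of rework yields less than 2 — the "credit goes where it belongs" part made literal.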

[Image: Review scoring screen]
FEATURE 03

Workload intelligence

Five small tasks do not automatically mean a busier week than two complex ones. The workload model distributes estimated hours across assigned working days and flags overload or underload before it quietly becomes someone’s problem. Because a task due Friday is not the same as a task that needs ten hours between Tuesday and Friday. Somehow most dashboards still haven’t figured that out.
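The distribution itself is simple to sketch. This is a minimal version under assumed names and thresholds (an 8-hour capacity, underload below half capacity) — not the production model:

```typescript
// A task with an estimate and the working days it is scheduled across.
interface ScheduledTask {
  estimatedHours: number;
  workingDays: string[]; // ISO dates, e.g. "2024-06-04"
}

// Spread each task's hours evenly across its working days,
// then sum per day so the ten-hour Tuesday–Friday task shows up as 2.5h/day.
function dailyLoad(tasks: ScheduledTask[]): Map<string, number> {
  const load = new Map<string, number>();
  for (const task of tasks) {
    const perDay = task.estimatedHours / task.workingDays.length;
    for (const day of task.workingDays) {
      load.set(day, (load.get(day) ?? 0) + perDay);
    }
  }
  return load;
}

// Flag a day against capacity. Thresholds are illustrative assumptions.
function flagDay(hours: number, capacity = 8): "overload" | "underload" | "ok" {
  if (hours > capacity) return "overload";
  if (hours < capacity * 0.5) return "underload";
  return "ok";
}
```

The key design choice is flagging per day, not per deadline — which is exactly why "due Friday" and "ten hours between Tuesday and Friday" stop looking identical.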

[Image: Workload overview]
FEATURE 04

Daily proof-of-work updates

Not micromanagement. Just less silence. Remote work makes it easy for progress to disappear between check-ins. Tracey gives team members a lightweight way to log what moved, what’s blocked, and drop proof links inside a configurable update window. Everyone stays in the loop without a single unnecessary meeting.
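A lightweight update and its window might look like this — the shapes are assumptions, and the window check is deliberately the simplest thing that works (local hours, same-day window):

```typescript
// Hypothetical shape of a daily proof-of-work update.
interface DailyUpdate {
  author: string;
  moved: string;        // what progressed today
  blocked?: string;     // anything stuck, if applicable
  proofLinks: string[]; // links to the actual work
}

// The configurable posting window, in local hours.
interface UpdateWindow {
  startHour: number; // inclusive
  endHour: number;   // exclusive
}

function isWindowOpen(now: Date, win: UpdateWindow): boolean {
  const hour = now.getHours();
  return hour >= win.startHour && hour < win.endHour;
}
```

The `proofLinks` field is what separates this from a status checkbox: the update points at the work itself, so "in the loop" doesn't require a meeting.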

[Image: Daily update screen]
FEATURE 05

Tracey’s Take

The insight layer. Raw data is only useful if someone can actually read it. Tracey’s Take surfaces patterns in plain language so managers aren’t left squinting at charts trying to figure out what any of it means.

Quiet consistency. Overload risk. Quality trends. Stabilization after a rough week.

The AI isn’t the product. It’s the interpreter. The real system is the data underneath it.
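Because the system underneath is the real product, the interpreter can start as something far simpler than a model — plain rules over weekly metrics. The metric names and thresholds below are invented for illustration:

```typescript
// Assumed weekly metrics per team member; real inputs would come from
// the review scores and workload model described above.
interface WeekMetrics {
  avgScore: number;     // average review score this week
  prevAvgScore: number; // last week's average
  loadRatio: number;    // scheduled hours / capacity
}

// Rule-based "takes": raw numbers in, plain language out.
function traceysTake(m: WeekMetrics): string[] {
  const takes: string[] = [];
  if (m.loadRatio > 1.1) {
    takes.push("Overload risk: scheduled hours exceed capacity.");
  }
  if (m.avgScore > m.prevAvgScore) {
    takes.push("Quality is trending up week over week.");
  }
  if (m.prevAvgScore < 3 && m.avgScore >= 3) {
    takes.push("Stabilizing after a rough week.");
  }
  return takes;
}
```

Rules like these are auditable and cheap; a language model can later rephrase or prioritize them, which keeps the AI in the interpreter seat rather than the driver's.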

[Image: Tracey’s Take summary card]
05

Who it’s for

Both sides of the team. Always.

For managers

See quality, not just volume. Catch issues early, give feedback that actually sticks, and have real data when review season comes around.

For team members

Get clearer expectations, see your own growth over time, get credit for quality work, and know where you stand without waiting six months to find out.

[Image: For managers / for team members section]
06

Where it is now

My team uses it. Four people, real work, real feedback. That’s where it started and honestly where the best product decisions have come from.

The landing page, the pricing, the waitlist — that’s me thinking a few steps ahead. Whether this grows beyond my team is what’s being figured out next. But the tool works, and it works because it was built around an actual problem, not a hypothetical one.

07

What I’d test next

Whether the scoring system feels helpful or intimidating to team members. That distinction matters a lot.

The goal is visibility, not surveillance. A strong version of Tracey should help managers support their people better. Not just watch them more closely.

Agent Tracey reminded me that a smart system isn’t always the one with the most features. Sometimes it’s just the one that finally names what everyone already knows is happening.

Let’s design something worth using.