A performance and injury-risk decision system inside the FiyrPod Coach app
The raw data exists in almost every program. The infrastructure to turn it into pre-session decisions doesn't. Athlete Intelligence is the layer in between.
Role
End-to-end product design
Timeline
8 weeks
Team
2 designers, CEO, CTO, 3 engineers
Outcome
95%+ time reduction / 9 out of 10 satisfaction
Overview
The problem was not data collection. The problem was synthesis. Coaches had signals everywhere and no system that could turn them into a call before training started.
FiyrPod was already capturing GPS load from wearables. Athletes were submitting daily wellness inputs. RPE scores were coming in after sessions. Force plate metrics existed too. The raw signals were there, but there was no layer that translated them into a readiness picture a coach could trust.
In elite programs, a sports scientist handles that synthesis every morning. Most coaching staffs do not have that role. Instead they spend 60 to 90 minutes stitching together disconnected tools, spreadsheets, and intuition before they can even start the session.
Athlete Intelligence became that missing infrastructure: a structured intelligence layer that applied evidence-based scoring across five distinct signals and surfaced the output in a workflow a coach could understand in under five minutes.
This product was not a better dashboard. It was a decision layer.
Problem
Every morning was a manual assembly job. Coaches pulled GPS data from one platform, chased wellness updates from athletes, cross-referenced RPE in another tool, and then tried to normalize those signals into something actionable.
That time cost was not a coaching failure. It was the tax of doing manually what a proper system should do automatically. Without a scoring framework, raw numbers were noise. Without a sports science layer, coaches carried the liability of every decision without a way to defend it.
Fragmented data sources meant there was no shared view of athlete readiness.
No scoring framework meant coaches could not compare today's signal against an athlete's own baseline.
No sports science layer meant interpretation fell to intuition, spreadsheets, and time-consuming manual work.
Ownership
I treat the model behind the interface as part of the design surface. If the number is wrong, the UI only makes the problem more visible.
I led research with coaches, worked closely with an embedded performance coach, defined MVP scope, and wrote the product brief engineering used to scope the build. I also designed the full workflow from team dashboard to athlete drill-downs and wrote the system copy across labels, tooltips, and empty states.
The key work on this project was making complex product decisions legible: what the system should measure, how those signals should be weighted, what should be surfaced first, and what needed to stay hidden until the moment a coach needed to explain a decision.
Key Decisions
I structured the product around a ranked workflow instead of an information-dense overview. Coaches open the app, scan for red and orange, drill into the signal driving the flag, and make a defensible call.
I cut an already-built expanded row from the team view two weeks before launch. More data on the first screen meant slower decisions, which was the opposite of the product promise.
The math could stay sophisticated while the output became readable. Converting to a 1 to 10 scale made thresholds legible and the score immediately actionable for coaches.
The original preparedness formula ran both input signals through a shared denominator, treating them as equal. Rebuilding the model with a separate denominator for each signal made the output clinically believable and restored trust in the score; a simplified sketch of that change follows this list.
When post-launch scores looked wrong, I traced the issue back to the athlete app. Requiring athletes to declare pain versus soreness before entering data removed the silent corruption at the source.
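To make the weighting decision concrete, here is a minimal sketch of the shape of the rebuilt score. It is not the shipped model: the SignalReading structure, the weights, the clamp range, and the threshold bands are all illustrative assumptions. The point is that each signal is normalized against its own baseline denominator before weighting, and only then mapped onto the 1 to 10 scale coaches actually read.

```typescript
// Illustrative only: names, weights, clamp range, and thresholds are
// assumptions for this sketch, not the shipped FiyrPod model.
type SignalReading = {
  value: number;    // today's raw reading
  baseline: number; // the athlete's own rolling baseline (this signal's denominator)
  weight: number;   // contribution to the composite score
};

// Each signal is normalized against its OWN baseline before weighting,
// instead of pooling all signals over one shared denominator.
function preparednessScore(signals: SignalReading[]): number {
  const totalWeight = signals.reduce((sum, s) => sum + s.weight, 0);
  const composite =
    signals.reduce((sum, s) => sum + s.weight * (s.value / s.baseline), 0) /
    totalWeight; // ~1.0 means "at personal baseline"

  // Map the composite ratio onto the 1 to 10 scale coaches read.
  // The 0.5 to 1.5 clamp range is made up for the sketch.
  const clamped = Math.min(1.5, Math.max(0.5, composite));
  return Math.round((clamped - 0.5) * 9 + 1); // 1..10
}

// Threshold bands drive the red/orange scan in the team view.
function flag(score: number): "red" | "orange" | "green" {
  if (score <= 3) return "red";
  if (score <= 6) return "orange";
  return "green";
}
```

Keeping the clamp-and-rescale step separate from the weighting means the underlying math can keep evolving without changing the 1 to 10 contract the interface is built on.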
What Shipped
Each system measures one thing, from one data source, and produces one score a coach can act on. Readiness translates six self-reported wellness inputs into a daily readiness score. Pain and Soreness uses body-map selections and severity ratings to surface injury risk. Total Load normalizes GPS metrics against personal baseline. Force Plate flags biomechanical changes. RPE captures perceived exertion and session load.
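As one example of what "normalizes GPS metrics against personal baseline" can look like in practice, here is a minimal sketch using a rolling z-score. The window length, the minimum-history rule, and the z-score approach itself are assumptions for illustration, not the shipped Total Load math.

```typescript
// Illustrative sketch of normalizing today's GPS load against an athlete's
// own recent history. Window and minimum-history values are assumptions.
function loadZScore(
  history: number[],
  today: number,
  windowDays = 28,
): number | null {
  const recent = history.slice(-windowDays);
  if (recent.length < 7) return null; // not enough personal baseline yet

  const mean = recent.reduce((a, b) => a + b, 0) / recent.length;
  const variance =
    recent.reduce((a, b) => a + (b - mean) ** 2, 0) / recent.length;
  const sd = Math.sqrt(variance);

  // A perfectly flat history gives sd === 0; treat today as "at baseline".
  return sd === 0 ? 0 : (today - mean) / sd;
}
```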
The complexity is in the model, not in the interface. The team list is ranked by risk rather than name, so the sort order itself becomes the workflow. Detailed views exist for explanation, not for daily browsing.
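The ranking itself is deliberately simple. A sketch with illustrative field names:

```typescript
// The sort IS the workflow: highest risk first, name as tiebreaker so the
// list stays stable from scan to scan. Field names are assumptions.
type AthleteRow = { name: string; riskScore: number };

function rankForTriage(team: AthleteRow[]): AthleteRow[] {
  return [...team].sort(
    (a, b) => b.riskScore - a.riskScore || a.name.localeCompare(b.name),
  );
}
```

Because the riskiest athletes always surface first, the morning scan starts at the top of the list and usually ends a few rows down.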
Impact
At 30 days post-launch, coaches reported a 95%+ reduction in time spent on morning synthesis, moving from 60 to 90 minutes down to under five. Eleven of the fourteen coaches stopped maintaining their spreadsheet within the first week, and satisfaction averaged 9 out of 10 across all fourteen.
The most meaningful behavior change was not adoption of a new screen. Coaches stopped feeling the need to justify their process to themselves. That evidence later became part of enterprise partnership conversations.
Reflection
I assumed coaches would use the drill-down views more often than they actually do. In practice, most daily sessions never leave the team readiness list. The detailed views matter when a coach needs to explain a decision to an athlete or management, not as a routine destination.
If I revisited version one, I would rebuild onboarding so the first session feels immediately useful. The product should open with pre-populated demo data instead of a blank state that asks coaches to do setup work before value is visible.
"I don't use it to make my decision. I use it to show my athletes I didn't just make it up. That's the part that changed everything. They actually listen now."
Performance Coach, Youth Soccer Organization