RWE’s performance analysts investigated turbine issues across 3+ disconnected tools (PI Vision, SCADA dashboards, Excel) with no purpose-built analysis workflow. As the sole UI/UX designer, I designed Delphi — a dedicated web application with interactive time-series charts, event overlays, and a progressive filter system that lets analysts diagnose turbine performance issues in minutes instead of hours. Over 10+ months I designed 4 chart types (Scatter, Time Series, Box Plot, Wind Rose), a modular onboarding system, and a built-in feedback loop — consolidating 3 tools into one platform now used daily by the analysis team.
The problem: finding a needle in a data haystack
When a wind turbine underperforms — a pitch fault, a grid dropout, an unexpected power curve deviation — every hour of undiagnosed downtime costs the operator tens of thousands of euros. Performance analysts at RWE investigate these events daily across thousands of turbines. The question is always the same: what happened, when, and why?
The existing workflow relied on PI Vision and SCADA dashboards — powerful but generic tools designed for real-time monitoring, not for ad-hoc investigation. Analysts would pull 10-minute interval data into Excel, manually cross-reference event logs, build their own charts, and compare turbines side by side — all in disconnected tools. A single investigation could take hours of data wrangling before any actual analysis began.
I was brought in as the sole UI/UX designer to help build Delphi — a purpose-built web application that would consolidate event data and historical performance metrics into a single interface with interactive, visual charting. The goal: let analysts go from question to answer without leaving the tool.
Understanding how analysts actually investigate
I started with requirement gathering sessions with the Product Owner, translating business needs into well-defined product backlog items (PBIs). In parallel, I conducted user interviews with performance analysts and site engineers to understand their actual investigation workflows — not what documentation said they did, but what they really did day to day.
Three patterns surfaced clearly across every conversation:
“Analysts were spending hours preparing data — and minutes actually thinking.”
Disconnected tools were required for a single investigation — PI Vision, SCADA exports, Excel, and sometimes Python scripts.
Investigation was exploration, not upfront specification — analysts narrowed filters progressively, never knowing the full query at the start.
Hours were spent preparing data before actual analysis could begin — formatting exports, cross-referencing event logs, building charts manually.
How I structured the work
Requirement gathering & PBI refinement
Worked closely with the PO to translate business objectives into prioritised product backlog items. Defined acceptance criteria for each feature and identified dependencies between the charting engine, event system, and filter infrastructure — ensuring design and development stayed aligned sprint over sprint.
User interviews & workflow mapping
Interviewed performance analysts and engineers to map their actual investigation process. Documented the typical path from alert → data pull → chart creation → comparison → root-cause hypothesis. This revealed that the investigation flow was non-linear — users backtrack, re-filter, and compare constantly. The UI needed to support that fluidity.
Low-fidelity wireframes & concept exploration
Explored 3 layout approaches for the core analysis view: a single-panel chart with overlay controls, a split-panel chart + data table view, and a modular dashboard with draggable chart tiles. Tested with users — the split-panel approach won clearly because analysts needed raw data alongside the visual for validation.
High-fidelity design & usability testing
Built high-fidelity screens in Figma covering 4 chart types (Scatter, Time Series, Box Plot, Wind Rose), event overlays, progressive filter panels, turbine comparison, data tables, and export flows. Ran usability tests on interactive prototypes with analysts — iterated on filter behaviour, chart interaction patterns, and information density based on direct feedback.
Iterative delivery & ongoing refinement
Delphi shipped in phases over 10+ months. Each sprint included design QA, developer pairing on interaction details, and feedback loops with the analyst team. The ongoing nature of the project meant I continuously refined features post-release based on real usage patterns — not just initial assumptions.
Four decisions that shaped the product
Progressive filter system — show less, let users dig deeper
This was the hardest design problem on the project. Analysts needed dozens of filter parameters — turbine ID, site, date range, event type, severity, operational status, wind speed range, power curve deviation, and more. Showing all of them at once was overwhelming. Hiding them behind a single search bar meant too many clicks.
The solution was a progressive disclosure model: the filter panel opens with the 4 most-used parameters (site, turbine, date range, event type) visible by default. Below that, an “Advanced filters” section expands to reveal the remaining parameters grouped by category — operational, environmental, and event-specific. Each filter group shows an active-state count so analysts can see at a glance how many constraints are applied.
Critically, I added a “recent filters” feature — since analysts often re-run similar investigations, the system remembered their last 5 filter combinations. This came directly from a user interview where an analyst said he kept a sticky note with his common filter setups.
AI-ready note: The “recent filters” feature is the manual version of what will become AI-recommended filter configurations. I designed the progressive disclosure system to accommodate a “Suggested” section above the primary filters — where the system could recommend parameter combinations based on the analyst’s context.
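The disclosure tiers and recent-filter memory described above could be modelled roughly as follows — a minimal sketch in TypeScript, with all type and method names hypothetical (they are illustrations, not Delphi’s actual code):

```typescript
// Sketch of the progressive-disclosure filter model: a small set of
// primary filters is visible by default, the rest sit under an
// "Advanced" section grouped by category, and the last 5 applied
// combinations are remembered for quick re-use.

type FilterId = string;

interface FilterDef {
  id: FilterId;
  label: string;
  tier: "primary" | "advanced";
  group?: "operational" | "environmental" | "event"; // advanced grouping
}

type FilterValues = Record<FilterId, unknown>;

const MAX_RECENT = 5;

class FilterPanel {
  private recent: FilterValues[] = [];

  constructor(private defs: FilterDef[]) {}

  // Filters shown when the panel first opens.
  primaryFilters(): FilterDef[] {
    return this.defs.filter((d) => d.tier === "primary");
  }

  // Advanced filters, grouped by category for the expandable section.
  advancedByGroup(): Map<string, FilterDef[]> {
    const groups = new Map<string, FilterDef[]>();
    for (const d of this.defs) {
      if (d.tier !== "advanced") continue;
      const key = d.group ?? "other";
      if (!groups.has(key)) groups.set(key, []);
      groups.get(key)!.push(d);
    }
    return groups;
  }

  // Badge count: how many constraints are currently active.
  activeCount(values: FilterValues): number {
    return Object.values(values).filter((v) => v != null).length;
  }

  // Remember the last 5 applied combinations, most recent first.
  apply(values: FilterValues): void {
    this.recent.unshift({ ...values });
    this.recent = this.recent.slice(0, MAX_RECENT);
  }

  recentFilters(): FilterValues[] {
    return this.recent;
  }
}
```

An AI-suggested “Suggested” section would slot in as another tier alongside `primary` without restructuring this model.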
Event overlays on time-series charts — making incidents visible in context
The core value of Delphi was the interactive time-series chart plotting 10-minute interval data — power output, wind speed, rotor RPM, pitch angle — over user-defined time ranges. But raw performance data alone doesn’t tell the story. Analysts needed to see when events happened in the context of the performance curve.
I designed event markers as vertical annotations on the chart timeline — color-coded by severity (critical, warning, informational) with expandable tooltips showing event details on hover. Users could toggle event categories on and off so the chart wouldn’t become cluttered during dense-event periods. For multi-turbine comparisons, events from different turbines used distinct marker styles to remain distinguishable.
AI-ready note: The event overlay system was built to display machine-flagged anomalies, not just manual entries. The color-coded severity system and expandable tooltips can carry ML-generated confidence scores — so when anomaly detection ships, the UI will support it without redesign.
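The overlay model above — severity-coded markers, category toggles, and an optional confidence field for machine-flagged events — might look like this in outline. A hedged sketch; every name here is hypothetical:

```typescript
// Sketch of the event-overlay model: markers are colour-coded by
// severity, filterable by toggled category, and carry an optional
// confidence score so machine-flagged anomalies can render through
// the same pipeline as manually entered events.

type Severity = "critical" | "warning" | "informational";

interface EventMarker {
  timestamp: number;   // position on the chart's time axis (ms epoch)
  turbineId: string;   // drives distinct marker styles per turbine
  category: string;    // e.g. "pitch", "grid"
  severity: Severity;
  detail: string;      // shown in the expandable hover tooltip
  confidence?: number; // 0..1, present only for ML-flagged anomalies
}

// Severity-to-colour mapping for the vertical annotations.
const SEVERITY_COLOR: Record<Severity, string> = {
  critical: "#d32f2f",
  warning: "#f9a825",
  informational: "#1976d2",
};

// Return only markers whose category the user has toggled on and
// which fall inside the currently visible time range.
function visibleMarkers(
  events: EventMarker[],
  enabled: Set<string>,
  rangeStart: number,
  rangeEnd: number,
): EventMarker[] {
  return events.filter(
    (e) =>
      enabled.has(e.category) &&
      e.timestamp >= rangeStart &&
      e.timestamp <= rangeEnd,
  );
}

// Tooltip label; appends confidence only for machine-flagged events.
function tooltipLabel(e: EventMarker): string {
  const base = `${e.severity.toUpperCase()} · ${e.detail}`;
  return e.confidence != null
    ? `${base} (confidence ${(e.confidence * 100).toFixed(0)}%)`
    : base;
}
```

Because `confidence` is optional, manual entries and ML-flagged anomalies share one rendering path — the property the AI-ready note relies on.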
Modular onboarding — per-chart-type walkthroughs with progress tracking
Delphi was replacing workflows analysts had used for years. Even though the new tool was more efficient, the switch itself was a friction point — people don’t abandon familiar tools willingly. The challenge: Delphi had multiple chart types (Scatter Plot, Time Series, Box Plot, Wind Rose), each with its own parameter set and interaction patterns. A single generic walkthrough wouldn’t work because each module had different controls and concepts to learn.
The solution was a modular onboarding system. On first login, users see a welcome modal (“Welcome to Delphi”) with a clear choice: “Skip For Now” or “Yes, Let’s Go!” — a deliberate autonomy-first pattern that respects the user’s agency from the first interaction. If they opt in, the tour begins with the Alarms module, then offers to continue to Scatter Plot, Time Series, and so on. Each module has its own dedicated walkthrough.
Within each module tour, a progress stepper at the top (Parameters → Assets → Scatter Measures → Color Gradient) shows users exactly where they are and how much is left. The walkthrough highlights relevant UI sections with a spotlight border and presents contextual tooltips explaining what each control does and how to use it — including shortcut tips like “select ‘Power Curves’ to automatically populate X and Y axes.” A persistent “Skip Tour” option in the header lets users exit at any point without penalty.
At the end of each module tour, a completion screen gives users two paths: “Continue To Time Series Onboarding” to keep learning, or “Finish” to start working — with a reminder that they can always re-access the tour from the help icon. This modular approach meant experienced users could skip modules they already understood and only tour the ones that were new to them, while new hires could walk through the entire system end to end.
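The per-module tour with stepper progress, skip, and continue-to-next could be captured by a small state machine along these lines — a sketch under assumed names, not the shipped implementation:

```typescript
// Sketch of the modular onboarding state: each module has its own
// ordered steps, progress is tracked per module, and the user can
// skip a tour or continue to the next module's tour.

interface TourModule {
  name: string;    // e.g. "Alarms", "Scatter Plot", "Time Series"
  steps: string[]; // e.g. ["Parameters", "Assets", "Scatter Measures", "Color Gradient"]
}

class Onboarding {
  private moduleIdx = 0;
  private stepIdx = 0;
  readonly completed = new Set<string>();

  constructor(private modules: TourModule[]) {}

  // What the stepper header shows, e.g. step "2 / 4" of "Scatter Plot".
  current(): { module: string; step: string; progress: string } {
    const m = this.modules[this.moduleIdx];
    return {
      module: m.name,
      step: m.steps[this.stepIdx],
      progress: `${this.stepIdx + 1} / ${m.steps.length}`,
    };
  }

  // Advance one step; returns false when the module tour is finished
  // (the completion screen offers "Continue" or "Finish").
  next(): boolean {
    const m = this.modules[this.moduleIdx];
    if (this.stepIdx + 1 < m.steps.length) {
      this.stepIdx++;
      return true;
    }
    this.completed.add(m.name);
    return false;
  }

  // "Continue To <next> Onboarding" on the completion screen.
  continueToNext(): boolean {
    if (this.moduleIdx + 1 >= this.modules.length) return false;
    this.moduleIdx++;
    this.stepIdx = 0;
    return true;
  }

  // "Skip Tour" is always available; tours stay re-accessible
  // from the help icon, so skipping carries no penalty.
  skip(): void {
    this.completed.add(this.modules[this.moduleIdx].name);
  }
}
```

Keeping modules independent in the state is what lets experienced users tour only what is new to them while new hires run the full sequence.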
Dedicated Feedback Area — making user input a first-class feature, not an afterthought
Since Delphi was an ongoing, iteratively shipped product, I needed a mechanism to capture user feedback continuously — not just during scheduled usability tests. Analysts are busy; they won’t send an email or fill out an external survey when something frustrates them. They’ll work around the issue and move on. The feedback channel needed to be built into the tool itself, not bolted on.
Rather than a small floating widget that limits what users can express, I designed a dedicated Feedback Area as a full page in the main navigation — giving it the same status as Alarms, Scatter Plot, or Time Series. The page has two tabs: “Leave Feedback” for submission and “Reported Feedback” for tracking.
The submission form lets users categorise their feedback upfront — Issue or Bug, Enhancement or New Feature Request, or Other Feedback — with a title, a rich-text description field (supporting formatting, lists, and emphasis for detailed bug reports), and a drag-and-drop attachment zone for screenshots, videos, or supporting files (up to 5 files, 8MB each). This structure meant the PO and I received consistently well-formatted, actionable feedback instead of vague one-liners.
The “Reported Feedback” tab was equally important — it showed users a table of all submitted feedback with priority levels (High, Medium, Low), current status (New, In Progress, Recognized, Done, Rejected), reporter name, release target, and a direct “View in ADO” link to the Azure DevOps work item. This transparency closed the loop: users could see that their feedback was acknowledged, prioritised, and tracked — not disappearing into a void. It also reduced duplicate submissions because users could check if an issue was already reported before filing a new one.
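The submission and tracking model described above — categories, statuses, priorities, and the attachment limits (5 files, 8 MB each) — could be sketched as follows. Names and the validation helper are illustrative assumptions, not Delphi’s actual schema:

```typescript
// Sketch of the feedback data model and a pre-submission check
// enforcing the stated attachment limits (up to 5 files, 8 MB each).

type Category = "Issue or Bug" | "Enhancement or New Feature Request" | "Other Feedback";
type Status = "New" | "In Progress" | "Recognized" | "Done" | "Rejected";
type Priority = "High" | "Medium" | "Low";

interface Attachment {
  name: string;
  sizeBytes: number;
}

interface FeedbackItem {
  title: string;
  category: Category;
  description: string;      // rich text in the real form
  attachments: Attachment[];
  priority?: Priority;      // set during triage
  status: Status;
  adoLink?: string;         // "View in ADO" work-item URL
}

const MAX_FILES = 5;
const MAX_FILE_BYTES = 8 * 1024 * 1024; // 8 MB

// Validate a submission before it is turned into a work item;
// returns a list of human-readable errors (empty = valid).
function validateSubmission(item: FeedbackItem): string[] {
  const errors: string[] = [];
  if (!item.title.trim()) errors.push("Title is required.");
  if (!item.description.trim()) errors.push("Description is required.");
  if (item.attachments.length > MAX_FILES)
    errors.push(`At most ${MAX_FILES} attachments are allowed.`);
  for (const a of item.attachments) {
    if (a.sizeBytes > MAX_FILE_BYTES)
      errors.push(`${a.name} exceeds the 8 MB limit.`);
  }
  return errors;
}
```

Forcing a category and title at submission time is what turned free-form complaints into the consistently structured items the PO could plan against.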
Impact: The Feedback Area became the primary channel for analyst input, replacing ad-hoc emails and Slack messages. The PO cited it as one of the features that most improved sprint planning quality — because feedback arrived structured, categorised, and with screenshots attached.
The complete walkthrough sequence
Four stages of the modular onboarding — from the initial welcome prompt through module-specific guided tours to completion.
Onboarding flow · Welcome → Module-specific tour → Guided tooltips with progress → Completion with continuation option
Submission and tracking — both sides of the loop
The Feedback Area lives in the main navigation and has two tabs: a structured submission form for reporting issues and requesting features, and a transparent tracker showing all reported feedback with priority, status, and ADO integration.
Feedback Area · Structured submission form (left) and transparent feedback tracker with ADO integration (right)
Core interface screens · Progressive filter panel, expanded chart with tooltip, onboarding mid-tour, feedback submission
What shipped and what it changed
Delphi shipped in phases and is now actively used by the performance analysis team at RWE for daily ad-hoc turbine investigation. The core charting module entered active use in Q3 2025, with onboarding and feedback features following in subsequent sprints. The tool consolidated what previously required 3–4 disconnected tools into a single interface — and the visual, interactive approach to data analysis fundamentally changed how analysts work.
4 chart types designed — Scatter Plot, Time Series, Box Plot, Wind Rose — each with dedicated filter sets and interaction patterns
3 tools consolidated — replaced the fragmented PI Vision / SCADA / Excel workflow with a single platform
Minutes, not hours, to diagnose a turbine performance issue — visual analysis replaced hours of manual data wrangling across disconnected tools
Shipped and actively used by performance analysts and engineers across the team for daily investigation workflows
What I’d do differently
The progressive filter system works well now, but the first iteration was too conservative — I hid too many parameters behind the “Advanced” toggle, and analysts felt they had to click twice to get to filters they used daily. The lesson: progressive disclosure only works if the “default visible” set genuinely covers 80% of use cases. I had to revise what counted as “primary” filters twice after watching real usage.
The other insight was about chart interaction density. In early prototypes, I designed hover tooltips that showed every data point on the chart. During usability testing, analysts told me the tooltips were too noisy at high zoom levels (months of data at 10-minute intervals means thousands of data points). I switched to a “snap to nearest point” model with an intelligent tooltip that only appeared at meaningful intervals. That small interaction change made the charts genuinely usable for long time ranges.
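The “snap to nearest point” behaviour boils down to a nearest-neighbour lookup on the time-sorted series, which stays fast even across months of 10-minute samples. A minimal sketch (the function name and types are hypothetical):

```typescript
// Sketch of snap-to-nearest tooltips: instead of firing a tooltip
// for every point under the cursor, binary-search the time-sorted
// series for the single sample closest to the cursor's timestamp.

interface Sample {
  t: number;     // timestamp (ms epoch); series sorted ascending by t
  value: number; // e.g. power output or wind speed
}

// Return the index of the sample nearest to cursorT, or -1 if empty.
function snapToNearest(series: Sample[], cursorT: number): number {
  if (series.length === 0) return -1;
  let lo = 0;
  let hi = series.length - 1;
  // Binary search for the first sample with t >= cursorT.
  while (lo < hi) {
    const mid = (lo + hi) >> 1;
    if (series[mid].t < cursorT) lo = mid + 1;
    else hi = mid;
  }
  // The left neighbour may be closer; compare the two candidates.
  if (lo > 0 && cursorT - series[lo - 1].t <= series[lo].t - cursorT) {
    return lo - 1;
  }
  return lo;
}
```

Showing one tooltip for the snapped sample, rather than one per point under the cursor, is what removed the noise at high zoom levels.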
Working on a 10+ month, ongoing project also taught me that the design is never “done” — features I considered finished in month 3 needed redesign by month 8 because the team’s understanding of analyst workflows deepened. Designing for enterprise tools means building in the expectation that you’ll iterate indefinitely, and structuring your Figma files and component libraries to support that.
Next time, I’d invest earlier in a formal design token system shared between Figma and the dev team. As Delphi grew across 4 chart types, maintaining consistency in spacing, color, and interaction patterns became a coordination challenge that a shared token layer would have largely eliminated. I’d also push for embedded analytics earlier — tracking which filters analysts actually used would have shortened the feedback loop on progressive disclosure decisions.
Delphi’s interaction model was designed with intelligent automation in mind. The event overlay system can carry ML-generated confidence scores, the filter panel accommodates AI-suggested configurations, and the split-panel “show your work” layout ensures analysts can verify any machine-surfaced insight before acting on it — a trust calibration pattern I built in from day one.
Filter defaults revised twice — First iteration hid too many parameters behind “Advanced”; revised the primary set to cover 80% of daily use cases after observing real analysis sessions
Tooltip model redesigned — Switched from showing every data point to a “snap to nearest” model after usability testing revealed tooltip noise at high zoom levels
Onboarding added mid-project — Not in original scope; emerged from watching analysts struggle with the tool switch during early rollout phases