Energy · B2B Internal Platform · Sole UI/UX Designer · 8 min read

Replacing hours of data wrangling with minutes of visual analysis — building Delphi at RWE

Replaced fragmented PI Vision / SCADA workflows with a purpose-built web application — giving analysts and engineers interactive charts, event overlays, and a progressive filter system to diagnose turbine issues in minutes instead of hours.

Role Sole UI/UX Designer
Timeline 10+ months (ongoing)
Team 5+ cross-functional (PO, devs, data engineers)
Tools Figma, Jira, Mural
4

Chart types designed — Scatter, Time Series, Box Plot, Wind Rose

3 → 1

Tools consolidated into one platform

Hrs → Min

Time to diagnose a turbine performance issue

Adopted

Shipped & used by analysts daily

Delphi main interface — Parameters panel with filters on the left, interactive time-series chart results on the right

Delphi · Wind turbine ad-hoc performance analysis platform

Result

Shipped and adopted across the performance analysis team. Designed a full design system covering 4 chart types, progressive filters, event overlays, modular onboarding, and a built-in feedback loop. Reduced the time to diagnose turbine performance issues from hours of manual data wrangling to minutes of visual analysis.

TL;DR — 30-second summary

RWE’s performance analysts investigated turbine issues across 3+ disconnected tools (PI Vision, SCADA dashboards, Excel) with no purpose-built analysis workflow. As the sole UI/UX designer, I designed Delphi — a dedicated web application with interactive time-series charts, event overlays, and a progressive filter system that lets analysts diagnose turbine performance issues in minutes instead of hours. Designed 4 chart types (Scatter, Time Series, Box Plot, Wind Rose), a modular onboarding system, and a built-in feedback loop over 10+ months — consolidating 3 tools into one platform now used daily by the analysis team.

01 · Context

The problem: finding a needle in a data haystack

When a wind turbine underperforms — a pitch fault, a grid dropout, an unexpected power curve deviation — every hour of undiagnosed downtime costs the operator tens of thousands of euros. Performance analysts at RWE investigate these events daily across thousands of turbines. The question is always the same: what happened, when, and why?

The existing workflow relied on PI Vision and SCADA dashboards — powerful but generic tools designed for real-time monitoring, not for ad-hoc investigation. Analysts would pull 10-minute interval data into Excel, manually cross-reference event logs, build their own charts, and compare turbines side by side — all in disconnected tools. A single investigation could take hours of data wrangling before any actual analysis began.

I was brought in as the sole UI/UX designer to help build Delphi — a purpose-built web application that would consolidate event data and historical performance metrics into a single interface with interactive, visual charting. The goal: let analysts go from question to answer without leaving the tool.

Before Existing Issue Management tool
Before Delphi — Issue Management interface with all filter parameters exposed at once, overwhelming for ad-hoc analysis
All filters exposed at once — functional but overwhelming for ad-hoc investigation
After Delphi — purpose-built analysis platform
Delphi main interface — Parameters panel with progressive filters on the left, interactive time-series chart on the right
Progressive filters + interactive charts — analysts go from question to answer in one tool
What I owned
End-to-end UX Research, information architecture, interaction design, and usability testing — from blank canvas to shipped product
Design system Component library, chart patterns, filter UI, and onboarding framework — all built and maintained in Figma
Stakeholder alignment Sprint demos, PBI refinement with PO, developer pairing on interaction details, and feedback loops with the analyst team
02 · Discovery

Understanding how analysts actually investigate

I started with requirement gathering sessions with the Product Owner, translating business needs into well-defined PBIs. In parallel, I conducted user interviews with performance analysts and site engineers to understand their actual investigation workflows — not what documentation said they did, but what they really did day to day.

Three patterns surfaced clearly across every conversation:

“Analysts were spending hours preparing data — and minutes actually thinking.”

Research approach
Requirement gathering Working sessions with PO to translate business needs into prioritised PBIs with clear acceptance criteria
User interviews Performance analysts and site engineers; focused on actual investigation workflows, not documented processes
Iterative usability testing Prototype walkthroughs with analysts; tested filter behaviour, chart interactions, and information density
3–4

disconnected tools required for a single investigation — PI Vision, SCADA exports, Excel, and sometimes Python scripts

Iterative

exploration, not upfront specification — analysts narrowed filters progressively, never knowing the full query at the start

Hours

spent preparing data before actual analysis could begin — formatting exports, cross-referencing event logs, building charts manually

03 · Process

How I structured the work

1

Requirement gathering & PBI refinement

Worked closely with the PO to translate business objectives into prioritised product backlog items. Defined acceptance criteria for each feature and identified dependencies between the charting engine, event system, and filter infrastructure — ensuring design and development stayed aligned sprint over sprint.

2

User interviews & workflow mapping

Interviewed performance analysts and engineers to map their actual investigation process. Documented the typical path from alert → data pull → chart creation → comparison → root-cause hypothesis. This revealed that the investigation flow was non-linear — users backtrack, re-filter, and compare constantly. The UI needed to support that fluidity.

3

Low-fidelity wireframes & concept exploration

Explored 3 layout approaches for the core analysis view: a single-panel chart with overlay controls, a split-panel chart + data table view, and a modular dashboard with draggable chart tiles. Tested with users — the split-panel approach won clearly because analysts needed raw data alongside the visual for validation.

3 layout approaches tested
Rejected Single-panel chart with overlay controls — analysts needed raw data alongside visuals for validation
Rejected Modular dashboard with draggable tiles — too much flexibility created cognitive overload for focused investigation
Chosen Split-panel: chart + data table — analysts validated specific data points directly from the chart; bidirectional linking was the key interaction pattern
4

High-fidelity design & usability testing

Built high-fidelity screens in Figma covering 4 chart types (Scatter, Time Series, Box Plot, Wind Rose), event overlays, progressive filter panels, turbine comparison, data tables, and export flows. Ran usability tests on interactive prototypes with analysts — iterated on filter behaviour, chart interaction patterns, and information density based on direct feedback.

5

Iterative delivery & ongoing refinement

Delphi shipped in phases over 10+ months. Each sprint included design QA, developer pairing on interaction details, and feedback loops with the analyst team. The ongoing nature of the project meant I continuously refined features post-release based on real usage patterns — not just initial assumptions.

Low-fidelity wireframes exploring navigation structure, filter layouts, and chart view configurations
Wireframe exploration · Testing 3 layout approaches for the core analysis view
04 · Key design decisions

Four decisions that shaped the product

A

Progressive filter system — show less, let users dig deeper

This was the hardest design problem on the project. Analysts needed dozens of filter parameters — turbine ID, site, date range, event type, severity, operational status, wind speed range, power curve deviation, and more. Showing all of them at once was overwhelming. Hiding them behind a single search bar meant too many clicks.

The solution was a progressive disclosure model: the filter panel opens with the 4 most-used parameters (site, turbine, date range, event type) visible by default. Below that, an “Advanced filters” section expands to reveal the remaining parameters grouped by category — operational, environmental, and event-specific. Each filter shows an active-state count so analysts can see at a glance how many constraints are applied.

Critically, I added a “recent filters” feature — since analysts often re-run similar investigations, the system remembered their last 5 filter combinations. This came directly from a user interview where an analyst said he kept a sticky note with his common filter setups.
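The filter state described above can be sketched in a few lines — a minimal TypeScript model, assuming hypothetical names (`FilterPanelState`, `rememberCombination`) rather than Delphi's actual implementation:

```typescript
// Illustrative model of the progressive-disclosure filter panel.
// Names and shapes are hypothetical, not the production schema.

type FilterId = string;

interface FilterPanelState {
  primary: FilterId[];              // always visible: site, turbine, date range, event type
  advancedOpen: boolean;            // is the "Advanced filters" section expanded?
  active: Map<FilterId, unknown>;   // currently applied constraints
  recent: FilterId[][];             // last 5 filter combinations, newest first
}

// Badge count shown on the panel header: how many constraints are applied.
function activeCount(state: FilterPanelState): number {
  return state.active.size;
}

// Remember a combination when the analyst runs a query; dedupe and keep the last 5.
function rememberCombination(state: FilterPanelState, combo: FilterId[]): void {
  state.recent = [
    combo,
    ...state.recent.filter(c => c.join() !== combo.join()),
  ].slice(0, 5);
}
```

The `.slice(0, 5)` cap mirrors the “last 5 filter combinations” behaviour that replaced the analyst's sticky note.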

AI-ready note: The “recent filters” feature is the manual version of what will become AI-recommended filter configurations. I designed the progressive disclosure system to accommodate a “Suggested” section above the primary filters — where the system could recommend parameter combinations based on the analyst’s context.

Alternatives considered
Chosen Progressive disclosure — 4 primary filters visible, “Advanced” section for the rest, grouped by category
Rejected Show all filters at once — overwhelming; analysts couldn’t parse constraints at a glance
Rejected Single search bar — too many clicks; analysts explore iteratively, not with exact queries
B

Event overlays on time-series charts — making incidents visible in context

The core value of Delphi was the interactive time-series chart plotting 10-minute interval data — power output, wind speed, rotor RPM, pitch angle — over user-defined time ranges. But raw performance data alone doesn’t tell the story. Analysts needed to see when events happened in the context of the performance curve.

I designed event markers as vertical annotations on the chart timeline — color-coded by severity (critical, warning, informational) with expandable tooltips showing event details on hover. Users could toggle event categories on and off so the chart wouldn’t become cluttered during dense-event periods. For multi-turbine comparisons, events from different turbines used distinct marker styles to remain distinguishable.
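The category toggle described above reduces to a simple filter over the event stream — a sketch with illustrative types (`TurbineEvent`, `visibleEvents` are assumptions, not Delphi's real schema):

```typescript
// Illustrative event-overlay toggle: only events whose category is enabled
// are drawn as vertical markers, keeping dense periods readable.

type Severity = "critical" | "warning" | "informational";

interface TurbineEvent {
  turbineId: string;
  timestamp: number;   // ms since epoch; the marker's x-position on the timeline
  severity: Severity;  // drives the marker's color coding
  category: string;    // e.g. "pitch", "grid" — toggleable per category
}

function visibleEvents(events: TurbineEvent[], enabled: Set<string>): TurbineEvent[] {
  return events.filter(e => enabled.has(e.category));
}
```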

AI-ready note: The event overlay system was built to display machine-flagged anomalies, not just manual entries. The color-coded severity system and expandable tooltips handle ML-generated confidence scores — so when anomaly detection ships, the UI supports it without redesign.

C

Modular onboarding — per-chart-type walkthroughs with progress tracking

Delphi was replacing workflows analysts had used for years. Even though the new tool was more efficient, the switch itself was a friction point — people don’t abandon familiar tools willingly. The challenge: Delphi had multiple chart types (Scatter Plot, Time Series, Box Plot, Wind Rose), each with its own parameter set and interaction patterns. A single generic walkthrough wouldn’t work because each module had different controls and concepts to learn.

The solution was a modular onboarding system. On first login, users see a welcome modal (“Welcome to Delphi”) with a clear choice: “Skip For Now” or “Yes, Let’s Go!” — a deliberate autonomy-first pattern that respects the user’s agency from the first interaction. If they opt in, the tour begins with the Alarms module, then offers to continue to Scatter Plot, Time Series, and so on. Each module has its own dedicated walkthrough.

Within each module tour, a progress stepper at the top (Parameters → Assets → Scatter Measures → Color Gradient) shows users exactly where they are and how much is left. The walkthrough highlights relevant UI sections with a spotlight border and presents contextual tooltips explaining what each control does and how to use it — including shortcut tips like “select ‘Power Curves’ to automatically populate X and Y axes.” A persistent “Skip Tour” option in the header lets users exit at any point without penalty.

At the end of each module tour, a completion screen gives users two paths: “Continue To Time Series Onboarding” to keep learning, or “Finish” to start working — with a reminder that they can always re-access the tour from the help icon. This modular approach meant experienced users could skip modules they already understood and only tour the ones that were new to them, while new hires could walk through the entire system end to end.
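The module-to-module sequencing above amounts to a small ordered list — a sketch using the module names from the case study, with a hypothetical `nextModuleTour` helper:

```typescript
// Illustrative tour sequencing: the completion screen offers the next module
// in order, or "Finish" when the sequence is exhausted. API is hypothetical.

const MODULES = ["Alarms", "Scatter Plot", "Time Series", "Box Plot", "Wind Rose"] as const;
type Module = (typeof MODULES)[number];

// Given the module tour just completed, return the next one to offer, or null at the end.
function nextModuleTour(completed: Module): Module | null {
  const i = MODULES.indexOf(completed);
  return i >= 0 && i + 1 < MODULES.length ? MODULES[i + 1] : null;
}
```

Keeping the sequence data-driven like this is what lets experienced users skip individual modules while new hires walk the whole list end to end.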

D

Dedicated Feedback Area — making user input a first-class feature, not an afterthought

Since Delphi was an ongoing, iteratively shipped product, I needed a mechanism to capture user feedback continuously — not just during scheduled usability tests. Analysts are busy; they won’t send an email or fill out an external survey when something frustrates them. They’ll work around the issue and move on. The feedback channel needed to be built into the tool itself, not bolted on.

Rather than a small floating widget that limits what users can express, I designed a dedicated Feedback Area as a full page in the main navigation — giving it the same status as Alarms, Scatter Plot, or Time Series. The page has two tabs: “Leave Feedback” for submission and “Reported Feedback” for tracking.

The submission form lets users categorise their feedback upfront — Issue or Bug, Enhancement or New Feature Request, or Other Feedback — with a title, a rich-text description field (supporting formatting, lists, and emphasis for detailed bug reports), and a drag-and-drop attachment zone for screenshots, videos, or supporting files (up to 5 files, 8MB each). This structure meant the PO and I received consistently well-formatted, actionable feedback instead of vague one-liners.
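The attachment limits stated above (up to 5 files, 8MB each) translate directly into client-side validation — a minimal sketch, where `validateAttachments` and the `Attachment` shape are assumptions for illustration:

```typescript
// Illustrative client-side validation for the feedback form's attachment zone,
// matching the stated limits: max 5 files, 8 MB each.

interface Attachment { name: string; sizeBytes: number; }

const MAX_FILES = 5;
const MAX_SIZE_BYTES = 8 * 1024 * 1024; // 8 MB

function validateAttachments(files: Attachment[]): { ok: boolean; error?: string } {
  if (files.length > MAX_FILES) {
    return { ok: false, error: `At most ${MAX_FILES} files allowed` };
  }
  const tooLarge = files.find(f => f.sizeBytes > MAX_SIZE_BYTES);
  if (tooLarge) {
    return { ok: false, error: `${tooLarge.name} exceeds 8 MB` };
  }
  return { ok: true };
}
```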

The “Reported Feedback” tab was equally important — it showed users a table of all submitted feedback with priority levels (High, Medium, Low), current status (New, In Progress, Recognized, Done, Rejected), reporter name, release target, and a direct “View in ADO” link to the Azure DevOps work item. This transparency closed the loop: users could see that their feedback was acknowledged, prioritised, and tracked — not disappearing into a void. It also reduced duplicate submissions because users could check if an issue was already reported before filing a new one.

Impact: The Feedback Area became the primary channel for analyst input, replacing ad-hoc emails and Slack messages. The PO cited it as one of the features that most improved sprint planning quality — because feedback arrived structured, categorised, and with screenshots attached.

Onboarding flow

The complete walkthrough sequence

Four stages of the modular onboarding — from the initial welcome prompt through module-specific guided tours to completion.

Step 1 — Welcome to Delphi modal with Skip For Now and Yes Let's Go options
1 · Welcome prompt
Step 2 — Welcome to Scatter Plot Onboarding module-specific entry
2 · Module entry
Step 3 — Mid-tour with highlighted section, tooltip, and progress stepper
3 · Guided tooltip tour
Step 4 — Tour completion with option to continue to next module or finish
4 · Completion + next module

Onboarding flow · Welcome → Module-specific tour → Guided tooltips with progress → Completion with continuation option

Feedback Area

Submission and tracking — both sides of the loop

The Feedback Area lives in the main navigation and has two tabs: a structured submission form for reporting issues and requesting features, and a transparent tracker showing all reported feedback with priority, status, and ADO integration.

Leave Feedback tab — categorised form with rich text editor and file attachments
Leave Feedback · Submission form
Reported Feedback tab — tracker table with priority, status, reporter, and ADO link
Reported Feedback · Status tracker

Feedback Area · Structured submission form (left) and transparent feedback tracker with ADO integration (right)

Key screens
Progressive filter panel — Scatter Plot view with grouped parameters: Assets, Scatter Measures, and Other Options
Progressive filter panel · Scatter Plot parameters
Expanded chart view with interactive tooltip showing data point details on hover
Expanded chart · Interactive tooltip on hover
Onboarding walkthrough — highlighted UI section with tooltip explanation and progress stepper
Onboarding mid-tour · Guided tooltip with progress
Feedback Area — Leave Feedback tab with type selector, title, rich text description, and file attachments
Feedback submission · Categorised form with attachments

Core interface screens · Progressive filter panel, expanded chart with tooltip, onboarding mid-tour, feedback submission

05 · Outcome

What shipped and what it changed

Delphi shipped in phases and is now actively used by the performance analysis team at RWE for daily ad-hoc turbine investigation. The core charting module entered active use in Q3 2025, with onboarding and feedback features following in subsequent sprints. The tool consolidated what previously required 3–4 disconnected tools into a single interface — and the visual, interactive approach to data analysis fundamentally changed how analysts work.

4

Chart types designed — Scatter Plot, Time Series, Box Plot, Wind Rose — each with dedicated filter sets and interaction patterns

3 → 1

Tools consolidated — replaced fragmented PI Vision / SCADA / Excel workflow with a single platform

Hrs → Min

Time to diagnose a turbine performance issue — visual analysis replaced hours of manual data wrangling across disconnected tools

Adopted

Shipped and actively used by performance analysts and engineers across the team for daily investigation workflows

06 · Learnings

What I’d do differently

The progressive filter system works well now, but the first iteration was too conservative — I hid too many parameters behind the “Advanced” toggle, and analysts felt they had to click twice to get to filters they used daily. The lesson: progressive disclosure only works if the “default visible” set genuinely covers 80% of use cases. I had to revise what counted as “primary” filters twice after watching real usage.

The other insight was about chart interaction density. In early prototypes, I designed hover tooltips that showed every data point on the chart. During usability testing, analysts told me the tooltips were too noisy at high zoom levels (months of data at 10-minute intervals means thousands of data points). I switched to a “snap to nearest point” model with an intelligent tooltip that only appeared at meaningful intervals. That small interaction change made the charts genuinely usable for long time ranges.
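The snap-to-nearest behaviour above boils down to a binary search over the sorted 10-minute samples — a sketch under assumed types (`Sample`, `snapToNearest` are illustrative names, not the production code):

```typescript
// Illustrative "snap to nearest point" lookup: given samples sorted by time
// and the cursor's time position, return the closest sample for the tooltip.

interface Sample { t: number; value: number; }  // t in ms, ascending

function snapToNearest(samples: Sample[], cursorT: number): Sample {
  // Binary search for the first sample at or after the cursor.
  let lo = 0, hi = samples.length - 1;
  while (lo < hi) {
    const mid = (lo + hi) >> 1;
    if (samples[mid].t < cursorT) lo = mid + 1; else hi = mid;
  }
  // Compare with the left neighbour and return whichever is closer in time.
  if (lo > 0 && cursorT - samples[lo - 1].t <= samples[lo].t - cursorT) {
    return samples[lo - 1];
  }
  return samples[lo];
}
```

The O(log n) lookup is what keeps hover responsive over months of 10-minute data, where a naive scan of thousands of points per mouse-move would not.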

Working on a 10+ month, ongoing project also taught me that the design is never “done” — features I considered finished in month 3 needed redesign by month 8 because the team’s understanding of analyst workflows deepened. Designing for enterprise tools means building in the expectation that you’ll iterate indefinitely, and structuring your Figma files and component libraries to support that.

What I’d do differently next time: I’d invest earlier in a formal design token system shared between Figma and the dev team. As Delphi grew across 4 chart types, maintaining consistency in spacing, color, and interaction patterns became a coordination challenge that a shared token layer would have largely eliminated. I’d also push for embedded analytics earlier — tracking which filters analysts actually used would have shortened the feedback loop on progressive disclosure decisions.

Designing for AI — where this product goes next

Delphi’s interaction model was designed with intelligent automation in mind. The event overlay system handles ML-generated confidence scores (Decision B), the filter panel accommodates AI-suggested configurations (Decision A), and the split-panel “show your work” layout ensures analysts can verify any machine-surfaced insight before acting on it — a trust calibration pattern I built in from day one.

Specific iterations that shaped the product

Filter defaults revised twice — First iteration hid too many parameters behind “Advanced”; revised the primary set to cover 80% of daily use cases after observing real analysis sessions

Tooltip model redesigned — Switched from showing every data point to a “snap to nearest” model after usability testing revealed tooltip noise at high zoom levels

Onboarding added mid-project — Not in original scope; emerged from watching analysts struggle with the tool switch during early rollout phases

Interested in working together?