Energy · B2B Internal Platform · UI/UX Designer · 12 min read

7 steps to 3 — redesigning how offshore wind technicians diagnose turbine faults

Offshore technicians were diagnosing turbine faults using SAP screens, PDF manuals, and phone calls — with zero cross-site knowledge sharing. I shadowed control rooms, mapped the 7-step as-is journey, and designed a platform that collapses it to 3 steps. This is the story of the research, trade-offs, and scope fights behind the product.

Role UI/UX Designer
Timeline 3 months
Team 11 stakeholders
Tools Figma, Mural, SAP
7 → 3 · Diagnosis steps reduced
2 · Modules across desktop + iPad contexts
3 · Review cycles to alignment
12 wk · Research to handoff

RWE Troubleshooting Tool — Key screens

Fault search results & cross-site root cause comparison — the core screen of Module 1

Result

Designed a unified troubleshooting platform that collapses a 7-step, 4-tool diagnosis workflow into 3 steps within a single tool. Delivered a full design system across 2 modules (desktop + iPad), aligned 11 stakeholders in 3 review cycles, and handed off a production-ready prototype accepted by engineering.

TL;DR — 30-second summary

RWE’s offshore wind technicians diagnosed turbine faults across 7 steps and 4 disconnected systems (SAP, PDFs, phone calls, tribal knowledge). As the sole UI/UX designer, I designed a unified troubleshooting platform that consolidates this into 3 steps within a single tool — with cross-site fault search, consolidated root cause comparison, and an integrated iPad documentation flow for offshore technicians. Delivered a full design system covering 2 device types and 4 fault scenarios across 2 modules, aligned 11+ stakeholders in 3 review cycles, and handed off a production-ready prototype accepted by engineering.

01 · Context

A global wind fleet with no dedicated troubleshooting tool

When a wind turbine faults offshore, every hour of downtime costs the operator tens of thousands of euros. RWE’s maintenance teams need to diagnose, resolve, and document these faults — often under time pressure and in physically demanding conditions 80km from shore.

The problem: there was no dedicated troubleshooting tool. Teams relied on SAP transaction codes, scattered PDF manuals, and the tribal knowledge of experienced technicians. Information was siloed by site, hard to find, and impossible to standardize across the fleet.

I joined as the sole UI/UX designer on the RWE side, working alongside BCG consultants and internal stakeholders to design a new digital platform from scratch — owning research, IA, prototyping, and handoff across both modules.

Before
📊 SCADA
🗄️ SAP
📄 PDF Manuals
📞 Phone Calls

4 disconnected systems, zero cross-site search, and tribal knowledge as the primary diagnostic resource.

My contribution

What I owned

As the only designer on the RWE side, I was responsible for the full design workstream — from initial research through to production-ready handoff. Here’s the breakdown.

End-to-end design

Research, journey mapping, IA definition, wireframing, prototyping, and interaction specs — continuous from kickoff to handoff.

Design direction for BCG designer

Set design principles, reviewed output against quality standards, and maintained consistency across 2 modules and 15+ screens.

Stakeholder alignment

Facilitated design reviews with 11+ stakeholders across Business Partners, Product Owners, Team Leads, and BCG consultants. Translated business requirements into design decisions.

Research → strategy → execution

Ran the full research arc — ~10 participant interviews, contextual shadowing sessions, journey mapping, and problem framing — then translated findings into a product vision that the entire team could rally around.

02 · Users

Two distinct roles, two different contexts

Through ~10 participant interviews and contextual shadowing sessions across onshore and offshore teams, I identified two primary user profiles with fundamentally different work environments, device contexts, and information needs.

🖥️

Maintenance Coordinator /
Lead Technician

Onshore · Control Room · Desktop
Context Works from the onshore control room, monitoring turbine fleets via SCADA systems. Coordinates remote teams and manages fault resolution workflows.
Primary tasks Diagnose faults remotely, search for known root causes across sites, draft and assign work orders to offshore technicians.
Pain points Information scattered across SAP, PDFs, and colleagues’ memory. No way to search previous fault resolutions from other wind farms.
Design implication

Keyboard-first search, data-dense tables, and an assumed multi-monitor setup.

Desktop · SCADA / SAP · Control Room
🌊

Technician

Offshore · Turbine · iPad
Context Works directly on turbines offshore, often in physically demanding and time-constrained conditions. Needs clear, glanceable instructions on a tablet.
Primary tasks Execute repairs based on work orders, document completed work and findings, capture photos and notes for knowledge transfer.
Pain points Relies heavily on personal experience. No structured way to access troubleshooting guides on-site. Documentation is an afterthought done post-trip.
Design implication

Touch-first, gloved hands (48px+ tap targets), offline-first, camera-priority workflow.

iPad · Offshore · Hands-on
03 · Discovery

Mapping the as-is journey revealed deep structural gaps

I conducted contextual shadowing sessions in control rooms, observing maintenance coordinators handle real fault scenarios in real time. I supplemented this with semi-structured interviews across both user groups, focusing on their actual behaviors — not just the official process.

“The most valuable research came from watching, not asking.”

In interviews, participants described the standard process. In shadowing, I saw the workarounds — the sticky notes on monitors, the WhatsApp groups for fault advice, the informal knowledge networks that actually kept operations running.

Research approach
Contextual shadowing Observed real fault resolution workflows in control rooms across sites
Semi-structured interviews ~10 participants across coordinators, leads, and offshore technicians
Journey mapping workshops Collaborative Mural sessions to validate pain points with cross-functional team
Zero · No dedicated diagnostic tool existed across the entire fleet — teams relied entirely on SAP, PDFs, and personal experience
4+ tools · Per single fault — SCADA, SAP, email, phone, PDF manuals all touched during one resolution
100% tribal · Every fault resolution lived in someone's head, not a system — no structured capture of institutional knowledge

Before As-Is Troubleshooting Journey
As-Is troubleshooting user journey map
As-is journey — SCADA alarm through post-trip documentation.
04 · Problem framing

Four fault scenarios, two product modules

From the research, I synthesized four distinct fault scenarios that technicians encounter. Each requires a different lookup and resolution path — this became the backbone of the product’s information architecture.

Why this framing matters: instead of building a generic tool, we scoped the product around real diagnostic patterns. This let us prioritize the highest-impact path first (known fault at other site) and design progressive disclosure for complex scenarios.

Known fault — own site

The fault has occurred before at this wind farm. The resolution is documented locally — the challenge is finding it quickly.

Known fault — other site Highest impact

Another RWE wind farm has solved this exact fault. But knowledge doesn’t travel between sites — it sits in someone’s head or a local PDF.

Known fault — multiple sites

The fault has been resolved at multiple sites. Consolidating root causes and comparing approaches would save significant diagnosis time.

Unknown fault

A completely new fault with no prior resolution. Requires on-site investigation, hypothesis testing, and thorough documentation for future reference.

Product scope

Module 1: Troubleshooting

Fault search, root cause lookup across sites, consolidated cause comparison, and work order drafting. Primarily used by the Maintenance Coordinator on desktop.

Module 2: Documentation

Structured capture of repair outcomes, photos, notes, and knowledge transfer. Primarily used by Technicians on iPad while offshore or immediately post-trip.

Collaboration

Navigating a complex stakeholder landscape

A cross-functional environment with competing priorities — BCG consultants focused on operational efficiency, Product Owners concerned with SAP integration timelines, Business Partners representing site-specific needs, and Team Leads managing offshore logistics.

3 BCG Consultants Strategy & operations
2 Product Owners Backlog & priorities
2 Team Leads Offshore operations
3 Business Partners Site-specific requirements
1 BCG Designer Design execution

How I managed this: Weekly design review rituals with a rotating set of stakeholders, design decisions framed as hypotheses tied to business metrics, and a shared Mural board as the single source of truth. When priorities conflicted, I grounded the argument in data from shadowing sessions.

Process artifacts I created
Shared Mural board Single source of truth for all design decisions. Journey maps, IA proposals, wireframe feedback, and scope decisions — all time-stamped and linked to review outcomes.
Design principles doc 7 formalized principles to steer the BCG designer. Examples: “Search is always the entry point,” “Status visible without interaction,” “One primary action per screen.”
Annotated handoff specs 15+ screens with interaction annotations, component documentation, edge case specs, and responsive behavior notes. Delivered via Figma with linked Confluence pages.
05 · Process

How I structured the work

1

Research & shadowing Week 1–3

Conducted interviews and contextual shadowing sessions with ~10 participants across maintenance coordinators, lead technicians, and offshore technicians. Observed real troubleshooting workflows in control rooms. Synthesized findings into a research report and presented to the full stakeholder group.

2

As-is journey mapping Week 3–4

Mapped the end-to-end troubleshooting journey across both user roles, covering every location transition — SCADA alarm, control room diagnosis, offshore travel, turbine-level work, and post-trip documentation. Identified critical pain points and information gaps at each stage. Validated with stakeholders in a collaborative Mural workshop.

3

Vision journey & IA Week 4–6

Defined the target-state journey — where the new tool would intervene. Designed the IA around two modules (Troubleshooting + Documentation) and four fault scenarios. Aligned BCG consultants, Team Leads, and Product Owners on scope. This was the hardest alignment phase — I had to defend the 2-module split against pushback for a single monolithic tool.

4

Wireframing & prototyping Week 6–10

Designed key screens in Figma: dashboard with fault search, fault results, consolidated root causes, work order drafts, and the technician documentation flow (iPad). Iterated rapidly through 3 rounds of stakeholder feedback, refining interactions and information hierarchy at each pass.

5

QA & handoff Week 10–12

Quality reviews across 15+ screens for consistency, interaction patterns, and edge cases. Directed the BCG designer’s remaining screens. Final reviews with Business Partners and Product Owners. Delivered a production-ready prototype with interaction specs and annotated handoff documentation.

06 · Vision

The redesigned journey — one platform, two modules

The vision journey shows where the new tool plugs in. Instead of context-switching between SAP, PDFs, phone calls, and personal notes, both user roles now follow a streamlined digital workflow within a single platform — with each module optimized for its specific user context.

After Vision Troubleshooting Journey
Vision troubleshooting user journey map with new tool
Vision journey — the new tool intervenes at control room and offshore.
07 · Key screens

The solution: a structured troubleshooting platform

The final design centers around a searchable fault database, consolidated root cause views, and integrated work order creation — all in one tool. Here are the key screens from Module 1 (Troubleshooting) and Module 2 (Documentation).

The 3-screen flow
Step 1 · Dashboard & fault search
Step 2 · Draft work order
Step 3 · Document on iPad

Coordinator searches a fault code → reviews cross-site root causes → drafts a work order → technician receives it on iPad offshore and documents the repair. The documentation feeds back into the search database. Scroll down for annotated deep-dives of each screen.

Module 1 Desktop — used by Maintenance Coordinators in control room
Dashboard with fault search
1
Fault Search modal Fault code entry as the primary action. Coordinator arrives with a SCADA alarm code and goes directly to search — zero navigation clicks.
2
Recent Alarms panel Active & solved faults across the fleet with color-coded status. Gives coordinators situational awareness alongside search.
3
Site network overview Visual map of turbine assets and connections. Designed for spatial orientation — coordinators think in terms of physical topology, not lists.
Module 1 Desktop — coordinator drafts tasks for the offshore technician
Work order draft
1
Auto-populated from root cause Fault details, selected root cause, and recommended actions pre-fill from Module 1’s search. The coordinator edits, not writes from scratch.
2
Structured task fields Instead of free-text, each action is a discrete row. This ensures the technician gets clear, step-by-step instructions on their iPad.
3
Direct flow to Module 2 Submitted work orders automatically appear in the technician’s documentation queue. This is the handoff bridge between the two modules.
Module 2 iPad — used by Technicians offshore with gloved hands
Documentation flow for technicians on iPad
1
Photo-first capture Large camera button at the top. Technicians photograph the fault first, then add structured metadata. Designed for gloved hands in harsh conditions.
2
Structured fields, not free-text Repair outcome, root cause confirmation, and parts used are dropdowns/selects, not text entry. Consistent data feeds back into Module 1’s knowledge base.
3
Offline sync indicator Clear status showing whether data has synced to the server. Designed for unreliable offshore connectivity — the technician knows their work is saved locally even without network.
Design system & scalability

Building patterns, not just screens

This project wasn’t a one-off prototype. RWE planned to extend the tool to additional turbine types, new wind farm sites, and potentially onshore wind operations. Every design decision needed to account for that future scale — not just the immediate 3-month scope.

I built a modular component library in Figma that served both modules and both device contexts (desktop + iPad). Here’s how I approached scalability:

Shared component library

Fault cards, status indicators, search patterns, and form elements were designed as reusable components across both modules. New turbine types or fault categories could be added without designing new UI — just new data.

Device-adaptive patterns

Module 1 (desktop) and Module 2 (iPad) share the same data model and interaction grammar, but the UI adapts to context. Large tap targets and photo-first capture for offshore iPad use; dense data tables and keyboard-driven search for the control room.

Edge case handling

What happens when a fault has zero matches? When a technician is offline at sea? When two sites document contradictory root causes? I designed empty states, offline indicators, and conflict resolution flows — not just the happy path.

Why this matters for scalability: The component library and pattern system meant the BCG designer I steered could produce consistent screens faster. It also meant the development team (who received the handoff) had clear, repeatable patterns to implement — reducing the gap between design intent and production output. I documented every component with usage guidelines, not just visual specs.

Accessibility & context-aware design

Designing for harsh environments, not just screens

This wasn’t a typical office SaaS product. One module is used in a control room under time pressure; the other on an iPad offshore, in rain, with gloved hands. Every design decision had to account for the physical reality of the user’s environment.

Touch targets for gloved hands

Module 2 (iPad) uses minimum 48×48px touch targets — 4pt above Apple’s 44pt guideline. Buttons are spaced with 12px gutters to prevent mis-taps. Photo capture is a single large button, not a small icon. Tested by simulating gloved interaction with thickened fingertips on the prototype.

Color contrast & outdoor visibility

All status indicators meet WCAG AA contrast ratios at minimum. Critical fault statuses use both color and icon (red + warning triangle) to work for color-blind users. The iPad module uses higher contrast ratios (5.5:1+) because offshore screens compete with direct sunlight and salt spray.
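The WCAG thresholds above are mechanical to verify. A minimal sketch of the WCAG 2.x contrast math (function names are my own, not part of the project’s codebase):

```typescript
// Relative luminance per WCAG 2.x: linearize each sRGB channel, then
// weight by the eye's sensitivity to red, green, and blue.
function relativeLuminance([r, g, b]: [number, number, number]): number {
  const lin = (c: number) => {
    const s = c / 255;
    return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
  };
  return 0.2126 * lin(r) + 0.7152 * lin(g) + 0.0722 * lin(b);
}

// Contrast ratio: (L_lighter + 0.05) / (L_darker + 0.05), from 1:1 to 21:1.
function contrastRatio(
  a: [number, number, number],
  b: [number, number, number]
): number {
  const [l1, l2] = [relativeLuminance(a), relativeLuminance(b)].sort(
    (x, y) => y - x
  );
  return (l1 + 0.05) / (l2 + 0.05);
}

// White on black hits the 21:1 maximum; WCAG AA body text needs >= 4.5:1,
// and the offshore module targets 5.5:1 or higher.
contrastRatio([255, 255, 255], [0, 0, 0]); // 21
```

Running every status color pair through a check like this is how a design system can enforce the AA floor and the stricter offshore target automatically, rather than relying on spot checks.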

Desktop ↔ iPad responsive logic

Not just responsive breakpoints — a fundamentally different interaction model per device. Module 1 (desktop) prioritizes keyboard-driven search and data density for the control room. Module 2 (iPad) prioritizes one-handed operation, camera-first workflows, and progressive disclosure to reduce cognitive load while on a turbine.

Offline-first architecture

Module 2 works without network by default. All form data saves locally, photos store on-device, and a persistent sync status bar tells the technician exactly what’s uploaded and what’s pending. This wasn’t a graceful degradation — it was the primary design constraint discovered during shadowing.
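The save-locally-first behavior described above can be pictured as a small queue. This is an illustrative sketch of the interaction model, with hypothetical names — not the shipped implementation:

```typescript
type SyncStatus = "pending" | "synced";

interface DocEntry {
  workOrderId: string;
  fields: Record<string, string>; // structured dropdown values, not free text
  photoPaths: string[];           // photos stay on-device until upload
  status: SyncStatus;
}

// Every save lands locally first; the network is an optimization, not a requirement.
class DocumentationQueue {
  private entries: DocEntry[] = [];

  save(
    workOrderId: string,
    fields: Record<string, string>,
    photoPaths: string[] = []
  ): void {
    this.entries.push({ workOrderId, fields, photoPaths, status: "pending" });
  }

  // Drives the persistent sync bar, e.g. "2 pending · 5 synced".
  statusSummary(): { pending: number; synced: number } {
    const pending = this.entries.filter(e => e.status === "pending").length;
    return { pending, synced: this.entries.length - pending };
  }

  // Called when connectivity returns; the upload function is injected
  // so failed entries simply stay pending and retry on the next flush.
  async flush(upload: (e: DocEntry) => Promise<boolean>): Promise<void> {
    for (const entry of this.entries) {
      if (entry.status === "pending" && (await upload(entry))) {
        entry.status = "synced";
      }
    }
  }
}
```

The design point the sketch makes explicit: the technician’s work is complete at `save()`, and sync is a background concern surfaced only through the status summary.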

08 · Key design decisions

Three decisions that shaped the product

Every design decision involved trade-offs. Here are three strategic choices I made — and the alternatives I considered before committing.

A

Cross-site fault search as the entry point

The biggest unlock was making fault resolutions searchable across all RWE wind farms. Previously, a technician at Site A had no way to know that Site B had already solved the exact same fault.

Alternatives considered
Chosen Search-first IA — search bar as the primary entry point, zero clicks to start
Rejected Navigation-first IA — browse by site → turbine → fault. Too many clicks for the most common scenario.

The conceptual reasoning: Navigation-first works when users browse. But in fault resolution, coordinators arrive with a specific SCADA alarm code in hand. They need to go from “I have error code X” to “here’s how other sites resolved it” in one action. A search-first IA mirrors their mental model — they’re not exploring, they’re hunting. The IA needed to match that intent.

AI-ready: I structured the search results UI with relevance scores and confidence indicators — when NLP-based semantic search replaces keyword matching, the interface already supports ranked results with explainability context.

B

Consolidated root cause comparison

When a fault has been resolved at multiple sites, the tool shows all root causes side-by-side with frequency data. This eliminates one of the biggest time sinks in the as-is workflow — calling colleagues at other sites.

Alternatives considered
Chosen Consolidated view with frequency — side-by-side comparison with occurrence counts across sites
Rejected Flat list of resolutions — simpler to build but doesn’t help the coordinator pick the most likely cause

The conceptual reasoning: A flat list puts the cognitive burden on the coordinator — they have to read every resolution and mentally rank them. The consolidated view with frequency counts does that work for them. If 4 out of 5 sites resolved the same fault with “sensor recalibration,” that signal should be visually obvious, not buried in a list. This is about reducing decision-making time under pressure, not just information display.

AI-ready: The frequency counts are the manual version of what becomes probabilistic ranking — “based on fleet-wide data, the most likely root cause is X with 78% confidence.” I designed comparison cards with expandable detail sections to accommodate richer AI metadata.
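The consolidation logic behind this view is essentially a group-and-rank over fleet-wide resolutions. A minimal sketch under assumed data shapes (the `Resolution` record and function name are illustrative):

```typescript
interface Resolution {
  site: string;
  rootCause: string;
}

// Group fleet-wide resolutions by root cause and rank by how many distinct
// sites confirmed each one — the "4 of 5 sites" signal made explicit.
function rankRootCauses(
  resolutions: Resolution[]
): { rootCause: string; sites: number }[] {
  const sitesByCause = new Map<string, Set<string>>();
  for (const r of resolutions) {
    if (!sitesByCause.has(r.rootCause)) sitesByCause.set(r.rootCause, new Set());
    sitesByCause.get(r.rootCause)!.add(r.site);
  }
  return [...sitesByCause.entries()]
    .map(([rootCause, sites]) => ({ rootCause, sites: sites.size }))
    .sort((a, b) => b.sites - a.sites);
}
```

Counting distinct sites rather than raw occurrences keeps one noisy site from dominating the ranking — and the same grouped structure is what a later probabilistic model would score instead of simple counts.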

C

Integrated work order → documentation loop

Work orders created in Module 1 flow directly into Module 2 as structured documentation templates on the technician’s iPad. This closes the feedback loop: the diagnosis informs the task, and the task outcome feeds back into the knowledge base.

Why this matters

The documentation flow is the flywheel, not a secondary feature. Without structured contributions from technicians, the fault database stays empty. I designed the documentation module to make contributing feel effortless — so the knowledge base would grow organically over time.

The conceptual reasoning: The BCG team initially wanted to treat documentation as a separate backlog item — “ship troubleshooting first, add documentation later.” I pushed back because the tool’s value compounds over time — it only works if the knowledge base grows. I framed this as an operational risk, not a design preference, which is what finally got stakeholder buy-in.

Core insight: Module 2 isn’t a feature — it’s the product strategy. Without the documentation loop, you’re building a search engine with no content.

AI-ready: Module 2’s structured documentation captures fault resolutions in categorised, machine-readable fields — not free-form text. Every resolved fault becomes training data for smarter recommendations.

09 · Outcome

What I shipped in 12 weeks

Production-ready prototype delivered and accepted by engineering in 3 months — from first stakeholder interview to annotated handoff documentation. Module 1 entered the development sprint cycle in Q1 2025. Here’s what I can measure directly:

7 → 3 · Fault diagnosis workflow steps reduced — from 4 disconnected systems to 1 unified platform
2 modules · Full design system covering 2 device types (desktop + iPad) and 4 fault scenarios
3 · Stakeholder review cycles to full alignment (vs. the typical 6+ for cross-functional projects at RWE)
0 → 1 · Cross-site knowledge sharing tools before this project: zero. Now: a searchable fleet-wide fault database

Broader impact

Before this project: Each RWE wind farm was an island. Knowledge stayed local, experienced technicians were single points of failure, and fault diagnosis depended on who you knew.

After this project: A single platform connects all sites. Every resolved fault becomes a searchable resource for the entire fleet. New technicians can access institutional knowledge from day one — reducing onboarding risk and operational dependency on senior staff.

10 · Learnings

What I took from this project

01

The scope fight that shaped the product

The hardest moment in this project was Week 5, when BCG pushed to merge both modules into a single monolithic tool to “simplify development.” I argued that the two modules serve fundamentally different contexts (desktop control room vs. offshore iPad with gloved hands), and merging them would compromise both. The debate lasted two review cycles. I won it by showing the journey map with physical location transitions highlighted — you can’t use the same UI in a control room and on a turbine nacelle. This taught me that defending a design decision requires evidence, not conviction.

02

Steering someone’s output requires principles, not preferences

When I first reviewed the BCG designer’s screens, the quality was inconsistent — not because they lacked skill, but because I hadn’t given them clear enough constraints. Saying “make the spacing feel better” doesn’t work across review cycles. I formalized my thinking into documented design principles: “search is always the entry point,” “status information is visible without interaction,” “one primary action per screen.” Once I did that, their output snapped into place.

03

The technical constraint that improved the design

Offshore iPad connectivity is unreliable — technicians often work with intermittent or no network access. This wasn’t in the original brief; I discovered it during shadowing. It forced a design rethink: Module 2’s documentation flow had to work offline-first, with sync-when-available. This constraint actually made the design better — structured capture fields instead of free-form text, local photo storage before upload, and clear sync status indicators. The best constraints come from real user environments, not product specs.

04

What I’d do differently — a specific testing plan

We validated through stakeholder reviews and workflow analysis, but I didn’t get prototype-in-hand testing with actual end users until late in the process. If I did this again, here’s the testing plan I’d run:

1. Week 3 — First-click test with 3–4 onshore coordinators on a static Figma prototype of the dashboard. Goal: validate that search-first IA matches their mental model. Can run remotely in 20-minute sessions.
2. Week 6 — Task completion test with 2–3 coordinators using the interactive prototype. Three tasks: search a fault code, compare root causes, draft a work order. Measure completion rate and time-on-task.
3. Week 9 — Contextual walkthrough of Module 2 with 1–2 technicians on-site or via video call with screen share to an iPad. Test the documentation flow with gloved-hand simulation and offline scenario.

The logistics of offshore access made this genuinely hard — technicians are on rotating schedules and travel to turbines days in advance. But I should have prioritized the remote coordinator tests (steps 1–2), which had no logistical barrier. I’d also invest more upfront in the component library, because the speed gains would have compounded through prototyping.

Designing for AI

This project is a knowledge retrieval system at its core — and that’s exactly where AI creates the most impact. Each of the three design decisions above was shaped by awareness of where intelligent features would eventually plug in: semantic search for Decision A, probabilistic ranking for Decision B, and structured training data for Decision C. The details are woven into each decision’s reasoning above.

Interested in working together?