Building Data Logic That Works Like Marketers Think

When enterprise teams spend their days cleaning up data errors that the product could have prevented, that's not a usability problem — it's a systems failure.

I led the end-to-end design of Attribute Dependencies: a 0→1 configuration system that moved business rules out of spreadsheets and into the product itself.

timeline

Q4 2024 – Q1 2025 · ~5 months

team

PM, Eng Lead, 4 Devs, 2 Professional Services, UX Writer

platform

Marketing Performance Management · B2B SaaS

my role

Lead Designer

context

0→1 feature · Uptempo Plan (Enterprise SaaS)



the problem

Enterprise marketing teams were selecting attribute combinations that were technically valid in the UI but operationally invalid — violating their own compliance rules, regional constraints, and campaign logic. The platform had no way to enforce these relationships. Teams relied on documentation, memory, and manual review to catch errors after the fact.

The result: Teams couldn't trust the data they were entering. And without confidence at the point of entry, every downstream report carried the same uncertainty. Based on PS team logs and support ticket analysis, attribute-related configuration issues accounted for the majority of recurring support requests — and each enterprise onboarding required manual dependency documentation, adding days to client setup time.

from the product brief

Customers needed "a way to limit the possible combinations of attribute values" — because preventing invalid entries was the only reliable path to data marketers could actually trust.

Research Example: A marketer selecting "Germany" as a region would see all global campaign types — including those that violate EU compliance requirements. There was no filtering, no warning, no guardrail. The burden of knowing the rules was entirely on the user.
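To make that concrete, here is a minimal TypeScript sketch of what a dependency rule and point-of-entry filtering could look like. Everything in it is an illustrative assumption (type names, attribute names, example values), not the product's actual data model.

```typescript
// Hypothetical sketch of a dependency rule; names are illustrative,
// not Uptempo's actual schema.
type Selections = Record<string, string>;

interface DependencyRule {
  controllingAttribute: string; // e.g. "region"
  controllingValue: string;     // e.g. "Germany"
  dependentAttribute: string;   // e.g. "campaignType"
  allowedValues: string[];      // the only values that stay selectable
}

// Given the marketer's current selections, return the options that remain
// valid for one dependent attribute. No matching rule means no restriction.
function allowedOptions(
  rules: DependencyRule[],
  selections: Selections,
  dependentAttribute: string,
  allOptions: string[],
): string[] {
  const active = rules.filter(
    (r) =>
      r.dependentAttribute === dependentAttribute &&
      selections[r.controllingAttribute] === r.controllingValue,
  );
  if (active.length === 0) return allOptions;
  const allowed = new Set(active.flatMap((r) => r.allowedValues));
  return allOptions.filter((option) => allowed.has(option));
}

// Selecting region = "Germany" now filters campaign types at the point of
// entry, instead of relying on the marketer to remember the compliance rule.
const rules: DependencyRule[] = [
  {
    controllingAttribute: "region",
    controllingValue: "Germany",
    dependentAttribute: "campaignType",
    allowedValues: ["EU Webinar", "EU Field Event"],
  },
];

const visible = allowedOptions(rules, { region: "Germany" }, "campaignType", [
  "EU Webinar",
  "EU Field Event",
  "Global Broadcast",
]); // -> ["EU Webinar", "EU Field Event"]
```

The important property is the fallback in `allowedOptions`: when no rule matches, nothing is restricted, so admins can adopt rules incrementally without breaking unconfigured attributes.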

Before any design work began, we worked with PM and PS to translate this operational pain into measurable targets — so we'd know exactly what success looked like when we shipped.

impact


65%

Decrease in data inconsistencies flagged post-launch

the challenge

How might we enable customers to enforce complex, non-hierarchical business rules within the product, without introducing additional complexity for marketers entering data?

The challenge was to design a system that:

  • supports flexible, real-world attribute relationships

  • prevents invalid combinations at the point of entry

  • remains intuitive for users who are not aware of the underlying rules

my contribution

Led discovery, definition, and design end-to-end

Synthesized research from Dovetail transcripts across IBM, SAP, SolarWinds

Structured a problem framework that aligned cross-functional teams

Negotiated scope to protect system integrity under deadline pressure

Contributed a reusable table component to the design system

Ran validation sessions with Professional Services teams

40%

Reduction in time spent on attribute updates

15%

Faster enterprise client onboarding

  • Attribute Dependencies cut down errors and manual work, making our data much easier to manage.

    —Operations Lead, SAP Implementation Partner

  • Attribute Dependencies gave our admins more control while keeping reporting data aligned across teams.

    —Data Systems Manager, SolarWinds

  • The Attribute Dependencies feature keeps our marketing data clean and saves us from endless manual checks. Setting up rules feels intuitive; we got it right the first time without needing extra documentation.

    —Marketing Operations Manager, IBM Corporate

discovery → definition

This phase focused on understanding our users, their challenges, and our internal constraints, setting the foundation for the design decisions that followed.

going deeper than the surface findings

Instead of jumping into solutions, I focused on understanding how teams were currently managing these constraints. I used GPT and Claude to analyze customer call transcripts from Dovetail and then mapped patterns in FigJam to distinguish between universal pain points across clients and company-specific needs.

I identified that the problem wasn’t lack of awareness — it was lack of system support. Users knew the rules, but the product wasn’t helping them apply those rules at the right moment.

This helped me prioritize what needed to be solved at a system level versus what should remain flexible, ensuring the solution addressed shared enterprise challenges without overfitting to a single customer.

Working with the PM, we used PS ticket logs and onboarding benchmarks to set pre-launch success targets. This wasn't about improving a metric that already existed in the product — Attribute Dependencies was a 0→1 feature, so the baseline was the manual workaround. The goal was to eliminate a category of operational failure entirely.

What we were aiming for:

Goal                                                      Target
Reduce attribute-related data inconsistencies             -50% post-launch
Reduce PS time spent on configuration troubleshooting     -30% per client
Reduce enterprise onboarding setup time                   -20%
Launch without adding training burden                     Zero additional training required

three jobs, one framework

After synthesizing research, I partnered with the PM to structure the problem in a way that would drive effective solutioning. I broke the challenge into three core user jobs — then decomposed each into micro-jobs to make the complexity actionable for engineering and product.

  1. Admins need to define and manage attribute relationships

    They hold the institutional knowledge about which combinations are valid. The product had no mechanism for them to encode that knowledge — so it lived nowhere permanently.

  2. Marketers need to be guided without being blocked

    Enforcement can't feel punitive. Marketers need to understand why certain combinations aren't available — not just encounter a wall. The system had to guide without removing agency.

  3. Professional Services need the system to hold up at scale

    PS was absorbing significant operational overhead from configuration issues. The solution had to be reliable enough that they could stop being the backstop.

By giving the team a shared framework before we started generating solutions, we ensured that product, design, and engineering were solving the same problem — not three adjacent versions of it. This prevented scope from expanding around edge cases and kept the team anchored to core user needs throughout the project.
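Job 2 had the most direct implication for system behavior: enforcement has to explain itself. As a hypothetical sketch of that contract (the names and copy are my assumptions, not the shipped API), the idea is that a restriction always carries a reason and the valid alternatives:

```typescript
// Illustrative only: a restriction carries a "why" and a "what instead",
// so the UI can guide the marketer rather than silently block them.
type EntryFeedback =
  | { kind: "allowed" }
  | {
      kind: "restricted";
      reason: string;          // e.g. 'Region "Germany" limits campaign types'
      allowedValues: string[]; // valid alternatives to surface inline
    };

function explainRestriction(feedback: EntryFeedback): string | null {
  if (feedback.kind === "allowed") return null;
  return `${feedback.reason}. Available options: ${feedback.allowedValues.join(", ")}.`;
}
```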


professional services changed how I framed the problem.

Early in the project, I worked closely with our Professional Services team to pressure-test the problem definition. What I found reframed the opportunity entirely.

PS was spending significant time troubleshooting configuration issues that were, in theory, preventable. The cost wasn't just in support tickets — it was in ongoing operational overhead that scaled with each enterprise client. This shifted my framing from "how do we improve data entry UX" to "how do we eliminate a category of operational failure."

i ran the workshops to align the team

After validating the problem with Professional Services, I brought together the PM, engineering lead, and developers to align on technical constraints, backend dependencies, and solution feasibility early on. I facilitated a structured workshop using a Miro board to guide collaborative ideation.

We used “How Might We” exercises to reframe key pain points into actionable design opportunities, followed by brainwriting sessions to generate ideas across disciplines without bias, and Crazy 8s sketching to explore multiple solution directions.

The goal wasn’t just to generate ideas, but to create shared ownership of the problem and solution space. By involving engineering early in real constraints, we ensured that the concepts we explored were feasible, reducing iteration cycles later and keeping the team aligned on what we were solving and why.


two features got cut. I pushed back.

After the workshop, I worked with the PM and engineering lead to build a prioritization matrix — mapping solutions against user impact and technical effort. We defined a focused first iteration: the core dependency model and real-time validation. Two major features were intentionally deferred.

Then the deadline pressure hit. To meet the release date, PM and engineering proposed cutting two additional features that I had included in the scoped MVP. I disagreed — not because I was attached to the work, but because I had a clear, research-backed reason.

These weren't independent enhancements. They were critical parts of the same workflow. Without them, the system would still require manual input in exactly the scenarios it was designed to automate. Cutting them didn't simplify the release — it shipped an incomplete system that would erode trust in the feature before it had a chance to prove its value.

Rather than blocking the decision, I proposed a phased delivery strategy. Keep both features out of the initial release — but commit to shipping them together in the immediately following sprint. I defined the specifications in advance, outlined edge cases and system behavior in documentation, and aligned with PM and engineering through working sessions before the MVP shipped.

Process artifacts: Research · Defining the Problem Space · Internal Expert Validation · Cross-Functional Solutioning · Scope Negotiation + Delivery Strategy · The Original Scope Definition · Scope Change After Team Discussion

This gave us a path that met the deadline without compromising the integrity of the solution — and established a foundation that the next sprint could build on directly, without rework.



solutioning → design execution

Building on the insights and priorities defined during the discovery phase, I moved into translating concepts into tangible designs, validating them with users, and refining the experience through iteration.

two roles. one data model. Very different mental models.

Before touching wireframes, I mapped the system end-to-end — connecting the admin configuration layer and the marketer-facing activity creation experience into a single coherent information architecture.

This was critical because both roles interact with the same underlying data, but from fundamentally different mental models. Admins define the rules. Marketers operate within them — often without knowing the rules exist. Without mapping both perspectives upfront, the two surfaces risked becoming logically inconsistent under real-world scenarios.

Defining the architecture first ensured the system would behave predictably across both experiences, and created a shared foundation for design, engineering, and data logic before implementation began.

i started mid-fi. On purpose.

Before moving into high-fidelity design, I used mid-fi wireframes to validate the core interaction model and system logic, not visual polish. The goal was to test whether the two-panel configuration pattern — controlling attributes on one side and dependent values on the other — was intuitively understood, since this was the most complex part of the experience.

Working at mid-fi allowed us to define the MVP clearly, identify required features, and structure a focused backlog for development, while keeping iteration fast and low-cost. It also enabled early collaboration with engineering, helping us validate feasibility and surface constraints before committing to a direction, ultimately reducing the risk of costly redesigns later in the process.
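For illustration, the state that the two-panel pattern has to manage can be sketched like this (assumed names, building on the DependencyRule shape from the earlier sketch; not the shipped implementation):

```typescript
// Assumed state model for the two-panel configuration screen.
// Left panel: pick one controlling attribute value.
// Right panel: check the dependent values that stay valid for it.
interface TwoPanelState {
  controlling: { attribute: string; value: string } | null;
  dependentAttribute: string;
  checkedValues: Set<string>;
}

// Committing the right panel produces one rule per controlling value,
// using the DependencyRule shape sketched earlier.
function toRule(state: TwoPanelState): DependencyRule | null {
  if (!state.controlling || state.checkedValues.size === 0) return null;
  return {
    controllingAttribute: state.controlling.attribute,
    controllingValue: state.controlling.value,
    dependentAttribute: state.dependentAttribute,
    allowedValues: [...state.checkedValues],
  };
}
```

The sketch makes the core interaction question explicit: does the user understand that the left panel drives what the right panel means? That was exactly what the mid-fi testing set out to answer.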

twice-weekly reviews. Not for approval, for alignment.

Roughly six weeks after kickoff, stakeholders requested high-fidelity designs. I used early hi-fi not as a final deliverable, but as a tool to drive alignment and validate decisions in context.

We held stakeholder and product reviews twice a week to surface feedback early and adjust direction in real time — rather than waiting for formal handoffs. This cadence reduced rework, kept progress on track, and ensured cross-functional alignment as complexity increased.

I didn't just design the feature, I built the component that scales it.

Once direction was aligned, I identified that the dependency configuration included a significant amount of structured, editable data — making it a strong candidate for system-level consistency rather than a one-off implementation.

I contributed a table component to the design system, defining key interaction patterns including inline editing, validation states, loading states, empty states, and dynamic updates. While only part of this project, formalizing it ensured these interactions remained consistent and reusable across the product — including other admin settings like the policy page.

I also created a documentation prototype outlining component behavior, edge cases, and usage guidelines — enabling design and engineering to align on implementation and reducing ambiguity during development.
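As a rough sketch of the contract that documentation described (assuming React + TypeScript; the prop and type names here are mine, not the design system's real API):

```typescript
import type { ReactNode } from "react";

// Illustrative contract for the reusable table component.
interface ColumnDef<Row> {
  key: keyof Row & string;
  header: string;
  editable?: boolean; // enables inline editing for this column
}

interface EditableTableProps<Row extends { id: string }> {
  rows: Row[];
  columns: ColumnDef<Row>[];
  loading?: boolean;      // loading state (e.g. skeleton rows)
  emptyState?: ReactNode; // rendered when rows is empty
  onCellEdit: (rowId: string, column: string, value: unknown) => void;
  // Validation state: return a message to show inline instead of committing.
  validateCell?: (
    rowId: string,
    column: string,
    value: unknown,
  ) => { valid: true } | { valid: false; message: string };
}
```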

i tested with the people who would feel the consequences of getting it wrong.

I conducted three validation sessions with Professional Services and representatives from IBM, SAP, and SolarWinds — the teams responsible for configuring and managing data in the platform daily. This wasn’t just about identifying usability issues; it was about validating decisions made under uncertainty and ensuring the system held up against real-world complexity.

By testing with the teams closest to implementation, I was able to confirm whether the dependency model was not only understandable, but also accurate, scalable, and aligned with how enterprise customers actually operate. This step was critical in moving from a well-designed concept to a trusted system that could perform reliably in production.

We validated system behavior and refined critical interaction patterns.


and we’re Live!


The feature rolled out to enterprise customers in Q1 2025, introducing a reliable and scalable way to enforce attribute relationships directly within the platform. We measured post-launch against the same targets we set before building — based on six months of post-launch analysis, support ticket volume, and stakeholder feedback.



Quantitative results:

40%

Reduction in time spent on attribute updates

Users no longer needed to manually verify valid attribute combinations against documentation

The goal was: 30%

65%

Decrease in data inconsistencies flagged during campaign reviews

Invalid combinations were prevented at the point of entry

The goal was: 50%

15%

Faster enterprise client onboarding

Dependency rules configurable from day one

The goal was: 20%

We didn't hit our initial target of a 30% reduction in PS configuration troubleshooting time, and formal time tracking wasn't in place to quantify the exact impact. But PS reported a significant reduction in configuration-related escalations, and the directional signal was clear: a category of operational failure that once required manual intervention at every client touchpoint was no longer the default.

Qualitative wins:

Reduced support burden

Customer Success teams reported noticeably fewer tickets related to invalid attribute combinations and data corrections

Increased user confidence

Marketers expressed greater trust in their data entry, knowing the system prevented compliance violations and invalid combinations

Competitive advantage

Feature became a key differentiator in enterprise sales demos, particularly for clients with complex compliance requirements

Immediate adoption

Power users integrated the feature into their workflows with zero additional training required

Business Value:
By preventing errors upstream, clients avoided costly downstream fixes in reporting and compliance audits — directly improving their ROI on the Uptempo platform. The feature shipped in Q1 2025 and moved directly into a follow-on sprint that delivered chained dependencies and auto-tagging, building on the foundation established in v1 without rework.

next steps

Following the initial release, we moved directly into the next sprint to expand the system with three key enhancements that had been intentionally scoped out — including chained dependencies and auto-tagging. Because these were identified and aligned early during scoping, we were able to build on the existing foundation without rework, maintaining momentum while moving toward a more complete and scalable solution.
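To illustrate what chained dependencies add, here is a hypothetical sketch reusing the DependencyRule and allowedOptions shapes from the first code sketch (not the shipped logic): rules compose transitively, so a selection upstream narrows every attribute downstream of it.

```typescript
// Hypothetical: resolve the valid options along a chain such as
// region -> campaignType -> tactic, filtering each attribute against
// the selections already made upstream of it.
function resolveChain(
  rules: DependencyRule[],
  selections: Record<string, string>,
  chain: { attribute: string; options: string[] }[],
): Record<string, string[]> {
  const resolved: Record<string, string[]> = {};
  for (const step of chain) {
    resolved[step.attribute] = allowedOptions(
      rules,
      selections,
      step.attribute,
      step.options,
    );
  }
  return resolved;
}
```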


reflection

The most important thing I did on this project wasn't a design decision — it was insisting we define the problem space and set measurable goals before generating any solutions. By building a shared framework with the PM before solutioning began, we ensured that every decision that followed was anchored to actual user needs, not feature assumptions.

The scope negotiation was harder. Pushing back on a decision supported by both PM and engineering — under deadline pressure — required me to have a clear, well-evidenced position. Research gave me that. The ability to articulate exactly why these features were parts of the same workflow, not add-ons, is what made the conversation productive instead of contentious.

The broader lesson: in enterprise SaaS, the most valuable thing a designer can do is raise the ceiling of what the product can be trusted to do. This feature didn't just improve an interaction — it shifted a category of operational responsibility from people to the system. That's the kind of design impact that compounds over time.

