I'm a mixed-methods UX researcher specializing in B2B SaaS products, behavioral research, and bridging qualitative insight with quantitative evidence.
Scroll down to see my selected work ↓

User feedback indicated that partners were not fully leveraging the AI-powered journey creation feature. The research aimed to understand expectation gaps and evaluate whether the feature truly met user needs in real-world workflows.
International enterprise B2B partners across multiple regions.
Number of participants who mentioned each need across all usability testing sessions. "Lack of guidance" and "Need for guidance" emerged as the dominant themes (n=10 each), followed by "Manual editing" (n=8) and "Lack of content & preview" (n=7).
🔒 Some key insights from the final UXR report — the rest is not included due to confidentiality.
AI-generated outputs were accurate roughly 40% of the time, making users feel that correcting the output was more exhausting than building from scratch — leading many to abandon the feature entirely.
Lack of guidance emerged as the top pain point across sessions. Instructions did not communicate the required level of detail, resulting in vague inputs and irrelevant outputs that further eroded trust.
Most participants reported needing to manually correct AI output. Even when directionally correct, fine-tuning branch-by-branch was seen as a net negative — users felt they were doing more work, not less.
Users preferred a back-and-forth, iterative refinement model rather than a single-prompt-to-output flow — indicating a desire for AI as a collaborative tool, not a black box.
The high standard deviation (SD=2) reflects a polarized user experience — deeply divided between those who found it straightforward and those who struggled significantly. This suggests the feature works for a specific user type but fails to serve the broader partner base.
85% of research outputs were actioned. The product team incorporated findings into their prioritization framework, shifting focus toward AI-assisted guidance and iterative journey editing rather than full one-shot generation.
As journey complexity scales — some workflows containing 300 to 800+ elements — navigating the canvas becomes increasingly difficult. This research mapped real user mental models to identify friction points and surface evidence-based product opportunities.
End-to-end research ownership as the solo researcher: research plan, participant recruitment, interview moderation, affinity mapping, quantitative analysis, and stakeholder readout.
Higher SD in Requirement Fulfillment (SD=1.11) suggests unmet needs beneath a surface-level acceptable experience — likely tied to hidden or undiscoverable features.
🔒 Some key insights from the final UXR report — the rest is not included due to confidentiality.
Experienced users develop internalized spatial memory of their journeys. This self-built navigation strategy works — but breaks down for new team members or during handoffs.
When zoomed out, users rely on color patterns rather than text labels to navigate — indicating a genuine unmet need for visual hierarchy tooling within the product.
The search function indexes content, but users think in terms of logic and conditions. Custom channel names — which users invest time creating — remain unsearchable, making the tool largely unusable for power users.
Slow load times on large journeys create a "fear of breaking things" — users actively avoid editing, leading to stagnant workflows and reduced overall feature engagement.
Research recently completed. Based on the findings, design decisions are now being made by the product and design teams. Outputs are actively shaping the next iteration of the canvas experience.
Understanding how enterprise partners test their journeys on Architect — which methods they use, why certain approaches are avoided, and what friction exists — to inform product decisions around testing tooling.
Enterprise partners across multiple regions and verticals.
🔒 Some key insights from the final UXR report — the rest is not included due to confidentiality.
Differences between available testing methods are not well understood. Users default to the most familiar approach — not because it's optimal, but because alternatives feel unclear, leading to under-utilization of more precise tools.
Testing a live journey requires stopping the entire flow — not viable for ongoing campaigns. This forces users to build parallel test setups, adding overhead and increasing the risk of misconfiguration.
Users prioritize flow-level testing (conditions, logic, attributes) over channel content verification — yet the product's UI affords the opposite, creating a fundamental mismatch between user goals and product design.
A mean score of 2.7/5 places journey testing solidly in the "difficult" range — confirming the current testing experience is a significant friction point across user segments and regions.
75% of research outputs were actioned. Recommendations included clearer in-product differentiation between test methods, a dedicated test mode that doesn't require halting live journeys, and guided onboarding for the Specific User Testing feature. Design and product teams incorporated these into sprint planning.
Storybook reading is among the most preferred childhood activities — but does the story theme (realistic, anthropomorphic, or fantastical) affect what children actually learn? This thesis examined whether theme influences both analogical problem-solving and prosocial behavior.
Advisor: Assoc. Prof. Deniz Tahiroğlu · Boğaziçi University, Institute for Graduate Studies in Social Sciences
Children who listened to realistic stories were significantly more successful at solving physical problem analogies compared to those who heard anthropomorphic or fantastical stories. However, this effect did not hold for social problem contexts — suggesting theme-learning relationships are domain-specific.
Children exposed to realistic storybooks showed a greater increase in sharing behavior from pre-test to post-test, compared to anthropomorphic or fantastical conditions. Helping and honesty behaviors improved across all conditions, suggesting storybooks broadly promote prosocial development regardless of theme.
I'm a UX researcher with 4+ years of experience in B2B SaaS, currently at Insider One, where I've led 20+ end-to-end research cycles across product discovery, usability evaluation, and behavioral analytics.
My background sits at the intersection of academic psychology and applied product research. I led a self-designed thesis study with 200+ preschool-aged children at Boğaziçi University, contributed to a TÜBİTAK-1001 funded national study with 5,000+ participants in collaboration with METU and Ege University, and presented research at international conferences including SRCD (USA) and BCCD CEU (Budapest).
I care deeply about research that actually gets used — not just documented.
Let's Connect

I'd love to hear about your project. Drop me a message.
haticeseymagurbetoglu@gmail.com