Hi, I'm Şeyma! 👋
I turn human behavior
into product decisions.

I'm a mixed-methods UX researcher specializing in B2B SaaS products, behavioral research, and bridging qualitative insight with quantitative evidence.

Scroll down to see my selected work ↓
Selected Work

Case Studies

01
Smart Journey Creator / Sirius AI — Usability Testing · Completed
Moderated Usability Testing · Thematic Analysis · B2B SaaS · AI Feature
Insider One
2024

The Challenge

User feedback indicated that partners were not fully leveraging the AI-powered journey creation feature. The research aimed to understand expectation gaps and evaluate whether the feature truly met user needs in real-world workflows.

Methods

  • Moderated usability testing on live feature
  • In-depth interviews with international B2B partners
  • Thematic analysis & synthesis in Dovetail
  • Perceived trustworthiness & difficulty rating (7-point Likert)

Participants

International enterprise B2B partners across multiple regions:

🌏 Southeast Asia 🇹🇷 Turkey 🌍 MENA 🌎 LATAM 🌐 Global

Participant Needs — Frequency Analysis

  • Lack of guidance: 10
  • Need for guidance: 10
  • Manual editing: 8
  • Lack of content & preview: 7
  • Lack of channel: 4
  • Instructions (SJ): 4
  • Reasoning behind results: 4
  • Conversational interaction: 4
  • Supported language: 3
  • Increased effort: 3
  • Flexibility of use: 3
  • Generating alternatives: 3
  • Trustworthiness: 2
  • AI on canvas: 2
  • Prompt history: 1

Number of participants who mentioned each need across all usability testing sessions. Lack of guidance and Need for guidance emerged as the dominant themes (n=10 each), followed by Manual editing (n=8) and Lack of content & preview (n=7).

Key Findings

🔒 Some key insights from the final UXR report — the rest is not included due to confidentiality.

🎯 The Trust Gap · Critical

AI-generated outputs were accurate only about 40% of the time, so correcting them felt more exhausting than building from scratch — leading many users to abandon the feature entirely.

📝 Prompt Guidance Gap · Critical

Lack of guidance emerged as the top pain point across sessions. Instructions did not communicate the required level of detail, resulting in vague inputs and irrelevant outputs that further eroded trust.

✏️ Heavy Manual Editing Burden · High

Most participants reported needing to manually correct AI output. Even when directionally correct, fine-tuning branch-by-branch was seen as a net negative — users felt they were doing more work, not less.

💬 Need for Conversational Interaction · Opportunity

Users preferred a back-and-forth, iterative refinement model rather than a single-prompt-to-output flow — indicating a desire for AI as a collaborative tool, not a black box.

Perceived Difficulty Rating — Feature Use (1 = Very Difficult · 7 = Very Easy)

Mean Score: 5.3 · Std. Deviation: 2.0

The high standard deviation (SD=2) reflects a polarized user experience — deeply divided between those who found it straightforward and those who struggled significantly. This suggests the feature works for a specific user type but fails to serve the broader partner base.
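To illustrate why a large standard deviation signals polarization rather than uniform middling difficulty, the sketch below compares two hypothetical sets of 7-point ratings with nearly identical means — the numbers are illustrative only, not the study data:

```python
import statistics

# Hypothetical 7-point difficulty ratings (1 = Very Difficult, 7 = Very Easy).
# These are illustrative numbers, NOT the actual study data.
consensus = [5, 5, 6, 5, 6, 5]   # everyone clusters near the mean
polarized = [7, 7, 7, 2, 2, 7]   # split between an "easy" camp and a "hard" camp

for name, ratings in [("consensus", consensus), ("polarized", polarized)]:
    mean = statistics.mean(ratings)
    sd = statistics.stdev(ratings)   # sample standard deviation
    print(f"{name}: mean={mean:.2f}, sd={sd:.2f}")
```

Both groups average about 5.3, but the polarized group's SD is roughly five times larger — the mean alone would hide the divided experience.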

Impact

85% of research outputs were actioned. The product team incorporated findings into their prioritization framework, shifting focus toward AI-assisted guidance and iterative journey editing rather than full one-shot generation.

02
Canvas Visibility — User Interview Research · Completed
User Interviews · Affinity Mapping · UMUX-L · SUS · Mixed Methods
Insider One
Q2 2026

The Challenge

As journey complexity scales — some workflows containing 300 to 800+ elements — navigating the canvas becomes increasingly difficult. This research mapped real user mental models to identify friction points and surface evidence-based product opportunities.

Methods

  • In-depth user interviews with enterprise B2B partners
  • Affinity mapping & thematic synthesis in Dovetail
  • UMUX-L scale (ease of use + requirement fulfillment)
  • System Usability Scale (SUS)
  • Direct observational notes during sessions

My Role

End-to-end research ownership: research plan, participant recruitment, interview moderation, affinity mapping, quantitative analysis, and stakeholder readout. Solo researcher.

Quantitative Validation

  • Avg. SUS Score: 72.8 (Grade B, "Good" · target: 80+, "Excellent")
  • Ease of Use (UMUX-L): 5.5/7
  • Requirement Fulfillment (UMUX-L): 5.7/7

Higher SD in Requirement Fulfillment (SD=1.11) suggests unmet needs beneath a surface-level acceptable experience — likely tied to hidden or undiscoverable features.
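For context on how a SUS score like 72.8 is derived: the standard SUS procedure converts ten 1–5 Likert ratings into a 0–100 score (odd-numbered, positively worded items contribute rating − 1; even-numbered, negatively worded items contribute 5 − rating; the sum is multiplied by 2.5). The respondent below is hypothetical — a sketch of the standard formula, not this study's data:

```python
def sus_score(responses):
    """Standard SUS scoring: ten 1-5 ratings -> one 0-100 score.

    Odd items (1st, 3rd, ...) are positively worded: contribute (rating - 1).
    Even items are negatively worded: contribute (5 - rating).
    """
    assert len(responses) == 10
    total = 0
    for i, rating in enumerate(responses):
        total += (rating - 1) if i % 2 == 0 else (5 - rating)
    return total * 2.5

# Hypothetical respondent, NOT actual study data.
print(sus_score([4, 2, 4, 2, 4, 2, 4, 1, 4, 2]))  # 77.5
```

Averaging per-respondent scores like this across a sample yields the mean SUS reported above.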

Key Findings

🔒 Some key insights from the final UXR report — the rest is not included due to confidentiality.

🧠 Expert Mental Maps · Behavioral

Experienced users develop internalized spatial memory of their journeys. This self-built navigation strategy works — but breaks down for new team members or during handoffs.

🎨 Color as Navigation Anchor · Workaround

When zoomed out, users rely on color patterns rather than text labels to navigate — indicating a genuine unmet need for visual hierarchy tooling within the product.

🔍 Search Dead-end · Critical

The search function indexes content, but users think in terms of logic and conditions. Custom channel names — which users invest time creating — remain unsearchable, making the tool largely unusable for power users.

⏳ Performance Anxiety · Critical

Slow load times on large journeys create a "fear of breaking things" — users actively avoid editing, leading to stagnant workflows and reduced overall feature engagement.

Impact

Research recently completed. Product and design teams are now making decisions based on the findings, and the outputs are actively shaping the next iteration of the canvas experience.

03
Architect Journey Testing — Partner Research Report · Completed
Qualitative Research · User Interviews · Behavioral Analysis · Journey Mapping
Insider One
2024

The Challenge

Understanding how enterprise partners test their journeys on Architect — which methods they use, why certain approaches are avoided, and what friction exists — to inform product decisions around testing tooling.

Methods

  • In-depth interviews with enterprise B2B partners
  • Testing flow mapping & behavioral analysis
  • Integrated synthesis with internal stakeholder interviews
  • Dovetail for data organization and tagging

Participants

Enterprise partners across multiple regions and verticals:

🇹🇷 Turkey 🌍 MENA 🌏 Southeast Asia 🏥 Healthcare 👗 Retail 📱 Telecom

Key Findings

🔒 Some key insights from the final UXR report — the rest is not included due to confidentiality.

🌫️ Method Confusion · Critical

Differences between available testing methods are not well understood. Users default to the most familiar approach — not because it's optimal, but because alternatives feel unclear, leading to under-utilization of more precise tools.

⛔ Journey Halt Problem · Critical

Testing a live journey requires stopping the entire flow — not viable for ongoing campaigns. This forces users to build parallel test setups, adding overhead and increasing the risk of misconfiguration.

🎯 Flow vs. Channel Mismatch · Behavioral

Users prioritize flow-level testing (conditions, logic, attributes) over channel content verification — yet the product's UI affords the opposite, creating a fundamental mismatch between user goals and product design.

Perceived Testing Difficulty (1 = Very Difficult · 5 = Very Easy)

Mean Score: 2.7 · Experience Rating: Difficult

A mean score of 2.7/5 places journey testing solidly in the "difficult" range — confirming the current testing experience is a significant friction point across user segments and regions.

Impact

75% of research outputs were actioned. Recommendations included clearer in-product differentiation between test methods, a dedicated test mode that doesn't require halting live journeys, and guided onboarding for the Specific User Testing feature. Design and product teams incorporated these into sprint planning.

04
Learning from Storybooks: Does the Theme Matter? · MA Thesis
Experimental Research · Mixed Methods · Developmental Psychology · Boğaziçi University
Boğaziçi University
2023

The Research Question

Storybooks are one of the most preferred childhood activities — but does the story theme (realistic, anthropomorphic, or fantastical) affect what children actually learn from them? This thesis examined whether theme influences both analogical problem-solving and prosocial behavior.

Methods

  • Two experimental studies with preschool-aged children
  • Study 1: 91 children — analogical problem-solving tasks
  • Study 2: 78 six-year-olds — prosocial behavior pre/post tasks
  • Mixed methods: experimental design + behavioral observation
  • Statistical analysis: SPSS, inferential & descriptive statistics

Scale & Context

Total participants: 169 · Story conditions: 3

Advisor: Assoc. Prof. Deniz Tahiroğlu · Boğaziçi University, Institute for Graduate Studies in Social Sciences

Key Findings

📖 Story Theme Shapes Problem-Solving · Study 1

Children who listened to realistic stories were significantly more successful at solving physical problem analogies compared to those who heard anthropomorphic or fantastical stories. However, this effect did not hold for social problem contexts — suggesting theme-learning relationships are domain-specific.

🤝 Realistic Stories Boost Sharing Behavior · Study 2

Children exposed to realistic storybooks showed a greater increase in sharing behavior from pre-test to post-test, compared to anthropomorphic or fantastical conditions. Helping and honesty behaviors improved across all conditions, suggesting storybooks broadly promote prosocial development regardless of theme.

Abstract: Reading storybooks is among the most preferred pastime activities in childhood. Beyond entertainment, storybooks can be used to develop children's social and cognitive skills. Study 1 (n=91) investigated whether story theme and problem context influence children's analogical problem-solving. Study 2 (n=78) examined whether story theme impacts prosocial behaviors such as sharing, helping, and honesty. Results indicate that realistic story themes support physical problem-solving and sharing behavior, while storybooks broadly promote prosocial development across all conditions.

I'm a UX researcher with 4+ years of experience in B2B SaaS, currently at Insider One, where I've led 20+ end-to-end research cycles across product discovery, usability evaluation, and behavioral analytics.

My background sits at the intersection of academic psychology and applied product research. I led a self-designed thesis study with 200+ preschool-aged children at Boğaziçi University, contributed to a TÜBİTAK-1001 funded national study with 5,000+ participants in collaboration with METU and Ege University, and presented research at international conferences including SRCD (USA) and BCCD CEU (Budapest).

I care deeply about research that actually gets used — not just documented.

Let's Connect
🎙️ User Interviews: Moderated · in-depth
🖥️ Usability Testing: Moderated & remote
📋 Survey Design: Qual + quant mixed
📊 Statistical Analysis: SPSS · Jamovi · inferential
🗂️ Affinity Mapping: Thematic synthesis
🗺️ Journey Mapping: Experience mapping
👁️ Behavioral Analytics: FullStory · PowerBI
📐 Research Scales: SUS · UMUX-L · NPS
Tools: Dovetail · Qualtrics · Figma · Miro · Maze · Lookback · SPSS · Jamovi · PowerBI · FullStory

From research question
to product action 🎯

01
🔎 Define
I clarify the research question with stakeholders, identify the right method, and review existing data before investing in new fieldwork.
02
🎙️ Research
I run interviews, usability tests, or surveys with the right participant profiles — combining qualitative depth with quantitative validation.
03
🧩 Synthesize
I transform raw data into insights through affinity mapping and thematic analysis — surfacing patterns, tensions, and opportunities.
04
📈 Measure Impact
I don't stop at recommendations. I track whether design decisions actually work — using behavioral analytics tools like FullStory to measure feature adoption, spot regressions, and validate the effectiveness of research-driven changes in production.
Academic Contributions

Publications & Conference Presentations 📄

BCCD CEU Cognitive Development Conference (2023) · Budapest, Hungary · Kara, H. Ş. & Tahiroğlu, D. — Children's learning of prosocial behaviors and problem-solving through story themes
3rd Developmental Psychology Symposium (2022) · Istanbul, Turkey · Kara, H. Ş. & Tahiroğlu, D. — Story content and its effect on children's learning processes
Kamber, E., Kara, H. Ş. & Tahiroğlu, D. (2021). Relations between Fantasy Orientation, Pretense and Parental Attitudes in Preschool Children. Çukurova University Faculty of Education Journal · 50(2), 929–964
Society for Research in Child Development (SRCD) Biennial Meeting (2021) · USA · Kara, H. Ş., Kamber, E. & Tahiroğlu, D. — Learning through play, imagination, and parental attitudes in child development
Oğuz, N. & Kara, H. Ş. (2018). The relationships between children's prosocial lie telling behavior, theory of mind and executive functions. Studies in Psychology · 38(2), 129–154
Get In Touch

Let's work together.

I'd love to hear about your project. Drop me a message.

haticeseymagurbetoglu@gmail.com