Product Analytics and Experimentation expert with a focus on AB Testing and Causal Impact, with 17 years of experience in research and analysis of complex data. My communication skills allow me to translate complex insights into actionable recommendations for stakeholders and to help Product Teams streamline Analytics as part of their day-to-day.
- Why AB Testing is necessary
- Test setup considerations for valid results
- Investigative Metrics: the reward for your efforts
- What errors can occur when interpreting results
- What if I can’t run an AB test?
- How to successfully assess a feature’s success
When creating a new feature or changing an existing one, we try our best to match solution to opportunity, strike the right balance between our users’ needs and company goals, and create a usable, beautiful design that will both delight users and elevate our metrics (or at least won’t tank them).
Unfortunately, even with heavy investment in UXR, even the most experienced Product teams only get it right 10–30% of the time. This is a difficult fact of life, and it spans industries, technologies and centuries.
We are not our end users, and the only way for us to know if our feature was successful is to try, see and iterate.
“Try and see” can mean different things that fit different scenarios: AB Testing and Causal Impact are the ones that get you valid quantitative data, but let’s not discard User Tests and other qualitative data, such as heatmaps and session recordings.
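To make the quantitative path concrete, here is a minimal sketch of how an A/B test result is often evaluated: a two-proportion z-test on conversion rates between control and variant. The helper function, its name, and all the numbers are illustrative assumptions, not part of the original training.

```python
# Hypothetical example: comparing conversion rates between control (A)
# and variant (B) with a two-proportion z-test. All numbers are made up
# for illustration.
from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return (z, two-sided p-value) for the difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided, normal CDF
    return z, p_value

z, p = two_proportion_z_test(conv_a=200, n_a=5000, conv_b=250, n_b=5000)
print(f"z = {z:.2f}, p = {p:.4f}")  # call the result significant if p < alpha (e.g. 0.05)
```

In practice you would use a library routine (e.g. from statsmodels or SciPy) rather than hand-rolling the math, but the sketch shows the inputs a valid test setup has to produce: clean counts of users and conversions per variant.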
This text is part of an A/B Testing 101 training I created at my last project; the topics listed above form its agenda.