Written by Lucy Luo on November 05, 2020
Even the best-laid experimentation plans don't always work out. We've learned this over years of working with teams to design, run, and analyze experiments. Part of mastering the process is becoming proficient at running experiments quickly while making real progress through data-driven insights. Below is a list of the most common experiment pitfalls we've seen in the field. By identifying them early in your testing process, you can avoid these mistakes.
Time Trap
Not dedicating enough time.
This is the pain point we hear about most often from innovation teams. Teams that don't put in enough time to test business ideas won't get great results. Too often, teams underestimate what it takes to run multiple experiments and test ideas well.
Outsource Testing
When you outsource what you should be doing and learning yourself.
Outsourcing testing is rarely wise. Testing is about rapid iteration between running experiments, capturing insights, and adapting your business idea accordingly. An agency can't make those rapid decisions for you. Insight is, by definition, "the capacity to gain an accurate and deep understanding of someone or something." Without that deep understanding, how can you have the confidence to decide quickly what to do next? You only risk wasting time and energy by outsourcing.
Analysis Paralysis
Overthinking things that you should just test and adapt.
Having ideas and concepts is good, but too many teams overthink and waste time rather than getting out of the building to test and adapt their ideas. Keep your eye on the prize: ideas are not the most important thing. What matters more is running experiments so you can gather enough evidence to inform your next decision.
Running Too Few Experiments
Conducting only one experiment for your most important hypothesis.
Few teams realize how many experiments it takes to validate a hypothesis. Too often, they make decisions on important hypotheses based on a single experiment with weak evidence.
Incomparable Data/Evidence
Messy data that are not comparable.
Too many teams are sloppy in defining their exact hypothesis, experiment, and metrics. That leads to data that are not comparable (e.g., testing with different customer segments or in wildly different contexts).
Weak Data/Evidence
Only measuring what people say, not what they do.
Teams are often content to run surveys and interviews, and fail to dig deeper into how people actually behave in real-life situations.
Confirmation Bias
Only believing evidence that agrees with your hypothesis.
Confirmation bias is the tendency to search for, interpret, favor, and recall information in a way that confirms or supports one's prior beliefs or values. Teams sometimes discard or downplay evidence that conflicts with their hypothesis, preferring the illusion of having predicted correctly.
Failure to Learn and Adapt
When you don't take time to analyze the evidence, generate insights, and act on them.
Some teams get so deep into testing that they lose sight of the prize. The goal is not simply to test and learn. The goal is to decide, based on evidence and insights, how to progress from idea to business.