Growth Hacking is used by giants such as Dropbox, Airbnb and Instagram. Discover how you can get the best results.
Ebel Slijp, from Refreshworks, recently visited The Hague Tech to give a talk about Growth Hacking. Growth Hacking is an experimental marketing strategy used by giants such as Dropbox, Airbnb and Instagram. Besides being a motivational speaker, Ebel inspired us with his content, full of unique angles and approaches to marketing. Did you miss this event? We asked him for his top 3 Growth Hacking tips.
1. Don't be afraid to fail
Rapid experimentation is the core of the Growth Hacking mindset. The quicker you find out that something doesn't work, the more you can learn from it. That's why you shouldn't aim to create the perfect experiment. Instead, you should ask whether you have met the threshold of minimum viability. This means that the experiment covers all the basics it needs to cover in order to measure what you want to know. To accentuate the importance of this mindset, let's do a quick calculation.
If you accept that almost 85% of all experiments fail, the number of experiments you run is crucial to your success. If you run 10 experiments per month, and thus 120 per year, you'll have 18 experiments that succeed. If you spend less time perfecting your experiments and more time launching extra ones, you might be able to launch 15 experiments per month. That equals 27 successful experiments on a yearly basis (a 50% increase!).
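The back-of-the-envelope math above can be sketched in a few lines of Python. The 15% success rate follows from the "almost 85% fail" assumption; the two monthly volumes are the ones from the paragraph:

```python
# With an ~85% failure rate, the success rate is ~15%,
# so yearly successes scale linearly with experiment volume.
success_rate = 0.15

slow_pace = 10 * 12                       # 10 experiments/month -> 120 per year
slow_successes = slow_pace * success_rate # 18 successful experiments

fast_pace = 15 * 12                       # 15 experiments/month -> 180 per year
fast_successes = fast_pace * success_rate # 27 successful experiments

increase = (fast_successes - slow_successes) / slow_successes
print(slow_successes, fast_successes, increase)  # 18.0 27.0 0.5 (a 50% increase)
```

The takeaway is that, at a fixed success rate, extra volume beats extra polish.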
2. Prioritize your experiments
Having to test at a high pace is not a reason to choose experiments at random. Choosing what to test is part of a thoughtful process that should be followed meticulously. After all, a true Growth Hacker is not a magician who creates growth by waving a wand. He or she is a gatekeeper who carefully protects the growth process in order to achieve structural and sustainable results. Prioritizing your experiments is an important part of this process. One way to make a clear prioritization is to rank all experiments based on important criteria like:
- How likely is this experiment to succeed?
- How big is the impact if this experiment succeeds?
- How much does it cost to run this experiment (in terms of time and money)?
The next step is to apply these questions to every single experiment. Use a ten-point scale when answering them. In this case, a 10 represents a positive score. So a 10 on likelihood of success and size of impact means that an experiment is very likely to succeed and will have a big impact. A 10 on costs represents a low-cost experiment (remember, a 10 is more positive than a 1, and having low costs is great!). Then take the average of these criteria and use that as a guideline for your prioritization. When certain criteria are more important to your company (for example, costs) than others, you can simply adjust the weight of the variables.
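The ranking described above is easy to automate. This is a minimal sketch, not a prescribed tool: the experiment names, their 1-10 scores, and the cost-doubling weights are all hypothetical, chosen only to show the weighted-average mechanics:

```python
def priority_score(likelihood, impact, cost, weights=(1.0, 1.0, 1.0)):
    """Weighted average of the three criteria (1-10 scale, 10 is best).

    Note: for cost, a 10 means LOW cost, so higher is always better.
    """
    w_l, w_i, w_c = weights
    return (likelihood * w_l + impact * w_i + cost * w_c) / (w_l + w_i + w_c)

# Hypothetical backlog: (likelihood, impact, cost) per experiment.
experiments = {
    "referral email": (7, 8, 9),
    "landing page redesign": (5, 9, 3),
    "adwords test": (6, 6, 7),
}

# A cost-sensitive company might weight cost double, as the text suggests.
cost_sensitive = (1, 1, 2)

# Rank the backlog: highest score runs first.
ranked = sorted(
    experiments,
    key=lambda name: priority_score(*experiments[name], weights=cost_sensitive),
    reverse=True,
)
print(ranked)  # ['referral email', 'adwords test', 'landing page redesign']
```

With equal weights the redesign would rank higher; doubling the cost weight pushes the cheap referral email to the top, which is exactly the adjustment the paragraph describes.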
3. Clearly state what you want to measure before you start experimenting
A common obstacle to successful experimentation is the inability to objectively judge the outcome of an experiment. When you set up an experiment, try to be wary of secondary motives that cloud your judgment. It could, for example, be the case that one of your team members is overly positive about the results of an experiment because he or she strongly believes the experiment could work in the future, or because someone wants to show they did a good job. This seems obvious but happens more often than you might expect.

So let's say you ran an AdWords campaign that led to 150 conversions. It's still not clear whether that's a positive result, because whether it qualifies as good depends on many factors: What would the result have been if we had used other channels? What would happen if we tested on a different target group? Is the number of conversions due to the acquisition channel or the landing page? And much more. Given that it is always hard to determine when a result is 'good enough', it is important to determine this before you actually run the experiment. In this way, the interpretation of the results can be as emotion-free as possible.
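One way to make the pre-commitment concrete is to write the success criterion down, in code or in a shared document, before any results exist. This is only an illustrative sketch: the threshold of 200 conversions and the two outcomes are hypothetical numbers, not from the article:

```python
def make_verdict(min_conversions):
    """Fix the success criterion up front, before the experiment runs."""
    def verdict(conversions):
        return "success" if conversions >= min_conversions else "failure"
    return verdict

# Agreed with the team BEFORE launching the (hypothetical) AdWords campaign:
judge = make_verdict(min_conversions=200)

# After the campaign, the verdict follows mechanically from the pre-set bar,
# regardless of how optimistic anyone feels about the numbers.
print(judge(150))  # failure
print(judge(230))  # success
```

Because the bar was set in advance, a 150-conversion outcome is a failure by definition, which keeps the interpretation as emotion-free as the tip recommends.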