Imagine using a size 14 blue font instead of a size 12 red font, and that single change making a difference of six hundred thousand dollars in revenue.

Even the top marketing experts are often left scratching their heads when it comes to the impact a subtle change, like font color, can have on conversion results. There’s no doubt about it; split testing is incredibly valuable. But I often see testing efforts fall short because they remain the exclusive domain of websites, direct response, and online marketing.

You should be relentlessly testing all aspects of your business and the customer experience. You should be looking outside of your industry to defy industry norms. Perhaps most importantly, you should expect almost all of your tests to fail. On the rare occasions when you find a winner, you need to be ready to pounce on it.

I recently shared a video about this, and I’ve worked with my clients on testing various aspects of the customer and buying experience. For example, we’ve tested new sales processes and new customer follow-up processes. We’ve introduced new ways of presenting their offerings or bundling offerings together. We’ve tested new methods of asking for referrals and testimonials, and we’ve even tested new ways of presenting quotes and proposals.

One of the best things about the work I do is that I get to see change efforts across an incredibly varied sample of businesses, from one-person companies barely breaking $5M/year to multinational giants doing well over $2B/year. This gives me a fantastically broad view of what companies are doing right now, what’s working, what’s not, and where their energy is best spent.

I’ve also seen companies that are enamored with making enormous changes, launching projects that span 24 to 36 months. But more often than not, these efforts fail simply because the world has changed too much between inception and realization.

Some of the biggest successes I’ve seen are when companies embrace a new idea or worldview, test it out quickly, and scale the winners.

Consider, for example, a manufacturing client that tested an entirely new model for manufacturing, distributing, and selling directly to its customers. The result was not only a dramatic increase in revenues but also recognition across its entire, ultra-competitive industry as the company “leading the forefront of manufacturing and marketing.” Most impressively, they went from concept to full implementation within months, not years.

But as mentioned above, you should expect most tests to fail. With one company, we tested three different series of questions to ask customers after purchase. We tested right after the sale, 30 days after the sale, and 90 days later. Two of these series had ZERO impact, but the third increased the repeat purchase rate by 34%. Within three months, every customer was being asked the winning series after each sale, and it was already starting to pay off.

The joy and the curse of testing is that you can test anything and everything. You will get much better results if you know where to start and which areas can have the biggest impact. This is why I occasionally bring together a few clients for small, extremely focused sessions to discuss what’s working in various industries and to get ideas for what to test in their own.

Your challenge for this week:

Look at how your business currently interacts with your customers after the sale. I want you to test a new approach to your post-purchase customer experience.

For example, if a client orders from you and usually doesn’t hear from you again until your product or service is delivered, then I want you to test adding a phone call or a personal outreach (perhaps both) immediately after the sale to let them know what to expect next. If there’s a long delay between order and delivery, I want you to double the customer touch points during that time.

If there’s almost no lag between order and delivery, I want you to add a personal touch within 24 hours of delivery.

Test the experience for no less than 30 days and measure your results. How did it make your customers feel, and how did their behavior change? How did you feel? What about your employees? What did you learn? What were the results?