Mobile Landing Page Design and AI in Marketing [Your marketing questions answered]

April 28th, 2023

Every Wednesday we hold a free Marketing LiveClass as part of ChatGPT, CRO and AI: 40 Days to build a MECLABS SuperFunnel. Marketers and entrepreneurs ask questions in the webinar chat. And we answer them right here…

Do you see it as an issue that we’re all designing first for desktop and analyzing pages on desktop, when the majority of traffic will likely be on mobile? Is it a best practice to design for mobile first?

It is a best practice to put the ideal customer first. If the ideal customer is more likely to be using a mobile phone, then yes, start with a mobile landing page. Even better if you focus on the most popular device size (if you have a website already, you can find this info in your analytics).

But don’t stop there. How else can you put the customer first?

  • Speed first – how fast is the connection where they are located? Some nations and areas have faster connections than others. If your ideal customers are in an area with slow connections, you should start by stripping the page down to its essence and only adding an element if it’s absolutely necessary.
  • Contact-preference first – Phone? Email? Chat? Social media? How do they want to contact your company? You should likely provide several options, but for a landing page that revolves around an action, make sure you are starting with the action most preferable to the customer.
  • Price first (or not) – Is the ideal customer more likely to want a deal? Or care less about price and more about service or durability? Lead with what is most valuable to them by, for example, including a coupon code front and center.

These are just some ideas to get you thinking about how to put the customer first. And if you do have an ideal customer that is likely to be on mobile, this article may help give you some mobile-specific ideas – Exploring the Mobile Customer Experience: Three discoveries for designing an effective mobile experience.

But a word of caution – don’t just design for mobile first because it’s a general best practice. Remember, it’s only a best practice if the majority of your brand’s ideal customers are accessing your landing pages with a mobile device. There are still likely many companies that would better serve their customer with desktop-first (or perhaps laptop-first) designs – for example, B2B companies or those serving older customers.

And lastly – a thank you to this questioner. One reason I hope Flint McGlaughlin and I have been able to add value to participants of the MECLABS SuperFunnel Research Cohort is that we bring the MECLABS methodology and conversion psychology to bear on their funnels, their ads, and their landing pages as we offer conversion optimization and marketing strategy suggestions.

But another reason I’ve brought value is because I’m simply not steeped in their business every day. I have an outside perspective. As Mark Twain said, an expert is just “an ordinary fellow from another town.” Being from another town, I can help point out their blind spots.

The SuperFunnel Cohort community has helped point out my blind spots as well. And this was one. I have provided optimization suggestions to many cohort participants’ landing pages, but I don’t think I ever challenged them to design for mobile first. Thanks to this questioner, I realize that this is a really obvious approach I overlooked.

But that’s how blind spots are, right? You don’t notice them at all. Until one day someone points it out to you, and then it seems breathtakingly obvious.

As an attendee, what’s the best way to get started with AI?

I like that this questioner didn’t ask – “what’s the easiest way to get started with artificial intelligence?” They asked for the best way to get started with artificial intelligence.

And I would say – take a look at the answer above. Don’t begin with AI, just like you shouldn’t necessarily begin with mobile. Begin with the customer:

  1. What goals does the customer want to achieve in their life? What challenges do they want to overcome?
  2. If you find many answers to Question #1 that have nothing to do with your brand…fantastic! You’re doing it right. You’re laser focused on your ideal customer, not your own self-interest. However, unless you have chosen the wrong addressable market, your brand should be able to help with some of those things. So how can your brand help?
  3. Break down your answers to Question #2 into two buckets – ways that require a monetary payment from the customer (this is your “product”), and ways that do not require monetary payment (this is your “marketing”).
  4. Now, how can you deliver the value identified in Question #3? This is where you discover what role artificial intelligence can play.

The rest is experimentation. Experimenting with AI tools to see how they can deliver that value. But also, conducting marketing experiments to see if the addition of artificial intelligence is helping you “move the needle” in your funnel. We answer questions about running tests in Marketing Experimentation: Answers to marketers’ and entrepreneurs’ questions about marketing experiments.

+1000 Daniel, when everyone has AI and a bot, who cares that you do?

In fairness, this isn’t a question, per se. The participant agreed with something I said in the LiveClass.

But I included it because it is the reason for the four-step framework I gave to address the previous question.

When you approach your marketing with a technology-first mindset, someone else will always be breathing down your neck. Ready to replicate or outpace your success. Always ready with a better, faster, cheaper technology.

Which is why you should approach these types of decisions – yes, even technology decisions…especially technology decisions – with a value-proposition first mindset. This is how epic brands are built, and how you architect a sustainable competitive advantage.

Here’s a great example. Anyone can put an AI-powered chatbot on their site. But Medieval Times implemented a chatbot that helped communicate the dinner theater’s value proposition. You can read how they did it in – Artificial Intelligence and Machine Learning in Marketing: What marketers (even those who don’t care about tech) should know about AI and ML.

Incidentally enough, we published that article in the Middle Ages of AI – September 23rd, 2022…more than two months before ChatGPT’s launch as a public prototype on November 30th. However, in my (biased) opinion, the takeaways discussed in the article are just as relevant today because we focused less on the technology itself and more on the humans behind it – the ideal ‘customer’ for our content: marketers and entrepreneurs.

How can I join the next cohort?

How can I join a cohort? Is there info on that? I have purchased MarketingSherpa books and watched videos, but would love active feedback on my new project

At the end of the LiveClass, we answered questions about joining the MECLABS SuperFunnel Research Cohort. Feel free to join us for a Wednesday LiveClass to get ideas for your marketing funnel, and if you stick around to the end, we’ll answer your questions as well.

Here are some quick excerpts from previous LiveClasses:

Can we put all 8 micro-yes(es) on the landing page?

How do we measure the strategy?

How do we weigh the appeal or exclusivity of a claim?

Marketing Experimentation: Answers to marketers’ and entrepreneurs’ questions about marketing experiments

April 17th, 2023

Here are answers to some chat questions from last week’s ChatGPT, CRO and AI: 40 Days to build a MECLABS SuperFunnel. I hope they help with your own marketing efforts. And feel free to join us on any Wednesday to get your marketing questions answered as well.

Am I understanding the message correctly that the main value at first isn’t in more conversions or $$$ but in deeper understanding of the customer mindset?

This questioner is asking about the value of marketing experimentation. And it reminds me of this great movie quote…

Alfred: Why do we fall, Master Wayne?
The (future) Batman: So we can learn to pick ourselves back up again.

Similarly, we might say…

Marketer: Why do we run experiments, Flint?
Flint (the real Batman) McGlaughlin: “The goal of a test is not to get a lift, but rather to get a learning.”

So when you see marketing experiments covered in articles or videos (and we are guilty of this as well), the coverage usually focuses on the conversion rate increase. It is a great way to get marketers’ attention. And of course we do want to get performance improvements from our tests.

But if you’re always getting performance improvements, you’re doing it wrong. Here’s why…

Marketing is essentially communicating value to the customer in the most efficient and effective way possible so they will want to take an action (from Customer Value: The 4 essential levels of value propositions).

So if you’re always getting performance improvements, you’re probably not pushing the envelope hard enough. You’re probably not finding the most efficient and effective way, you’re only fixing the major problems in your funnel. Which, of course, is helpful as well.

In other words, don’t feel bad about getting a loss in your marketing experiment. Little Bruce Wayne didn’t become Batman by always doing the obvious, always playing it safe. He had to try new things, fall down from time to time, so he could learn how to pick himself back up.

While that immediate lift feels good, and you should get many if you keep at it, the long-term, sustainable business improvement comes from actually learning from those lifts and losses to do one of the hardest things any marketer, nay, any person can do – get into another human being’s head. We just happen to call those other human beings customers.

Which leads us to some questions about how to conduct marketing experiments…

Do we need a control number?

In the MEC300 LiveClass, we practiced using a calculator to determine if results from advertisement tests are statistically significant.

The specific tool we practiced with for test analysis was the AB+ Test Calculator by CXL.

I’m guessing this questioner may think ‘control number’ comes from previous performance. And when we conducted pre-test analysis, we did use previous performance to help us plan (for more on pre-test planning and why you should calculate statistical significance for your advertising experiments, you can read Factors Affecting Marketing Experimentation: Statistical significance in marketing, calculating sample size for marketing tests, and more).

But once you’ve run an advertising experiment, your ‘control number’ – really two numbers, users or sessions and conversions – will be data from your ads’ or landing pages’ performance.

You may be testing your new ideas against an ad you have run previously by splitting your budget between the old and new ads. In this case, you would usually label the incumbent ad the control, and the new ad idea would be the treatment or variation.

If both ads are new, technically they would both be treatments or variations because you do not have any standard control that you are comparing against. For practical purposes in using the test analysis tool, it is usually easier to put the lower-performing ad’s numbers in the control, so you are dealing with a positive lift.

Remember, what you are doing with the test calculator is ensuring you have enough samples to provide a high likelihood that the difference in results you are seeing is not due to random chance. So for the sake of the calculator, it does not matter which results you put in the control section.

Labeling a version a ‘control’ is most helpful when actually analyzing the results, and realizing which ad you had originally been running, and what your hypothesis was for making a change.
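To make this concrete, here is a minimal sketch in Python of the two-proportion z-test that calculators like this typically run under the hood (the numbers are hypothetical, and I’m not claiming this is CXL’s exact implementation). Notice that swapping which ad you call the ‘control’ changes how the lift is expressed, but not the significance:

```python
from math import sqrt
from scipy.stats import norm

def two_proportion_z_test(visitors_a, conversions_a, visitors_b, conversions_b):
    """Return the relative lift of B over A and the two-sided p-value."""
    cr_a = conversions_a / visitors_a
    cr_b = conversions_b / visitors_b
    # Pooled conversion rate under the null hypothesis of "no real difference"
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (cr_b - cr_a) / se
    p_value = 2 * norm.sf(abs(z))  # two-sided
    return (cr_b - cr_a) / cr_a, p_value

# Two hypothetical ads: 5,000 impressions each, 150 vs. 180 clicks
lift, p = two_proportion_z_test(5000, 150, 5000, 180)
print(f"Lift: {lift:+.1%}, p-value: {p:.3f}")

# Swap the labels: the lift is now expressed against the other baseline,
# but the p-value -- the significance -- is identical
lift, p = two_proportion_z_test(5000, 180, 5000, 150)
print(f"Lift: {lift:+.1%}, p-value: {p:.3f}")
```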

Which brings us to what numbers you should put in the boxes in the test calculator…

Users or sessions, would that be landing page views? I’m running on Facebook.

In this specific test calculator, it asks for two numbers for the control and variation – ‘users or sessions’ and ‘conversions.’

What the calculator is basically asking for is – how many people saw it, and how many people acted on it – to get the conversion rate.

What you fill into these boxes will depend on the primary KPI for successes in the experiment (for more on primary KPI selection, you can read Marketing Experimentation: How to get real-world answers to questions about a company’s marketing efforts).

If your primary KPI is a conversion on a landing page, then yes, you could use landing page views or, even better, unique pageviews – conversion rate would be calculated by dividing the conversion actions (like a form fill or button click) by unique pageviews.

However, if your primary KPI is clickthrough on a Facebook ad, then the conversion rate would be calculated by dividing the ad’s clicks by its impressions.

Which brings us to the next question, since this tool allows you to add in a control and up to five variations of the control (so six versions total)…

Can you confirm the definition of a variation really fast? Is it a change in copy/imagery or just different size of ad?

Remember what we are doing when we’re running a marketing experiment – we are trying to determine that there is a change in performance because of our change in messaging. For example, a headline about our convenient location works better than a headline about our user-friendly app, so we’ll focus our messaging about our convenient location.

When there are no validity threats and we just make that one change, we can be highly confident that the one change is the reason for the difference in results.

But when there are two changes – well, which change caused the difference in results?

For this reason, every change is a variation.

That said, when testing in a business context – with a necessarily limited budget and the need for reasonably quick action – it could make sense to group variations.

So in the question asked, each ad size should be a variation, but you can group those into Headline A ads, and Headline B ads.

Then you can see the difference in performance between the two headlines. But you also have the flexibility to get more granular and see if there are any differences among the sizes themselves. There shouldn’t be, right? But by having the flexibility to zoom in and see what’s going on, you might discover that the small space ads for Headline B perform the worst. Why? Maybe Headline B works better overall, but it is longer than Headline A, and that makes the small space ads too cluttered.

Ad size is a change unrelated to the hypothesis. But for other changes, this is where a hypothesis helps guide your testing. Changing two unrelated things would result in multiple variations (two headlines, and two images, would create four variations). However, if your experimentation is guided by a hypothesis, all of the changes you make should tie into that hypothesis.

So if you were testing what color car is most likely to attract customers, and you tested a headline of “See our really nice red car” versus “See our really nice blue car,” it would make no sense to have a picture of a red car in both ads. In this case, if you didn’t change the image, you wouldn’t really be testing the hypothesis.

For a real-world example see Experiment #1 (a classic test from MarketingExperiments) in No Unsupervised Thinking: How to increase conversions by guiding your audience. The team was testing a hypothesis that the original landing page had many objectives competing in an unorganized way that may have been creating friction. Testing this hypothesis necessitated making multiple changes, so they didn’t create a variation for each. However, when making a new variation would be informative (namely, how far should they go in reducing distractions) they created a new variation.

So there were three variations total. The control (original), treatment #1 which tested simplifying by making multiple changes, and treatment #2 which tested simplifying even further.

When we discuss testing, we usually talk about splitting traffic in half (or thirds, in the case above) and sending an equal amount of traffic to each variation to see how they perform. But what if your platform is fighting you on that…

One thing I’m noticing is that Google isn’t showing my ads evenly – very heavily skewed. If it continues, should I pause the high impression group to let the others have a go?

It really comes down to a business decision for you to make – how much transparency, risk, and reward are you after? Here are the factors to consider.

On the one hand, this could seem like a less risky approach. Google is likely using a statistical model (probably Bayesian) paired with artificial intelligence to skew toward your better performing ad to make you happy – i.e., to keep you buying Google ads because you see they are working. This is similar to multi-armed bandit testing, a methodology that emphasizes the higher performers while a test is running. You can see an example in case study #1 in Understanding Customer Experience: 3 quick marketing case studies (including test results).

So you could view trusting Google to do this well as less risky. After all, you are testing in a business context (not a perfect academic environment for a peer-reviewed paper). If one ad is performing better, why pay money to get a worse-performing ad in front of people?

And you can still reach statistical significance with uneven sample sizes. The downside I can see is that Google is doing this in a black box, and you essentially just have to trust Google. It’s up to you how comfortable you are doing that.
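If you are curious what a multi-armed bandit actually does, here is a minimal Thompson sampling sketch in Python. To be clear, this is a common Bayesian bandit method and only my assumption about the kind of logic a platform might use – not Google’s actual algorithm. Each ad’s clickthrough rate gets a Beta distribution that sharpens as data accumulates, and traffic drifts toward the ad that keeps winning the random draws:

```python
import random

# Hypothetical true clickthrough rates -- unknown to the algorithm
true_ctr = {"Ad A": 0.030, "Ad B": 0.036}

# Beta(1, 1) priors: alpha counts clicks, beta counts non-clicks
alpha = {ad: 1 for ad in true_ctr}
beta = {ad: 1 for ad in true_ctr}
served = {ad: 0 for ad in true_ctr}

for _ in range(20000):
    # Draw a plausible CTR for each ad from its current posterior,
    # then serve the ad whose draw is highest
    draws = {ad: random.betavariate(alpha[ad], beta[ad]) for ad in true_ctr}
    chosen = max(draws, key=draws.get)
    served[chosen] += 1
    # Simulate the impression and update that ad's posterior
    if random.random() < true_ctr[chosen]:
        alpha[chosen] += 1
    else:
        beta[chosen] += 1

print(served)  # impressions skew toward the better ad as evidence accumulates
```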

When you go to validate your test, you could get a Sample Ratio Mismatch warning in a test calculator, warning you that you don’t have a 50/50 split. But read the warning carefully (my emphasis added), “SRM-alert: if you intended to have a 50% / 50% split, we measured a possible Sample Ratio Mismatch (SRM). Please check your traffic distribution.”

This warning is likely meant to warn you of a difference that isn’t obvious to the naked eye if you intended to run a 50/50 split. This could be due to validity threats like instrumentation effect and selection effect. Let’s say your splitter wasn’t working properly, and some notable social media accounts shared a landing page link but it only went to one of the treatments. That could threaten the validity of the experiment. You are no longer randomly splitting the traffic.
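By the way, you do not have to rely on a calculator’s alert – the standard way to check for a sample ratio mismatch yourself is a chi-square goodness-of-fit test against the split you intended. A minimal sketch, with made-up traffic counts:

```python
from scipy.stats import chisquare

observed = [5124, 4713]                # visitors actually assigned to each arm
total = sum(observed)
expected = [total * 0.5, total * 0.5]  # what a 50/50 split should have produced

stat, p_value = chisquare(observed, f_exp=expected)
# A very small p-value (a common threshold is 0.01) suggests the split itself
# is broken -- a validity threat regardless of which arm is "winning"
print(f"chi-square = {stat:.2f}, p = {p_value:.5f}")
```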

On the flip side, if you want more control over things, you could evenly split the impressions or traffic, use a Frequentist (Z Test) statistical methodology, and choose the winner after the test has run. That way you aren’t trusting that Google is picking the right winner, or giving up on an initially under-performing treatment too soon.

I can’t personally say that one approach is definitely right or wrong; it comes down to what you consider more risky, and how much control and transparency you would like to have.

And if you would like to get deeper into these different statistical models, you can read A/B Testing: Why do different sample size calculators and testing platforms produce different estimates of statistical significance?

Flint, do I remember right? 4 words to action in the headline?

This question is a nice reminder that we’ve been answering questions about testing methodology – the infrastructure that helps you get reliable data – but you still need to craft powerful treatments for your tests.

The teaching from Flint McGlaughlin that you are referring to is actually about four words to value in the headline, not action. Four words to value. To give you ideas for headline testing, you can watch Flint teach this concept in the free FastClass – Effective Headlines: How to write the first 4 words for maximum conversion.

Hi, how would one gain access to this? Looks fascinating.

How can I join a cohort?

How do you join a cohort on 4/26?

Are the classes in the cohort live or self-paced? I’m based in Australia so there’s a big time difference.

Oh, I would be very interested in joining that cohort. Do I email and see if I can join it?

At the end of the LiveClass, we stayed on the Zoom and talked directly to the attendees who had questions about joining the MECLABS SuperFunnel Cohort. If you are thinking of joining, or just looking for a few good takeaways for your own marketing, RSVP now for a Wednesday LiveClass.

Here are some quick excerpt videos to give you an idea of what you can experience in a LiveClass:

Single variable testing vs variable cluster testing

What is an acceptable level of statistical confidence?

Paul Good talks about the need for the MECLABS cohort

Marketing Experimentation: How to get real-world answers to questions about a company’s marketing efforts

April 11th, 2023

Whenever we have questions in our weekly LiveClass – ChatGPT, CRO and AI: 40 Days to build a MECLABS SuperFunnel – we answer them here on this blog, not only to help attendees but also to help any readers who didn’t attend yet may face a similar challenge. Read below and get ideas for powering the growth of your business with marketing experimentation. And feel free to join us for a Wednesday LiveClass using the link I just mentioned.

Has anyone tested video ads vs image? Preston’s question in the LinkedIn group got me thinking. Daniel Burstein, do you know of any?

Here’s an example. “The team discovered through testing these Facebook ads that medium-form copy received 33% more clickthrough than short- or long-form, and video drove more clickthroughs and had greater reach than static images. Because of this testing, the team realized that video was one of their most powerful tools” (from Email Marketing: List size increased 600% in one year through content, paid ad strategy).

The reason I start with this question is because it lays out the reason we should be conducting marketing experimentation – so our customers answer key questions we have about our company and our marketing with their real-world behavior, instead of just making decisions based on internal guesswork.

No matter what has worked for another company (including the above example I just used), it doesn’t mean it will work for your unique customers, your unique value proposition, your unique situation. So don’t just follow what others do, use it as fodder to come up with your own wildly creative ideas – and then test them.

Now that we know why we should test, let’s get into some of the mechanics of testing, starting with test planning…

What is the main purpose of the [pre-test planning] calculator? Is it to estimate how long you’ll need to run the campaign? If I have a set daily budget, set level of confidence, set conversion rate, set number of variants and an unknown variance [the questioner is referring to the relative conversion rate difference between variants of the ad], what is the main purpose of estimating the data? As soon as the ads start running the numbers will all change and need updating. What’s the main metric we’re aiming on understanding?

As I have mentioned in a previous blog post, I’m not the mathematician, I’m the storyteller. So I won’t get into the math behind these concepts. But I do think it helps for marketers to understand the concepts at a basic level, to inform their testing. And if you haven’t already, reading last week’s blog post will help get you up to speed on some basic marketing testing concepts – Factors Affecting Marketing Experimentation: Statistical significance in marketing, calculating sample size for marketing tests, and more.

In this question the person is outlining the different metrics in the pre-test estimation tool (that is included as part of membership in a MECLABS SuperFunnel Cohort).

The goal of the tool – as with any planning – is to inform your efforts. Things may not turn out exactly as you think they will, but when you see the levers that you can pull to shape results, it should help you make decisions on what you want to move forward with and actually execute.

One of those levers is budget, to affect how large of a sample size your treatments receive. In this question the “daily budget” can’t be changed, so if you run the pre-test calculation and realize it would take an inordinate amount of time to reach statistical validity based on the amount of impressions or traffic your budget can buy, you may have to get creative.

Here’s one example, using Bugs Bunny and Daffy Duck. The team behind Looney Tunes and Merrie Melodies had a set ‘daily budget’ for how long they could work on an episode. So if they wanted to do something a little more groundbreaking, they would ‘borrow’ from other episodes. Maybe they would take only four days instead of five for a few episodes, and then use the time they had saved to dedicate eight days to an episode and really push the envelope (I recommend Duck Amuck and What’s Opera, Doc?).

Perhaps this planning tool might help you do the same thing. Invest less of your fixed daily budget in experiments where you know there will be a large conversion rate difference between the ad or landing page variants, so you save up some of that budget for experiments where there is a smaller difference.

Which brings us to another decision pre-test planning can help us make…

Ave Test Users = Impressions … Primary KPI Successes = Clicks (Or is it Opt-ins?) Variants = number of ads, Assumed Real Difference = Between the ads

The question is around what the primary KPI (key performance indicator) should be for an experiment – clicks on the ad or opt-ins on the landing page.

This is important to statistical significance because it will impact the sample size. The deeper you test in the funnel, the smaller your sample size will be. More people will click on an ad than will opt in on the landing page. So the deeper your primary KPI sits in the funnel, the larger the budget you will need to get a large enough sample size.

This is another reason why using the pre-test estimation calculator can be helpful – to inform where in the funnel you decide to test, based on your budget and/or other capabilities for getting people to see the conversion action you are trying to test.

What should your sample size be? There is no set sample size you need to reach. It is affected by how different the performance is in the different treatments. And we use the pre-test planning calculator to help us find that number.  “An important factor in sample size determination is the difference in results between the treatments. If the treatments return very different results, it’s much easier to confidently say that you really do have two (or however many) emails that will perform differently. You don’t need as many samples to do that. However, if the treatments have very similar results, you want many more observations to see if there really is a difference.” (from Marketing Optimization: How to determine the proper sample size)
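To give you a feel for the arithmetic behind pre-test planning, here is a sketch using the standard two-proportion approximation – not the MECLABS tool itself, and every input is hypothetical. It estimates the sample size per treatment, and then how long the test would need to run given the traffic a fixed daily budget can buy:

```python
from math import ceil
from scipy.stats import norm

def sample_size_per_arm(baseline_cr, relative_difference, alpha=0.05, power=0.80):
    """Approximate visitors needed per treatment for a two-proportion test."""
    p1 = baseline_cr
    p2 = baseline_cr * (1 + relative_difference)
    z_alpha = norm.ppf(1 - alpha / 2)  # 1.96 for 95% confidence
    z_power = norm.ppf(power)          # 0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_power) ** 2 * variance / (p1 - p2) ** 2)

# A 3% baseline conversion rate, expecting a 20% relative difference
n = sample_size_per_arm(baseline_cr=0.03, relative_difference=0.20)
daily_visitors_per_arm = 250  # what the fixed daily budget can buy
print(f"~{n:,} visitors per treatment")
print(f"~{ceil(n / daily_visitors_per_arm)} days to finish the test")
# Halve the expected difference to 10% and the required sample size roughly
# quadruples -- which is why smaller differences demand bigger budgets
```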

Why make impressions-to-ad-clicks as the primary KPI and then ad-clicks-to-appointments a secondary KPI? Why not make impressions-to-appointments the primary KPI? Is that just because the volume of data for the latter would be higher? So measuring more the message lever, rather the whole funnel effectiveness?

This question naturally follows from the previous one. As discussed, your KPI for a test is partially informed by the sample size and relative difference between the control and treatment.

But it is also partially informed by what you are trying to learn. Let’s not get so deep into the math and mechanics that we forget the goal of a test – to learn how to better serve the customer and better communicate to the customer so we can improve our results (as discussed in the first question in this blog post).

And remember, you don’t have to learn everything in one big test. Your goal is to test and learn repeatedly. So your first test can be in the channel, and then your second test could be on the landing page, for example.

To inform future testing it helps to establish secondary KPIs when you set up your test. Of course, you could just look at every possible metric under the sun. But the reason we do pre-test planning is to run our experiments with intention.

One of our former data scientists explained it to me this way – an experiment is picking a specific tree in a forest, then throwing a rock and seeing if you hit that specific tree versus throwing a rock into a forest, seeing which tree it randomly hits, and then remarking, “oh yeah, I meant to hit that tree.”

Pre-test planning focuses our thinking and efforts on hitting that tree. And secondary KPIs can help here as well. “Don’t try to analyze it all; you’ll get lost in data and become discouraged and confused. Instead, narrow your focus to the metrics that will provide the most relevant insights. Having primary and secondary KPIs for your site will help you begin to narrow your focus” (from Marketing Analytics: 6 simple steps for interpreting your data).

So you could first test message levers in the channel using a primary KPI of clickthrough. Then if your secondary KPI is conversion rate on the landing page, and the conversion rate is low for the winning ad treatment, you have a next (and very interesting) question to test.

Did the ad message you used simply bring lower-quality traffic – less motivated people – to your website? For example, an ad message of “Free iPad” might win a test in the channel but isn’t necessarily doing your funnel any favors if you can’t pay off that promise.

Or is it an effective ad message for attracting your ideal customer, but you don’t pay off that message well with your landing page? Perhaps there is poor continuity between the ad and the landing page because the page is simply one you already had, and it doesn’t align with the treatment ad’s messaging. Which brings us to…

Why would we use a page separate from our webpage?

You may have an ad message you want to test that is very different from the message on any webpage you have now. In that case, it would likely make sense to create a new landing page for it.

We could use the experimentation process I just mentioned to determine that our current webpage does not deliver on the new ad message, that there is poor continuity from the new ad message to our webpage.

However, signing up for marketing experimentation is not a binding agreement to endure unnecessary agony. We can also use our common sense and marketer’s intuition to avoid changes that are very likely to have poor results – after all, we are testing with a real budget and real people – and focus our energies on the questions that are more difficult to answer and will have a big impact on our business.

Of course, this means you may have to build a new landing page. Which brings us to our next question…

Chris it looks like you followed the template exactly, Daniel are you saying feel free to move the layout around and put the hero image in place of the form?

Participants in the MECLABS SuperFunnel Cohort get access to MECLABS SuperFunnel builder software (a simple drag-and-drop landing page builder that is embedded with the MECLABS methodology). This includes templates to help you build a landing page based on an approach that has been shown to be effective in previous experimentation.

I won’t go through the entire template, but it begins at the top of your landing page with Micro-Yes 1 – Yes, I will pay attention. This section should be a synopsis of your core offer. Followed by Micro-Yes 2 – Yes, I will engage deeper – where you address any friction or anxiety. Also, in this section you can add a video, image, or a form that supports your message.

The Cohort member I was providing optimization advice for in the LiveClass has a spokesperson who is well known in his industry, and my point was that invoking that spokesperson earlier on the landing page may help grab attention and be a core element of the offer that gets the ideal customer to say ‘yes.’ It may also be a way to reduce anxiety.

For example, I am a huge fan of Jerry Seinfeld. If he ever produced a show focused on marketing (let’s call it Copywriters In Cars Getting Conversions), his participation in the show would be a core part of the offer that would grab my attention. His participation in the show would also reduce my anxiety that it is a quality show and get me to engage deeper.

The SuperFunnel template can guide you as you build your landing page, but it should also be spurring future hypotheses for further testing. For example, is our founder a core part of the offer, so she should be included at the top of the page along with her image? Or is she a tertiary credibility indicator, and she should be used lower down on the page to help address any last-minute anxiety?

This is an example of a question you can test that has implications for the business. You can start by testing in the channel and conduct further testing on the page.

To get you thinking of possibilities for your own business, you can see three value categories we tested through email for VolunteerMatch in A/B Testing: How to improve already effective marketing.

If you are conducting a marketing experiment in the channel, you will also need to create the different ads to test. Which brings us to…

Ask ChatGPT what colors to use: “I have an ad with #CFB82C as the primary text color, #384E6C as the background color, and #333333 and #FFFFFF as additional colors. What color should my call to action button be and what color should the text be on top of it. Give me html codes.” … “Is there a green color that will work?”

OK, this isn’t really a question. I found this in the Zoom chat, and I just thought it was some nifty advice for using artificial intelligence to help you build an ad.
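And if you do take an AI’s color advice, you can sanity-check it yourself. Here is a minimal sketch that computes the WCAG contrast ratio – the accessibility guideline’s measure of text legibility, where AA compliance asks for at least 4.5:1 for normal text – using some of the hex codes from the prompt above:

```python
def relative_luminance(hex_color):
    """WCAG relative luminance of an sRGB hex color like '#384E6C'."""
    channels = []
    for i in (1, 3, 5):  # skip the '#', take each two-digit channel
        c = int(hex_color[i:i + 2], 16) / 255
        channels.append(c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4)
    r, g, b = channels
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(color_1, color_2):
    lighter, darker = sorted(
        (relative_luminance(color_1), relative_luminance(color_2)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# Primary text color on the background color from the prompt above
print(f"{contrast_ratio('#CFB82C', '#384E6C'):.2f}:1")
print(f"{contrast_ratio('#FFFFFF', '#333333'):.2f}:1")
```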

Hi Flint, what does the cohort entail? 4-5 weekly meetings. 40 days total/ price? Also if I’m generating a sales funnel for the first time is this a good option for me or should I start on another level (finishing my Micro Yes’s now)

How do I join this group!?

How do I join the MEC200 group?

How do I register for the next cohort?

Is there another cohort scheduled yet after April or approx. start date?

Jane, Is this video zoom call a part of the first cohort?

What happens with the transition to cohort 300?

There were a lot of questions from attendees about the cohort itself, and we answered them with a Q&A session at the end of the LiveClass. If you would like to learn more about the cohort, and the five ways you can ‘pay’ if you choose to participate (there are monetary and non-monetary options), just join us on a Wednesday LiveClass of ChatGPT, CRO and AI: 40 Days to build a MECLABS SuperFunnel.

Even if you don’t choose to join, by attending you should get a few ideas you can implement in your marketing funnel right away to improve conversion.

Here’s a quick excerpt from a recent LiveClass to give you an idea of what to expect – Hypothesis Articulation vs Essence.

Factors Affecting Marketing Experimentation: Statistical significance in marketing, calculating sample size for marketing tests, and more

April 4th, 2023

Here are answers to questions SuperFunnel Cohort members put in the chat of recent MEC200 and MEC300 LiveClasses for ChatGPT, CRO and AI: 40 Days to build a MECLABS SuperFunnel (feel free to register at that link to join us for an upcoming MECLABS LiveClass).

How many impressions or how much reach do we need for statistical significance?

I can’t give you a specific number, because the answer will vary based on several factors (described below). Also, MECLABS SuperFunnel Cohort members now have access to a Simplified Test Protocol in their Hub, and you can use that tool to calculate these numbers, as shown in Wednesday’s LiveClass.

But I included the question in this blog post because I thought it would be helpful to explain the factors that go into this calculation. And to be clear, I’m not the math guy here. So I won’t get into the formulas and calculations. However, a basic understanding of these factors has always helped me better understand marketing experimentation, and hopefully it will help you as well.

First of all, why do we even care about statistical significance in marketing experimentation? When we run a marketing test, essentially we are trying to measure a small group to learn lessons that would be applicable to all potential customers – take a lesson from this group, and apply it to everyone else.

Statistical significance helps us understand that our test results represent a real difference and aren’t just the result of random chance.

It’s human nature to want to feel like the change in results is because of our own hand – a better headline on the treatment landing page, or a better offer. And we can see the results with our own eyes, so it is very hard to accept that a 10% conversion rate may not really be any different from an 8% conversion rate.

But it may just be randomness. “Why is the human need to be in control relevant to a discussion of random patterns? Because if events are random, we are not in control, and if we are in control of events, they are not random, there is therefore a fundamental clash between our need to feel we are in control and our ability to recognize randomness,” Dr. Leonard Mlodinow explains in The Drunkard’s Walk: How Randomness Rules Our Lives.

You can see the effect of randomness for yourself if you run a double control experiment – split traffic between two identical landing pages, and even though they are exactly the same, they will likely get different numbers of conversions.
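If you would rather not spend real traffic to see this, a quick simulation makes the same point. This sketch sends 1,000 visitors to each of two identical ‘pages’ with the same true 5% conversion rate – and the observed counts almost never match:

```python
import random

TRUE_RATE = 0.05  # both "pages" are identical by construction
VISITORS = 1000

for trial in range(1, 6):
    page_a = sum(random.random() < TRUE_RATE for _ in range(VISITORS))
    page_b = sum(random.random() < TRUE_RATE for _ in range(VISITORS))
    print(f"Trial {trial}: page A = {page_a} conversions, page B = {page_b}")
# Identical pages, different counts almost every run -- pure randomness
```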

We fight randomness with statistical significance. The key numbers we want to know to determine statistical significance are:

  • Sample size – How many people see your message?
  • Conversions – How many people act on your message?
  • Number of treatments – For example, are you testing two different landing pages, or four?
  • Level of confidence – Based on those numbers, how sure can you be that there really is a difference between your treatments?

And this is the reason I cannot give you a standard answer for the number of impressions you need to reach statistical significance – because of these multiple factors.

I’ll give you an (extreme) example. Let’s say your sample size is 100 and you have four treatments. That means each landing page was visited by 25 people. Three of the landing pages each get three conversions, and the other landing page gets four conversions. Since so few people saw these pages and the difference in conversions is so small, how confident are you that they are different? Perhaps you just randomly had one more motivated person in that last group who gave you the extra conversion.
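Run that extreme example through a chi-square test – one standard way to compare several treatments at once – and you can see just how little those numbers tell you:

```python
from scipy.stats import chi2_contingency

# Four landing pages, 25 visitors each: [conversions, non-conversions]
table = [
    [3, 22],
    [3, 22],
    [3, 22],
    [4, 21],
]
stat, p_value, dof, _ = chi2_contingency(table)
print(f"p = {p_value:.2f}")
# The p-value lands far above 0.05: no evidence the pages differ. (With counts
# this small the chi-square approximation is itself shaky -- which is the point.)
```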

And this assumes an even traffic split, which you may not want to do based on how concerned you are about the change you are making. As we teach in How to Plan Landing Page Tests: 6 Steps to Guide Your Process, “Using an uneven traffic split is helpful when your team is testing major changes that could impact brand perception or another area of your business. Although the results will take longer to reach statistical significance, the test is less likely to have an immediate negative impact on business.”

Now, let’s take another extreme example. Say your sample size is 10,000,000 and you have just a control and a treatment. The control gets 11 conversions, but the treatment gets 842,957 conversions. In that case, you can be pretty confident that the control and treatment are different.

But there is another number at play here – Level of Confidence (LoC). When we say there is a statistically significant difference, it is at a specific Level of Confidence. How sure do you want to be that the control and treatment are different? For marketing experimentation, 95% is the gold standard. But 90%, or even 80%, could be enough if the change likely isn’t going to be harmful and doesn’t take too many resources to make. And the lower the Level of Confidence you are OK with, the smaller the sample size and the smaller the difference in conversions you need to be statistically significant at that LoC.

So is Estimated Minimum Relative Difference our desired/target lift if our test performs as expected?

Once you understand how statistical significance works (as I described in the previous question), the next natural question is – well, how does this affect my business decisions?

The first answer is, this understanding will help you run marketing experiments that are more likely to predict your potential customers’ real-world behavior.

But the second answer is – this should impact how you plan and run tests.

This question refers to the Estimated Minimum Relative Difference in the Simplified Test Protocol that SuperFunnel Cohort members receive, specifically in the test planning section that helps you forecast how long to run a test to reach statistical significance. And yes, the Estimated Minimum Relative Difference is the difference in conversion rate you expect between the control and treatment.

As discussed above, the larger this number is, the fewer samples – and the less time to get those samples – it takes to run a test.

Which means that companies with a lot of traffic can run tests that reach statistical significance even if they make very small changes. For example, let’s say you’re running a test on the homepage of a major brand, like Google or YouTube, which get billions of visits per month. Even a very small change like button color may be able to reach statistical significance.

But if you have lower traffic and a smaller budget, you likely need to take a bigger swing with your test to find a big enough difference. This does not necessarily mean it has to require major dev work. For example, the headlines “Free courtside March Madness tickets, no credit card required” and “$12,000 upper level March Madness tickets, $400 application fee to see if you qualify” are very quick changes on a landing page. However, they are major changes in the mind of a potential customer and will likely receive very different results.

Which brings us to risk. When you run valid experiments, you decrease the risk in general. Instead of just making a change and hoping for the best, only part of your potential customer base sees the change. So if your change actually leads to a decrease, you learn before shifting your entire business. And you know what caused the decrease in results because you have isolated all the other variables.

But your experiments will never guarantee a result. They will only tell you how likely it is that there will be a difference when you roll out that change to all your customers for a longer period. So if you take that big swing you’ve always wanted to take, and the results aren’t what you expect, that may rein your team in before a major failure.

As we say in Quick Guide to Online Testing: 10 tactics to start or expand your testing process, “If a treatment has a significant increase over the control, it may be worth the risk for the possibility of high reward. However, if the relative difference between treatments is small and the LoC is low, you may decide you are not willing to take that risk.”

With a test running past 4 weeks, how concerned are you about audience contamination between the variants?

Up until now we’ve been talking about a validity threat called sampling distortion effect – failure to collect a sufficient sample size. As discussed, this could mean your marketing experiment results are due to random variability, and not a true difference between how your customers will react to your treatments when rolled out to your entire customer set.

But there are other validity threats as well. A validity threat simply means that a factor other than the change you made – say, different headlines or different CTAs – was the reason for the difference in performance you saw. You are necessarily testing with a small slice of your total addressable market, and you want to ensure that the results have a high probability of replicability – you will see an improvement when you roll out this change to all of your potential customers.

Other validity threats include instrumentation effect – your measurement instrument affecting the results – and selection effect – the mix of customers seeing the treatments does not represent the customers you will ultimately try to sell to, or in this case, the same customer sees multiple treatments.

These are the types of validity threats this questioner is referring to. However, I think there is a fairly low (but not zero) chance of these validity threats arising simply from running the test a little past four weeks. While we saw this problem many years ago, most major platforms have gotten pretty good at assigning a visitor to a specific treatment and keeping them there on repeat visits.

That said, people can visit on multiple devices, so the split certainly isn’t perfect. And if your offer is something that calls for many repeat visits, especially from multiple devices (like at home and at work), this may become a bigger validity threat. If this is a concern, I suggest you ask your testing software provider how they mitigate against these validity threats.

However, when I see your question, the validity threat I would worry about most is history effect – an extraneous variable that occurs with the passage of time. And this one is all on you, friend; there is not much your testing software can do to mitigate against it.

As I said, you are trying to isolate your test so the only variables that affect the outcome are the ones you’ve purposefully changed and are intending to test based on your hypothesis. The longer a test runs, the harder this gets. For example, you (or someone else in your organization) may choose to run a promotion during that period. Maybe you can keep a tight lid on promotions for a seven-day test, but can you keep the promotion wolves at bay in your organization for a full two months?

Or you may work at an ecommerce company looking to get some customer wisdom to impact your holiday sales. If you have to test for two months before rolling anything out, you may test in September and October. However, customers may behave very differently earlier in the year than they would in December, when their motivation to purchase a gift near a looming deadline is a much bigger factor.

While a long test makes a history effect more likely, it can occur even during a shorter test. In fact, our most well-known history effect case study occurred during a seven-day experiment because of the NBC television program Dateline. You can read about it (along with info about other validity threats) in the classic MarketingExperiments article Optimization Testing Tested: Validity Threats Beyond Sample Size.

Join us for a Wednesday LiveClass

As I mentioned, these questions came from the chat of recent LiveClasses. You can RSVP now to join us for an upcoming LiveClass. Here are some short videos to give you an idea of what you can learn from a LiveClass…

“If there’s not a strong enough difference in these two Google ads…the difference isn’t going to be stark enough to probably produce a meaningful set of statistics [for a marketing test]…” – Flint McGlaughlin in this 27-second video.

“…but that’s what Daniel was really touching on a moment ago. OK, you’ve got a [marketing] test, you’ve got a hypothesis, but is this really where you want to invest your money? Is this really going to get the most dollars or the most impact for the energy you invest?…” – Flint McGlaughlin, from this 46-second video about finding the most important hypotheses to test.

How far do you have to go with your marketing to take potential customers from the problem they think they have to the problem they do have? I discuss this topic while coaching the co-founders of an eyebrow beauty salon training company on their marketing test hypothesis in this 54-second video.

Exploring Value Proposition and AI Technology: How to create unique ideas that you can execute with artificial intelligence

March 28th, 2023

In this blog post, I answer questions cohort members put in the chat of recent MEC200 and MEC300 LiveClasses for ChatGPT, CRO and AI: 40 Days to build a MECLABS SuperFunnel (feel free to register at that link to join us for an upcoming MECLABS LiveClass).

Maybe this can be answered in an email later? I am curious—in an industry like insurance where the commission structures are regulated, all agents are essentially selling the same products so exclusivity is very low, I wonder if there are any insights in how to differentiate an agent and provide exceptional value to drive customers.

You have put your finger on the essential value proposition challenge. I know it feels like this is uniquely an insurance problem, but many industries face this same challenge.

Every car can get me from point A to point B, yet Tesla has a unique value proposition. I could create this blog post on any computer, yet Apple has a unique value proposition. Every health care provider has to comply with government regulations and mandates from third-party payers in the insurance industry, yet Mayo Clinic has a unique value proposition.

All that said, I do agree, the situation you explained is harder than most. So to get your creative juices flowing, here are a few ideas for differentiating an insurance agent:

  • Knowledge of the local market, or a niche in the local market – For example, I live in Jacksonville. It is a significantly lower risk area of Florida (for hurricanes) than almost all of the rest of Florida. But national insurance companies don’t treat it that way. If an agent specialized in low-risk areas of Florida, that might draw my attention.
  • Specific demographics – My daughter is a college student, and I pay her auto insurance. If an agent specialized in auto insurance for adult dependents, that might stick out to me.
  • Concierge service – This has started to take off in medicine, where people are sick of waiting for primary care doctors. If an agent provided a proactive review of my policy every year, and its office was available 24/7 to file a claim on my behalf, that might catch my attention.
  • The fiduciary agent – This is taking off in the financial planning space, where customers are increasingly skeptical of financial advisors who make commissions. How can customers ensure these planners are putting the customer’s interests first, and not just going with the biggest commission? Enter fee-only financial planners who put the customer first.
  • Quality play – What if the agent didn’t write insurance from every possible company, but only companies recommended by Consumer Reports?
  • Additional services – What other insurance adjacent services could they provide? Manage home renovation projects? Manage auto maintenance? All-in-one funeral services including wills and making sure burial wishes are respected for life insurance clients?

Now, you understand your industry better than I do. Some of these ideas might not be practical, or even feasible at all. But using lateral thinking, maybe they’ll spark a new idea in you. If you’re not familiar with the practice, Edward de Bono explains it as, “Lateral thinking…is the process of using information to bring about creativity and insight restructuring.”

And if you really want to get your creative juices flowing, read about Trōv’s disruptive idea to use micro-duration policies in this article – Mobile Marketing and Value Decoupling: Interview with Harvard professor about eight years of research into business disruption.

The company ultimately pivoted from B2C to B2B, so the model described in this article didn’t work out, but that is often the case for early adopters. And the company’s creativity may give you an idea for creating differentiated value that you ultimately perfect with your insurance clients.

Also, may the marketing professional become a business consultant in helping an agent to create new value for their business offerings in order to create more value for their customers? Thank you!

Exactly. You’re hitting the nail on the head for why mastering a value proposition methodology can be so crucial.

I’ve had the opportunity to interview many marketing leaders for the How I Made It In Marketing podcast. And these successful leaders do not stay siloed in the marketing department, they don’t only focus on media buys and SEO tactics and automation settings.

Time and time again I’ve heard them tell me how they use the understanding of the customer, along with their marketing acumen, to help the business better serve the customer.

For example…

“…and your business will go out of business, or your client’s business in the case where we were, if you don’t really understand – what does a customer need, why will they choose you, and what can you do to be different from the competition…” This quote is from Radhika Duggal, Chief Marketing Officer, Super. It was a lesson she shared from a mentor in Consumer Financial Services Marketing: Your customer is your most important stakeholder (podcast episode #39).

While consumer financial services isn’t exactly insurance, it has a lot of similarities, and that episode might give you some ideas as well.

What is margin ratio?

It compares the margin of a company to its revenue. A 10% ratio means that 10 cents of every dollar of revenue is over and above the costs of producing the product. There are different ways to calculate this metric – for example, you could calculate gross profit margin, operating profit margin, or net profit margin. But I’ll leave that to an accounting publication to explain.
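As a quick worked example of the arithmetic, with illustrative numbers only:

```python
revenue = 1_000_000
cost_of_goods_sold = 600_000
operating_expenses = 250_000

gross_profit = revenue - cost_of_goods_sold           # 400,000
operating_profit = gross_profit - operating_expenses  # 150,000

print(f"Gross margin: {gross_profit / revenue:.0%}")          # 40%
print(f"Operating margin: {operating_profit / revenue:.0%}")  # 15%
```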

The reason I included this question from the chat is that margin ratio is a golden metric for telling you how effective your value proposition is.

Anyone can sell a bunch of products. Just throw enough media buys at it, enough incentives, enough discounting. In fact, the margin ratio can even be negative. You’ll still be selling products, you’ll just be losing money doing it.

A sustainably successful business with a forceful value proposition has a high margin ratio.

Let’s take Apple again, as an example. An Apple device isn’t simply priced a little more than its component parts. It is priced way higher. Because it has a strong value proposition, and therefore pricing power in the marketplace.

On the flip side, a commodity computing device can only eke out a small margin over and above the price of its parts, because there are many similar offerings in the marketplace with no clear differentiation.

Incidentally enough, this is another important reason for marketers to get involved in business and product decisions. A short-sighted business leader could choose to keep making the product just a little worse to save money – thus helping short-term margins but ultimately hurting the company’s value proposition. I discuss that challenge in How Companies Fail, and Why the Customer Always Wins in the End.

Is it fair to say that you only want to test one hypothesis at a time, whether it has one or 1,000 elements doesn’t matter, provided they all contribute to the same hypothesis?

With this question, let’s move on to how to discover the most effective value proposition – marketing experimentation.

First, some quick background. You would start with a framework to assess an existing or new value proposition. From going through this exercise, your team may have multiple questions. For example, which feature is most appealing? Which expression of that feature is most effective? For each of these, you would create a hypothesis.

Now to answer the question – each test should have a single hypothesis. You can change multiple elements on the landing page or ad IF (and only if) they all help you test the same hypothesis.

So if you were testing whether Feature A or Feature B was most appealing on a landing page, you could have a headline and CTA focused on Feature A, and a headline and CTA focused on Feature B. However, you could not have a headline focused on Feature A and a blue CTA button, and then a headline focused on Feature B and an orange button. This is introducing an extraneous variable that would make it harder to interpret the test. Did Treatment #1 win because of the headline about Feature A, or because of the button color?

Incidentally enough, you could have multiple treatments you are testing in the same experiment under the same hypothesis. So you could have four landing pages you test at the same time – one focused on Feature A, and others focused on Feature B, Feature C, and Feature D. However, you must make sure you have a large enough sample size that your results reach statistical validity.

I like this hypothesis but not sure if you should test both an offer and an audience in the same test?

A good hypothesis will help you home in on a key question you are trying to answer about your customer. This is important because you want to be able to clearly understand the results. If the control wins, it means X. If the treatment wins, it means Y.

So putting multiple unrelated variables into a single hypothesis is not a good idea because it will make it harder for you to interpret your test results.

That said, remember, no hypothesis exists in a vacuum. You should run a series of experiments powered by hypotheses that inform each other. So you’re on to something here.

For example, if you discover that, let’s say, Offer A gets a higher conversion rate than Offer B, it may be because Offer A is more powerful for all of your customers.

Or it may mean that you have more than one type of customer: more of your customers would act on Offer A, but the group that finds Offer B more appealing may still be large enough to represent a profitable segment.

Looking at secondary KPIs can help you discover these groupings. Maybe a certain age group or geographic grouping or device type was more likely to go with Offer B…even though Offer A clearly got more conversions overall.
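If your testing platform lets you export raw, visitor-level results, a quick breakdown like this hedged pandas sketch can surface those groupings – the file and column names here are hypothetical, so adapt them to your own export:

```python
# Hypothetical breakdown of conversion rate by offer and device type.
# "test_results.csv" and the column names are assumptions, not a real export.
import pandas as pd

results = pd.read_csv("test_results.csv")  # one row per visitor; converted is 0/1

by_segment = (results
              .groupby(["offer", "device_type"])["converted"]
              .agg(visitors="count", conversion_rate="mean"))
print(by_segment)
# Offer A may win overall while, say, mobile visitors prefer Offer B -
# a finding worth a focused follow-up test.
```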

Again, this is an opportunity to do follow-up testing – focused follow-up testing – to help you answer the new questions that the previous experiment brought up.

I discuss learning the motivations of different customer segments, along with other marketing experimentation topics, in Marketing Experiment: Learn from our split testing mistakes.

Wow! My son is a computer science major. Waste of time?

This question was in response to some artificial intelligence capabilities we previewed in the LiveClass.

I included this question because I’m sure many marketers have the same question.

I don’t have any better crystal ball than you do, of course, but I’m happy to share my hunch – the tools always change, but the task remains the same.

Our task as marketers has always been to help a customer perceive the value of the products and services a company offers. If that is your focus, I believe you will always have a career. The tools will always need that human intervention, that human guidance, that human collaboration. The tools will always need the human to set the direction, even if the tool is actually sailing the ship.

Now, if your role as a marketer has simply been to try to fool algorithms or batch and blast emails to purchased lists…well, the AI may be able to replace you.

I’ll give you an example. I interviewed Melissa-Ann Chan, Head of Marketing, Arta Finance, in Fintech Marketing: Creativity and technology is a killer combo (podcast episode #50). Artificial intelligence and machine learning are key to Arta Finance’s offering focused on the future of personal finance.

But even listening to Chan talk about it, that is only part of the company’s go-to-market strategy. She described how collaboration was key. The ex-Googlers built the company with three core groups of people with different expertise:

  • consumer product and growth expertise
  • deep experience running quant hedge funds, private equity, and options trading
  • AI and ML researchers

Yes, AI plays an important role, but without the people who know how to build products and grow brands (marketers), and the people who know how to make the things you’re selling (subject matter experts), my experience tells me that AI on its own will just be creating another commodity. A bunch of AIs spinning out undifferentiated marketing and products, competing on speed to market, losing market share quickly, trying to find ephemeral advantages in markets for traffic arbitrage, and ultimately, burning through capital.

I’m sure there are companies that will survive on those technical abilities, just like there are marketers today who can find momentary advantages in ad buying and algorithm changes and combine that with drop shipping or affiliate networks to turn a profit in an ever-changing world.

But that is an impossible way to build a sustainable competitive advantage. And a roller coaster of a career.

So the marketer’s role comes right back around to the topic we discussed in the beginning of this article – building powerful, unique value propositions to win high-margin business by serving a customer with differentiated value.

How does this not become plagiarism?

This is the dirty little secret of artificial intelligence – it’s not too different from humans in that it is only as smart as its training.

For example, a few years ago, AI would flag any picture with a ruler in it as skin cancer. “We noted that the algorithm appeared more likely to interpret images with rulers as malignant. Why? In our dataset, images with rulers were more likely to be malignant; thus the algorithm inadvertently ‘learned’ that rulers are malignant. These biases in AI models are inherent unless specific attention is paid to address inputs with variability,” Akhila Narla, Brett Kuprel, Kavita Sarin, Roberto Novoa, and Justin Ko explained in Automated Classification of Skin Lesions: From Pixels to Practice from the Journal of Investigative Dermatology.

And to the questioner’s point, the current crop of generative AI tools that is getting so much press lately doesn’t inherently have knowledge. It is getting trained on the content others have created.

“We have valuable content that’s being used constantly to generate revenue for others off the backs of investments that we make, that requires real human work, and that has to be compensated,” Danielle Coffey, executive vice president and general counsel, News Media Alliance, said in The Wall Street Journal article Publishers Seek Pay For Help With AI by Keach Hagey, Alexandra Bruell, Tom Dotan, and Miles Kruppa.

Which brings us back to the need for marketers in an AI-driven world, and what we can learn from them. Artificial intelligence can copy better than you. So to truly succeed in your marketing career, don’t copy. Use your unique experience, honed skillsets, and a repeatable methodology to create truly original ideas.

Not an easy task, I know, but to help you create those original ideas that you can use AI to help you execute, we meet every Wednesday for the ChatGPT, CRO and AI: 40 Days to build a MECLABS SuperFunnel LiveClass, as part of the MECLABS SuperFunnel Research Cohort (and you are welcome to join us – just RSVP at that link).

Here’s how Toby Wilson described the Cohort – “All of that cross-pollination of skill sets and ideas and everything like that creates a synergy that ends up being more than what any individual could bring…” (hear Toby for yourself in this 59-second video).

And here is a quick 56-second excerpt from a recent LiveClass to give you an idea of what you can learn by attending.

Resources From the Latest MECLABS LiveClass: Answering your questions on Customer Theory, value propositions, and Customer-First Objectives

March 20th, 2023

In the MEC200 LiveClass and MEC300 LiveClass for ChatGPT, CRO and AI: 40 Days to build a MECLABS SuperFunnel, we got a few questions in the chat. I’ll use this blog post to provide some resources to help you with those questions as you prepare for our next LiveClass on Wednesday.

What is Customer Theory?

Is there an example of a fairly advanced custom theory profile? E.g., what’s the document or artifact and format, after multiple tests. Is Customer Theory an actual doc?

The Customer Theory is an understanding of the customer that enables us to more accurately predict the total response to a given offer. It is your organization’s collected wisdom about the customer. Hopefully this comes from a cycle of experiments. But at first, it may come from data analysis. Or even gut wisdom.

Here’s a document you can use to begin building your Customer Theory, adding to it over time as you test – Introductory Guide to Developing Your Customer Theory [an interactive worksheet].

This article provides an example – Customer Theory: How to leverage empathy in your marketing (with free tool).

And this document can help you organize all the discoveries from your marketing experiments to discover patterns that will inform your customer theory – Get Your Free Test Discovery Tool to Help Log all the Results and Discoveries from Your Company’s Marketing Tests

The Four Levels of Value Propositions for Landing Pages

Do all landing pages have all four value propositions?

The four levels of value proposition are:

  • Primary value proposition (overall company)
  • Prospect-level value proposition
  • Product-level value proposition
  • Process-level value proposition

A successful landing page will tend to focus on one of these levels of value proposition, but have other elements as well. It’s like the 80/20 rule – 80% of your landing page will focus on one level, and the other 20% will support it with the other three levels.

For example, if you created a landing page for the Tesla Model S, the main thrust of your landing page would be a product-level value proposition. But you would also work in Tesla’s primary value proposition (perhaps showing Tesla’s charging network), prospect-level value propositions (perhaps showing why it is a good fit for a prospect focused on being environmentally conscious as well as a prospect interested in a sports car), and a process-level value proposition (perhaps there would be CTAs to sign up for a test drive, explaining the value of that process).

Keep in mind that the landing page is also a great place to test your value proposition and further inform your Customer Theory (although not the only place, as we discuss in Value Proposition Testing: 64% of marketers say landing pages are most effective).

Customer-First Objectives (CFO) Framework

I missed the first 30 minutes, what are CFO again?

The CFO is your Customer-First Objective, a three-part framework for focusing your webpage and marketing messaging developed by Flint McGlaughlin, the founder of MECLABS Institute. This framework is an attempt to bring discipline to marketers’ approaches to their landing pages and messaging BEFORE they start to create their funnels, to make sure their funnels put the customer first.

Many marketing leaders intuitively understand the importance of putting the customer first. It is a common topic on the How I Made It In Marketing podcast. For example, when I interviewed Michelle Huff, CMO, UserTesting, she discussed many stories with lessons that focused on understanding other people – like “Utilize customer empathy when trying to involve the customer in marketing efforts” and “Marketers should get involved with the sales team to learn from them” (you can hear our discussion in Product Management & Marketing: Surround yourself with the right people, podcast episode #38).

The MECLABS CFO framework helps discipline and codify that focus on understanding our customers, and uses it to inform all funnel creation activities.

Do you have any examples of a CFO that I can reference or do you recommend going back and reviewing the FastClasses?

You can check out FastClass #5 – Customer-First Objectives: Discover a 3-part formula for focusing your webpage message – and FastClass #6 – Customer-First Objectives Application Session: See real webpages optimized for marketing conversion.

Also, if you download the PDF copy (no form fill required) of The Way of the Marketer (in Chaos): A Path through the complexities of the AI Revolution, the cover has a CFO created by Flint.

Join Us for the Next LiveClass

You can RSVP here to join us for a Wednesday LiveClass. Here’s some feedback from current attendees to give you an idea what you can experience in these LiveClasses…

“…being able to come here and learn the ‘why’s’ behind things and getting the understanding has just been, like, life changing…and that was not a paid testimonial…” – Kristi Linebaugh, Sales and Marketing Specialist, Vigoa Cuisine. Hear directly from Kristi in this 51-second video.

“…My stress level has gone down because this is tough stuff…And Flint you’ve mastered all of this and it’s so nice to have access to this and you’ve been so gracious with your time and, well, what a generous soul…” – David B. Justiss, Agency Owner, Social Ink Works LLC. Hear directly from David in this 52-second video.

“…you said something very profound, and I’ve never heard this in the entire time of the Cohort yet. And it’s spot on to why we have the Cohort, why we need the Cohort, why the Cohort’s been so valuable as a community to us and I’m going to presume for so many, but it was something to the effect – be aware of incremental improvements to the wrong offer…” – Paul Good, Chief Executive Officer, PhotoPros, in this 58-second video

Gain Valuable Insights into Ad Optimization: Key takeaways from the MECLABS Institute LiveClass

March 2nd, 2023

Here is a summary of this week’s MECLABS SuperFunnel Research Cohort LiveClass. It was written with the help of artificial intelligence, part of our exploration into using AI in marketing (scroll down to the Process section if you are curious how it was written).

On March 1, 2023, MECLABS Institute hosted a LiveClass on “Ad Optimization.” The session was conducted by Flint McGlaughlin, the Founder and Managing Director of MECLABS Institute, and provided valuable insights into the world of advertising optimization. In this blog post, we’ll discuss the key takeaways from the LiveClass transcript.

Importance of value proposition

A value proposition is a statement that communicates why a customer should buy from you instead of your competitors. During the LiveClass, Flint emphasized the importance of a strong value proposition. He explained that a value proposition should not only be clear and concise but also differentiated from your competitors. He also mentioned that a value proposition should answer the following questions:

  • What is it?
  • Who is it for?
  • How is it different or better than the alternatives?

The power of clarity

Clarity is an important aspect of any advertisement. Flint explained that an ad should have a clear and concise headline that immediately communicates the value proposition. He also emphasized the importance of using clear and simple language that is easy to understand. He suggested that you should avoid using technical jargon or industry-specific terms that your target audience may not be familiar with.

The importance of testing

The final takeaway from the LiveClass was the importance of testing. Flint explained that optimization is an ongoing process and that you should always be testing different elements of your ads to see what works best. He suggested using A/B testing to test different variations of your ad and measuring the results. He also emphasized that you should not rely on best practices or assumptions but instead let the data guide your decisions.

Conclusion

In conclusion, the LiveClass on “Ad Optimization” provided valuable insights into the world of advertising optimization. We discussed the importance of a strong value proposition, the power of clarity, and the importance of testing. By implementing these key takeaways, you can optimize your advertisements and improve your conversion rates.

The process for creating this blog post using artificial intelligence

This is another attempt to use artificial intelligence to create second-level content. Not primary content that needs subject matter expertise, but content that has a rich feedstock of information that needs to be repurposed from one format to another.

In this case, I took a session of the MECLABS SuperFunnel Research Cohort, and created a quick blog post summary. You may try to do something similar by publishing a simple writeup of a webinar, podcast or video.

Should you use an entry-level writer? Or can AI handle this?

I’ll include links to my first two attempts at the bottom of this blog post, and here is a little insight into the third attempt.

This time I tried using the Draft with AI feature in Notion, specifically asking Notion to write a blog post, and then I pasted in a transcript of the LiveClass.

The output was basically a long blog post about me specifically, saying how great I was. This isn’t unique to AI, of course. I’ve had human vendors working for me create press releases and other collateral that made me personally or our brands or events sound just amazing. Everybody wants to please the client, right?

And I’ll be honest, I was tempted to publish that as the blog post and just move on to my next task. It really made me sound pretty darn amazing.

But then I remembered the golden rule of marketing – nobody cares about me. Or our brands.

They care about themselves.

So I tried the same process again. As I’ve mentioned before, sometimes you can use the same exact prompt twice and get very different results from artificial intelligence. The blog post was different the second time, but it was still all about me. It sounded like a letter of recommendation. I tried a third time. Different again, but about me again. OK, I can’t resist, here is a quick bit of it…

Daniel Burstein: A Leadership Example

Leadership comes in many forms, and Daniel Burstein is an excellent example of how a leader can bring people together to achieve a goal… in this blog post, we will explore three key aspects of Burstein’s leadership style: his ability to create a sense of community, his willingness to help others, and his focus on achieving goals…

I mean, who can argue with that, amiright? In all seriousness, I just couldn’t bring myself to publish a full blog post like that.

So I tried a different approach. As I’ve written about before, even a slightly different approach can bring wildly varied results from AI.

I created a page in Notion with the transcript. (Note: A transcript will probably be so long that you can’t just paste it into a Page; you actually have to import the original text file.)

And then I created another Page in Notion and told the AI to write a blog post about… and pasted in the link to the transcript page.

This time it created the full blog post we published above.

I didn’t like the title though, so I went into the OpenAI Playground, and asked it to come up with a title for this post.

And then of course – human oversight, although this only needed some very light editing.

On the upside, Notion helped overcome one of the challenges I was having with ChatGPT – that the transcripts were too long to paste in. And while still pretty basic, this may be the best AI-generated summary blog post yet. Here are the two previous posts, and I’ll let you judge for yourself.

MECLABS SuperFunnel Research Cohort LiveClass: A recap of AI, marketing strategies, and collaborative learning

Lessons Learned from a MECLABS SuperFunnel Research Cohort LiveClass: A marketer’s perspective

March 3rd Update: This blog post was originally called “Gain Valuable Insights into Landing Page Optimization…” until Hellie wrote to me with an excellent point – Wednesday’s LiveClass was actually about ad optimization, not landing page optimization. I question why I didn’t notice this obvious error, which I surely would have caught from a human writer. I think I was so amazed that the AI had come up with a summary focused on a specific topic, since previous versions were much more generic, like “a marketer’s perspective.” And, in fairness, we did extensively address landing page optimization in many LiveClasses before this.

The irony of course is – I’ve been writing (and thinking) and cautioning so much about the blind spots AI can cause because we are so wowed by the parlor trick that we overlook the obvious. And yes, even being conscious of it, I fell victim to this blind spot myself. Thanks for pointing that out, Hellie!

MECLABS SuperFunnel Research Cohort LiveClass: A recap of AI, marketing strategies, and collaborative learning

February 24th, 2023

Here is a summary of the February 22nd MECLABS SuperFunnel Research Cohort LiveClass, written by artificial intelligence (scroll down to the Process section if you are curious how it was written).

The MECLABS SuperFunnel Research Cohort recently conducted a LiveClass, which attracted a diverse group of marketers from different backgrounds. The participants shared their experiences, insights, and suggestions on various aspects of digital marketing, from AI to headlines. Here are some of the key takeaways from the LiveClass.

AI as collaborators rather than doppelgangers

One of the participants suggested that AI should be viewed as collaborators rather than doppelgangers. In other words, AI should be used to enhance human life rather than replace it. The participant added that businesses should focus on using AI to augment their existing processes and workflows, rather than trying to replace humans with machines.

Headlines should contain four value words

Another participant noted that headlines should contain four value words. The participant added that these value words should address the customers’ pain points and offer a solution to their problems. Another suggestion was to include the words “you” and “your” in the headlines, which could help to make them more personal and engaging.

Tips and ideas for marketing a new product

During the LiveClass, participants discussed various marketing ideas for a product, including sample stands in local supermarkets, geotargeting filters on ads, and shipping free samples in a time of inflated shipping costs. One participant suggested that the product’s entire step-by-step handmade recipe could be shared as a marketing strategy, emphasizing the ancient human craftsmanship and unending tradition.

Another participant jokingly suggested that the taste profile could be converted to binary so customers could download the taste to their cerebral cortex. Some participants suggested selling the product to restaurants or stores instead of individuals. The participants also discussed the importance of storytelling in marketing and how it can create an emotional connection with customers.

Tips for creating a compelling landing page for a free ebook about logos

One participant submitted a landing page with a free ebook about logos.

The participants discussed the importance of a good logo in building trust with potential customers. They suggested that the design of a page should be more breathtaking if a company wants potential customers to trust them with their logo. Participants recommended showcasing recognizable logos of companies the design team has supported. They also suggested analyzing the company’s own logo to demonstrate their expertise and convey the logic behind the choice of every aspect of the logo.

Participants suggested creating an infographic that dissects the company’s logo to provide a visual representation of their expertise. They also recommended creating a personal value proposition to help businesses develop their brand identity. Overall, participants agreed that a good logo is a crucial element in building trust with potential customers.

Overall, the LiveClass provided marketers and entrepreneurs with various creative marketing strategies to help them promote their products effectively. It was a valuable learning experience for marketers. The participants shared their experiences, insights, and suggestions on various aspects of digital marketing. The key takeaways from the LiveClass highlight the importance of using AI as collaborators, crafting effective headlines, and paying attention to website design.

The process for creating this blog post using artificial intelligence

When I tried this last week, I went into much more detail about the process for using AI, and thoughts for using AI for content creation. You can read that in Lessons Learned from a MECLABS SuperFunnel Research Cohort LiveClass: A marketer’s perspective.

The basic question with using AI is – for something simple like summarizing a webinar, is AI good enough? Should you use an entry-level human writer? Or do you need a skilled, experienced human writer with deep subject matter expertise? One of the participants of the LiveClass summed it up well by discussing marginal returns – when it’s not worth trying 50% harder to get a 1.4% improvement.

The goal this time was to use the transcript from the LiveClass as the feedstock for the AI. So first we attempted to use Fathom. But Fathom only works with Zoom Meetings, not Zoom Webinars. So that didn’t work.

Then I tried to use ChatGPT and paste the transcript in. But that didn’t work either. The transcript was over 22,000 words, too long for ChatGPT. ChatGPT recommended about 1,000 words.

So I decided to go back to what I used last week – the chat log. This provides an extra filter, adding the wisdom of the community. Of course, filters have an upside and a downside. The downside being that you’re not learning from the teaching directly.

Even this was too long for ChatGPT, at over 5,000 words. So I had to cut it into thirds.
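If you want to try something similar, here is a minimal Python sketch of the kind of splitting I did by hand – the file name and the word limit are assumptions, so adjust them for your transcript and your tool:

```python
# Split a long chat log into word-limited chunks for pasting into a chatbot.
# "chat_log.txt" and the ~1,800-word limit are assumptions; adjust as needed.

def split_transcript(text, max_words=1800):
    words = text.split()
    return [" ".join(words[i:i + max_words])
            for i in range(0, len(words), max_words)]

with open("chat_log.txt", encoding="utf-8") as f:
    chunks = split_transcript(f.read())

for n, chunk in enumerate(chunks, start=1):
    print(f"--- Chunk {n} of {len(chunks)} ---")
    print(chunk[:80], "...")  # preview; paste each full chunk with your prompt
```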

This brings up another challenge with ChatGPT. Even if you give it the same exact prompt, it will create different outputs each time (there must be some level of randomness programmed into it).

Not ideal for having a blog post with a consistent voice. Although after some tinkering with different prompts, I was able to get something close to uniformity in voice.
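Worth knowing: that randomness is partly a tunable setting. If you call the model through OpenAI’s API rather than the chat interface, you can lower the temperature parameter to get more repeatable output. Here is a minimal sketch, assuming the openai Python package as it existed in early 2023 (the key, model, and prompt are placeholders):

```python
# Lower temperature -> less randomness, more consistent voice across runs.
# Assumes the openai Python package as of early 2023; values are placeholders.
import openai

openai.api_key = "YOUR_API_KEY"

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    temperature=0.2,  # 0 is most deterministic; the default is higher
    messages=[{
        "role": "user",
        "content": "Write a blog post based on a MECLABS SuperFunnel "
                   "Research Cohort LiveClass. Here is the chat: ...",
    }],
)
print(response.choices[0].message.content)
```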

The most effective prompt had the least amount of information. “Write a blog post based on a MECLABS SuperFunnel Research Cohort LiveClass. Here is the first third of the chat from the LiveClass.”

This worked better than prompts discussing the target audience or asking for transferable principles or key takeaways. I found ChatGPT used those prompts like an entry-level SEO writer fond of keyword stuffing. It just repeated those words and synonyms of them throughout (trying to please me, I guess?). When I just asked it to write a blog post with less info, it seemed to use natural language processing more to determine what the chat was actually about. There was also more consistency of voice with this approach.

And then, of course, it required human insight and oversight, although I tried to use a very light editor’s touch, since the purpose of these blog posts is not just to give you a summary of the LiveClasses, but also, to further all of our knowledge in using artificial intelligence (paired with human intelligence) in our marketing and content creation.

So this was another step on our journey into the future.

Lessons Learned from a MECLABS SuperFunnel Research Cohort LiveClass: A marketer’s perspective

February 17th, 2023

Here is a summary of the February 15th LiveClass with the MECLABS SuperFunnel Research Cohort. I have the byline, but in truth I didn’t really write this like the articles I normally write. I was just the content generator (or AI writer or automated content writer if you prefer) and used artificial intelligence to create these key takeaways.

At the end of this blog post I share the process I used to create it. This goes along with a key aspect of these cohorts – to experiment with AI tools and see how they can help optimize a marketing funnel.

Hopefully these AI-derived summaries give you an idea or two for improving your own marketing.

Lessons from a Marketing Class: Zoom chat highlights, Part 1

In this marketing class conducted on Zoom, the participants discussed various topics, ranging from ChatGPT’s behavior to the weather in different parts of the world. In addition to marketing-related topics, they discussed the power of community building, and the use of search engines like Bing and Google.

The participants concluded that search engines were increasingly influenced by artificial intelligence, which in turn could be used to further their marketing goals.

One participant noted that “Ads will be much more targeted with this type of information,” suggesting that marketing campaigns are becoming increasingly personalized and data-driven thanks to AI. Another participant shared an infographic of Google Trends showing the popularity of searches related to “Microsoft Bing,” underlining the impact artificial intelligence is already having on the competition between search engines.

The participants discussed the potential implications of artificial intelligence on marketing, noting that AI-driven insights can help brands better understand customer behavior and preferences and create more targeted and effective campaigns.

The participants also discussed the importance of gathering feedback and comments from their audience and then ranking them to determine which ones are most valuable. When asked for feedback on what the cost of the MECLABS SuperFunnel Research Cohort should be, they even joked about splitting a hypothetical $100M equally among themselves.

Overall, the class served as a reminder that marketing is about understanding your audience and building a strong community around your brand. Businesses can create more effective and personalized marketing campaigns by gathering feedback, keeping up with the latest trends, and using tools such as AI.

For example, by using customer feedback to understand their needs and wants, businesses can tailor their marketing campaigns to appeal to their target audience, such as offering discounts or special promotions.

Lessons from a Marketing Class: Zoom chat highlights, Part 2

This is a summary of a MECLABS SuperFunnel Research Cohort LiveClass that was conducted on Zoom. The chat took place at a frantic pace and covered a wide range of topics, which are summarized below:

  • BG suggested skipping ChatGPT and heading straight to openai.com.
  • DF suggested using Flint’s account.
  • The group shared laughter, with JF, TW, and IS using emojis.
  • MP noted his preference for OpenAI’s sandbox, despite it not being as conversational in historical referencing.
  • BH shared a link to the page [LINK]
  • DC made a joke about having tripled his dose of phenobarbital and now being unfazed by K’s page.
  • DJ shared his experience of helping a client increase their project rates from $4,000 to $10,973. He credited the increase to helping the client see the value of their work and finding the right clientele. He wondered if the same principles could be applied to the SuperFunnel course and its clients.
  • KB expressed her dislike of images with text that are unreadable, noting that they are a bad experience for accessibility and a risk in the USA.
  • CG praised B’s work and suggested that the testimonials on his page should include the name of the client’s company.
  • DF suggested breaking the instant access form into steps to make it less intimidating.
  • HI shared links to three pages: [LINK], [LINK], and [LINK]

The chat was lively and covered a wide range of topics, from humor to serious business advice. It is a testament to the value of the SuperFunnel course that so many people from different backgrounds and locations came together to share their experiences and insights.

This kind of interaction is key to fostering a community of entrepreneurs who are willing to share their experiences with others and help each other succeed. The SuperFunnel course provides a platform for this kind of engagement and encourages its students to build a network of like-minded individuals.

The Process – Can you use artificial intelligence to create written content from your webinars, meetings, classes, etc?

Every content marketer tries to squeeze the most juice from her content, taking information from one medium and bringing it into another. For example, repurposing content from a live event and then sharing videos, transcripts, blog posts, articles, audio podcasts, social media posts, slides, reports, etc., etc.

I like to think of this as secondary content. Primary content is original and requires a subject matter expert of some sort. But for secondary content, you don’t need a creator with subject matter expertise – just the ability to communicate. I’ve used a more junior writer for this in the past, an intern could do it as well, and we’ve even included this in writer tests before we make a hire.

But to borrow from the GEICO ad, is this so simple even an AI could do it?

Judge for yourself. You can see the AI-written summary blog post above. And you can compare it to previous blog posts I’ve written after LiveClasses of the SuperFunnel Research Cohort – Marketing Funnel Strategy: 3 principles to help you make a high-converting landing page and Lead Generation: Generating business from an ebook, infographic, etc.

I’d like to think mine are better than what the AI wrote, but maybe they’re not? Or how much better do they really need to be? When creating content, I’m often weighed down by the need to deliver enough value to the reader or listener. That is difficult and time consuming. But have I overdone it? And is what the AI created enough?

Of course, the artificial intelligence didn’t create this on its own. It took work from me to engineer. And you may use the same (or different) AI tools to get a better result. AI is still just a tool, and you are the craftsman. A paintbrush, and you are the painter.

So here’s how I wielded the paintbrush in case you would like to do it as well, or let me know a better process.

STEP #1: Determine the source material

I could have used a transcript of the audio from the LiveClass. And I may in the future.

But I decided to use the chat log instead. So the wisdom of the community served as the basis of content before AI was even involved. There is a very active, experienced community on these LiveClasses and they bring up a lot of good and helpful information. If you don’t have an active community in your chat for webinars or other meetings, this may not work as well for you.

STEP #2: Determine the artificial intelligence technology you are going to use

I started with ChatGPT, because, well, the hype is to the moon for it right now.

But I’m a writer at heart, and while ChatGPT can provide good information, I’m not always sure it has the best wording. So, I used Wordtune as well to copy edit and suggest better wording.

Wordtune, with its Spices feature, can also help add content that punches up the copy, and I figure two AI (brains? neural networks?) are better than one.

STEP #3: Craft prompts to get a rough draft

The chat transcript was too long to paste into ChatGPT. And when I asked ChatGPT how long of a discussion I could use, I got the evasive type of answer you would expect from a politician whose hand was just caught in the cookie jar, not the crisp analytical answer I was expecting of a specific character count.

As an AI language model, I can process queries of various lengths, and there is no hard limit on the length of a query. However, it’s important to keep in mind that longer queries may take longer to process, and may also be more difficult for me to understand and provide a relevant response. So, it’s generally best to keep your queries concise and focused on the information you’re looking for.

When I cut the chat in half, that did the trick. So I did it twice (hence the two parts above). The first prompt I gave was:

Here is the first half of a chat from a marketing class conducted on Zoom. Please write a blog post summarizing this class so other marketers and entrepreneurs can learn from it

I then asked it for a title, and got “Zoom Chat Highlights: Lessons from a Marketing Class.”

After getting that summary, I realized I missed a huge opportunity for specificity (and, frankly, branding) by mentioning the name of the Zoom meeting. So here is the prompt I gave with the second half of the chat transcript:

Here is the second half of a chat from a MECLABS SuperFunnel Research Cohort LiveClass conducted on Zoom. Please write a blog post summarizing this class so other marketers and entrepreneurs can learn from it

I then asked it for a title as well for that version. I ultimately went with this title for this blog post because I liked it better. I passed the title through Wordtune before using it (see next step).

The upside of using artificial intelligence – it provided a different perspective than I would have. For example, the AI discussed some of the humanity of the chat – “the group shared laughter” or “discussed the weather around the world” – while I would have left that out and focused more on helpful information to marketers.

It’s kind of ironic, too, because Flint McGlaughlin and I have discussed at length the importance of making a community about more than just information, and that if we shared video of the LiveClasses we should include some of the camaraderie and fun that has been built up. Even knowing that though, I would have totally missed adding it in.

It’s also interesting that the two parts ChatGPT crafted are so radically different. My prompts were slightly different, as you can see above. And the LiveClass did have two parts – the first half was more informational, and the second half was more interactive as Flint and I provided live conversion optimization suggestions to the community’s landing pages.

But I also question if this is just part of the randomness of AI. If I did this 10 times, would I just get 10 totally different styles? Is this an infinite number of monkeys with an infinite number of typewriters? Is there consistency or just randomness and luck? Something to watch as use of AI progresses.

And really, ChatGPT is called “conversational AI,” but it’s a pretty bad conversation. I give a command and it outputs a response. It would work much better if (like a real human would in a conversation) it asked clarifying questions to home in on what you really want and how it can help. In this (in fairness, very early) version of conversational AI, too much rests on how well or how clearly you state your prompt, so you need to try multiple prompts and prompt stacking, which lessens the time savings from using AI.

It reminds me of a question I asked Siri recently, “Can you eat the rind of brie cheese?” to which it responded, “I cannot.” When I worded the question better, I discovered the real answer to my question – yes, the rind is edible.

STEP #4: Edit the rough draft

To edit the rough draft, I used Wordtune, an AI writing tool that offers AI-powered writing suggestions.

The tool did some copy editing, although ChatGPT was pretty good there. I’ve heard this function of Wordtune derided because Microsoft Word has similar features. But as I’m sitting here typing, Word is suggesting to me that I change ChatGPT to Catgut…even though Microsoft has invested $1 billion in OpenAI, the maker of ChatGPT. So my hope is that Wordtune – an online, AI-driven service – has more updated copy editing than Word.

Wordtune offered rewrite suggestions. ChatGPT often wrote in passive voice, and Wordtune changed it to active voice.

It also offered some good wording suggestions. Although sometimes, in fairness, ChatGPT had a little more personality, which surprised me. For example, when writing about the conversation in part 2, ChatGPT described the chat as having a “frantic” pace while Wordtune suggested “fast-paced.” Fast-paced is more professional and businesslike, so it would probably have been the better word to use. And it was probably a more accurate word. But I enjoyed the humanity (can I say that about AI writing?) of “frantic,” so I kept that in.

The thing I really liked about Wordtune, though, was the Spices feature. It’s meant to “spice up your writing” based on some input you give it. It can explain, add emphasis, give a counterexample, and on and on.

This is where the two AI brains came in. There was ChatGPT’s description, and then Wordtune adding to it. Kind of like sports announcers – play-by-play and color commentary.

Incidentally, since the second half was just a bulleted rundown of what individual participants did, it would not have made sense to use the Spices feature in that area.

STEP #5: Fact check and use discretion

My intention was not to put my fingers to keyboard and write anything at all in these summaries. Just orchestrate the two AIs off of each other and choose what worked best.

However, there were a few factual errors. So I manually corrected those. In fairness, any writer who had not attended the LiveClass could have made similar errors if they were not given the video recording and only had the chat log to work with.

I also anonymized the participants’ names (since I didn’t have their permission to use them in this experiment) and the links to their landing pages (since they were all rough drafts that attendees were getting conversion optimization ideas for during the LiveClass).

STEP #6: Determine the byline

As you can see, my byline is on this blog post. I felt comfortable doing that because I transparently told you about the process of using AI. Had I not, and just tried to pass this off as any other blog post I had written, I would not have felt comfortable putting my name on this as the writer.

Content marketing is so effective because it builds trust. So in my opinion, if artificial intelligence creates your content, you should tell your audience.

So was it worth using artificial intelligence to write a blog post? Or should I have just written it myself?

I’m not sure I saved much, if any, time using AI to write the summary in this blog post, and I’m not even counting this lengthy process description I wrote.

First, I’m probably faster than the average writer because of my lengthy experience in this industry. So if you are a slower writer, you might save more time than I did.

Frankly, it took some time playing with the two AI tools and getting it just right.

Of course, this was my first time doing it. If I did this regularly, I would probably get much quicker and more efficient with the process, or even find a better process. And in fairness, I could have used a simpler process, and just taken what ChatGPT gave me directly, not using Wordtune at all.

I also want to point out – artificial intelligence takes some time because it requires human oversight. I worry we can be lulled into a false sense of confidence in what may be an advanced parlor trick. Yes, isn’t it amazing what these machines can do with their intelligence? But I fear they are a bit like Toonces the Driving Cat.

If you’re unfamiliar with the old Saturday Night Live sketch, the people are so impressed the cat is driving them in a car. Isn’t it amazing what the cat can do? Until…

…the cat drives them off a cliff. At which point you get the punch line “Toonces can drive a car, just not very well.”

For marketing leaders, the other question is – would using AI in this situation hurt your ability to grow your team? As I mentioned, this kind of assignment can be great for a junior writer, to build their writing chops and domain acumen in your industry. So will this steal from your bench strength?

No easy answers, at least to me. But when I asked ChatGPT, it had the answer. It told me…

Incorporating AI technology into certain marketing tasks can actually enhance the growth of your team, rather than hurt it. By automating some of the more repetitive and time-consuming tasks, your team can focus on higher-level work that requires a more strategic and creative approach. This means that your team can be more productive, efficient, and effective in their work.

Additionally, AI can serve as a valuable tool for learning and skill-building. For example, an AI-powered writing assistant can provide real-time feedback on grammar, tone, and style, allowing junior writers to learn and improve their writing skills more quickly. This can accelerate their development and help them become more effective writers in a shorter period of time.

Finally, it’s important to note that while AI can automate some tasks, it can never replace the creativity, strategic thinking, and human touch that marketing professionals bring to the table. Therefore, rather than taking away from your bench strength, AI can actually enhance it by freeing up your team’s time and resources to focus on more impactful work.

Related Resources

Sales Funnel and ChatGPT: 6 ways to leverage AI now to develop a powerful value proposition

Artificial Intelligence Case Studies: Two companies that boosted brand awareness with AI and another marketer that used humans instead

Artificial Intelligence and Machine Learning in Marketing: What marketers (even those who don’t care about tech) should know about AI and ML

AI Marketing Tools: How marketers are using artificial intelligence to help their campaigns right now

Marketing Funnel Strategy: 3 principles to help you make a high-converting landing page

February 2nd, 2023

Flint McGlaughlin and I conducted live optimization of landing pages in a recent LiveClass with the MECLABS SuperFunnel Research Cohort (MECLABS is the parent organization of MarketingSherpa). We offered specific conversion optimization suggestions for landing pages in this Zoom meeting, while every member of the cohort offered even more ideas for improving those landing pages in the Zoom chat.

A few transferable principles arose from this session that you can use to improve your own landing pages, and we’ll share those today on the MarketingSherpa blog. They form a sensible process you can use for your conversion optimization and marketing strategy.

PRINCIPLE #1: Don’t lay on claims; foster conclusions.

The job of the marketer is helping people come to their own conclusions rather than telling them what to think.

Why? When we tell them what to think, they will naturally resist. When they come to their own conclusions, they will sell themselves.

So how can you tell if you are making claims of value or fostering conclusions of value? I like this simple test from Flint – “Print your webpage and take a red pen through every declarative statement. See what you have left,” he said in The Prospect’s Perception Gap: How to bridge the gap between the results we want and the results we have.

If your landing pages and other marketing don’t do too well on that test, here are some great examples from your peers to spur ideas for improvement – Show, Don’t Tell: 3 quick case studies where companies help customers reach their own conclusions.

PRINCIPLE #2: Spend 5-10X more on your offer than on your landing page.

As a writer my whole career, there are many times a marketing or business leader would come to me with a writing challenge that wasn’t really a writing challenge.

The real challenge was – they didn’t have a value proposition. The most well-crafted headlines and body copy won’t move the needle much when you don’t have a value proposition. I always say writing is 80% having something worth saying, and 20% saying it well.

And it really resonated with me when Flint taught the above transferable principle – spend the bulk of your time, resources, energy, etc. on creating an offer that serves a customer, not on trying to sell the offer with your landing page.

As Flint mentioned on the call, that offer doesn’t have to be an ebook. He mentioned surveys we have run here at MarketingSherpa as an example.

There are probably key questions your audience needs answered – either to shape their own strategy, as a proof point when they sell an idea to a leader or client, or simply out of curiosity. If you are able to answer those questions, you are able to win a key “yes” in their customer journey – the ability to begin a relationship with them and continue to build trust.

One way to do that is with survey research. Here are a few ways we have fielded these surveys before, to give you some ideas:

  • With Nielsen, to their panel (a panel is a group of potential survey respondents; you may want to represent all American consumers, or you may want insights from a specific role in a specific industry)
  • With SurveyGizmo, to their panel (and I believe SurveyMonkey offers similar options)
  • To our own audience, or with partners/sponsors to both of our audiences

Note for the first option, Nielsen provided data science expertise, and for the second and third options, we used our own data scientists. Data science is important to make sure the results are representative of the population you are talking about (again, this could be all American consumers, or a specific group of people in your industry).

For all three options, and every piece of survey research we conducted, we came up with our own questions.

Two quick tips on coming up with questions. First, don’t prime your audience – this means, don’t drive them to a specific conclusion with the way you word your questions. Truly seek to discover.

Also, have a plan for how you will message the survey no matter what the results are. For example, we asked 1,200 American consumers “In general, which type of advertising channels do you trust more when you want to make a purchase decision?” And then we asked them about a series of traditional and digital channels.

I realized if digital channels won, this would help our audience make the case for increasing digital budgets and the digital industry would pick up on this and promote it. And if traditional channels won, it would help marketers make the case for traditional budgets and the industry behind traditional marketing channels would want to share it. You can see how we messaged the results in Marketing Chart: Which advertising channels consumers trust most and least when making purchases.

We worked with a public relations agency to share the results. And as Flint mentioned, it was covered by publications like The Wall Street Journal (The Marketing Virtues of Good Ol’ Snail Mail) and Harvard Business Review (Why Marketers Are Returning to Traditional Advertising).

As the above example shows, there are usually two potential outcomes of a survey question – the results will either reaffirm what your audience believes (in which case they can use it to win over others) or provide an “aha” moment by having them question if what they believe is true (and win more attention for your results – like the classic journalistic aphorism “man bites dog.”)

Keep in mind, this only works if you have questions your audience cares about.

Here is the landing page we created for the report of the survey’s results, in case it gives you ideas for your own landing pages. Looking back at it now I see many ways it can be improved (no subhead?!), but hopefully it gives you some ideas for your own landing pages – MarketingSherpa Customer Satisfaction Research Study.

PRINCIPLE #3: Audit the landing page. Where would a potential customer be concerned? (Anxiety) Where is there resistance? (Friction)

Once you’ve created value and communicated it on your landing page, ask what might hold a customer back from saying “yes” to your offer? What is the non-monetary cost to them?

Anxiety and Friction are part of the MECLABS Conversion Sequence Heuristic, and explained here – Improve your Marketing Collateral with a Proven Methodology.