Tuesday, July 17, 2018

Marketing Automation without Drag and Drop

A lot of companies want to sell you on their "Journey"-making programs - usually a drag-and-drop interface for building marketing automation around their customers.  Meet the criteria, and we start marketing to you.  Based on how you engage (or don't), the journey changes.  It's really cool - it looks great, it makes people salivate, and usually the people selling the product have created an amazing example.

[Image: an example journey diagram, from centricconsulting.com]

Unfortunately, the reality is that journeys are difficult to build.  Evolving business requirements, the realities of a changing marketplace, reporting complexities, and the difficulty of diagnosing issues when people enter (or are dropped from) journeys when they shouldn't are just some of the things practitioners face when they try to actually implement and maintain them.

Some platforms offer SMS, display advertising, and other methods of engagement exclusively through their journey tools.  Since everything I've worked on to date has been exclusively email-based, I have eschewed these pretty tools at the various places where I've implemented marketing automation.

I will still draw out diagrams like this on whiteboards when planning what we want to do.  Once we're satisfied, I'll sit down at a far less graphically oriented tool and start building out the vision.

This gives me a lot more control, a lot more visibility into the inner workings, and it makes it easier to tweak things as we go.  But it also means a lot more of the housekeeping is up to me to keep clean - if I want anyone else to partner with me on the work, or if I want to be able to quickly jump in weeks or months later and make changes without spending a lot of time trying to remember what does what.

So, I've been working to refine a standard naming convention and process flow for moving people through the process.  In some places, this may seem like overkill, but it's all borne out of years of refinement.

For each step of a program, I take people through a standard process, which includes a standard naming convention for the tables.



A (Import) - this catches new people entering the audience.  They can arrive via an FTP import, a file drop, or an API.  The naming conventions for the fields (or indeed the entire schema) may be determined by an outside source beyond my control.

The automation starts here....

B (Batched) - this is where an audience is batched.  New records are brought into B and then A is cleared.  I have full control over the schema and field names for B.  If there is additional data that I need, I may have one or more queries that add additional information to the records in B.
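Concretely, the first couple of steps might look something like this.  The field names and the subscriber_master table are made up for illustration, and I'm assuming a SELECT-only tool where each query's results land in a named target table (append or overwrite):

```sql
/* welcome0_01_a_to_b_bring_in_new_records
   Target: welcome0_b_batched (append new records) */
SELECT
    a.EmailAddress,
    a.FirstName,
    GETDATE() AS BatchedDate    /* when the record entered the pipeline */
FROM welcome0_a_import a

/* welcome0_02_b_to_b_get_subscriber_status
   Target: welcome0_b_batched (update in place) */
SELECT
    b.EmailAddress,
    b.FirstName,
    b.BatchedDate,
    s.Status AS SubscriberStatus    /* from a hypothetical master subscriber table */
FROM welcome0_b_batched b
LEFT JOIN subscriber_master s
    ON s.EmailAddress = b.EmailAddress
```

The LEFT JOIN matters in the enrichment step: an INNER JOIN would silently drop anyone missing from the lookup table.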

C (Cleaned) - because the tool I work with only allows use of the SELECT SQL statement (no DELETE), this is how I remove records I don't want.  I copy over only the valid records from B to C.  Then I immediately copy the records back to B.  Now B contains the valid, ready-to-go audience.
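A sketch of the two-step clean, with hypothetical validity rules - the point is that two SELECTs substitute for the DELETE the tool doesn't allow:

```sql
/* welcome0_04_b_to_c_clean_records
   Target: welcome0_c_cleaned (overwrite) - keep only the records we want */
SELECT b.EmailAddress, b.FirstName, b.BatchedDate
FROM welcome0_b_batched b
WHERE b.EmailAddress IS NOT NULL
  AND (b.VerifyFlag IS NULL OR b.VerifyFlag <> 'verified-bad')

/* welcome0_05_c_to_b_replace_cleaned_records
   Target: welcome0_b_batched (overwrite) - B now holds only valid records */
SELECT c.EmailAddress, c.FirstName, c.BatchedDate
FROM welcome0_c_cleaned c
```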

V (Validation/Verification Needed) - if the quality of the incoming audience is suspect, I may set new records aside for verification or validation with a tool like Kickbox.  After validating, I can drop the records back into the B table with a flag set to indicate either verified-ok-to-use or verified-bad (in which case the subscriber would be dropped the next time the records are cleaned).
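Sketching that detour (the "suspect" rule and the flag values are illustrative; the verification result is assumed to come back as a column on V):

```sql
/* b_to_v: set suspect records aside for verification */
SELECT b.EmailAddress
FROM welcome0_b_batched b
WHERE b.SourceName = 'untrusted_list_vendor'    /* hypothetical "suspect" rule */

/* v_to_b: return records to B with the verification result as a flag */
SELECT
    v.EmailAddress,
    v.VerifyResult AS VerifyFlag    /* e.g. 'verified-ok' or 'verified-bad' */
FROM welcome0_v_verify v
```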

Sometimes the automation ends here and a second automation handles the rest of the program.  This is important in high-volume settings.

P (Prequeued) - optional, not shown.  I use this when I only want to pull out a subset of my overall eligible audience.  I do this when I want to spread a send out over a longer period of time, instead of just sending to everyone eligible right away.
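For instance, to meter out at most 5,000 people per run - the cap and the tie-breaker are arbitrary, and TOP is a SQL Server-ism that may or may not exist in your tool:

```sql
/* b_to_p: pull a capped slice of the eligible audience */
SELECT TOP 5000
    b.EmailAddress,
    b.FirstName
FROM welcome0_b_batched b
ORDER BY b.BatchedDate    /* oldest first, so no one waits indefinitely */
```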

Q (Queued) - this draws forth an audience, segmented down to the people you want in the cohort.  During testing, you can constrain it to email addresses on your own domain, or set it to grab only 5 or 10 people.  And if you ever need to very quickly prevent anyone from entering an automation (effectively a pause, without touching the automation itself), add a nonsensical bit of SQL criteria here, like "AND EmailAddress = 'this-will-never@match@anything.com'"
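A sketch of a queue query with both tricks staged as comments, ready to flip on (segmentation rules are examples):

```sql
/* welcome0_06_b_to_q_queue_audience
   Target: welcome0_q_queue (overwrite) */
SELECT b.EmailAddress, b.FirstName
FROM welcome0_b_batched b
WHERE b.SubscriberStatus = 'Active'
  /* Testing: uncomment to only allow your own domain through
  AND b.EmailAddress LIKE '%@yourcompany.com'  */
  /* Emergency pause: uncomment and nothing can ever match, so Q stays empty
  AND b.EmailAddress = 'this-will-never@match@anything.com'  */
```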

You can stop the automation here if Q is empty.  No need to send blank emails.

Sn (Sends) - divide your queued audience into one or more sends.  You could split into two or more audiences for an A/B test, or take only a subset of Q to test whether it's better to send an email at all.
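One way to sketch a 50/50 split - NEWID() and TOP ... PERCENT are SQL Server-isms, and the table names follow the convention but are hypothetical:

```sql
/* q_to_s1: a random half of the queue */
SELECT TOP 50 PERCENT
    q.EmailAddress,
    'A' AS TestCell
FROM welcome0_q_queue q
ORDER BY NEWID()    /* random ordering makes the 50% cut a random sample */

/* q_to_s2: everyone in Q who didn't land in S1 */
SELECT q.EmailAddress, 'B' AS TestCell
FROM welcome0_q_queue q
LEFT JOIN welcome0_s1_send s1
    ON s1.EmailAddress = q.EmailAddress
WHERE s1.EmailAddress IS NULL
```

Deriving S2 as "Q minus S1" rather than a second random draw guarantees the cells never overlap.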

...something else goes here (described below)....

T (Sent) - This optional step brings all the Sn audiences back together.  This is great for troubleshooting, or if you want to do anything based on the difference between Q and T (who didn't get sent, who was suppressed, who was dropped by the ESP, etc.)
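Reassembly and the Q-versus-T diff might look like this (again with hypothetical table names):

```sql
/* s_to_t: reassemble the send cells into one table */
SELECT s1.EmailAddress, s1.TestCell FROM welcome0_s1_send s1
UNION ALL
SELECT s2.EmailAddress, s2.TestCell FROM welcome0_s2_send s2

/* Who was queued but never made it into a send? */
SELECT q.EmailAddress
FROM welcome0_q_queue q
LEFT JOIN welcome0_t_sent t
    ON t.EmailAddress = q.EmailAddress
WHERE t.EmailAddress IS NULL
```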

X (All Sent) - This optional file captures everyone who was sent, drawing from the Sns or from T.  This may be useful in research/reporting, especially if you didn't send to everyone who was eligible.

Z (Suppression) - Bring back everyone who was in the audience.  In a lot of these treatment streams, you only want them to get the email once. By placing them here after the send, you now have a nice handy file you can examine in SQL queries or place as a suppression on the time-of-send instructions used by your ESP.
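Sketched out, the suppression loop is an append plus an exclusion join (table and field names illustrative):

```sql
/* t_to_z: append everyone who was sent into the suppression table */
SELECT t.EmailAddress, GETDATE() AS SuppressedDate
FROM welcome0_t_sent t

/* ...and an earlier step (cleaning or queuing) excludes anyone already in Z */
SELECT b.EmailAddress
FROM welcome0_b_batched b
LEFT JOIN welcome0_z_suppression z
    ON z.EmailAddress = b.EmailAddress
WHERE z.EmailAddress IS NULL    /* only people who haven't already gotten it */
```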

...but wait, there's more.  I skipped a step up there after Sn - the actual send.  This is where you stick the instructions for the sends.  The nice thing about this methodology is that you can add them later.  You can run the automation hundreds of times to make sure everyone is flowing through correctly (using live data) without ever actually sending an email.  Once you're satisfied, clear out your Z file, set your Q query to only allow your corporate domain in, add the sends, start up the automation, and watch it go to your coworkers and colleagues.  Once you're happy, allow it to go to everyone.

So how about naming?

Name all your folders the same, whether they hold tables, queries, automations, or emails.  That way, it's easier for others (or yourself in two months) to navigate.

I like to name my tables like this:

welcome0_a_import
welcome0_b_batched
welcome0_c_cleaned
welcome0_q_queue

The lettering scheme helps keep them in alphabetical order.

And I like to name my queries like this:

welcome0_01_a_to_b_bring_in_new_records
welcome0_02_b_to_b_get_subscriber_status
welcome0_03_b_to_b_get_last_purchase
welcome0_04_b_to_c_clean_records
welcome0_05_c_to_b_replace_cleaned_records

The numbering helps keep them in the order they should appear in the automation (helpful in QA; a pain if you add steps).  Also, by indicating the source and target audiences (b_to_c), it's easier to follow the path of the data, especially as you're incrementally building and testing.

This also makes documentation easy, because so much of it becomes self-documenting.