Pull from Pardot

Pardot is a marketing automation solution that is focused on helping your company engage buyers, grow relationships, and close deals.

This topic describes the steps that are required to pull prospect, opportunity, and visitor data to Amperity from Salesforce Pardot:

  1. Get details

  2. Add courier

  3. Get sample tables

  4. Add feeds

  5. Add load operations

  6. Run courier

  7. Add to courier group

Get details

Amperity can pull data from Salesforce Pardot by using Fivetran as the interface to the Pardot API. This requires the following configuration details:

  1. A request to Amperity support to enable Salesforce Pardot as a data source for your tenant.

    Important

    Please allow for up to 24 hours after making the request for the Salesforce Pardot connection to be enabled.

  2. Access to an active Pardot account.

    Note

    Amperity connects to Salesforce Pardot using Fivetran, which is an application that must be configured for your instance of Salesforce Pardot. Fivetran connects to Salesforce Pardot, and then pulls this data directly to a Snowflake instance that is managed by Amperity. Use a Snowflake data source to pull these tables from that Snowflake instance to Amperity.

    Important

    Fivetran does not store any data that is pulled to Amperity from Salesforce Pardot.

  3. A Pardot account ID with an Administrator role.

  4. A Pardot Business Unit ID.

  5. A Salesforce access token.

  6. The Pardot API version and time zone.

    Tip

    This is available from the My Profile page in Pardot.

  7. A new security token for using SSO with Salesforce.

    Tip

    Reset your security token to receive an email that contains the new security token.

  8. The daily API call limit. (Default is 150,000.)

  9. Access to the instance of Snowflake that stores the Salesforce Pardot data tables that were pulled by Fivetran.

    Important

    The amount of time required to complete the initial population of data from Salesforce Pardot to Snowflake can vary, depending on the amount of data. As a general guideline, allow up to 72 hours for this process to complete.
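
To confirm that the Fivetran-managed Pardot tables are visible to the Snowflake credentials that Amperity will use, you can run a SHOW TABLES statement from a Snowflake worksheet. This is a minimal sketch, assuming the database and schema names used in the examples in this topic:

SHOW TABLES IN SCHEMA AMPERITY_A1BO987C.PARDOT_ACME;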

Add courier

A courier brings data from an external system to Amperity.

Consolidate all tables from Salesforce Pardot into a single courier as a fileset.

Important

The courier itself is added by Amperity support. You must complete the configuration to specify which Salesforce Pardot tables should be pulled from the Snowflake instance to Amperity.

Example table list

A table list defines the list of tables to be pulled to Amperity from Snowflake.

For example:

[
  "AMPERITY_A1BO987C.PARDOT_ACME.PROSPECT",
  "AMPERITY_A1BO987C.PARDOT_ACME.VISITOR",
  "AMPERITY_A1BO987C.PARDOT_ACME.EMAIL",
  "AMPERITY_A1BO987C.PARDOT_ACME.OPPORTUNITY",
]

Example stage name

A stage defines the location of objects that are available within Snowflake.

For example:

AMPERITY_A1BO987C.PARDOT_ACME.ACME_STAGE
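
To verify which objects are available from the stage, you can run the Snowflake LIST command against it. This is a minimal sketch, assuming the stage name shown above:

LIST @AMPERITY_A1BO987C.PARDOT_ACME.ACME_STAGE;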

Example load operations

Load operations associate each table in the list of tables with a feed. (The initial setup for this courier will use an incorrect feed ID, such as df-xxxxx.)

For example:

{
  "df-xxxxx": [
    {
      "type": "load",
      "file": "AMPERITY_A1BO987C.PARDOT_ACME.PROSPECT"
    }
  ],
  "df-xxxxx": [
    {
      "type": "load",
      "file": "AMPERITY_A1BO987C.PARDOT_ACME.VISITOR"
    }
  ],
  "df-xxxxx": [
    {
      "type": "load",
      "file": "AMPERITY_A1BO987C.PARDOT_ACME.EMAIL"
    }
  ],
  "df-xxxxx": [
    {
      "type": "load",
      "file": "AMPERITY_A1BO987C.PARDOT_ACME.OPPORTUNITY"
    }
  ]
}

To add a courier for Snowflake table objects

  1. From the Sources tab, select the courier added for Salesforce Pardot, and then click Edit Courier.

  2. Add a name for the courier. For example: “Salesforce Pardot (Snowflake)”.

  3. Select the “Snowflake” plugin, and then select the credential type.

  4. Enter the username and password. Use the credentials created in Snowflake that allow Amperity to access the instance of Snowflake that stores the Salesforce Pardot data tables for your tenant.

  5. Complete the credentials, account name, and region ID settings.

  6. Define the list of tables to pull to Amperity:

    [
      "table.name.one",
      "table.name.two",
      "table.name.etc"
    ]
    
  7. Enter the name of the Snowflake stage.

  8. Optional. Use a query to select specific columns from a Snowflake table prior to pulling those results to Amperity. Click “Add Snowflake query”. In the expanded box, provide a unique query name. A query name may contain alphanumeric characters (A-Z, a-z, 0-9), underscores, hyphens, and/or periods. For example: “Query_name.12-345a”.

    Use Snowflake query syntax to build a query to run against a table that is to be pulled to Amperity. (A hypothetical example query is shown after these steps.)

    Important

    The name of the query must be added to the file parameter within the load operations. For example:

    "FEED_ID": [
      {
        "type": "load",
        "file": "Query_name.12-345a"
      }
    
  9. For each table to be sent to Amperity, define the load operations using the feed ID for the feed that is associated with that table.

    Because the feeds have not yet been configured, set the feed IDs to a string that is obviously incorrect, such as df-xxxxxx. (You may also set the load operations to empty: {}.)

    Tip

    If you use an obviously incorrect string, the load operation settings will be saved in the courier configuration. After the schema for the feed is defined and the feed is activated, you can edit the courier and replace the feed ID with the correct identifier.

    Caution

    If load operations are not set to {} or to an obviously incorrect string, the validation test for the courier configuration settings will fail.

  10. Click Save.
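
If you add a Snowflake query (step 8), build it with standard Snowflake SQL against one of the tables that will be pulled to Amperity. The following is a hypothetical sketch that selects a subset of columns from the PROSPECT table and limits results to recently updated rows; the column names are illustrative and may differ in your schema:

SELECT ID, EMAIL, FIRST_NAME, LAST_NAME, UPDATED_AT
FROM AMPERITY_A1BO987C.PARDOT_ACME.PROSPECT
WHERE UPDATED_AT >= DATEADD(day, -30, CURRENT_DATE());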

Get sample tables

Run the Salesforce Pardot courier to pull sample files to Amperity for each of the tables configured in the load operation. Use these sample files to configure a feed for each Salesforce Pardot table to be loaded to Amperity.

Important

The courier run will fail, but this process will successfully return a list of files, one for each table that was defined in the courier load operation. Use these files to define the feed schema.

Add feeds

A feed defines how data should be loaded into a domain table, including which columns are required and which columns should be associated with semantic tags that indicate the column contains customer profile (PII) or transactions data.

Note

A feed must be added for each file that is pulled from Salesforce Pardot, including all files that contain customer records and interaction records, along with any other files that will be used to support downstream workflows.

To add a feed

  1. From the Sources tab, click Add Feed. This opens the Add Feed dialog box.

  2. Under Data Source, select Create new source, and then enter “Salesforce Pardot”.

  3. Enter the name of the feed in Feed Name. For example: “OnlinePromotions”.

    Tip

    The name of the domain table will be “<data-source-name>:<feed-name>”. For example: “Salesforce Pardot:OnlinePromotions”.

  4. Under Sample File, select Select existing file, and then choose from the list of files. For example: “tablename_YYYY-MM-DD.csv”.

    Tip

    The list of files that is available from this drop-down menu is sorted from newest to oldest.

  5. Select Load sample file on feed activation.

  6. Click Continue. This opens the Feed Editor page.

  7. Select the primary key.

  8. Apply semantic tags to customer records and interaction records, as appropriate.

  9. Under Last updated field, specify which field best describes when records in the table were last updated.

    Tip

    Choose Generate an “updated” field to have Amperity generate this field. This is the recommended option unless there is a field already in the table that reliably provides this data.

  10. For feeds with customer records (PII data), select Make available to Stitch.

  11. Click Activate. Wait for the feed to finish loading data to the domain table, and then review the sample data for that domain table from the Data Explorer.

Add load operations

After the feeds are activated and domain tables are available, add the load operations to the courier used for Salesforce Pardot.

Example load operations

Load operations must specify each table that will be pulled to Amperity from Salesforce Pardot.

For example:

{
  "df-A1B2C3": [
    {
      "type": "load",
      "file": "AMPERITY_A1BO987C.PARDOT_ACME.PROSPECT"
    }
  ],
  "df-D4E5F6": [
    {
      "type": "load",
      "file": "AMPERITY_A1BO987C.PARDOT_ACME.VISITOR"
    }
  ],
  "df-G7H8I9": [
    {
      "type": "load",
      "file": "AMPERITY_A1BO987C.PARDOT_ACME.EMAIL"
    }
  ],
  "df-J0K1L2": [
    {
      "type": "load",
      "file": "AMPERITY_A1BO987C.PARDOT_ACME.OPPORTUNITY"
    }
  ]
}

To add load operations

  1. From the Sources tab, open the menu for the courier that was configured for Salesforce Pardot, and then select Edit. The Edit Courier dialog box opens.

  2. Edit the load operations for each of the feeds that were configured for Salesforce Pardot so they have the correct feed ID.

  3. Click Save.

Run courier manually

Run the courier again. This time, because the load operations are present and the feeds are configured, the courier will pull data from Salesforce Pardot.

To run the courier manually

  1. From the Sources tab, open the menu for the courier with updated load operations that is configured for Salesforce Pardot, and then select Run. The Run Courier dialog box opens.

  2. Select the load option, either for a specific time period or all available data. Actual data will be loaded to a domain table because the feed is configured.

  3. Click Run.

    This time the notification will return a message similar to:

    Completed in 5 minutes 12 seconds
    

Add to courier group

A courier group is a list of one (or more) couriers that are run as a group, either ad hoc or as part of an automated schedule. A courier group can be configured to act as a constraint on downstream workflows.

To add the courier to a courier group

  1. From the Sources tab, click Add Courier Group. This opens the Create Courier Group dialog box.

  2. Enter the name of the courier group. For example: “Salesforce Pardot”.

  3. Add a cron string to the Schedule field to define a schedule for the courier group.

    A schedule defines the frequency at which a courier group runs. All couriers in the same courier group run as a unit and all tasks must complete before a downstream process can be started. The schedule is defined using cron.

    Cron syntax specifies the fixed time, date, or interval at which cron will run. Each line represents a job, and is defined like this:

    ┌───────── minute (0 - 59)
    │ ┌─────────── hour (0 - 23)
    │ │ ┌───────────── day of the month (1 - 31)
    │ │ │ ┌────────────── month (1 - 12)
    │ │ │ │ ┌─────────────── day of the week (0 - 6) (Sunday to Saturday)
    │ │ │ │ │
    │ │ │ │ │
    │ │ │ │ │
    * * * * * command to execute
    

    For example, 30 8 * * * represents “run at 8:30 AM every day” and 30 8 * * 0 represents “run at 8:30 AM every Sunday”. Amperity validates your cron syntax and shows you the results. You may also use crontab guru to validate cron syntax.

  4. Set Status to Enabled.

  5. Specify a time zone.

    A courier group schedule is associated with a time zone. The time zone determines the point at which a courier group’s scheduled start time begins. The time zone should be aligned with the time zone of the system from which the data is being pulled.

    Use the Use this time zone for file date ranges checkbox to use the selected time zone to look for files. If unchecked, the courier group will use the current time in UTC to look for files to pick up.

    Note

    The time zone that is chosen for a courier group schedule should consider every downstream business process that requires the data, along with the time zone(s) in which the consumers of that data operate.

  6. Add at least one courier to the courier group. Select the name of the courier from the Courier drop-down. Click + Add Courier to add more couriers.

  7. Click Add a courier group constraint, and then select a courier group from the drop-down list.

    A wait time is a constraint placed on a courier group that defines an extended time window for data to be made available at the source location.

    Important

    A wait time is not required for a bridge.

    A courier group typically runs on an automated schedule that expects customer data to be available at the source location within a defined time window. However, in some cases, the customer data may be delayed and isn’t made available within that time window.

  8. For each courier group constraint, apply any offsets.

    A courier can be configured to look for files within a range of time that is older than the scheduled time. The scheduled time is in Coordinated Universal Time (UTC), unless the “Use this time zone for file date ranges” checkbox is enabled for the courier group.

    This range is typically 24 hours, but may be configured for longer ranges. For example, it’s possible for a data file to be generated with a correct file name and datestamp appended to it, but for that datestamp to represent the previous day because of how an upstream workflow is configured. A wait time helps ensure that the data at the source location is recognized correctly by the courier.

    Warning

    This range of time may affect couriers in a courier group whether or not they run on a schedule. A manually run courier group may not take its schedule into consideration when determining the date range; only the input day(s) provided for loading data are used.

  9. Click Save.