Pull from Salesforce Sales Cloud¶
Salesforce Sales Cloud brings customer information together into an integrated platform, and then provides access to thousands of applications through the AppExchange.
Salesforce Sales Cloud provides a REST API that returns data for files and reports. Amperity can pull this data from Salesforce Sales Cloud for any or all of the resources that are defined for the Salesforce account.
Salesforce Sales Cloud query results are always loaded into Amperity in full.
Any date range supplied indirectly via a courier group is ignored.
No input is needed when running a query manually.
This topic describes the steps that are required to pull customer records to Amperity from Salesforce Sales Cloud:
Get details¶
Salesforce Sales Cloud requires the following configuration details:
The username and password of a Salesforce account configured for API access.
The Salesforce Sales Cloud security token that belongs to that username. (The security token is not required if IP range policies are configured from the Salesforce admin console.)
The scheme and host for a custom Salesforce Sales Cloud URL, if one is used by the customer.
Whether Salesforce Sales Cloud will send data to Amperity from a sandbox instance.
A sample for each file to simplify feed creation.
Tip
Use SnapPass to securely share the credentials and setup information for Salesforce Sales Cloud between your company and your Amperity representative.
Add courier¶
A courier brings data from an external system to Amperity.
Tip
You can run a courier with an empty load operation using {}
as the value for the load operation. Use this approach to get files to upload during feed creation, as a feed requires knowing the schema of a file before you can apply semantic tagging and other feed configuration settings.
To add a courier for Salesforce Sales Cloud
From the Sources page, click Add Courier. The Add Source page opens.
Find, and then click the icon for Salesforce Sales Cloud. The Add Courier page opens.
This automatically selects salesforce as the Credential Type.
From the Credential drop-down, select Create a new credential.
Enter the username, password, and security token.
Under Salesforce Sales Cloud Settings, configure Queries to specify the tables (and object names) from which Amperity will pull data. For example:

[
  {
    "from": "Account",
    "fields": [ "*" ],
    "file/tag": "accounts-file"
  }
]

where from is the table (object) name and “*” selects every field on that object. Verify field names as they are defined in Salesforce Sales Cloud before configuring a specific list of fields.
Note
If there is more than one table, separate each table with a comma.
[
  {
    "from": "Account",
    "fields": [ "field1", "field2" ],
    "file/tag": "accounts-file"
  },
  {
    "from": "OtherTable",
    "fields": [ "field1", "field2" ],
    "file/tag": "accounts-file"
  }
]
Under Salesforce Sales Cloud Settings set the load operations to “{}”.
Caution
If load operations are not set to “{}” the validation test for the courier configuration settings will fail.
Important
Do not connect to a Salesforce sandbox or enter a custom login URL.
Note
You can enable a sandbox or add a custom login URL for this courier later. A custom URL for Salesforce logins requires only the scheme (http:// or https://) and hostname parts of the URL. For example: “https://<hostname>” or “http://<hostname>”. The rest of the path is added automatically by Amperity. A sandbox instance is ignored when a custom URL for Salesforce logins is used.
Click Save.
Get sample files¶
Newline-delimited JSON (NDJSON) is a data format for structured data that defines the structure of JSON data using lines as separators. Each line in a NDJSON file is a valid JSON value.
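A minimal sketch of how NDJSON is parsed, using Python's standard library and a hypothetical two-record sample similar to an "accounts.ndjson" file:

```python
import json

# Each line of an NDJSON file is a standalone, valid JSON value.
# Hypothetical sample data; field names are illustrative only.
sample = '{"Id": "001A", "Name": "Acme"}\n{"Id": "001B", "Name": "Globex"}\n'

# Parse line by line, skipping blank lines.
records = [json.loads(line) for line in sample.splitlines() if line.strip()]
```

Because each line is independent, a reader can process records one at a time without loading the entire file into memory.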
Every Salesforce Sales Cloud file that is pulled to Amperity must be configured as a feed. Before you can configure each feed you need to know the schema of that file. Run the courier without load operations to bring sample files from Salesforce Sales Cloud to Amperity, and then use each of those files to configure a feed.
To get sample files
From the Sources tab, open the menu for a courier configured for Salesforce Sales Cloud with empty load operations, and then select Run. The Run Courier dialog box opens.
Select Load data from a specific day, and then select today’s date.
Click Run.
Important
The courier run will fail, but this process will successfully return a list of files from Salesforce Sales Cloud.
These files will be available for selection as an existing source from the Add Feed dialog box.
Wait for the notification for this courier run to return an error similar to:
Error running load-operations task Cannot find required feeds: "df-xxxxxx"
Add feeds¶
A feed defines how data should be loaded into a domain table, including which columns are required and which columns should be associated with a semantic tag indicating that the column contains customer profile (PII) or transactions data.
Note
A feed must be added for each file that is pulled from Salesforce Sales Cloud, including all files that contain customer records and interaction records, along with any other files that will be used to support downstream workflows.
To add a feed
From the Sources tab, click Add Feed. This opens the Add Feed dialog box.
Under Data Source, select Create new source, and then enter “Salesforce Sales Cloud”.
Enter the name of the feed in Feed Name. For example: “Accounts”.
Tip
The name of the domain table will be “<data-source-name>:<feed-name>”. For example: “Salesforce Sales Cloud:Accounts”.
Under Sample File, select Select existing file, and then choose from the list of files. For example: “accounts.ndjson”.
Tip
The list of files that is available from this drop-down menu is sorted from newest to oldest.
Select Load sample file on feed activation.
Click Continue. This opens the Feed Editor page.
Select the primary key.
Apply semantic tags to customer records and interaction records, as appropriate.
Under Last updated field, specify which field best describes when records in the table were last updated.
Tip
Choose Generate an “updated” field to have Amperity generate this field. This is the recommended option unless there is a field already in the table that reliably provides this data.
For feeds with customer records (PII data), select Make available to Stitch.
Click Activate. Wait for the feed to finish loading data to the domain table, and then review the sample data for that domain table from the Data Explorer.
Add load operations¶
After the feeds are activated and domain tables are available, add the load operations to the courier used for Salesforce Sales Cloud.
Example load operations
Load operations must specify each file that will be pulled to Amperity from Salesforce Sales Cloud.
For example:
{
"ACCOUNTS-FEED-ID": [
{
"type": "truncate"
},
{
"type": "load",
"file": "accounts-file"
}
],
"CUSTOM-OBJECTS-FEED-ID": [
{
"type": "load",
"file": "custom-objects-file"
}
]
}
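Conceptually, a truncate operation clears the existing rows in the domain table before the load operation appends the new file's rows; a load without a truncate accumulates rows instead. The sketch below is illustrative only (Amperity applies load operations server-side) and uses hypothetical in-memory tables:

```python
# Illustrative model of truncate-then-load vs. load-only semantics.
# "table" is a list of rows; "files" maps a file tag to its rows.
def apply_ops(table, ops, files):
    for op in ops:
        if op["type"] == "truncate":
            table.clear()          # drop existing rows before loading
        elif op["type"] == "load":
            table.extend(files[op["file"]])  # append rows from the file
    return table

files = {"accounts-file": [{"Id": "new"}]}
existing = [{"Id": "old"}]

# Truncate-then-load replaces the table's contents with the new file.
result = apply_ops(existing, [{"type": "truncate"},
                              {"type": "load", "file": "accounts-file"}], files)
```

Since Salesforce Sales Cloud query results are always pulled in full, a truncate-then-load pattern avoids duplicating rows on each courier run.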
To add load operations
From the Sources tab, open the menu for the courier that was configured for Salesforce Sales Cloud, and then select Edit. The Edit Courier dialog box opens.
Edit the load operations for each of the feeds that were configured for Salesforce Sales Cloud so they have the correct feed ID.
Click Save.
Run courier manually¶
Run the courier again. This time, because the load operations are present and the feeds are configured, the courier will pull data from Salesforce Sales Cloud.
Note
Salesforce Sales Cloud query results are always loaded into Amperity in full. No input is needed when running the courier manually, and any date range supplied indirectly via a courier group will be ignored.
To run the courier manually
From the Sources tab, open the menu for the courier with updated load operations that is configured for Salesforce Sales Cloud, and then select Run. The Run Courier dialog box opens.
Select the load option, either for a specific time period or all available data. Actual data will be loaded to a domain table because the feed is configured.
Click Run.
This time the notification will return a message similar to:
Completed in 5 minutes 12 seconds
Add to courier group¶
A courier group is a list of one (or more) couriers that are run as a group, either ad hoc or as part of an automated schedule. A courier group can be configured to act as a constraint on downstream workflows.
To add the courier to a courier group
From the Sources tab, click Add Courier Group. This opens the Create Courier Group dialog box.
Enter the name of the courier group. For example: “Salesforce Sales Cloud”.
Add a cron string to the Schedule field to define a schedule for the courier group.
A schedule defines the frequency at which a courier group runs. All couriers in the same courier group run as a unit and all tasks must complete before a downstream process can be started. The schedule is defined using cron.
Cron syntax specifies the fixed time, date, or interval at which cron will run. Each line represents a job, and is defined like this:
┌───────── minute (0 - 59)
│ ┌─────────── hour (0 - 23)
│ │ ┌───────────── day of the month (1 - 31)
│ │ │ ┌────────────── month (1 - 12)
│ │ │ │ ┌─────────────── day of the week (0 - 6) (Sunday to Saturday)
│ │ │ │ │
* * * * * command to execute
For example,

30 8 * * *

represents “run at 8:30 AM every day” and

30 8 * * 0

represents “run at 8:30 AM every Sunday”. Amperity validates your cron syntax and shows you the results. You may also use crontab guru to validate cron syntax.
Set Status to Enabled.
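The five cron fields can be read off positionally, as in this minimal Python sketch (field names follow the diagram above; this is illustrative, not how Amperity parses schedules):

```python
# Positional names of the five cron fields, per the standard layout.
FIELDS = ["minute", "hour", "day of month", "month", "day of week"]

def parse_cron(expr):
    """Split a cron expression into a field-name -> value mapping."""
    values = expr.split()
    if len(values) != len(FIELDS):
        raise ValueError("expected exactly 5 cron fields")
    return dict(zip(FIELDS, values))

# "Run at 8:30 AM every Sunday"
schedule = parse_cron("30 8 * * 0")
```

Wildcards (`*`) mean “every value” for that field, so `30 8 * * *` fires daily while `30 8 * * 0` restricts the day-of-week field to Sunday.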
Specify a time zone.
A courier group schedule is associated with a time zone. The time zone determines the point at which a courier group’s scheduled start time begins. A time zone should be aligned with the time zone of the system from which the data is being pulled.
Use the Use this time zone for file date ranges checkbox to use the selected time zone to look for files. If unchecked, the courier group will use the current time in UTC to look for files to pick up.
Note
The time zone that is chosen for a courier group schedule should consider every downstream business process that requires the data, along with the time zone(s) in which the consumers of that data will operate.
Add at least one courier to the courier group. Select the name of the courier from the Courier drop-down. Click + Add Courier to add more couriers.
Click Add a courier group constraint, and then select a courier group from the drop-down list.
A wait time is a constraint placed on a courier group that defines an extended time window for data to be made available at the source location.
Important
A wait time is not required for a bridge.
A courier group typically runs on an automated schedule that expects customer data to be available at the source location within a defined time window. However, in some cases, the customer data may be delayed and isn’t made available within that time window.
For each courier group constraint, apply any offsets.
A courier can be configured to look for files within a range of time that is older than the scheduled time. The scheduled time is in Coordinated Universal Time (UTC), unless the “Use this time zone for file date ranges” checkbox is enabled for the courier group.
This range is typically 24 hours, but may be configured for longer ranges. For example, it’s possible for a data file to be generated with a correct file name and datestamp appended to it, but for that datestamp to represent the previous day because of how an upstream workflow is configured. A wait time helps ensure that the data at the source location is recognized correctly by the courier.
Warning
This range of time may affect couriers in a courier group whether or not they run on a schedule. A manually run courier group may not take its schedule into consideration when determining the date range; only the provided input day(s) to load data from are used as inputs.
Click Save.