Pull from Google Cloud Storage¶
Google Cloud Storage is an online file storage web service for storing and accessing data on Google Cloud Platform infrastructure.
This topic describes the steps that are required to pull files in any supported format to Amperity from Google Cloud Storage.
Get details¶
Google Cloud Storage requires the following configuration details:
The name of the bucket in Cloud Storage.
A Cloud Storage service account key that is configured for the Storage Object Admin role.
A list of objects (by filename and file type) in the Cloud Storage bucket.
A sample for each file to simplify feed creation.
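You can confirm the list of objects in the Cloud Storage bucket with the gsutil CLI. A minimal sketch, assuming gsutil is authenticated against the project; the bucket name is a placeholder:
# List every object in the bucket, along with its size and upload date
gsutil ls -l gs://<<GCS_BUCKET_NAME>>/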
Filedrop requirements¶
A Google Cloud Storage location requires the following:
Credentials that allow Amperity to access, and then read data from a Google Cloud Storage location
Files provided in a supported file format
Files provided with the correct date format
Support for the desired file compression and/or archive method
The ability to encrypt files before they are added to the location using PGP encryption; the encryption key must be configured so that files can be decrypted by Amperity prior to loading them
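If files will be encrypted before they are added to the location, one common approach is GnuPG. A minimal sketch, assuming an appropriate public key has already been imported and configured so that Amperity can decrypt the files; the recipient name, filename, and bucket are placeholders:
# Compress, encrypt, and then copy the file to the Cloud Storage bucket
gzip CustomerRecords.csv
gpg --encrypt --recipient "amperity" CustomerRecords.csv.gz
gsutil cp CustomerRecords.csv.gz.gpg gs://<<GCS_BUCKET_NAME>>/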
Tip
Use SnapPass to securely share your organization’s credentials and encryption keys with your Amperity representative.
Options¶
The following sections describe optional ways to get data to Cloud Storage.
Dataflow, Pub/Sub¶
Dataflow is a fully managed service for transforming and enriching data in stream (real-time) and/or batch modes. It can be configured to use Pub/Sub to stream messages to Cloud Storage.
Note
Google Pub/Sub is a low-latency messaging service that can be configured within Google Cloud to stream data (including real-time) to Google Cloud Storage.
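One possible way to wire this up is the Google-provided “Pub/Sub to Text Files on Cloud Storage” Dataflow template, launched with the gcloud CLI. This is a rough sketch only; the template path, parameter names, topic, region, and bucket shown here are assumptions and should be verified against the current Dataflow templates documentation:
# Create the topic that upstream systems publish records to
gcloud pubsub topics create customer-events
# Launch a streaming Dataflow job that writes Pub/Sub messages to Cloud Storage
gcloud dataflow jobs run events-to-gcs \
  --gcs-location gs://dataflow-templates/latest/Cloud_PubSub_to_GCS_Text \
  --region us-central1 \
  --parameters inputTopic=projects/<<GCS_PROJECT_ID>>/topics/customer-events,outputDirectory=gs://<<GCS_BUCKET_NAME>>/events/,outputFilenamePrefix=events-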
Service account¶
A service account must be configured to allow Amperity to pull data from the Cloud Storage bucket:
A service account key must be created, and then downloaded for use when configuring Amperity.
The Storage Object Admin role must be assigned to the service account.
Service account key¶
A service account key must be downloaded so that it may be used to configure the courier in Amperity.
To configure the service account key
Service account setup:
Open the Cloud Platform console.
Click IAM & Admin.
Click the name of the project that is associated with the Cloud Storage bucket from which Amperity will pull data.
Click Service Accounts, and then select Create Service Account.
In the Name field, give your service account a name. For example, “Amperity GCS Connection”.
In the Description field, enter a description that will remind you of the purpose of the role.
Click Create.
Important
Click Continue and skip every step that allows adding additional service account permissions. These permissions will be added directly to the bucket.
From the Service Accounts page, click the name of the service account that was created for Amperity.
Click Add Key, and then select Create new key.
Select the JSON key type, and then click Create.
The key is downloaded as a JSON file to your local computer. This key is required to connect Amperity to your Cloud Storage bucket. If necessary, provide this key to your Amperity representative using SnapPass.
SnapPass allows secrets to be shared in a secure, ephemeral way. Input a single or multi-line secret, along with an expiration time, and then generate a one-time use URL that may be shared with anyone. Amperity uses SnapPass to share system credentials with customers.
Example
{
"auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
"auth_uri": "https://accounts.google.com/o/oauth2/auth",
"client_email": "<<GCS_BUCKET_NAME>>@<<GCS_PROJECT_ID>>.iam.gserviceaccount.com",
"client_id": "redacted",
"client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/<<GCS_BUCKET_NAME>>%40<<GCS_PROJECT_ID>>.iam.gserviceaccount.com",
"private_key_id": "redacted",
"private_key": "redacted",
"project_id": "<<GCS_PROJECT_ID>>",
"token_uri": "https://oauth2.googleapis.com/token",
"type": "service_account"
}
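The console steps above can also be performed with the gcloud CLI. A minimal sketch, assuming gcloud is authenticated against the correct project; the service account name and key filename are placeholders:
# Create the service account without granting any project-level roles
gcloud iam service-accounts create amperity-gcs \
  --display-name="Amperity GCS Connection" \
  --project=<<GCS_PROJECT_ID>>
# Create and download a JSON key for the service account
gcloud iam service-accounts keys create amperity-gcs-key.json \
  --iam-account=amperity-gcs@<<GCS_PROJECT_ID>>.iam.gserviceaccount.com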
Service account role¶
The Storage Object Admin role must be assigned to the service account.
To configure the service account role
Open the Cloud Platform console.
Click Storage, and then Browser.
Click the name of the bucket from which Amperity will pull data.
Click the Permissions tab, and then click Add.
Enter the email address of the Cloud Storage service account.
Under Role, choose Storage Object Admin.
Important
Amperity requires the Storage Object Admin role for the courier that is assigned to pull data from Cloud Storage.
Click Save.
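The same bucket-level role assignment can be made from the command line. A minimal sketch using gsutil; the service account address and bucket name are placeholders:
# Grant the Storage Object Admin role on the bucket to the service account
gsutil iam ch serviceAccount:amperity-gcs@<<GCS_PROJECT_ID>>.iam.gserviceaccount.com:roles/storage.objectAdmin gs://<<GCS_BUCKET_NAME>>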
Add courier¶
A courier brings data from an external system to Amperity. A courier relies on a feed to know which fileset to bring to Amperity for processing.
Tip
You can run a courier without load operations. Use this approach to get sample files that can be selected during feed creation, as a feed requires knowing the schema of a file before you can apply semantic tagging and other feed configuration settings.
To add a courier
From the Sources tab, click Add Courier. The Add Source page opens.
Find, and then click the icon for Google Cloud Storage. The Add Courier page opens.
This automatically selects gcs-service-account-key as the Credential Type.
From the Credential drop-down, select Create a new credential. This opens the Create New Credential dialog box.
Enter a name for the credential, the Cloud Storage bucket name, and the service account key. Click Save.
Important
The bucket name must match the value of the
<<GCS_BUCKET_NAME>>
placeholder shown in the service account key example.
Note
The service account key is the contents of the JSON file downloaded from Cloud Storage. Open the JSON file in a text editor, copy the key, and paste it into the Service Account Key field.
Under Google Cloud Storage Settings, configure the list of files to pull to Amperity by adding an entry to the Entities List for each file to be loaded. For example, two files: “CustomerRecords.csv” and “TransactionRecords.csv”.
[ { "object/type": "file", "object/file-pattern": "'CUSTOMER/ENV/CustomerRecords_'11-10-2020'.csv'", "object/land-as": { "file/header-rows": 1, "file/tag": "customer-records-2020", "file/content-type": "text/csv" } }, { "archive/contents": { "FILENAME": { "subobject/land-as": { "file/tag": "transaction-records-2020", "file/content-type": "text/csv" } } }, "object/type": "archive", "object/file-pattern": "'ARCHIVED/TransactionRecords_'11-10-2020'.zip'" } ]
Under Google Cloud Storage Settings, set the load operations to a string that is obviously incorrect, such as "df-xxxxxx". (You may also set the load operation to empty: {}.)
Tip
If you use an obviously incorrect string, the load operation settings will be saved in the courier configuration. After the schema for the feed is defined and the feed is activated, you can edit the courier and replace the feed ID with the correct identifier.
Caution
If load operations are not set to {}, the validation test for the courier configuration settings will fail.
Click Save.
Get sample files¶
Every Google Cloud Storage file that is pulled to Amperity must be configured as a feed. Before you can configure each feed you need to know the schema of that file. Run the courier without load operations to bring sample files from Google Cloud Storage to Amperity, and then use each of those files to configure a feed.
To get sample files
From the Sources tab, open the menu for a courier configured for Google Cloud Storage with empty load operations, and then select Run. The Run Courier dialog box opens.
Select Load data from a specific day, and then select today’s date.
Click Run.
Important
The courier run will fail, but this process will successfully return a list of files from Google Cloud Storage.
These files will be available for selection as an existing source from the Add Feed dialog box.
Wait for the notification for this courier run to return an error similar to:
Error running load-operations task Cannot find required feeds: "df-xxxxxx"
Add feeds¶
A feed defines how data should be loaded into a domain table, including specifying which columns are required and which columns should be associated with a semantic tag that indicates the column contains customer profile (PII) or transaction data.
Note
A feed must be added for each file that is pulled from Google Cloud Storage, including all files that contain customer records and interaction records, along with any other files that will be used to support downstream workflows.
To add a feed
From the Sources tab, click Add Feed. This opens the Add Feed dialog box.
Under Data Source, select Create new source, and then enter “Google Cloud Storage”.
Enter the name of the feed in Feed Name. For example: “CustomerRecords”.
Tip
The name of the domain table will be “<data-source-name>:<feed-name>”. For example: “Google Cloud Storage:CustomerRecords”.
Under Sample File, select Select existing file, and then choose from the list of files. For example: “filename_YYYY-MM-DD.csv”.
Tip
The list of files that is available from this drop-down menu is sorted from newest to oldest.
Select Load sample file on feed activation.
Click Continue. This opens the Feed Editor page.
Select the primary key.
Apply semantic tags to customer records and interaction records, as appropriate.
Under Last updated field, specify which field best describes when records in the table were last updated.
Tip
Choose Generate an “updated” field to have Amperity generate this field. This is the recommended option unless there is a field already in the table that reliably provides this data.
For feeds with customer records (PII data), select Make available to Stitch.
Click Activate. Wait for the feed to finish loading data to the domain table, and then review the sample data for that domain table from the Data Explorer.
Add load operations¶
After the feeds are activated and domain tables are available, add the load operations to the courier used for Google Cloud Storage.
Example load operations
Load operations must specify each file that will be pulled to Amperity from Google Cloud Storage.
For example:
{
"CUSTOMER-RECORDS-FEED-ID": [
{
"type": "truncate"
},
{
"type": "load",
"file": "customer-records"
}
],
"TRANSACTION-RECORDS-FEED-ID": [
{
"type": "load",
"file": "transaction-records"
}
]
}
To add load operations
From the Sources tab, open the menu for the courier that was configured for Google Cloud Storage, and then select Edit. The Edit Courier dialog box opens.
Edit the load operations for each of the feeds that were configured for Google Cloud Storage so they have the correct feed ID.
Click Save.
Run courier manually¶
Run the courier again. This time, because the load operations are present and the feeds are configured, the courier will pull data from Google Cloud Storage.
To run the courier manually
From the Sources tab, open the menu for the courier with updated load operations that is configured for Google Cloud Storage, and then select Run. The Run Courier dialog box opens.
Select the load option, either for a specific time period or all available data. Actual data will be loaded to a domain table because the feed is configured.
Click Run.
This time the notification will return a message similar to:
Completed in 5 minutes 12 seconds
Add to courier group¶
A courier group is a list of one (or more) couriers that are run as a group, either ad hoc or as part of an automated schedule. A courier group can be configured to act as a constraint on downstream workflows.
To add the courier to a courier group
From the Sources tab, click Add Courier Group. This opens the Create Courier Group dialog box.
Enter the name of the courier group. For example: “Google Cloud Storage”.
Add a cron string to the Schedule field to define a schedule for the courier group.
A schedule defines the frequency at which a courier group runs. All couriers in the same courier group run as a unit and all tasks must complete before a downstream process can be started. The schedule is defined using cron.
Cron syntax specifies the fixed time, date, or interval at which cron will run. Each line represents a job, and is defined like this:
┌───────── minute (0 - 59)
│ ┌─────────── hour (0 - 23)
│ │ ┌───────────── day of the month (1 - 31)
│ │ │ ┌────────────── month (1 - 12)
│ │ │ │ ┌─────────────── day of the week (0 - 6) (Sunday to Saturday)
│ │ │ │ │
│ │ │ │ │
│ │ │ │ │
* * * * * command to execute
For example, 30 8 * * * represents “run at 8:30 AM every day” and 30 8 * * 0 represents “run at 8:30 AM every Sunday”. Amperity validates your cron syntax and shows you the results. You may also use crontab guru to validate cron syntax.
Set Status to Enabled.
Specify a time zone.
A courier group schedule is associated with a time zone. The time zone determines the point at which a courier group’s scheduled start time begins. A time zone should be aligned with the time zone of the system from which the data is being pulled.
Note
The time zone that is chosen for a courier group schedule should consider every downstream business process that requires the data, along with the time zone(s) in which the consumers of that data will operate.
Set SLA? to False. (You can change this later after you have verified the end-to-end workflows.)
Add at least one courier to the courier group. Select the name of the courier from the Courier drop-down. Click + Add Courier to add more couriers.
Click Add a courier group constraint, and then select a courier group from the drop-down list.
A wait time is a constraint placed on a courier group that defines an extended time window for data to be made available at the source location.
A courier group typically runs on an automated schedule that expects customer data to be available at the source location within a defined time window. However, in some cases, the customer data may be delayed and isn’t made available within that time window.
For each courier group constraint, apply any offsets.
An offset is a constraint placed on a courier group that defines a range of time that is older than the scheduled time, within which a courier group will accept customer data as valid for the current job. Offset times are in UTC.
A courier group offset is typically set to 24 hours. For example, it’s possible for customer data to be generated with a correct file name and datestamp appended to it, but for that datestamp to represent the previous day because of the customer’s own workflow. An offset ensures that the data at the source location is recognized by the courier as the correct data source.
Warning
An offset affects couriers in a courier group whether or not they run on a schedule.
Click Save.